#87 - Smarter Oceans, Smarter Capital
- henry belfiori
- Oct 24
- 4 min read

I’ve been thinking about how much time and energy investors spend trying to make sense of early-stage data, especially in ocean tech, where it's scattered, inconsistent, and often difficult to measure.
For example, due diligence in this space has always been messy. Impact metrics don’t line up, pilot results are buried in PDFs, and every startup seems to report differently. There’s innovation everywhere, but the signal gets lost in the noise.
Lately, though, I’ve noticed a shift. Smarter tools are beginning to clean, connect, and even interpret this data, turning hours of manual research into patterns and insights that make more sense. This could be the start of a more efficient, more intelligent kind of data structuring, one that understands the complexity of the ocean economy a little better.
AI as an Efficiency Layer
AI entered the chat. Not as a replacement for human due diligence (yet), but as an efficiency layer sitting between data and decision.
Many funds are already experimenting with tools that automate parts of the early-stage process. Natural language models can scan pitch decks, research papers, and impact filings to extract key metrics or highlight potential risks. Pattern-recognition algorithms can cluster startups by technology readiness level, patent activity, or impact potential. Even basic machine learning models can flag anomalies in reported impact data or missing financial fields, the kind of small inconsistencies that often go unnoticed until much later in a deal cycle.
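To make that last point concrete, here's a toy sketch of what flagging missing fields and implausible values in reported impact data might look like. The field names and checks are hypothetical, not taken from any particular fund's tooling:

```python
# A minimal sketch of anomaly flagging on reported impact data.
# REQUIRED_FIELDS and the sanity checks are hypothetical examples.

REQUIRED_FIELDS = ["co2_avoided_tonnes", "revenue_usd", "pilot_count"]

def flag_anomalies(record: dict) -> list[str]:
    """Return human-readable flags for missing or suspicious fields."""
    flags = []
    for field in REQUIRED_FIELDS:
        if field not in record or record[field] is None:
            flags.append(f"missing: {field}")
    # Crude sanity check: negative impact or revenue figures are almost
    # certainly reporting errors rather than real performance.
    for field, value in record.items():
        if isinstance(value, (int, float)) and value < 0:
            flags.append(f"negative value: {field}")
    return flags

startup = {"co2_avoided_tonnes": 120.5, "revenue_usd": -40_000}
print(flag_anomalies(startup))
# → ['missing: pilot_count', 'negative value: revenue_usd']
```

Trivial as it looks, this is exactly the kind of inconsistency that otherwise surfaces weeks into a deal cycle.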
For ocean-focused investors at the early stage (and beyond), this matters. Marine data tends to be highly fragmented, scattered across monitoring systems, pilot projects, and government portals. When AI systems are trained to pull from these sources, they can start connecting things: tracking emissions reductions from port electrification, mapping biodiversity improvements around restoration sites, or benchmarking vessel energy efficiency across fleets.
It’s not about replacing investor intuition. It’s about reducing the drag created by poor information flow, letting capital focus more on what’s viable and less on what’s verifiable. That said, it is the venture’s responsibility to have these signals and data points structured and readily available for investors. AI can only interpret what it can access, and right now, many startups still store their most valuable evidence in static pitch decks or scattered folders.
Smarter, not colder
| Due Diligence Pain Point | AI Application / Tool Type | What It Enables | Example Use Case |
| --- | --- | --- | --- |
| 1. Unstructured or inconsistent data | Natural Language Processing (NLP), Document Parsing | Converts decks, reports, ESG filings, and pilot results into structured data fields. | Extracts CO₂ metrics or project KPIs from multiple formats for side-by-side comparison. |
| 2. Limited visibility on early-stage traction | Pattern Recognition, Entity Linking | Identifies traction signals across fragmented data (e.g. patents, grants, pilots, media). | Flags startups with growing academic citations or repeat partnerships despite limited revenue. |
| 3. Manual verification of impact claims | Machine Learning (ML) Cross-Validation | Matches startup claims against open datasets (satellite, emissions, port, biodiversity). | Verifies if reported fuel savings align with vessel telemetry data. |
| 4. Sector benchmarking gaps | Predictive Modelling, Cluster Analysis | Benchmarks cost, efficiency, or impact per technology category. | Compares cost per tonne CO₂ saved across marine propulsion systems. |
| 5. Long diligence timelines | Automated Workflow Pipelines | Reduces repetitive admin (document collation, scoring, red flags). | Auto-generates readiness reports or diligence checklists. |
| 6. High uncertainty in impact valuation | AI-Driven Life Cycle Assessment (LCA) Tools | Quantifies environmental and social impact through modelled projections. | Estimates biodiversity uplift or avoided emissions from reef restoration or port electrification. |
| 7. Knowledge loss across diligence teams | Knowledge Graphs / Retrieval-Augmented Search | Links datasets, insights, and commentary across multiple reviewers or sources. | Creates a dynamic map of due diligence findings accessible to all team members. |
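To make row 1 of the table concrete, here's a deliberately simple sketch of pulling comparable CO2 figures out of free text. The phrasing patterns are hypothetical, and real document parsing would use a proper NLP pipeline rather than a single regular expression:

```python
import re

# Toy illustration of structuring unstructured data: find every
# "X tonnes CO2 avoided/saved/reduced" figure in a block of text.
# The pattern below is a hypothetical example, not a production parser.
CO2_PATTERN = re.compile(
    r"(?P<value>\d+(?:\.\d+)?)\s*(?:tonnes|t)\s+(?:of\s+)?CO2\s+(?:avoided|saved|reduced)",
    re.IGNORECASE,
)

def extract_co2_tonnes(text: str) -> list[float]:
    """Return every CO2 figure (in tonnes) found in the text."""
    return [float(m.group("value")) for m in CO2_PATTERN.finditer(text)]

deck_excerpt = ("Our pilot delivered 42.5 tonnes of CO2 avoided in 2024, "
                "up from 18 t CO2 saved in 2023.")
print(extract_co2_tonnes(deck_excerpt))
# → [42.5, 18.0]
```

The point is not the regex itself but the output: two comparable data points pulled from one sentence, in a form that can sit next to another startup's numbers.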
For Founders — Making Data Machine-Ready
If investors are using AI-assisted diligence, founders can make their ventures AI-legible by ensuring:
Data Hygiene
Use consistent formats (CSV, XLSX, JSON) over PDFs when possible.
Tag files clearly (e.g. impact_metrics_Q2_2025.csv).
Maintain version control in your data room.
Transparency
Quantify impact where possible: emissions avoided, energy saved, community reach.
Provide sources and verification links (e.g. MRV reports, third-party validation).
Keep historical and projected data separate but traceable.
Discoverability
Publish key metrics or achievements on public datasets and platforms.
Align with frameworks (UN SDG, EU Taxonomy, Ocean Impact Navigator) to aid automated parsing.
Store data in cloud-based repositories compatible with diligence APIs.
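As a small illustration of the data-hygiene point, a founder (or a fund-side script) could sanity-check data-room filenames against a convention like the impact_metrics_Q2_2025.csv example above. The exact pattern here is my own assumption, not a published standard:

```python
import re

# Hypothetical naming convention: lowercase_metric_Qx_YYYY.csv/xlsx/json,
# in the spirit of the impact_metrics_Q2_2025.csv example.
FILENAME_PATTERN = re.compile(r"^[a-z0-9_]+_Q[1-4]_\d{4}\.(csv|xlsx|json)$")

def is_machine_ready(filename: str) -> bool:
    """Check that a data-room file follows the metric_Qx_YYYY.ext convention."""
    return FILENAME_PATTERN.match(filename) is not None

print(is_machine_ready("impact_metrics_Q2_2025.csv"))
# → True
print(is_machine_ready("Final pitch deck (v3).pdf"))
# → False
```

A check this simple catches the most common failure mode: valuable evidence locked in files that no automated pipeline can locate or parse.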
The cleaner and more structured the data, the faster it can travel — and the easier it becomes for capital to find, verify, and back it. When data is structured, AI can surface opportunities that once got buried in the admin. And when that intelligence reaches the blue economy, due diligence starts to look less like a gatekeeping exercise and more like a shared map, one where ideas, evidence, and capital finally move at the same speed.
Looking Ahead
The more I see AI creeping into the edges of due diligence, the more it feels like a quiet systems upgrade, not just for investors, but for how early-stage ventures structure themselves.
The reality is that most impact and climate data still lives in silos. One startup’s biodiversity metric doesn’t match another’s. One port’s emissions dataset uses a completely different structure to its neighbour’s. It’s no wonder, among other issues, that capital has struggled to move efficiently across the sector.
If AI is helping to close that gap, even a little, it’s because it forces standardisation. Structured, comparable, machine-readable data is what lets insight travel faster.
In time, that could mean:
Shorter deal cycles, as investors verify claims through automated sources.
More discoverable innovation, as clean data makes hidden ventures visible.
Better capital allocation, as patterns emerge across restoration, mobility, and materials.
When data is cleaner, diligence becomes easier. When diligence becomes easier, capital becomes braver. And when capital becomes braver, ideas that once sat at the margins (ocean repair, low-carbon ports, regenerative aquaculture) start to look like investable futures, not distant ideals.
Hope you enjoyed this post and have a good weekend!
OTI
H