Where we are, now: an investor’s deep dive into the opportunities in AI
In 1947, shortly after the Second World War, Alan Turing, the godfather of modern computing, gave a lecture. “What we want,” he told the crowd, “is a machine that can learn from experience.” Nearly 80 years later, that vision has materialised.
Today, we’re standing at the threshold of what might be one of the most profound shifts in value creation since the industrial revolution. And this time, it’s intelligence itself that’s being industrialised. With frontier models now capable of reasoning, coding, summarising, searching and creating at levels that were unimaginable even five years ago, we suddenly find ourselves with nearly unlimited, on-demand intelligence at our fingertips. Of course, it’s early days. The models will evolve, the capabilities will deepen and the edges will sharpen. But the foundation has been laid.
The question is no longer if AI will transform industries. That debate has been settled. The question is: what will we choose to do with it? As investors, founders and business leaders, working across every sector, we collectively face an extraordinary moment of agency. An opportunity to leverage this world-changing technology to build companies, solve impossible problems and unlock entirely new markets.
And there may be no better place to do it than London. Unlike the US, where sectors are fragmented across the country (politics in D.C., finance in New York, etc.), London is the capital of everything in the UK: finance, professional services, government, defence and the creative industries. It’s also Europe’s centre of gravity for Big Tech. The UK government’s recent pledge to invest directly into AI infrastructure, helping position the country as an ‘AI maker, not an AI taker’, is incredibly encouraging. With the right ambition, coordination and support, this ecosystem has a once-in-a-generation chance to build AI-native companies that can lead on a truly global stage.
After all, the UK was the birthplace of AI — why shouldn’t it be the place where this revolutionary technology comes to realise its full, world-changing potential?
What to expect
1. Application AI
2. A historic moment to found a business
3. Where we are: reasoning era, early agents emerging
4. This time is different – but there are challenges to be overcome
5. So where will the value accrue?
6. The opportunity for the UK
- Application AI
Over the last 18 months, advances in foundation models have driven unprecedented velocity in AI’s commercialisation. We’re long past needing to explain what large language models (LLMs) are; names like ChatGPT, Claude, Gemini and Mistral are part of the everyday lexicon for anyone building or investing in tech.
But the infrastructure layer (where these models live) remains capital-intensive, highly consolidated and largely off-limits to most startups and VCs. The sheer cost of training frontier models (GPT-4 reportedly cost over $100m to train; Gemini cost even more) creates an arms race only a handful of players can realistically compete in.
For almost everyone else, the real opportunity lies in the application layer: embedding these powerful models into specific workflows to solve vertical-specific problems with high return on investment (ROI). It’s here, we think, that we as investors stand to make the most difference.
At Octopus Ventures, most of the AI-oriented start-ups we back are building in this applied layer, where models don’t replace human capability – they give it superpowers. Definely, a start-up in the legal tech space, exemplifies this: their AI platform accelerates drafting, clause analysis and cross-referencing, freeing lawyers from hours of repetitive contract work to expand their client-facing capacity in an industry where time is (literally) money.
Healthcare offers another powerful example. We recently backed Lyrebird Health, a company whose AI scribe is being deployed in clinical consultations to automate medical note taking. In doing so, they’re engaging with one of the major hidden pain points in a clinician’s professional life: clinicians typically spend up to 35% of their time on documentation. On a 40-hour week, that’s roughly 14 hours; automating even half of it translates into around seven hours reclaimed per clinician every week, with patients feeling the benefits.
Elsewhere, Vyntelligence is unlocking entirely new datasets in infrastructure, utilities and field operations – sectors where structured video data has historically been underused. Its AI-powered platform turns short, guided videos from field workers into tagged, analysable data streams, shortening inspection cycles, improving compliance and reducing revisit rates.
And most recently, we backed Altura, an AI-first bid management platform transforming how companies respond to complex tenders. By automating qualification, drafting and insights, Altura helps teams move faster, win more and focus on high-value decisions rather than admin.
None of these are pie-in-the-sky hypotheticals: they’re real businesses, in operation today, which we have backed. And each of them highlights where, exactly, application AI is able to deliver durable, compounding value: in areas where (a) the data involved is very specific to that industry or workflow and requires deep understanding to use effectively, (b) the cost of human labour is high and often wasted on non-core administrative tasks, and (c) small improvements in productivity can generate very large financial or business benefits – holding especially true in high-cost, people-intensive industries.
- A historic moment to found a business
Against the backdrop of this AI revolution, this industrialisation of intelligence, we’re seeing a new generation of founders emerge. These founders operate with extraordinary agency, pace and leverage; at times it seems we’re watching them reinvent company formation itself in real time.
Historically, even the most talented founders had to navigate three fundamental constraints: access to customers and product-market fit, access to capital and access to people. AI is loosening all three. It enables rapid prototyping, customer discovery and outreach; founders who employ AI solutions find themselves building their start-ups in far more capital-efficient ways; and the superpowers application AI gives individuals mean far more can be done by far fewer people. By systematically reducing friction across all three of these historic pain points, AI makes this an exceptionally exciting moment for anyone wishing to found a business.
Product development cycles that used to demand multiple engineers and six-figure budgets are being compressed into days, sometimes hours. Tools like Replit, Lovable and Cursor help individuals or very small teams prototype full-stack applications that would have required months of engineering resourcing and £10,000–£30,000 of burn even just a couple of years ago. In parallel, AI-powered no-code platforms, design assistants and co-pilot tools allow founders to iterate at speed, validate customer demand earlier and conserve precious early capital while still making meaningful technical progress.
But this shift isn’t just about product; it’s about a new operating model for entire companies. We’re seeing early-stage teams run highly automated customer onboarding, AI-augmented customer support, outbound sales sequences driven by AI agents, and automated financial reporting and forecasting (often with single-digit headcounts and Series A metrics). What does this mean? That the founders embracing this new technology are far more capital efficient, able to execute faster and, perhaps most importantly, able to mitigate the early-stage risks that in the past brought so many start-ups low before they found product-market fit. Of course, it’s a two-way street: even as these advances make life easier for individual founders, they foster enhanced competition. After all, these tools are available to anyone with the dynamism to leverage them. That means the sustainable, differentiated product-market fit pioneers need in order to build a world-changing business is no longer the same beast it once was.
It’s also important to note that making the most of these benefits is no longer optional. From an investor’s perspective, we increasingly view AI-native operating leverage as table stakes. Founders who grasp the opportunity these tools offer will simply outcompete on pace, burn and precision. And we expect capital markets to reward that discipline.
This shift isn’t just for early-stage companies. Every company, at every stage, should be actively educating their teams on how to embed AI into their day-to-day work. The opportunity isn’t simply to ‘use AI’ — it’s to continuously reskill organisations to keep pace with developments and embrace the competitive advantage this extraordinary technology offers. Easy enough to say, of course, but how should that reskilling happen? Here are a few resources leadership teams looking to upskill might find useful:
- Free courses: For practical, foundational AI literacy I highly recommend DeepLearning.AI’s AI For Everyone by Andrew Ng (Coursera), which offers a great strategic overview. Google’s free AI Essentials is good for hands-on application, and I recommend the practical content offered by OpenAI Academy.
- YouTube channels for continuous learning: Subscribe to channels like Matt Wolfe for regular digests of new tools and practical demos, Two Minute Papers for approachable breakdowns of cutting-edge research and AI Explained for insightful analyses of major AI trends and breakthroughs.
- Essential newsletters & blogs: Keep your finger on the pulse with newsletters such as Ben’s Bites, TLDR AI and The Rundown AI for concise daily updates.
It’s important to remember that the companies that embrace this learning curve fastest will create disproportionate advantage, not because they simply add AI tools, but because they build AI-native cultures across their entire business.
We’re still very early in this platform shift, but already it’s clear: those who lean in now will be best positioned to compound its benefits over the long arc of this market.
- Where we are: reasoning era, early agents emerging
To frame where we are in this AI journey, it’s helpful to borrow from OpenAI’s own staging of progress towards AGI (Artificial General Intelligence) — not because AGI itself is the immediate focus, but because the framework provides a simple way to map the maturity curve of today’s models.
- Narrow conversational AI (where we’ve been) — think chatbots: models capable of performing single tasks with high competence but limited generalisation (e.g. early GPT-3, basic image generation, text summarisation).
- Reasoning AI (where we are now) — models able to engage in multi-step reasoning, chain-of-thought logic, structured problem solving and code generation. This is where models like GPT-4o, Claude Opus and Gemini 1.5 now sit. For example: planning a multi-city trip, generating software code from a product spec, or analysing legal documents to suggest revisions — tasks that require more than just predicting the next word.
- Agentic AI (where we are beginning to see early experimentation) — AI systems that can autonomously pursue objectives, break problems into subtasks, make decisions and interact across tools, APIs and systems, all with limited human involvement.
- Autonomous AI & superintelligence (far future, not today’s topic) — theoretical stages where AI systems operate with strategic autonomy, cross-domain expertise and self-improvement capabilities far beyond human-level general intelligence.
Today we find ourselves in the reasoning AI phase, with the very earliest forms of agentic AI just starting to emerge at the foremost edge. I’ll leave the forecasting on superintelligence timelines to others — for founders, there is already more than enough complexity and opportunity in the current phase to build against, even if there are important, broader societal questions still to be answered.
Agentic AI has been the subject of huge excitement, but also a ton of hype: autonomous digital workers designed to plan, reason and execute multi-step tasks towards a defined goal with minimal human intervention. True agentic use cases don’t just involve generating an output, but breaking down objectives into subtasks, taking sequential actions, integrating across multiple tools or APIs, dynamically adapting based on feedback and handling exceptions when things don’t go as expected.
For all that excitement, though, real-world deployment is still highly constrained. Most of what we’re seeing in the market today is best described as either AI co-piloting or AI-powered workflows. AI co-pilots are tools that assist humans in completing specific tasks faster and more accurately, but they still require human oversight, decision-making and supervision. This concept of Human-in-the-Loop (HITL) remains critical: AI may handle much of the heavy lifting, but humans are ultimately responsible for steering, reviewing and validating outputs.
Often, what gets described as “agentic” today is better understood as AI-powered workflows, where AI optimises specific steps within a broader process, but humans or traditional systems still orchestrate the end-to-end flow. A simple way to assess this is to ask: Is the AI autonomously running the entire process, or simply improving individual tasks within it?
In simple terms:
- Co-Pilots = AI assists the human by helping complete specific tasks faster, but humans remain in charge of decision-making.
- AI-Powered Workflows = AI optimises individual steps, but humans or systems still control the full workflow.
- Agents = AI autonomously plans, sequences and executes multiple steps across tools or systems to pursue a defined goal, while dynamically adapting as new information arises. Think of it like a Digital Worker with a well-defined job.
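To make that distinction concrete, here is a minimal, illustrative sketch in Python. It is not any particular vendor’s API: call_llm, the tools and the support-ticket scenario are hypothetical stand-ins. The structural difference is the point: in the AI-powered workflow the sequence of steps is fixed in code and a human reviews the output, whereas in the agent loop the model itself decides which tool to call next and when the goal has been met.

```python
# Illustrative sketch only: call_llm and the tools below are hypothetical stubs
# standing in for a real model API and real integrations.

def call_llm(prompt: str) -> str:
    """Stand-in for a model call; a real system would hit an LLM API here."""
    return f"[model output for: {prompt[:40]}...]"

# AI-powered workflow: the steps and their order are fixed in code.
def support_workflow(ticket: str) -> str:
    category = call_llm(f"Classify this support ticket: {ticket}")
    draft = call_llm(f"Draft a reply to a '{category}' ticket: {ticket}")
    return draft  # a human agent reviews and sends the reply (HITL)

# Agent: the model chooses which tool to call next until the goal is met.
TOOLS = {
    "search_kb": lambda query: f"[knowledge-base results for '{query}']",
    "escalate": lambda ticket: f"[ticket escalated: '{ticket}']",
    "send_reply": lambda reply: f"[reply sent: '{reply}']",
}

def support_agent(ticket: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):  # cap the loop so it always terminates
        decision = call_llm(
            f"Goal: resolve this ticket: {ticket}\n"
            f"History so far: {history}\n"
            f"Pick one tool from {list(TOOLS)} and its input, or say DONE."
        )
        if "DONE" in decision:
            break
        # A real agent would parse the model's chosen tool; we fake one step here.
        tool_name, tool_input = "search_kb", ticket
        history.append(TOOLS[tool_name](tool_input))
    return history

if __name__ == "__main__":
    print(support_workflow("My invoice total looks wrong"))
    print(support_agent("My invoice total looks wrong"))
```

That loop is also where the scaling difficulty discussed below comes from: once the model, rather than your code, controls the flow, ambiguity, edge cases and exception handling all have to be managed by the system itself.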
This year may have been heralded as the year of AI agents, but scaling true agentic AI into more complex enterprise environments remains difficult. Sophisticated business functions are full of ambiguity, fragmented data, regulatory constraints, interconnected systems and edge cases — all of which require reasoning, judgment and error tolerance that today’s systems cannot (yet) handle reliably.
Where we are seeing early forms of true agentic AI emerge is mostly within narrow, well-defined and (for now) less complex activities such as:
- Software development agents: take tickets, write code, run tests, handle pull requests.
- Sales prospecting agents: source leads, enrich data, draft outbound sequences.
- Customer support agents: triage tickets, retrieve knowledge, generate responses, escalate where needed.
- Finance ops agents: reconcile transactions, generate reports, assist month-end close.
- Compliance agents: monitor filings, scan contracts, flag emerging regulatory risks.
- This time is different – but there are challenges to be overcome
Beyond the astonishing technological breakthroughs and the breakneck speed of change, there are good reasons AI feels so powerful: structural tailwinds behind it that previous technology waves simply didn’t have. For example:
- Global cloud compute is universally accessible, and with 54% of the world’s population now owning smartphones (according to a 2023 GSMA report), distribution and data capture are near-ubiquitous
- Post-Covid digital transformation has created enormous volumes of structured data and workflow digitisation across every industry
- The global workforce is increasingly AI-aware and already comfortable working alongside software augmentation tools
- A tidal wave of capital is fuelling AI globally, with $455bn in data centre investment in 2024 (around 50% AI-focused), $320bn in Big Tech AI capex planned for 2025, and ~$2.5tn of private markets dry powder — including ~$500bn of venture and growth capital that can directly fund AI-native startups and scale-ups
But despite this momentum, enterprise and institutional adoption still faces real headwinds. Large organisations, banks, insurers, healthcare providers, governments — all move cautiously for good reason. They face open questions around AI safety, data privacy, regulatory compliance, security, hallucination risk, liability exposure and, perhaps most fundamentally, data quality.
Because the uncomfortable truth for most large enterprises is this: their data is (still) a mess.
AI systems are only as strong as the data they ingest. High-quality, structured, clean data makes for a supremely effective AI tool; poor data yields unreliable, brittle outcomes. The “garbage in, garbage out” problem remains one of the biggest practical barriers to scaled enterprise deployment. For many large companies, fixing foundational data architecture has become an urgent priority if they hope to capture AI’s full promise.
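To make the point tangible, here is a hedged sketch, in Python, of the kind of pre-ingestion check an enterprise might run before records ever reach an AI pipeline. The field names, rules and scoring are hypothetical examples, not a prescription; real data-readiness work (lineage, governance, deduplication at scale) goes much further.

```python
# Illustrative sketch: flag obviously unusable records before they reach an
# AI pipeline. The field names and rules are hypothetical examples.
from collections import Counter

REQUIRED_FIELDS = ["customer_id", "contract_text", "effective_date"]

def data_quality_report(records: list[dict]) -> dict:
    issues = Counter()
    seen_ids = set()
    for record in records:
        for field in REQUIRED_FIELDS:
            if not record.get(field):          # missing or empty field
                issues[f"missing:{field}"] += 1
        customer_id = record.get("customer_id")
        if customer_id in seen_ids:            # crude duplicate detection
            issues["duplicate:customer_id"] += 1
        seen_ids.add(customer_id)
    total_issues = sum(issues.values())
    return {
        "records": len(records),
        "issues": dict(issues),
        # rough readiness signal: 1.0 means no flagged issues at all
        "clean_ratio": 1 - min(total_issues / max(len(records), 1), 1),
    }

if __name__ == "__main__":
    sample = [
        {"customer_id": "C1", "contract_text": "Signed MSA", "effective_date": "2024-01-01"},
        {"customer_id": "C1", "contract_text": "", "effective_date": None},
    ]
    print(data_quality_report(sample))
```

Checks like this are trivial to write; the hard part is the organisational work of fixing what they surface, which is exactly where the enablement opportunity below comes in.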
For founders, this unlocks a parallel opportunity: the infrastructure that enables and protects, helping enterprises get AI-ready, still has to be built. This is why we’ve backed companies like ValueBlue, which helps organisations map, clean and govern their enterprise architecture and data flows to prepare for AI integration – work that is now mission-critical for CIOs globally. Similarly, companies like ThreatMark are leveraging AI to continuously adapt to evolving cyber threats, where traditional rules-based systems increasingly fall short.
In spite of the challenges, and the tough economic conditions of the past few years, AI budgets are growing, with the field now a key focus of corporate investment. According to Foundry, roughly half (49%) of organisations have dedicated budgets for AI projects, up from 36% in 2023, allocating an average of 23% of their IT spend to AI initiatives. Almost half (47%) of organisations cite IT integration, governance and security as key hurdles, while 37% struggle with a lack of in-house AI expertise.
In the coming years, the real winners won’t simply be those who build the most sophisticated models, but those who help enterprises operationalise AI. There’s a huge win for start-ups that help simplify deployment, build trust, map ROI, and solve real business pain points inside complex organisational environments. This is where product positioning, customer understanding and enterprise sales — all human skills — will remain absolutely central.
- So where will the value accrue?
While much of the early value in AI has flowed towards the foundational layer (the LLM providers, cloud platforms, and compute infrastructure players), we believe the most investable and scalable venture opportunities over the next decade will sit within the application layer.
As foundational models become increasingly commoditised, with costs falling rapidly (OpenAI’s GPT-4 API costs have fallen ~80% year-on-year), differentiation at the application layer will come from how businesses leverage these models to solve very specific customer problems. In this layer, it’s not a question of who has the biggest model, but who can:
- Embed AI into critical business workflows to address problems that software itself has struggled to solve
- Build deep vertical or domain expertise that general models either can’t replicate or lack the context depth to replicate at the same quality level
- Capture and generate proprietary data loops that reinforce product performance and generate customer lock-in
- Neatly solve for workflow integration, compliance, security and trust inside complex enterprise environments, supporting faster adoption
- Demonstrate clear ROI for buyers, with productivity gains, cost savings or revenue growth directly tied to adoption
In many ways, we’re entering a period much like the SaaS explosion of the 2010s: thousands of niche vertical AI applications will emerge, but very few will scale unless they build real moats. We’ve already seen how, with each Google I/O or OpenAI release, dozens of thin “GPT wrapper” startups (essentially point-and-shoot interfaces built on top of foundation models) can have their core value proposition commoditised overnight. Our own investment approach at Octopus Ventures is increasingly focused on backing companies that go much deeper: embedding AI into highly contextual, domain-specific workflows, leveraging proprietary data advantages, building integration depth and achieving real economic defensibility as general-purpose LLMs continue to improve.
While the application layer may fragment in the short-term, we expect consolidation to follow, favouring those businesses that either integrate deeply into enterprise ecosystems, or specialise so uniquely that incumbents will seek to acquire or partner rather than compete.
- The opportunity for the UK
The promise of agentic AI may not have been fully realised just yet, but we’re closer to Alan Turing’s vision than we’ve ever been – and it’s drawing nearer by the day. LLMs are reaching ever greater levels of refinement and efficiency, and as they do the scope of possibility for solutions in the application layer is widening.
The UK is the birthplace of theoretical AI. It’s also where some of the first and most dramatic movers in the AI space were founded (DeepMind founder and CEO Demis Hassabis cut his teeth in the UK games industry). Symbolic significance aside, there are concrete reasons founders should seriously consider scaling their AI-powered start-up in this country: London remains the largest VC market in Europe, while the UK’s talent pool, stocked by its world-leading universities, is second to none.
The UK has also been taking a global lead on AI governance. The AI Safety Summit in 2023 was just one example of its forward-thinking, innovation-friendly approach to regulation, and the country is building a regulatory framework to match. We’ve seen the recent launch of the London AI Hub, and this month the inaugural London AI Summit, held during London Tech Week.
Here, across Europe and in the US and Asia, too, there remains an enormous, wide-open space, ready to be populated with world-changing AI application businesses. While new start-ups naturally need to be AI native, established software businesses bring an enviable advantage in distribution, customer relationships and embedded workflows. Ultimately, this becomes a race: which companies can move fastest to combine deep AI innovation with meaningful customer scale. Those who emerge at the front will be extremely well positioned.
From our perspective, we’re on the hunt for AI application businesses with paradigm-shifting ambitions. We want to hear from product-obsessed founders building vertical-specific solutions where they have a distribution or domain-specific advantage. The greater the impact, the better. We also continue to see outstanding SaaS companies successfully embedding AI into their existing platforms. As one of the largest software investment teams in Europe, we believe this wave of proven businesses turbocharging their offerings with AI represents one of the biggest opportunities ahead for our industry.
In return, we can offer founders a global network, industry-leading people and talent support, and deep expertise across a range of sectors. We’re excited to see how AI will disrupt all of them: from white-collar work to health, fintech to deep-tech and, critically, climate.
The machines are starting to learn from experience, but we hope founders can still learn from ours – and vice versa. We invest from the very beginning at pre-seed, all the way through to Series B. If you’re looking for a partner who’s all in on the historic opportunity that lies ahead of us all, a smart and long-term funding source, driven by values and equipped with the expertise and resources to build a world-changing team – start a conversation. You can learn more about what we do on our website, or reach me on [email protected].