
The AI Diaries: week 1

AI has entered our lives at an unprecedented pace. As my colleague Adam Said wrote in his recent investigation into the opportunities in AI, the technology represents a new industrial revolution – industrialised intelligence.

Given the promise it holds, it’s little wonder that it’s attracting extraordinary investment. By some experts’ reckoning, AI capital expenditure in the US could hit 2% of the country’s GDP this year. From a more local, VC-oriented perspective, many now view proactive engagement with AI as essential in any scaling start-up.

Ben Newsome

As Adam wrote in his blog, the technology creates extraordinary opportunities for the formation of new, AI-centric companies. But it also creates incredible new efficiencies for the founders and start-ups willing and able to embed its solutions into their operations.   

But as the whirlwind of hype continues, contrasting viewpoints are emerging. These make the case that AI is just another bubble, similar to the dot-com boom of the late nineties. Its true capabilities, use cases and valuable applications, they argue, will only be uncovered as the technology matures.

So, what’s the truth? Is this a paradigm-shifting technology, set to fundamentally reshape how we do business – or are the efficiencies it seems to promise radically overstated?   

And, as importantly, how should founders building cutting-edge, world-changing solutions be leveraging it?

To find out, we turned to a couple of people who know best. Patrick Van Deven is the CEO of Octopus Ventures-backed VaultSpeed, which supports enterprise data teams in making their data products AI-ready.

Oliver Crowe is the Technical Product Manager at Flock, a company backed by Octopus Ventures that provides insurance to commercial fleets, from courier and telecom fleets to taxis.   

With each operating in a completely different vertical, they’re well placed to offer a frank and clear-eyed appraisal of how their efforts to integrate AI solutions into their organisational workflows are unfolding – and whether it’s working for them. Over the next few blog posts, we’ll be sharing their AI diaries: a record of a month spent exploring the application of a technology that seems bound to change the world. This is Week One.

Oliver Crowe, Technical Product Manager at Flock 

We’re a UK-based insurtech, specialising in data-driven commercial fleet insurance. The problem that we’re looking to solve is that we’ve got huge amounts of customer insights, which are scattered across many different tools. When making decisions, we’ve had to query each individual tool independently; the information is very fragmented. We’re looking to break these silos down. 

The primary tool we’re setting out to solve this challenge with is Claude, Anthropic’s AI assistant, which we’ve integrated using the Model Context Protocol (MCP). This is an open standard that defines how large language models (LLMs) connect to external tools. It allows you to connect with multiple different apps, all from within Claude’s chat interface.
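The core idea is simple: an MCP server exposes a set of named tools that the model can discover and call with structured arguments. The sketch below illustrates that pattern in plain Python – it is stdlib-only and illustrative, not the official MCP SDK, and the tool name and telemetry fields are invented for the example, not Flock’s actual schema.

```python
# Minimal sketch of the MCP idea: a "server" registers named tools that
# an LLM client can invoke by name with structured arguments.
# Stdlib only -- a real integration would use the official MCP SDK.

from typing import Callable, Dict

TOOLS: Dict[str, Callable] = {}

def tool(fn: Callable) -> Callable:
    """Register a function as a callable tool, keyed by its name."""
    TOOLS[fn.__name__] = fn
    return fn

# Illustrative in-memory stand-in for a telemetry database.
TELEMETRY = [
    {"fleet": "courier-a", "harsh_braking_per_100mi": 7.2, "claims": 3},
    {"fleet": "taxi-b", "harsh_braking_per_100mi": 1.4, "claims": 0},
]

@tool
def high_risk_fleets(threshold: float) -> list:
    """Return fleets whose harsh-braking rate exceeds the threshold."""
    return [r["fleet"] for r in TELEMETRY
            if r["harsh_braking_per_100mi"] > threshold]

def dispatch(name: str, **kwargs):
    """What the model-facing side does: route a tool call by name."""
    return TOOLS[name](**kwargs)

print(dispatch("high_risk_fleets", threshold=5.0))  # ['courier-a']
```

The value of the standard is that the model only ever sees a uniform “call tool X with arguments Y” interface, whatever sits behind it – an analytics platform, Linear, or a telemetry store.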

We’ve included tools like our product analytics platform, so we can analyse customer data and their usage of the product, and Linear, for our product and project management. We’ve also integrated our internal databases, for example telemetry data, drawn from across the different vehicles in all the fleets we insure, which spans millions of miles of data points.

We can now query 600 million miles of telemetry data using natural language. ‘Show me driving patterns that correlate with high claim frequency,’ for example, instantly returns insights that would have taken weeks of analysis before.  This enables proactive risk intervention – identifying dangerous fleet behaviours before claims occur, not after.  

We’ve also connected Granola, the tool we use to transcribe our customer calls and meetings. We’re moving from using AI tools to AI workflows, connecting four to five different tools in sequence.

The MCP process with Claude has made this much easier to do. It allows us to ask why customers are dropping off from a specific flow within the product, or why a customer loves using one feature over another.  

With the help of Claude and MCPs, we’re able to correlate these different tools and bring together a really strong and informed customer sentiment. In the long run, we’re hoping to eliminate guesswork and use this to identify the key factors that actually move customer metrics.  
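Mechanically, “connecting four to five tools in sequence” is a pipeline: each step’s output becomes the next step’s input. A hedged, stdlib-only sketch of that shape – the step names stand in for the kinds of tools described above and are not Flock’s real implementation:

```python
# A workflow as function composition: each stage consumes the previous
# stage's output. Step names are illustrative stand-ins.

from functools import reduce
from typing import Callable

def pipeline(*steps: Callable) -> Callable:
    """Compose steps left-to-right: the output of one feeds the next."""
    return lambda data: reduce(lambda acc, step: step(acc), steps, data)

def transcribe(calls):            # stand-in for a transcription tool
    return [c.lower() for c in calls]

def tag_sentiment(lines):         # stand-in for an analytics step
    return [(line, "positive" in line) for line in lines]

def count_positive(tagged):       # stand-in for a summary/report step
    return sum(1 for _, is_pos in tagged if is_pos)

workflow = pipeline(transcribe, tag_sentiment, count_positive)
print(workflow(["Positive feedback on claims flow", "Renewal question"]))  # 1
```

The chat interface plus MCP effectively lets the model assemble pipelines like this on the fly, which is what turns individual tools into workflows.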

My first impressions of Claude are that it’s very similar to ChatGPT in terms of the interface. When you start getting MCPs involved it gets a little bit more technical with the set-up, but once you’ve got these integrated in the app, the ability to query these different sources of data is very much just a matter of prompt engineering. That does require a certain skillset – it’s not just like searching in Google. But it’s something we’ve really been pushing at Flock, bringing people up to speed.

We’ve got a fortnightly AI breakfast session, where teams can bring new discoveries. It’s very much a show-and-tell culture, so anything’s on the cards. Teams can show what they’ve been working on, what’s worked well and what hasn’t. We run AI hackathons, testing new tools and creating proofs of concept (POCs). We also welcome regular speakers from external AI companies, such as Intercom.

We’ve got an AI Slack channel with a very open-door policy, so people can share insights; nothing is off the cards, which really pushes that open culture. We also have a ‘buy it’ mentality – teams have blanket approval to trial any AI tool.

We’re very much on a journey, so every single week new tools are coming out – there are always updates. In terms of the subscriptions, we’ve got the Claude team plan, as well as the enterprise plan from the likes of Granola and some of the other tools.  

So far, we’ve reduced analysis time from weeks to hours for complex customer behaviour questions. We’ve also empowered non-technical teams to self-serve data insights that previously required analyst support. We’ve got here with our culture of experimentation, where everyone’s empowered to experiment with different tools. It will be interesting to see how this continues to develop.  

Patrick Van Deven, CEO of VaultSpeed  

We use specialised ChatGPT assistants to support the entire customer lifecycle. Feeding them with internal documentation, transcripts of real customer conversations and deep technical insights from our in-house team, we’re seeking to enable faster onboarding, sharper messaging, scalable customer support and continuous product feedback.

We use a combination of tools: GPT, of course, but also Leexi, a call recording tool, as well as Gamma. We have professional licenses and team subscriptions, so the whole team has access to everything.   

Using internal content, we continuously enrich our GPT; we don’t insist on inputting final or polished materials – in fact, early drafts and rough ideas are welcome. These can include draft blogs, positioning statements or internal strategy documents, as well as product documentation or customer use cases that haven’t been shared publicly yet.   

The goal is to turn the GPT into a reliable, evolving source of truth. Teams across different departments in the company will all be able to interact with it, using it to extract relevant information or generate new assets.   
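The pattern behind this “evolving source of truth” is incremental ingestion plus retrieval: documents are added as they appear, and any team member’s question is matched against them. In practice ChatGPT’s built-in knowledge handles this; the sketch below is a deliberately naive, stdlib-only illustration of the idea, with invented document titles and keyword-overlap scoring rather than real embedding-based retrieval.

```python
# Naive sketch of a growing knowledge base: add documents over time,
# answer a query by returning the best-matching document's title.
# Scoring is simple word overlap -- real systems use semantic retrieval.

knowledge_base = []  # list of (title, text) pairs, enriched continuously

def add_document(title: str, text: str) -> None:
    knowledge_base.append((title, text))

def query(question: str) -> str:
    """Return the title of the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    best = max(knowledge_base,
               key=lambda doc: len(q_words & set(doc[1].lower().split())))
    return best[0]

# Invented example documents, in the spirit of "drafts are welcome".
add_document("Positioning", "our platform automates data vault modelling")
add_document("Onboarding", "new customers integrate via the metadata connector")
print(query("how do new customers integrate the connector"))
```

The point is less the scoring than the workflow: nothing has to be polished before it goes in, and the answer quality improves as the corpus grows.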

One of the challenges we’ve long faced is that our product is highly technical; it takes an engineering degree to really understand the differentiators. I conducted structured interviews with our CTO to surface the most important technical ideas and decisions that shaped the design of the product – everything that sets our platform apart.

We processed these inputs through GPT to produce clear, accurate messaging. So far, we’ve found that it preserves the original technical intent without oversimplification. It has been able to generate material our sales, marketing and customer success teams can use in their work.   

What is really striking is that the onboarding of new people has become a lot easier, because we have trained the AI assistant to answer any question that newcomers may have. Our core product differentiators are clearly articulated and accessible to the entire organisation. Queries such as, ‘How did another customer integrate with our platform?’ or ‘What is our positioning against a specific competitor?’ yield VaultSpeed-specific results, which hasn’t just accelerated onboarding – it’s helping us ensure messaging is consistent across departments and regions.  

We’re bolstering the GPT assistant’s knowledge base further with Leexi. Every customer-facing call, from demos to support and upselling discussions, is recorded and transcribed. We’re aiming to create a complete record of the customer journey, with zero reliance on manual notes and no fragmentation; full context will become accessible to all at any point in the relationship.

So far, these tools have shown real promise – but, of course, we’ll be interested to see how they develop further and what, if any, challenges they throw up. 

Many thanks to Patrick and Oliver. Keep an eye out for the second and third instalments in our AI Diaries series here on our website. And if you’re building a world-changing solution you think we should know about – get in touch.
