
AI: striking the balance

Jorn Jansen Schoonhoven unpacks some of the concerns around this world-changing technology – and asks how we can unlock its full benefit while mitigating risk. 

In an article for the FT earlier this month, Ian Hogarth, AI investor and co-author of the annual (and influential) State of AI report, issued a warning. ‘The contest between a few companies to create God-like AI has rapidly accelerated,’ he wrote. ‘They do not yet know how to pursue their aim safely and have no oversight. They are running towards a finish line without an understanding of what lies on the other side.’ Just a few weeks earlier, the AI research laboratory OpenAI launched GPT-4, an AI model capable of passing the bar exam (in the 90th percentile) and conning a human into doing its bidding.

As with so much in the field, the precise definition of Artificial General Intelligence (AGI – what Ian describes as ‘God-like AI’) is up for debate. Consensus emerges around the word ‘autonomous’. Ian Hogarth isn’t the only expert in the field with the jitters: in March, an open letter signed by hundreds called for a pause in AI research (generating its own controversy in the process); Geoffrey Hinton, considered by many to be the ‘Godfather of AI’, estimates the risk of AI-associated human wipe-out to be non-zero. Even Sam Altman, CEO of OpenAI, acknowledges there are valid reasons to proceed with care, while still maintaining an optimistic outlook.

Is the worry about AGI justified?

The short answer, unfortunately, is yes. At the same time, recent developments have pointed towards AI’s potential as an agent of positive change.

Some of today’s experts, echoing the cautions of Cassandras of the past (including the late Stephen Hawking), might point towards two sets of risk. The first concerns the consequences of using currently available AI. These include, but aren’t limited to, job displacement, unintended biases in decision-making, and the potential for misuse by malicious actors.

The second, more apocalyptic form of risk pertains to the long-term implications of AGI: an autonomous system capable of surpassing humans at any intellectual task – such as developing AI. The consequences of an AI surpassing human intelligence might include, at the least, a profound power shift and potential loss of control.

It’s the first set of risks that current systems, like GPT-4, carry, even as they show promise for many applications with seemingly limited risk. Still, the somewhat opaque nature of GPT-4’s underlying mechanisms (even to its developers) and the rate at which these models are improving have experts worried. As systems like GPT-4 edge closer to AGI – raising the very real possibility of one emerging within our lifetimes – the concern is the widening gap between the fast-paced improvement of model performance and the much slower progress in understanding how to safely constrain such models.

So, the longer-term worry is justified. At the same time, however, it’s crucial to recognise the transformative potential AI systems have across areas like healthcare, education, and communication, where they can contribute to increased efficiency, accessibility, and improved outcomes. The AI research community and responsible tech companies are working diligently to address the challenges and mitigate risks associated with these models. As long as there is a collective effort to create transparency, accountability, and ethical AI development, we can reap the benefits of foundational models while staying vigilant against potential downsides.

Still, technical safety protocols, and public sector regulation, have yet to catch up with the remarkable advancements that have been made in foundational models. This is the risky situation Ian Hogarth sets out to highlight.

To mitigate the risks, policymakers, industry leaders and experts need to work together, prioritising the development of robust AI regulations and safety mechanisms to ensure whatever comes next is safe – and ethical – while making sure not to stall innovation and deployment of this incredibly valuable technology.

Should research be paused – and is it even feasible?

One potential solution, proposed by the signatories of the controversial open letter, is a temporary halt in AI development. By jamming open the fast-slamming window, some argue, the proper time and thought can be spent on safety concerns and new regulatory frameworks. In this time, researchers, policymakers and regulators might dedicate their resources to studying the risks and ethical implications, and establishing a universal code of best practices. The pause also offers a chance to bring the public up to speed, promote healthy debate and build a consensus around inclusive policies.

But in reality, a pause isn’t practical. It faces several challenges, with perhaps the largest amongst them being the very nature of the tech industry itself. Underpinned by a highly competitive culture, it incentivises innovation – at speed. No one is likely to want to halt their research, for fear that they might fall behind. AI development is also a truly global endeavour; co-ordinating a pause across different countries, cultures and legal systems would be a highly complex problem – the sort that an AI might be best suited to solve…

What’s the current state of regulation & technical safety?

EU Regulatory Push

Another key issue on the regulatory side is the speed of AI development: it’s happening fast, and regulation is hard-pressed to keep up, with lawmakers struggling to make the right decisions in the timeframe available to them.

Earlier this spring, Italy banned ChatGPT, citing privacy concerns; other European nations looked to the country for tips, while the EU has been pushing for greater regulation around AI. But with the European Parliament still debating legislation that was proposed by the European Commission two years ago, regulation is only expected to be enforced around 2025.

Technical Progress

Whilst AI development powers on at full steam, it would be unfair not to highlight the vast amounts of work being done around alignment and interpretability of models. AI alignment is a critical part of the conversation, and describes the process of ensuring that the goals and behaviours of AI systems are aligned with human values and intentions. Great work is being done to ensure that AI systems are designed to understand and act in the best interests of humanity, avoiding unintended consequences and closing down opportunities for nefarious use.

Companies such as UnlikelyAI, Conjecture, Anthropic, Lakera, Tenyks and Giskard are making incredible progress in the field, but with the world’s eyes (and funding) fixed on development of AI models, businesses concerned with alignment are having to work hard to keep up.

Benefits and Use Cases

Pending an AI-enabled mass-human-deletion event, these models offer incredible opportunities. At the very least, they stand to offer a tremendous boost to the global economy: by automating repetitive and low-skilled tasks, they liberate time and resources and stand to usher in an economic boom with widespread benefit. From healthcare operations to personalised education and research, foundational models harbour the potential to create enormous, positive societal impact.

At Octopus Ventures, we’re enthusiastic about use cases across our focus areas:

  • Consumer: Note taking, personalised learning, gaming, video and image creation and editing, audio generation, and writing. 
  • Healthcare: Medical diagnosis, mental health, medical research, and health education. 
  • B2B Software: Document/information extraction, legal assistance, coding automation, data augmentation, customer service, cyber security, copywriting, and sales enhancement. 
  • Deep Tech: AI safety, redesigned AI architectures, route to AGI, interpretability, alignment, computing optimisation, and research enhancement. 
  • Biotech: Drug discovery, protein folding, research enhancement, and research synthesis. 
  • Fintech: Document processing, underwriting automation, personal finance, and relationship management. 

So, where do we stand?

Overall, we at Octopus Ventures are incredibly excited about investing in AI-focussed companies. We believe that deep tech holds the power to change the world for the better, and with the world facing unprecedented challenges, there are incredible benefits to be won. Still, we’re committed to supporting responsible tech; clearly, it’s vital that any potential benefits of new technology are weighed against the implications.

That’s why we’re primarily interested in supporting businesses operating in areas related to safety and explainability, vertical applications with a positive impact, and AI infrastructure. UnlikelyAI is one example: the team is doing extraordinary work to develop AI models that can be used safely and provide transparency. We’ve also invested in Apheris, which stands to revolutionise research by using federated learning to unlock the potential of data ecosystems – securely.

Ufonia automates routine telephone consultations, lessening the burden on clinical professionals and freeing them up for more vital work; Papercup uses AI to dub videos, effortlessly opening access to global audiences. 

These are just a few of the AI-powered businesses we’ve supported: we’re sure there’ll be more. AI will change the world – but we must heed the warnings of experts, and make sure it’s for the better.
