If AI isn’t trained on the edge, is it any use?

Recently our Deep Tech pod hosted an evening at Google’s Startups Campus entitled ‘AI on the edge’. We heard from a prize collection of speakers including Eddie Seymour from NVIDIA, Michael Huth from XAIN, Hoi Lam from Google and Gajinder ‘Gadge’ Panesar from our portfolio company, UltraSoC. Each threw their penny’s worth of valuable insight into the AI pot. Our first speaker, Ofri Ben-Porat, co-founder and CEO of Edgify (an Octopus Ventures portfolio company), had a particular way of describing artificial intelligence (AI) on the edge that we thought would be of value here at the less esoteric end of our communications network. In other words, AI for non-geeks. We pulled Ofri aside to get his in-depth view.

AI is ALL about the edge

For clarity, the ‘edge’ in question involves the device on which you’re reading this article. If server farms are at the centre, then individual connected devices, such as the smartphone in your hand, the IoT devices in your kitchen, the self-checkout at your supermarket or your Tesla in the driveway, are out there on the ‘edge’. Any AI is only as clever or useful as what it can actually do out there in the real world, on those edge devices.

Data is the fuel

In the same way that knowledge feeds intelligence, AI’s lifeblood is data, harvested on the edge, out there in the real world. Data informs AI’s learning, and the accuracy of that learning – its practical intelligence – comes down to the quantity and quality of the data gathered. But here’s the problem: the quantities are vast and far too big to be transferred in full from the edge to the central points (the server farms that form the cloud) where they can be distilled into wonderful, accurate, ever-adapting learning. Take Tesla: since it takes 10 hours to upload the data from just one hour’s driving from a single car, we can safely assume the company is only using a fraction of all the data being gathered by its fleet of four-wheeled edge devices. The bottleneck is just too tight. Andy Jassy, CEO of Amazon Web Services (AWS), estimates that using conventional means of transferring data, it would take 26 years to move an exabyte (that’s a billion gigabytes) to the cloud. Companies as big as Walmart produce something like two exabytes a month. No wonder AWS has resorted to transporting data for its biggest customers via truck-sized memory sticks, or ‘snowmobiles’. Yes, road and diesel are the fastest means of transporting the volumes of data currently fuelling the AI revolution. So if data is AI’s lifeblood, it’s being drip-fed.
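To get a feel for the scale, here is a minimal back-of-the-envelope check in Python. The 10 Gbps link speed is our assumption (it is roughly what the 26-year figure implies), not a quoted AWS number.

```python
# Rough sanity check of the '26 years per exabyte' claim.
# Assumption: a sustained 10 Gbps connection running flat out.
EXABYTE_BITS = 1e18 * 8              # one exabyte (a billion gigabytes) in bits
LINK_BITS_PER_SECOND = 10e9          # assumed 10 gigabits per second

seconds = EXABYTE_BITS / LINK_BITS_PER_SECOND
years = seconds / (60 * 60 * 24 * 365)
print(f"~{years:.0f} years")         # prints ~25 years, in line with the estimate
```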

Labelling

There’s another major kink in the AI supply chain: labelling. Much of the usefulness of AI lies in machines’ ability to recognise the objects and situations in their environment. Environments change and most objects, even of the same category, are not identical. Automated supermarket checkouts, for example, need to be able to tell one type of apple from another. A Braeburn apple may retail at 35 pence, whilst a Pink Lady, from the same store, costs 50 pence. If the two are confused by the self-checkout’s camera system, the supermarket could lose a lot of money over time. The problem is that the algorithms that underpin the technology are as naive as newborn babies and need to be fed millions of labelled examples to teach them to ‘see’. So how do these quantities of raw images get labelled? The answer is humans. Intelligence doesn’t just come out of thin air; it has to start with the human brain, and hundreds of thousands of people are employed in call centre-like offices for just this task. For a self-driving car algorithm to be taught the meaning of road signs, or to tell the difference between a child and a dog, hours of footage have to be watched and objects labelled, frame by frame.

Labelling at the edge

Food shoppers can inadvertently do the labelling job themselves by confirming, on a touch screen, the identity of the fruit or vegetable they just placed in front of a self-checkout’s camera. The machine’s algorithms suck up the shoppers’ collective recognition until they can do it for themselves.
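In software terms, each confirmation can be captured as one more labelled example that never leaves the device. The sketch below is purely illustrative – the class and function names are our own, not Edgify’s or any checkout vendor’s API.

```python
from dataclasses import dataclass

@dataclass
class LabelledExample:
    image: bytes   # frame captured by the checkout's camera
    label: str     # the product the shopper confirmed, e.g. "braeburn_apple"

# Hypothetical hook, called when the shopper taps a product on the touch screen.
def on_shopper_confirms(camera_frame: bytes, confirmed_product: str,
                        local_dataset: list) -> None:
    """Each confirmation quietly becomes one more labelled training example,
    stored on the checkout itself rather than shipped off to the cloud."""
    local_dataset.append(LabelledExample(image=camera_frame, label=confirmed_product))
```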

It’s possible to accelerate this process away from the shop floor. Edgify recently shipped in some self-checkout machines and hired a team of people to scan a range of thirty fruits and vegetables, labelling each item as they went. In just three weeks the checkout’s accuracy improved from 60% to 99.8%. Accuracy, for supermarkets and for AI in general, is critical. You might remember this the first time you sit in a self-driving car. For supermarkets it means preventing losses that run into billions of dollars.

However, the laborious business of labelling tightens the bottleneck even further. A McKinsey report from 2018 listed it as the biggest obstacle to AI adoption within industry. The future therefore has to be a world where machines can teach themselves, doing away with the need for the cloud and human labellers altogether.

Learning at the edge

When labelling at the edge is cracked, learning at the edge can take off. All the data is being generated out there on the edge devices, so it makes sense for all that fancy artificial intelligence and machine learning to happen directly on the edge devices too. This brings us to Edgify’s sweet spot, which can be described (you heard it here first) as collaborated learning.

Collaborated learning on the edge

Edgify’s collaborated method means that each edge device trains on its own data and creates a new model. It’s this new model, rather than all the raw data, that is sent off to the central controller (which is itself one of the edge devices). No data has to leave the edge, and those tons of zettabytes (each a thousand of those exabytes) can stay exactly where they are, keeping the gas-guzzling AWS trucks off the road. The controller then combines all the different models from all the edge devices, optimises them into one ‘supermodel’, and shares it back out to the other edge devices.

Every edge device now has the most up-to-date model, and this becomes the new starting point for training, so the next time the edge devices share their models, they only need to share the delta – the change – from the previous iteration.
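For readers who want to see the shape of this, the sketch below is our own illustrative rendering of the idea in Python/NumPy – essentially a federated-averaging-style update – and not Edgify’s actual implementation; the weighting scheme and function names are assumptions.

```python
import numpy as np

def local_update(global_weights: np.ndarray, local_data, train_fn) -> np.ndarray:
    """Each edge device trains on its own data, starting from the shared model,
    and returns only the delta (the change in its weights), never the raw data."""
    new_weights = train_fn(global_weights, local_data)
    return new_weights - global_weights

def combine(global_weights: np.ndarray, deltas, sample_counts) -> np.ndarray:
    """The controller (itself one of the edge devices) averages the deltas,
    weighted by how much data each device saw, into one updated 'supermodel'."""
    total = sum(sample_counts)
    averaged = sum(d * (n / total) for d, n in zip(deltas, sample_counts))
    return global_weights + averaged

# One round of collaborated learning (illustrative only):
#   deltas = [local_update(w, data, train_fn) for data in device_datasets]
#   w = combine(w, deltas, [len(data) for data in device_datasets])
# The updated w is shared back out and becomes every device's new starting point.
```

Because only the model updates travel, the cost of each round is roughly the size of the model, not the size of the data.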

This collaborated training between edge devices is not limited by the type of data being used. It can handle the most complex of data sets, from an array of signals such as image and video, to natural language processing (NLP).

Conclusion

AI at the edge is becoming the only practical way to run and train AI models. When machines learn how to learn, without the need for human brain-power, AI will be set free to reveal its full potential.
