Managing data and security at the edge: UltraSoC’s CTO, Gajinder Panesar, talks us through the challenges.
*This blog first appeared on our deep tech newsletter. If you’d like to hear more from our Deep Tech blog, sign up here: https://octopusventures.com/deep-thoughts-newsletter/*
Gajinder Panesar (better known as ‘Gadge’), is CTO at UltraSoC. Based in Cambridge, UK, UltraSoC creates silicon IP and software that provides intimate visibility and analysis of the operation of hardware and software in any electronic system. Here, Gadge talks to us about the growing complexities of managing data and security at the edge.
Everyone’s talking about the edge and the future of software. What does the future of hardware look like?
Here’s the challenge: hardware is now generating and having to deal with several orders of magnitude more data than ever before. In an edge-and-cloud-based system, this is a poor use of hardware resources, exacerbated by the need to shift larger and larger amounts of data on and off-chip. So it’s a balancing act between how much data is handled at the edge – i.e. on your chip – and how much by a centralised decision-making entity (which may be ‘the cloud’, although it could be any connected device).
I tend not to talk about CPU-centric debug, because it’s the system that matters. It’s not about the core processor. The IP that we provide helps with visibility of what’s going on inside the system. We can also use this visibility to provide a layer of security and safety. We can look for things that should happen that haven’t happened, or things that have happened that shouldn’t happen, and all this observability is done completely in hardware.
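The two rule types described here can be modelled in a few lines. This is an illustrative sketch only, not UltraSoC’s actual IP: a software stand-in for a hardware monitor that tracks deadline rules (expected events that haven’t happened) and forbidden rules (events that shouldn’t happen). All names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class SystemMonitor:
    expected: dict = field(default_factory=dict)   # event -> cycle deadline
    forbidden: set = field(default_factory=set)    # events that must never occur
    seen: set = field(default_factory=set)
    alerts: list = field(default_factory=list)

    def observe(self, event, cycle):
        # Record an event; flag it immediately if it is forbidden.
        self.seen.add(event)
        if event in self.forbidden:
            self.alerts.append((cycle, f"forbidden event: {event}"))

    def tick(self, cycle):
        # Flag expected events whose deadline has passed without being seen.
        for event, deadline in self.expected.items():
            if cycle > deadline and event not in self.seen:
                self.alerts.append((cycle, f"missing event: {event}"))
                self.seen.add(event)  # report each miss only once

# Example: a heartbeat that never arrives, and a debug-port write that should not.
m = SystemMonitor(expected={"heartbeat": 100}, forbidden={"debug_port_write"})
m.observe("debug_port_write", cycle=40)
m.tick(cycle=101)
```

In real silicon these checks run in dedicated logic next to the blocks being watched, so they impose no load on the target software.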
Systems are becoming more and more complicated. Let’s take cars, which now have several hundred chips (SoCs – system on a chip) inside them. A lot of the processing, such as the monitoring of the systems, happens at the edge, locally, in the hardware. That processing is transformed into metadata, some of which is passed up to a central entity or an overseer.
A great example is vehicle-to-vehicle communication. Say there’s congestion: a car by itself cannot know about an obstruction further along the road. But when metadata is passed from the SoCs in the car up to the monitoring software in the cloud, that metadata can trigger information telling other approaching cars to slow down or take an alternative route.
This kind of information comes from hardware-centric monitors from within the SoCs themselves, with some local processing which then gets transferred to an overseer. This then informs the other participant entities (in this case, cars).
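As a rough sketch of that flow, under assumed names and thresholds (none of this is a real protocol): cars send metadata records from their on-chip monitors, and once enough reports cluster on one road segment the overseer issues an advisory for approaching cars.

```python
from collections import defaultdict

class Overseer:
    """Hypothetical central entity that aggregates per-car metadata."""

    def __init__(self, threshold=3):
        self.reports = defaultdict(int)   # road segment -> report count
        self.threshold = threshold

    def receive(self, car_id, segment, anomaly):
        # Ingest metadata produced by a car's on-chip monitors.
        if anomaly == "sudden_braking":
            self.reports[segment] += 1

    def advisories(self):
        # Segments with enough corroborating reports to warn other cars.
        return {seg: "slow down or reroute"
                for seg, n in self.reports.items() if n >= self.threshold}

# Two cars brake hard at the same junction; a third brakes elsewhere.
ov = Overseer(threshold=2)
ov.receive("car-1", "A14-jct3", "sudden_braking")
ov.receive("car-2", "A14-jct3", "sudden_braking")
ov.receive("car-3", "M11-jct9", "sudden_braking")
```

Requiring multiple corroborating reports before broadcasting keeps a single noisy sensor from triggering fleet-wide advisories.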
Tell us more about chip analytics.
Let’s take medical electronics. In an extreme case you might be monitoring the operation of a particular SoC in a medical system. If something fails, you need to capture enough information about the state of the system so that you can do the equivalent of a post-mortem and make sure that failure doesn’t happen again. I’m thinking of things like forensic trace, i.e. the modern-day equivalent of aircraft black boxes. But here, you’ll have them locally inside the SoC, allowing you to understand the behaviour and operation of the system.
The idea is that on-chip analytics software would provide all this at runtime, close to the hardware, so failures can be averted before they become catastrophic.
What are your current challenges?
Running the hardware analytics alongside the software on the target system. Our customers put our silicon IP into their SoCs, creating a non-intrusive monitoring infrastructure. Alongside the hardware monitors, an independent communication path is placed inside the SoC, completely orthogonal to the target communication system, so we can observe and move data about without affecting the behaviour of the target system. It also lets us bolt an analytics subsystem onto the monitoring infrastructure: while we’re smartly monitoring part of the SoC, the data that is generated is consumed by an autonomous analytics subsystem, which we can adapt for different deployments. This is where UltraSoC is now investing a lot of development effort, coming up with the appropriate algorithms.
In the slew of hardware attacks that have recently hit the press, what are the security implications of introducing an intricate hardware infrastructure as above? How do you figure out the vulnerabilities when analytics doesn’t tell us exactly what is happening?
For UltraSoC, there are two aspects to security. One is how we prevent ourselves from becoming a back door to the target system. The second is how we can help augment existing security mechanisms in the target system. For the former, each of our modules has a lock. The key to that lock is handled by a secure enclave, either in the target system or as part of our infrastructure. The secure enclave can perform mutual authentication using a public/private key pair, which then controls the locking and unlocking of access to our monitors.
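A simplified model of that lock-per-monitor scheme is sketched below. The real design described here uses public/private-key mutual authentication inside a secure enclave; in this sketch an HMAC over a random challenge stands in for the signature step so the example stays self-contained, and all names are illustrative.

```python
import hashlib
import hmac
import os

class MonitorLock:
    """Toy challenge-response lock guarding access to a monitor."""

    def __init__(self, enclave_key: bytes):
        self._key = enclave_key       # provisioned into the secure enclave
        self.unlocked = False

    def challenge(self) -> bytes:
        self._nonce = os.urandom(16)  # fresh challenge per unlock attempt
        return self._nonce

    def unlock(self, response: bytes) -> bool:
        expected = hmac.new(self._key, self._nonce, hashlib.sha256).digest()
        self.unlocked = hmac.compare_digest(response, expected)
        return self.unlocked

# A legitimate party holding the enclave key can answer the challenge...
key = b"enclave-provisioned-secret"
lock = MonitorLock(key)
nonce = lock.challenge()
lock.unlock(hmac.new(key, nonce, hashlib.sha256).digest())  # succeeds

# ...while anyone without it cannot.
lock2 = MonitorLock(key)
lock2.challenge()
lock2.unlock(b"wrong")  # fails
```

The fresh nonce per attempt is what stops a captured unlock response from being replayed later, which is the property that keeps the debug infrastructure from becoming a back door.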
For the second part, conventional security is usually done in software; very rarely is it done completely in hardware. But software is always vulnerable to attack: all you need to do is get access to memory somewhere, or change what the CPU executes. What we’re doing is providing information. Say a particular hardware block is trying to access the security engine on an SoC via an abnormal access pattern (maybe to maliciously leak information from the engine): this type of activity can be flagged at the hardware level. Unless the system is designed to let that happen, there’s no way that the software can get access to it.
On top of this, we’re developing a new set of IP blocks right now which will prevent that access from taking place upon detection.
Is there the potential for a man-in-the-middle attack at the hardware layer?
On-chip you have a secure boot process, where the private key is embedded in the system and everything works from that. We’re not passing any secret information in the clear. In that case, the secure enclave is either ours, or our customer provides their own. The enclave is what controls the locking/unlocking of parts of the SoC. We provide the locks and they provide the mechanism to unlock them. We can also augment existing security mechanisms, translating rules into the appropriate configuration for our monitors.
Detection and prevention in the hardware doesn’t replace existing security mechanisms, it augments them. It’s a layer that’s becoming increasingly important.
What is your philosophy on functional safety? How is it different from security?
I know it’s heresy if you’re a safety or security person, but to me, safety and security are the same. At the very minimum they overlap. When I’m driving my autonomous car, security and safety are one and the same thing. In both cases, we’re looking for things that should happen that haven’t happened and things that have happened that shouldn’t happen. Even the simplest of IoT systems will require hardware monitors. They may not be the same ones that we produce today but we can be sure that they will be mandated.
Say you detect a breach and sound the alarm that you’ve identified a vulnerability on the chip, what are the tools to respond to a security threat at the hardware level?
There isn’t a single answer; it’s certainly system-dependent. With autonomous driving, you don’t want to just slam on the brakes when a vulnerability is detected. That’s why there will be this analytics subsystem, autonomously observing and deciding: “I can’t stop this car, but I do need to slow it down.” An intelligent CCTV camera outside your flat, on the other hand, knows that you do want to be told when something changes in the images it detects. You might want it to call the police and sound an alarm. The same sensors might be used in both scenarios, but in the car’s case it needs to draw on a lot more intelligence from across the car to determine what needs to be done.
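The deployment-dependent responses described here amount to a policy lookup: the same detection event maps to graduated action in a car but immediate escalation from a camera. The deployments, threats, and responses below are purely illustrative.

```python
def respond(deployment: str, threat: str) -> str:
    """Pick a response appropriate to the system the threat was detected in."""
    policies = {
        # A vehicle can't simply stop; degrade gracefully instead.
        "autonomous_car": {"breach": "reduce speed, pull over when safe"},
        # A CCTV camera should escalate immediately.
        "cctv_camera": {"breach": "sound alarm, notify police"},
    }
    return policies.get(deployment, {}).get(threat, "log and continue")
```

In practice the car case would not be a static table: the analytics subsystem would weigh inputs from across the vehicle before choosing an action.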
How is the semiconductor landscape changing with current US sanctions on Chinese chip makers and Arm chips?
A very good question. Regardless of sanctions, China has been investing and building its future in semiconductors and complicated SoCs for some time now. At first they were a bit like India, focusing on the back-end/low-end, and only building and executing on what they were told to build. Now, they’re doing it themselves without any reliance on foreign IP – this was definitely in their plans, the sanctions simply accelerated it.
How do the large incumbents (e.g. SoftBank-owned Arm) impact your strategy around the RISC-V architecture?
We have lots of traction with people who use the RISC-V ISA. But as I said earlier, it is not about the core, it’s about the system. That is what we do: provide understanding of the system, because we’re completely independent of any particular processor architecture. The term we use is ‘processor agnostic’.
Our customers like that we’re completely agnostic. They can have something built around Arm CPUs and use our monitoring infrastructure. For the next generation, they may take out the Arm cores and replace them with RISC-V cores, but the monitoring infrastructure stays the same. They don’t have to tear that up and go somewhere else.
In fact the more likely scenario is that they replace only some of the existing cores, creating a hybrid system that traditionally would have been extremely difficult to monitor. But even in this very complex case, our infrastructure can stay the same, and that’s a really powerful capability to have.
How are hardware players like UltraSoC thinking about different business models, compared to software businesses?
As I said earlier, traditionally CPUs were debugged, that IP was never used again, and perhaps the systems worked fine. But now systems are much more complex. Remember, it’s not about the core, but about the system. Observing continuously what’s going on is important. I think in the not-too-distant future there will be ways of monetising the data that these monitors generate, whether it’s processed at the edge by the on-chip analytics subsystem or in other operations centres, maybe in the cloud. These could then cross-correlate the information so that, for example, autonomous cars in California and Zurich share better safety and security information. We’re not far off providing safety and security as a service. We are working with several partners on proofs of concept in this area now.
From lab to commercialisation – any advice for aspiring founders?
You need to have fun while you’re doing it. For sure, you need the idea and the team (there are lots of books telling you this). You need to have a plan on how you’re going to execute it. But once it stops being fun, there’s a problem – even in the dark times!