Security in the IoT Era

In this, our third post on the EETimes series on trends in Embedded in the IoT Era, I’ll be summarizing some of the articles devoted to security. (The other categories the series considers most critical for designers are connectivity and low-power design, and edge intelligence.)

In his article, What’s Driving the Shift from Software to Hardware in IoT Security, Majeed Ahmad notes that, with the tremendous growth of IoT devices and applications, there’s a parallel growth in interest in security. And much of that growth is coming on the hardware side, with technologies that include “secure elements, hardware security modules (HSMs), and physically unclonable function (PUF) capabilities.” Why use the hardware approach rather than rely on software to provide security?

Ahmad quotes Michela Menting of ABI Research, who says:

“Hardware-based security offers better protection from manipulation and interference than its software-based counterpart because it’s more difficult to alter or attack the physical device or data entry points.”

And the ability to withstand attacks becomes ever more important as more IoT devices and applications are literally matters of life and death (someone taking control of an embedded medical device, or of a car) or could cause widespread, serious economic and physical damage if attacked (the electricity grid).

Hardware security encompasses numerous security features and components: true random number generation (TRNG), secure boot mechanisms, secure update, secure debug, cryptographic acceleration, and isolation of sensitive and critical functions in security subsystems. Then there are tamper resistance and tamper detection, protection of secrets, on-the-fly memory encryption, process/function isolation, and run-time integrity protection.
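Of these mechanisms, secure boot is perhaps the easiest to illustrate. Below is a minimal conceptual sketch (all names are hypothetical, and the logic is deliberately simplified): it models the boot-time check as a keyed-digest comparison over the firmware image. Real secure boot instead verifies an asymmetric signature from an immutable boot ROM, typically using a key fused into the hardware.

```python
import hashlib
import hmac

def verify_firmware(firmware: bytes, expected_digest: bytes, key: bytes) -> bool:
    """Conceptual secure-boot check: recompute a keyed digest over the
    firmware image and compare it, in constant time, against a digest
    stored in tamper-protected memory. Boot proceeds only on a match."""
    digest = hmac.new(key, firmware, hashlib.sha256).digest()
    return hmac.compare_digest(digest, expected_digest)

# Toy data standing in for a real device and image.
key = b"device-unique-key"                     # in practice, fused into silicon
firmware = b"\x7fELF...application image..."   # hypothetical firmware blob
trusted = hmac.new(key, firmware, hashlib.sha256).digest()

assert verify_firmware(firmware, trusted, key)                 # untampered image boots
assert not verify_firmware(firmware + b"\x00", trusted, key)   # modified image is rejected
```

The point of the sketch is the trust anchor: the verification key and reference digest live somewhere software cannot alter, which is exactly the guarantee the hardware approaches above provide.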

Increasingly, security IP subsystems are integrated in SoCs. But this approach isn’t viable for all types of applications, and even where it is, there is such a wide range of SoCs (and MCUs) out there, all with different configurations, that integrated security (vs. deployment of discrete security elements) won’t happen overnight.

None of this is to say that software doesn’t matter when it comes to security. For the foreseeable future, it will play a complementary role, working hand in hand with hardware security measures.

Another article in the series addresses an area that’s directly relevant to consumers who are increasingly embracing smart technology in their homes: security systems, thermostats, slow cookers, baby monitors…

In her piece, Sally Ward-Foxton discusses the hardware root of trust (RoT) and asks the question: can we trust AI in safety-critical systems?

The answer is apparently yes, but not 100%, because AI is a black-box solution.

While neural networks are designed for specific applications, the training process produces millions or billions of parameters, without our having much insight into what any given parameter means.

For safety-critical applications, are we comfortable with that level of not knowing how they work?

CoreAVI specializes in systems – defense, aerospace, automotive, industrial – that are higher-end than what we have in our homes. CoreAVI’s Neil Stroud suggests that there are ways to render AI inference “deterministic enough” for those systems and, by extension, for the applications found in smart homes. The burden is in part on those creating the AI algorithms/models.

Another technique is to test-run a particular AI inference many times to find its worst-case execution time, then budget that much time when the inference runs in deployment. This helps make AI a repeatable, predictable component of a safety-critical system. If an inference runs longer than the worst-case time, that would be handled by system-level mitigations, such as watchdogs, that catch long-running parts of the program and take the necessary action, just as for any non-AI parts of the program, Stroud said.
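The measure-then-budget idea can be sketched as follows. This is an illustration only, not CoreAVI’s implementation: the function names and toy workload are invented, and a real watchdog is a hardware or OS mechanism that preempts the overrunning task rather than checking the clock after the fact.

```python
import time

def measure_wcet(infer, sample_inputs, runs_per_input=100):
    """Empirically estimate worst-case execution time (WCET) by timing
    the inference many times over representative inputs and keeping
    the longest observed run."""
    worst = 0.0
    for x in sample_inputs:
        for _ in range(runs_per_input):
            start = time.perf_counter()
            infer(x)
            worst = max(worst, time.perf_counter() - start)
    return worst

def run_with_deadline(infer, x, deadline_s, on_timeout):
    """Watchdog-style check: run the inference, and if it exceeded the
    budget derived from the measured WCET, invoke a mitigation instead
    of trusting the late result."""
    start = time.perf_counter()
    result = infer(x)
    if time.perf_counter() - start > deadline_s:
        return on_timeout()
    return result

# Toy stand-in for a neural-network inference call.
def toy_infer(x):
    return sum(i * i for i in range(x))

wcet = measure_wcet(toy_infer, sample_inputs=[10_000], runs_per_input=20)
budget = wcet * 1.5   # safety margin on top of the observed worst case
value = run_with_deadline(toy_infer, 10_000, budget, on_timeout=lambda: None)
```

Note that an empirically measured worst case is only an estimate; safety-critical practice pads it with a margin, as the `budget` line does, and still keeps the system-level mitigation in place.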

Still, the black-box nature of AI makes certifying safety-critical systems somewhat problematic.

There’s also an article in the series, “Security Proliferation Vexes IoT Supply Chain” by Barbara Jorgensen, that’s worth a read.