A few weeks back, I came across a brief article that piqued my interest. I can’t remember all the details, but it was about some researcher who’s exploring the idea of embedding sensors in humans to help prevent autonomous vehicles from running them over.
Sensors in humans aren’t all that new. They’ve been used in devices like pacemakers for years. And while it’s not a widespread practice, humans can get microchipped much the same way people chip their pets. Still, embedding sensors in a human so that they don’t become roadkill to a self-driving car chock-full of sensors, well, that was something different.
I made a mental note to look up the article, but now I can’t seem to find it. I don’t think I dreamed this story, but my Google searches came up empty when I went hunting for it.
What I did find was a piece by Colin Barnden on EE Times that asked the provocative question: “Are We Prepared for Killer Robo-Drivers?”
My short, personal answer is no, I’m not. But as someone who’s interested in both technology (sensors, AI, robots) and the automotive world, I went along for the ride.
And as someone who’s been watching the news about autonomous vehicles over the years, I know that the idea that they’ll become the norm is always just around the corner. The corner, however, always seems to turn into another corner. So far, purely autonomous vehicles have proved problematic except under perfect conditions: fine weather, the road clear as far as the eye can see, no heedless pedestrians stepping off the curb while texting…
Sooner or later, self-driving vehicles will arrive. And one of the promises has been that when they take to the road, the number of people killed in road-related accidents will drop. (Worldwide, more than 1.3 million people are killed in traffic accidents each year.) But that reduction isn’t going to happen any time soon.
Thus, Barnden’s question(s):
Are we really prepared for killer robo-drivers? Has anyone seriously thought through the legal implications of software killing other road users? How does public opinion of robo-drivers change as the body count rises? How will lawmakers and regulators respond to that change in public opinion? How is justice seen to be done when robo-drivers kill? We don’t know.
Looks like the automotive and legal/political worlds will have a while to work on resolving these issues.
Last month, AAA issued a report entitled “Today’s Vehicle Technology Must Walk So Self-Driving Cars Can Run,” which detailed the results of its annual survey. One of the key findings was that, when it comes to where drivers want research and development money to go, they’d rather see improvements in vehicle safety systems than in self-driving cars.
…Only 22% of people feel manufacturers should focus on developing self-driving vehicles. The majority of drivers (80%) say they want current vehicle safety systems, like automatic emergency braking and lane keeping assistance, to work better and more than half — 58% — said they want these systems in their next vehicle.
One of the problems that automakers will have to solve is that, as more complex technologies are introduced in “regular” vehicles on the way to fully autonomous ones, it takes too long for the human driver to take over from the automation when needed. As of now, it takes about 40 seconds. That’s a long time when life and death are at stake.
Five years ago, some experts were predicting that we’d be seeing autonomous vehicles on the road by this year. Now they’re saying that the realization of this vision is still a decade or two away. Plenty of time to figure out the legalities of the situation. But with that long time horizon, it’s no wonder that some folks are trying to figure out how to help things along by embedding sensors in humans so that the self-driving cars don’t mow them down. Wonder where they’ll find their guinea pigs?