
Autonomous Lifesavers

Photo © scharfsinn86/stock.adobe.com

Autonomous vehicles are saving lives – every day. Silently. We don’t notice. Actually – most of us don’t know. What we do know is that self-driving cars are dangerous. According to the news, they are frequently involved in accidents and kill people. Two very different pictures of the same reality. Which one is right?

Maybe unsurprisingly – both are right, and both tell only part of the story. Confusion. What’s the ‘big’ picture? And why aren’t self-driving cars all over the place, as some of us predicted 5 years ago?

Part of the answer was discussed in The Autonomous Nightmare a few months ago: We don’t want them. We have a hard time accepting that they’re better drivers than we are. Not rational, not factual, mostly emotional. Which is why we so willingly embrace and promote stories about accidents involving autonomous cars – and don’t even try to place them in the bigger picture.

Technology is easy to blame. We know it isn’t perfect, we know it has been involved in many accidents and, not least: It doesn’t fight back. It doesn’t ask what it’s being compared to. Maybe WE should ask that question – or are we reluctant because we might not get the answer we’d like?

Our hearts go out to those involved in any accident, but seriously – are we taking the easy way out and blaming the easiest target instead of looking for the real answer? Leading question, I know, but important nevertheless. Because there are lies, damned lies and statistics. If we just place the blame where it’s most convenient and move on, we learn nothing – and make the statistics (even more) useless.

Case in point: Numbers frequently referred to in this context are 9.1 and 4.1 – allegedly representing accidents per million miles driven with autonomous and human drivers respectively. That doesn’t look good at all – but there is a catch: The numbers are from 2013 and from a statistically small sample (there weren’t all that many autonomous cars in operation at that time). And by the way, where was autonomous technology in 2013, if not in its infancy?
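Just to make the ‘small sample’ point concrete, here’s a minimal sketch (in Python, with made-up crash counts and mileages – NOT the actual 2013 data) of how wide the uncertainty around such a rate gets when the fleet is tiny:

```python
import math

def poisson_rate_ci(crashes: int, million_miles: float, z: float = 1.96):
    """Approximate 95% confidence interval for crashes per million miles.

    Uses a normal approximation to the Poisson distribution: with only a
    handful of crashes the interval is huge, i.e. the estimate is shaky.
    """
    rate = crashes / million_miles
    half_width = z * math.sqrt(crashes) / million_miles
    return rate, max(0.0, rate - half_width), rate + half_width

# Hypothetical inputs for illustration only.
print(poisson_rate_ci(crashes=3, million_miles=0.33))
# -> roughly (9.1, 0.0, 19.4): the 'autonomous' rate could be almost anything

print(poisson_rate_ci(crashes=4100, million_miles=1000))
# -> roughly (4.1, 4.0, 4.2): the 'human' rate is pinned down tightly
```

Same headline rate, wildly different reliability – which is exactly why a 2013 number from a handful of cars proves very little.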

What are the numbers like today? ‘Hard to get at’ is the answer – for a number of reasons, one being the fact that ‘semi-autonomous’ is much more common than fully autonomous. And ‘semi-autonomous’ means that a human is involved, which complicates the picture: Who was driving, the man or the machine? And while it’s a given that the driver will blame the technology, what do the logs say? And if the logs tell a different story than the driver, who should be believed? (See also Hybrid Driver? Gimme a Brake …)
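What ‘checking the logs’ could look like, stripped to the bone: a little Python sketch with an invented log schema (no manufacturer’s actual format is implied) that resolves who held control at the moment of impact:

```python
from dataclasses import dataclass

@dataclass
class LogEvent:
    """One entry in a hypothetical vehicle event log (illustrative schema)."""
    timestamp: float  # seconds since trip start
    controller: str   # "human" or "autopilot"
    event: str        # e.g. "handoff", "warning", "impact"

def controller_at_impact(log: list[LogEvent]) -> str:
    """Return who held control when the impact occurred."""
    current = "human"
    for entry in sorted(log, key=lambda e: e.timestamp):
        if entry.event == "handoff":
            current = entry.controller
        elif entry.event == "impact":
            return current
    return "no impact recorded"

trip = [
    LogEvent(0.0, "autopilot", "handoff"),
    LogEvent(412.7, "autopilot", "warning"),  # take-over request issued
    LogEvent(415.2, "human", "handoff"),      # driver takes the wheel
    LogEvent(417.9, "human", "impact"),
]
print(controller_at_impact(trip))  # -> "human"
```

The data can settle the ‘man or machine’ question in seconds – whether we choose to believe it is another matter entirely.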

The news has plenty of those stories too, and many of them make us angry – but rarely at the driver. Even if the driver failed repeatedly to respond to warnings, we tend to blame the ‘autopilot’ – and the manufacturer. Possibly because that’s where the money is? In any case, no wonder both car-makers and authorities hesitate to release the actual data – and the numbers. The logs may reveal technology weaknesses and driver ‘malfunction’, but the jury – that’s you and me – will most likely still side with the driver. (Which brings up the legal issue: Our legal frameworks need to be aligned with this new reality too – and fast. To be revisited.)

One number that seems indisputable, though, comes from the National Highway Traffic Safety Administration (NHTSA): “Human error accounts for 94% of all accidents.” I’ve seen a similar number for accidents at sea (commercial shipping). There are no reliable statistics available for autonomous vehicles and ships yet, but the obvious conclusion is that we really need to get humans out of the equation. Why am I making the implicit assumption that autonomous ‘pilots’ will be an improvement? Because they’re not human. They’re not tired, not intoxicated, not preoccupied, always experienced, always attentive, don’t eat or drink or talk on the phone or argue with the other passenger, etc. Yes, I even said ‘experienced’ – the autopilot has (or should have) gigabytes of experience under its belt before being ‘let loose’. This is the kind of experience built by more than 10 years of practical use and tons of sensor data.
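A quick back-of-envelope calculation (the 94% is NHTSA’s; every other number here is a hypothetical input) shows why that conclusion is so tempting:

```python
# Back-of-envelope: how much of the accident total is even addressable?
total_accidents = 100_000  # hypothetical annual accident count
human_error_share = 0.94   # NHTSA: human error in 94% of accidents

addressable = total_accidents * human_error_share
print(f"Accidents involving human error: {addressable:,.0f}")

# Even partial elimination of the human-error share dominates the total.
for eliminated in (0.25, 0.50, 0.75):
    remaining = total_accidents - addressable * eliminated
    print(f"Remove {eliminated:.0%} of human error -> "
          f"{remaining:,.0f} accidents remain")
```

Even if autonomous pilots eliminated only a fraction of the human-error share – and added some new failure modes of their own – the net effect would still dwarf today’s technology-caused accidents.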

Illustration © Gorodenkoff/stock.adobe.com

The ‘not human’ advantage continues on other levels – all contributing to safety. You can tell the autopilot to optimize energy use, drive carefully (no yellow lights, no speeding), adjust for wet roads, always keep a safe distance, etc. If you’ve tried to tell yourself or your (human) driver the same, you know it doesn’t work.
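Purely as an illustration – no real vehicle API is implied, and every name here is hypothetical – that ‘telling’ could look like explicit, machine-enforceable parameters rather than good intentions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DrivingPolicy:
    """Hypothetical, machine-enforceable driving preferences.

    Unlike a human driver's resolutions, these apply to every single
    decision - no fatigue, no mood, no 'just this once'.
    """
    max_speed_fraction: float = 1.0     # fraction of the posted limit
    stop_on_yellow: bool = True         # never try to 'beat' the light
    min_following_seconds: float = 3.0  # time gap to the car ahead
    wet_road_speed_factor: float = 0.8  # slow down on wet roads
    optimize_energy: bool = True

cautious = DrivingPolicy(max_speed_fraction=0.95)
print(cautious)
```

A human can promise all of the above; a machine can be configured to never break the promise.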

Bottom line: There have always been and will always be automobile (and shipping) accidents. People are not perfect drivers, things happen, roads aren’t perfect, cars may fail in thousands of ways (many of which may be the driver’s or owner’s fault), etc. Of course there will be some clear-cut cases where the reason is obvious: The driver fell asleep. A wheel fell off. The brakes malfunctioned. The driver – autonomous or human – made a wrong choice. If the latter is the case – if the autopilot was to blame – we’ll see an outcry for better regulations, requirements, testing, laws etc. If the driver made the error, we’ll shrug and go on – as if a ‘bad hair day’ of some sort were suddenly acceptable. Is this OK? Is this how we move forward?

Here’s the thing: We need to evaluate the picture, not just a few pixels. The big question is whether non-human drivers improve the statistics (save lives) or not. On the road, at sea, in the air. 

Given the (real) statistics and the current state of technology, it’s becoming very hard to argue against the autonomous alternative. It may hurt for a while, but lost lives hurt more.  
