What AV Safety Isn't - Part 1

At Retrospect, we have worked to advance AV safety research, inform industry practices, build the technology, and evangelize, all in the name of safe autonomous vehicles.

There are certain ways to present AV safety so that a typical person can form a reasoned position on whether AVs are safe enough.  The average person's understanding is critical because soon most of us will have AVs in development or deployment near enough that a safety opinion will form one way or another, based on the interactions and perceptions we have. We all know the importance of making a good first impression, and establishing the right context for AV safety is going to be key to making a good first impression on the public.

But rather than talk about what AV safety *is,* we will start by very simply explaining what AV safety *isn't.*  Understanding what AV safety isn't matters because it lets those of us who are not AV developers distinguish true evidence of safety from merely clever advertising.

There’s so much to unpack in AV safety that we are going to split the topic into multiple posts - each addressing a handful of key topics. Be sure to follow us on LinkedIn for our latest posts.

Calling out Human Safety Failures

Calling out human safety failures is a very common tactic used to suggest AV safety, but it is not evidence of safety.  It usually starts like this: “94% of all accidents are caused by human error.”

The move here is to cast negativity on a competitor (humans, in this case) and hope that you, as the audience, mistake something negative about human driving for evidence of something positive about the developer’s product (autonomous driving).

The problem with this logic is that casting doubt on one alternative does not provide evidence that another is any better, or any good at all.  A clever comment on a related social media post put it this way: “Would you feel safer if monkeys were driving the cars?”  Of course we wouldn’t.  Every solution has a propensity for failures, and pointing out one solution’s flaws is *not* evidence of another solution’s benefits, such as safety.

Deeper still, the 94% number comes from an NHTSA paper that itself states (emphasis mine):

“Although the critical reason is an important part of the description of events leading up to the crash, it is *not* intended to be interpreted as the cause of the crash nor as the assignment of the fault to the driver, vehicle, or environment.”

As the paper itself points out, not all of those crashes were necessarily caused by humans.  All in all, calling out human error is a misinterpretation of the data wrapped in a logical fallacy.  It is certainly not evidence of any given autonomous vehicle’s safety.

Videos of people taking autonomous rides

Videos of people taking autonomous rides are also very popular at the moment.  These advertisements - I mean, innocent social media posts - are typically one to three minutes long and extol the magic of what turned out to be a very natural, normal-feeling ride in an autonomous vehicle.

It goes without saying that hand-picked, pre-recorded, edited videos should not be confused with a statistically significant or random sample of driving data. So implying that a short video demonstrates something like vehicle safety, which, because incidents are rare, only shows up over extremely high accumulations of miles, is misleading.

Let’s put it in perspective - the U.S. national average for automotive fatalities is roughly one for every 70 million miles driven.  If I asked you whether I was a ‘safe driver’ after having you watch a three-minute video of me driving around, how good an evaluation could you really make? And what if you knew I had the opportunity to hand-pick those three minutes from hours of footage and cut out the rest? That would be even less credible.
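To make the scale mismatch concrete, here is a rough back-of-the-envelope sketch in Python. The 30 mph average speed for a demo clip is my assumption; the fatality rate is simply the figure quoted above.

```python
# Rough scale comparison: miles covered by a short demo clip vs. the
# mileage over which fatality-level statistics even become observable.

FATALITY_RATE_MILES = 70_000_000   # ~1 fatality per 70 million miles (figure quoted above)
CLIP_MINUTES = 3
ASSUMED_AVG_SPEED_MPH = 30         # assumption: a typical mixed city/suburban pace

clip_miles = ASSUMED_AVG_SPEED_MPH * (CLIP_MINUTES / 60)   # ~1.5 miles

print(f"A {CLIP_MINUTES}-minute clip covers about {clip_miles:.1f} miles,")
print(f"roughly 1/{FATALITY_RATE_MILES / clip_miles:,.0f} of the mileage over which "
      f"a single fatality is expected on average.")
```

A three-minute clip covers on the order of a mile and a half - tens of millions of times less exposure than the scale at which fatal-crash statistics become meaningful.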

Also note, videos like these have been published by AV developers for over five years now.  The videos don’t ‘look’ any more or less safe today than they did back then.  I’ll give due credit - I’m sure functionality has drastically improved, and yes, safety has likely improved in that time too.  So the fact that today’s videos don’t look any safer than five-year-old videos tells you that these videos were never representing safety in the first place.  In short - videos of autonomous vehicles in operation are not evidence of safety.

Features labeled “beta”

There’s a trend in technology to deploy unfinished features and add the word “beta” to them.  In the automotive space, the first time I saw this was with Tesla’s beta features - most notably the driver assistance features like Autopilot and Full Self-Driving.  But it’s not just Tesla, and it’s not just automotive.  Apple has started to do this with features on iOS and macOS marked as beta… but shipped directly to customers in non-beta releases.

The game is this - use terms like ‘beta’ to set the end user up to be more accepting of failures, bugs, and underperformance. For those unfamiliar with the traditional definition of “beta” - it is a software term indicating pre-release status: a build expected to have issues, bugs, or failures that will be ironed out before the final release.  Note, there are even “alpha” releases, which carry still more risk of instability or crashes than beta.  I haven’t seen anyone in any space push half-baked features labeled “alpha.” Yet.

Here’s a question to consider:  How can a production release consisting of one or more parts still in ‘beta’ be a production release?

Now moving to a subset of ‘beta’ features:  safety-critical features.  Advanced driver assistance systems (ADAS) have been shown to be commonly misunderstood by people both inside and outside the vehicle, and driver over-reliance on these features has long been documented.  With that amount of risky confusion already in the mix, it is beyond me how anyone could consider such a feature labeled ‘beta’ to be safe.

Number of Miles between Interventions

There was a time - way back in the 2010s, half a decade ago or so - when even people inside the industry thought that the statistic describing “how many autonomous miles per intervention” was a clue to maturity or even safety.  What a reversal since then!

What is it?  An intervention is when a safety driver (the person behind the wheel in charge of monitoring the vehicle driving autonomously) takes manual control back from the autonomous vehicle.  The layperson’s inclination is to take two numbers - the number of miles driven and the number of disengagements - and turn them into a single, not especially meaningful value: miles per disengagement. Notably, California legally requires companies to report disengagements annually, giving the public and media outlets a chance to run with these statistics.

In those old, backward times people would look at this and say, “Oh, Umbrella Corporation is doing better than Cyberdyne Systems because they had fewer disengagements per mile…”  Although the industry has since backed off on the value of this metric, it has taken time for news outlets and other non-insiders to realize how little this number means.

Let me dwell on this one, as it offers an insight not yet mentioned.  Miles can be very different.  One intervention in 100 miles of snow driving should not be compared to one intervention in 100 miles of clear daylight driving.  Nor should it be compared to 100 miles of driving in a sandstorm.  Sure, in all three cases you can say “100 miles per disengagement,” but that’s not telling you anywhere near the full story.
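As a toy illustration (the fleets, conditions, and counts here are made up), here is how the same headline number can hide very different operating contexts:

```python
# Toy illustration: two hypothetical test logs with the same aggregate
# miles-per-disengagement, concealing very different operating conditions.

fleet_a = [  # (condition, miles, disengagements) - made-up numbers
    ("clear daylight", 900, 2),
    ("snow",           100, 8),
]
fleet_b = [
    ("clear daylight", 1000, 10),
]

def miles_per_disengagement(log):
    total_miles = sum(miles for _, miles, _ in log)
    total_disengagements = sum(dis for _, _, dis in log)
    return total_miles / total_disengagements

for name, log in [("Fleet A", fleet_a), ("Fleet B", fleet_b)]:
    print(f"{name}: {miles_per_disengagement(log):.0f} miles/disengagement overall")
    for condition, miles, dis in log:
        print(f"  {condition}: {miles / dis:.1f} miles/disengagement")
```

Both fleets report 100 miles per disengagement overall, yet one of them manages only 12.5 miles per disengagement in snow.  The aggregate number tells you nothing about that.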

But deeper still is a notion that took a few years to get attention, and it relates specifically to AV safety.  When you have a safety driver in the vehicle, their one and only job is to maintain the safety of the vehicle and everything around it.  Safe operation of a test deployment involving a safety driver need not focus on anything else.  And should a safety driver intervene and take control back from an autonomous vehicle, it should be seen as a triumph of your safety processes rather than a ‘failure’ of the autonomous system.

Safety drivers should be overly cautious.  They should err on the side of safety.  This means a proper deployment with safety drivers will likely have many interventions that weren’t absolutely necessary!  And that’s good.  It’s better than good - it’s great!  Why?  Because this is a system under test, and we should always err on the side of caution with a system being tested!

So there’s the full circle of it.  An intervention, once seen as a negative for dragging down the ‘miles per intervention’ statistic, is now a point in favor of a test deployment’s safety case.

More to come

What constitutes AV safety is a complicated subject.  But what *is not* AV safety - especially among the information being put out nowadays - is easier to explain. I want to emphasize that I’m not ‘against’ autonomous vehicles; I’m excited to see them develop.  I've spent the better part of the last ten years bringing them to market through work in the business, product, validation, system design, and software engineering of autonomous vehicles.  It is not only possible to deploy autonomous vehicles safely - it is also possible to show the user base and the public that autonomous vehicles truly are safe, and that is the most critical part.

In the upcoming parts of this series, we’ll talk about things like L4 autonomy disguised as L2, scenario testing, AV driver’s licenses, simulation, and more. Have a concept you’d like us to discuss in the series? Speak up in the comments below or reach out directly.



