Tesla Model 3: A Need for Safer Solutions

Believe it or not, the Tesla Model 3 shown in this recent traffic camera video from Taiwan is actually making the right decision to run directly into the overturned truck.

Contrary to what others have said, that the Tesla needed to be safer or that Tesla’s ADAS lacks some capability, we believe this is the right behavior for a Level 2 ADAS technology like Autopilot, and this view is shared by other automotive functional safety experts.

When we do risk analyses, and do them correctly, we uncover many unexpected risks associated with ADAS, such as false or “ghost” object detections. A ghost detection could cause cars on the highway to suddenly slam on the brakes or swerve without warning, creating many more crash scenarios.

Designers of ADAS systems, including Tesla’s Autopilot designers, avoid creating these new crash scenarios by making sure the sensors detect an object beyond doubt before issuing an emergency braking command. In the video, the Tesla does not appear to react to the man who steps into the lane ahead of it to warn the driver; it is likely that some sensors detected him, but not all. Later, once all of the Tesla’s sensors clearly see the large truck, automated emergency braking does engage and the impact speed is reduced. Thankfully the impact was relatively soft, and the driver reportedly suffered no serious injuries.
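To make this tradeoff concrete, here is a minimal sketch of how an emergency-braking command might be gated on agreement between independent sensors so that a single ghost detection cannot trigger phantom braking. The sensor names, confidence values, and thresholds are our own illustrative assumptions, not Tesla’s actual logic.

```python
# Hypothetical sketch: gate automated emergency braking (AEB) on agreement
# between independent sensors, so a single false ("ghost") detection cannot
# trigger phantom braking. All names and thresholds are illustrative only.

from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # e.g. "camera", "radar"
    confidence: float  # 0.0 .. 1.0, the detector's belief an obstacle is ahead

def should_emergency_brake(detections, min_confidence=0.9, min_agreeing_sensors=2):
    """Issue an AEB command only when enough independent sensors
    confidently report an obstacle in the vehicle's path."""
    agreeing = {d.sensor for d in detections if d.confidence >= min_confidence}
    return len(agreeing) >= min_agreeing_sensors

# A lone radar return (a possible ghost) does not brake:
print(should_emergency_brake([Detection("radar", 0.95)]))    # False
# Camera and radar both confident (e.g. the overturned truck) -> brake:
print(should_emergency_brake([Detection("radar", 0.97),
                              Detection("camera", 0.93)]))   # True
```

The same gate that prevents phantom braking is also what delays braking for objects the sensors struggle to agree on, such as a pedestrian stepping into the lane or an overturned truck, which is exactly the behavior seen in the video.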

So, while the Model 3 made the right, and genuinely safe, decision not to brake until it was convinced of an impending collision, there was nonetheless a series of unsafe decisions leading up to this crash, and we can trace them. Going back, there must have been a loss of information between Tesla’s developers and the owner. According to the Autoblog article, the owner “thought that the car itself would detect the obstacle and automatically brake…”

Because of this gap in information, many people in automotive safety have taken it upon themselves to warn Tesla drivers that Autopilot does not actually drive the car by itself, much like the man in the video trying to warn the driver before it was too late.

What we are excited about at Retrospect is helping teams of autonomous-vehicle developers recognize risks ahead and design safe solutions before it’s too late. We help organizations quickly and correctly design these systems and apply the right analytical techniques to identify the lurking risks in their products. For example, we use System-Theoretic Process Analysis (STPA) to identify potential sources of confusion for traffic participants; a simplified sketch follows below. We will periodically post examples of how these methods identify risk and lead to safe solutions. Subscribe to our newsletter and follow us on our LinkedIn page for our next update on assuring autonomous safety.
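As a small taste of what an STPA-style step looks like, here is a sketch that enumerates the four standard unsafe-control-action types for the emergency-braking control action discussed above. The control action, hazards, and wording are our own illustrative examples, not an actual analysis of Tesla’s Autopilot or output from any STPA tool.

```python
# Illustrative STPA step: enumerating Unsafe Control Actions (UCAs) for one
# control action. The control action, hazards, and wording are examples only.

CONTROL_ACTION = "Emergency braking command from ADAS to brake actuator"

# STPA considers four ways a control action can be unsafe:
UCA_TYPES = [
    "not provided when needed",
    "provided when not needed",
    "provided too early, too late, or out of sequence",
    "stopped too soon or applied too long",
]

# Example UCAs for the braking command, each traceable to a system-level hazard:
unsafe_control_actions = {
    "not provided when needed":
        "AEB is not commanded when an obstacle blocks the lane "
        "(hazard: collision with the obstacle).",
    "provided when not needed":
        "AEB is commanded on a ghost detection in free-flowing traffic "
        "(hazard: rear-end collision from following vehicles).",
    "provided too early, too late, or out of sequence":
        "AEB is commanded only after the obstacle fills every sensor's view "
        "(hazard: impact speed is reduced but the crash is not avoided).",
    "stopped too soon or applied too long":
        "AEB is released before the vehicle has stopped "
        "(hazard: residual collision).",
}

print(CONTROL_ACTION)
for uca_type in UCA_TYPES:
    print(f"- {uca_type}: {unsafe_control_actions[uca_type]}")
```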