Is Safe AI a Lie? | Autonomous Vehicle Applications and Concerns
In recent years, the term “Safe AI” has been used more frequently regarding autonomous vehicle applications. However, one particular use has caused growing confusion and concern in the field of safety certification in autonomy. The implications of getting this wrong are enormous, and we’d like to take the opportunity in this blog to shed some light on this.
We want our readers to have a clear understanding of what safety-certified software is and how to use AI the right way in safety-critical applications, particularly when technical investments or even innocent lives are at risk.
What is Safe AI?
“Safe AI” has no single meaning. There are industry groups focused on “Safe AI.” There is “Explainable AI” or “XAI,” “Trustworthy AI,” “Rigorous AI,” etc. There is a company named SafeAI that helps enable new autonomous solutions in mining and construction.
We applaud and commend all these various organizations and efforts. Using AI software to empower technology improvements in dangerous situations is a great benefit, but none of these is the subject of this blog, which asks: “Is Safe AI a Lie?”
Our focus on “Safe AI” will be in the context of the verification of safety-critical software within industry functional safety standards such as IEC 61508, ISO 26262, and ISO/DIS 19014. First, let’s start with the answer up front, then go into the explanation:
Is Safe AI a Lie?
The Answer: Yes.
Claiming AI software is certifiably safe is a lie.
This will not come as a shock to about half of the readers. These readers have known for a while that AI software cannot be qualified as safety certified.
For the other half of readers, this may come as a shock. Conflicting statements over the years have created the confusion that exists today. But with the industry on the threshold of autonomy commercialization, this ongoing confusion should be a major red flag for development teams and their stakeholders.
How can we clear up the confusion and help come to the right understanding, quickly?
Our plan for detecting and correcting bad information on AI safety is outlined in three points:
Safety is not an intrinsic property of a component or product.
AI can do the safe thing, without bearing any of the safety burden.
The very nature of AI’s value is great for non-safety use cases.
1. Safety is not an intrinsic property
“Intrinsic” is defined as “relating to the essential nature of a thing; inherent.” The density and hardness of steel are intrinsic properties of the material and manufacturing method. Is the part hard enough? You can increase the heat treatment or carbon content until the part is hard enough. You can test it. You can prove it. It exists entirely on its own. Certain components or products have intrinsic properties that make them the right part: their color, weight, etc.
“Safety” can never be an intrinsic property of a component or product. This is a counterintuitive concept, but it is usually one of the first “Ah ha!” or “Eureka!” moments in functional safety training, and it is why functional safety is inseparably linked to rigorous systems engineering processes.
Safety is always contextual. The high-level context, in which personal harm can occur, must be carried down through the various subsystem levels to the atomic elements of the system. The lowest level elements are the final software units and hardware components, and these possess intrinsic properties. They simply are what they are and exist as such. They can be inspected. Their properties are self-evident and provable, just like a particular type of steel. They are only “safe” if their intrinsic properties are suitable for fulfilling the safety context of the higher-level system.
Safety is always contextual. While “safe products” are made up of subsystems of atomic elements, “safe products” also always have accompanying context. Consider elevators. Elevators seem inherently safe today. Unless there’s a fire. That context is clearly written on every single elevator entrance that we would consider “safety certified” anywhere in the world. An elevator is a very dangerous place to be in the event of a fire, because power may be lost, smoke and heat may permeate the small space, and the occupants may be unable to get out of the building. An elevator can have an intrinsic volume and weight, but it can’t have an intrinsic “safety” property.
Over the last decade, auto OEMs, suppliers, and component providers have encouraged universal functional safety practices throughout the industry, but the idea that safety is an intrinsic property has been a very hard idea to correct. Design teams commonly debate what “ASIL” a certain module or microprocessor should be, as if such parts contain varying quantities of safety. They don’t. These debates are inevitably resolved once the context of the requirements is understood and the actual design tradeoffs are considered. Safety is always contextual. Safety is not intrinsic.
Since we know safety is not and cannot be an intrinsic property of a product or component, we should reject any certification concept, for example certain test methodologies, that relies on identifying intrinsic properties. An example would be exhaustive scenario testing. Those tests are great for identifying intrinsic properties of an automobile, such as its 0-60 / 0-100 time, stopping distance, fuel efficiency, and ride comfort. They tell you self-evident information about the vehicle, but they tell you nothing about the robustness or integrity levels of all the various subsystems and components.
You would need to test and inspect the self-evident qualities of the multitude of atomic components which make up the whole vehicle, and assemble all that information into a cohesive, interrelating argument to know what the robustness or integrity level of the vehicle’s behavior is in certain conditions.
This is impossibly expensive to do after development, which is why all safety standards require those activities to take place before and during development. Done then, the work is quite feasible, aids tremendously in the overall automobile development effort, and eases the engineer’s burden of sorting out the “must have” versus “nice to have” tradeoffs. Complete safety requirements provided up front are a very welcome opportunity, and that starts with the autonomous driving developer.
2. AI can do the safe thing without the safety burden
Decomposition is the functional safety concept of distributing the safety burden of a single requirement across multiple subsystems or components. It is often recognized simply as redundancy.
Interestingly, one of the most common decomposition schemes in software architectures does not follow the redundancy principle. A common software decomposition is to split an ASIL D requirement (the highest integrity level) into an ASIL D(D) requirement assigned to one component and a QM(D) requirement assigned to another component, where QM is Quality Managed and outside safety’s scope.
Contrary to the redundancy idea above, distributing the function across different components does nothing to solve the robustness problem, since all of the “integrity level” of the original ASIL D goes to the component with ASIL D(D). The QM(D) component performs the same, identical safety function, but without any of the safety burden. If we can’t claim safety in redundancy, is there any benefit in this?
In software, this ASIL/QM decomposition helps distribute the work across independent process streams. One process focuses exclusively on the basic safety performance and on building up all the safety robustness and evidence, while the other focuses on rapid feature enhancement and design tradeoffs such as quality and maintainability, so long as it does not interfere with the safe software. Separating the two processes is much easier: the “safe software” is developed, contextualized, and documented in one process, while the other process makes rapid changes to all other software, software that is aligned with safety but, above all, never interferes with the “safe software.”
The driving task is extremely complex. However, there is no reason to think that a simple, crude driving task can’t be developed with all the necessary robustness, while a more complex and continuously improving driving task is developed and maintained to meet evolving quality goals and never interferes with or overrides the safe driving task. These quality goals merely align with the safety goals, rather than fulfill the safety goals.
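To make this split concrete, here is a minimal, hypothetical sketch of such an architecture. The names (SafetyEnvelope, AiPlanner, Command) and the thresholds are our own illustrative assumptions, not a reference design from any standard: a simple, fully inspectable safety layer gates the commands of a complex, continuously improving QM-level planner.

```python
# Hypothetical sketch of an ASIL D(D) / QM(D) style software decomposition.
# Names, thresholds, and interfaces are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class Command:
    accel_mps2: float   # requested longitudinal acceleration
    steer_rad: float    # requested steering angle


class SafetyEnvelope:
    """Simple, fully specified, inspectable logic: the 'safe software'.

    Its only job is to never do the wrong thing, e.g. never allow
    acceleration when the measured gap to the nearest object is too small.
    """
    MIN_GAP_M = 5.0          # illustrative threshold
    MAX_BRAKE_MPS2 = -6.0    # illustrative full-braking command

    def enforce(self, proposed: Command, nearest_object_gap_m: float) -> Command:
        if nearest_object_gap_m < self.MIN_GAP_M:
            # Override the planner entirely: crude, but provable.
            return Command(accel_mps2=self.MAX_BRAKE_MPS2, steer_rad=0.0)
        # Otherwise pass the QM planner's command through unchanged.
        return proposed


class AiPlanner:
    """Stand-in for the complex, continuously improving QM(D) component.

    It tries to do the safe (and comfortable, and efficient) thing,
    but it carries none of the safety burden.
    """
    def propose(self, sensor_frame) -> Command:
        # ... a learned policy would run here ...
        return Command(accel_mps2=1.2, steer_rad=0.05)


def control_step(planner: AiPlanner, envelope: SafetyEnvelope,
                 sensor_frame, nearest_object_gap_m: float) -> Command:
    proposed = planner.propose(sensor_frame)
    return envelope.enforce(proposed, nearest_object_gap_m)
```

The point of the sketch is the division of labor: every line of the envelope can be inspected, tested, and contextualized against the safety requirements, while the planner is free to change as often as the training data does, so long as freedom from interference with the envelope is preserved.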
Does AI have to be “safe”? No, not really. AI should still do the safe behavior, but it does not have to be robust, inspected, and contextualized the way the safety-certified software does. AI will be continuously improving and building upon prior training data as data is constantly collected and the desired performance is further refined.
The fact that AI still has to do the safe behavior with some degree of consistency is, in this author’s opinion, why so many groups are working on these problems under the name “Safe AI,” which is a great contribution to the field of AI. The difficulty arises when attempts are made to use AI methods to also fulfill the contextualization necessary to meet safety’s robustness requirements. Such attempts tend to define safety away and effectively rip holes in well-trusted safety principles, by using methods that do not take proper care to avoid intrinsic risks in component selection and design, particularly when affordable design alternatives exist.
3. AI is great for non-safety use cases
The tremendous value of AI is in the discovery process within enormous amounts of data that are too voluminous for people to review and make sense of. Writing requirements for 20 million pixel signals in order to detect 100 trillion potential pedestrian images 30 times a second is likely not fathomable any time soon, if ever. AI has already achieved impressive performance in this area and will continue to be the solution to these types of problems for the foreseeable future.
Whenever the desired performance is not completely understood or fully articulated, and the number of input-to-output relationships is very high, then AI software is probably a candidate. These are use cases that try to attain a performance with a very high payout in an environment where no other method could achieve the same performance.
If the desired performance involves a lower order of input-to-output relationships, is fully known, and is a fairly simple, one-sided behavior, such as “avoid contact with all objects” or “detect all objects at a 40 m range,” then AI may not be the right fit. Even if no other technology readily exists today, AI technology will be out-competed in a race to the bottom on simplicity and repeatability for this non-differentiating functionality. This is especially true when the value or reward lies in never doing the wrong thing, rather than in striving to do the right thing. You don’t get a million bucks simply for not colliding into things.
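To illustrate how one-sided such requirements are, consider a hypothetical rule like “never allow acceleration when an object is within the required 40 m range.” The function and margin below are invented for illustration, but the shape of the problem is the point: a few lines of fully specified logic whose safety-relevant input space can be swept nearly exhaustively, which is exactly the kind of inspection an AI model does not admit.

```python
# Hypothetical one-sided requirement, fully specified and exhaustively checkable.
# DETECTION_RANGE_M and SAFETY_MARGIN_M are illustrative values, not from any standard.

DETECTION_RANGE_M = 40.0   # the assumed requirement: react to anything within 40 m
SAFETY_MARGIN_M = 2.0      # design margin beyond the required range


def may_accelerate(object_range_m: float) -> bool:
    """Allow acceleration only when the nearest object is beyond range plus margin."""
    return object_range_m >= DETECTION_RANGE_M + SAFETY_MARGIN_M


def check_one_sided_requirement() -> None:
    # Sweep the safety-relevant input space in 1 cm steps: inside 40 m,
    # the answer must always be "no". Beyond 40 m is a quality question.
    step_m = 0.01
    r = 0.0
    while r < DETECTION_RANGE_M:
        assert not may_accelerate(r), f"requirement violated at {r:.2f} m"
        r += step_m
    print("one-sided requirement holds over the swept range")


if __name__ == "__main__":
    check_one_sided_requirement()
```

Note that the check only verifies the one-sided property (never saying “yes” inside the required range); what the software does beyond that range is a quality tradeoff, not a safety one.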
This examination of AI is best summarized by a quote that, upon researching its source, turns out to be attributable to many reputable authorities over the past 100 years. It is essentially this:
Amateurs work until they get it right. Professionals work until they can’t get it wrong.
AI is developed until it gets it right. Safety-certified software is developed until it can’t get it wrong.
AI is always attempting to deliver a level of performance that has never been achieved before. This author does not belittle AI when comparing it to an amateur, because AI is doing things that no other method can do. But AI does not guarantee success, precisely because it is intended to perform in such exceptional ways. If the focus were on guaranteeing a particular outcome, AI’s performance would be limited and, correspondingly, its value reduced. Furthermore, the human effort of contextualizing those guarantees in AI would amount to traditional software development. That’s a terrible use case for AI.
Many of the “Safe AI” efforts mentioned at the top of this blog are very similar to traditional quality control methods in software. These methods help balance competing tradeoffs in AI and control the data and training processes in order to guarantee the achievement of some minimum level of performance, all while continuously improving and building on the core AI architecture.
One litmus test we’ll equip our readers with is this: the next time you hear of some method to ensure AI software safety, ask yourself how similar that method is to the quality control applied to hand-coded software today. Would the hand-coded software pass the “Safe AI” method?
Thinking about traditional developers: when asked for an inspection of the software properties, do developers typically allow anyone from safety to read their latest code, or is access pretty tightly controlled? Sometimes developers will say, “Oh sure, yeah the software does that. It’s safe. I could show you but it’s pretty complicated. We don’t need to go through all that. Trust me, that’s what it does and it’s safe.” When this author hears such statements, it is clear this software is not safety-certified software. It must be inspected. It must be tested for intrinsic properties, and those properties need to be contextualized to the safety requirements.
AI, of course, is by definition a big black box of impossible-to-inspect code. If someone came up with a method to claim that AI was safe through exhaustive testing, then that method could, theoretically, also work to “certify” poorly-written or even well-written hand code that no one ever looks at. But this could never replace the safety assurance of inspecting traditional software and ensuring freedom from interference that is well established today. And let’s not forget our very first rule: safety cannot be an intrinsic property of some component, AI or otherwise.
In conclusion
We should all be aware that many, many people are talking about “AI Safety,” and for good reason. These people are not liars! Listen carefully, and chances are the topic of “AI Safety” being discussed is a good one. But not everyone hears the same message, and sometimes the audience can come away thinking “AI Safety” means there is a way to certify that AI is safe.
As you listen, if fundamental mistakes are being made and safety definitions are being ignored, such as trying to define safety outside the context of requirements, assuming safety is an intrinsic property of a product that can be inferred, or describing a safety use case that is poorly suited to the true power of AI, then please use some of the information we’ve provided here to help detect the underlying errors and, if possible, to constructively correct them.
And remember our three-step plan for detecting and correcting bad information on AI safety:
Safety is not an intrinsic property of a component or product.
AI can do the safe thing, without bearing any of the safety burden.
The very nature of AI’s value is great for non-safety use cases.
It’s far too late for stakeholders in autonomy to tolerate lingering, existential questions such as whether their core technology is certifiably safe. If you found the information provided here useful and would like to discuss any of these topics further, we would welcome the opportunity; you may check out our Contact Us form or Initial Consultation page. You can even leave comments below for others. Thank you for helping make truly safe autonomous products a commercial success.