Improving Safety Standards for AVs

Article By : Junko Yoshida

There are many examples of vehicles with “perfect software and hardware compliant with ISO 26262” failing due to performance limitations of sensors or systems, unexpected changes in the road environment, or misuse by the driver.

PARIS — The developers of the functional automotive safety standard ISO 26262 aren’t resting on their laurels. They’ve embarked on the creation of a separate standard (ISO 21448), described as “Safety of the Intended Functionality (SOTIF).”

ISO 21448 is said to complement ISO 26262, picking up where ISO 26262 leaves off.

The group was motivated to develop SOTIF to avoid unreasonable risks in ADAS and autonomous vehicles (AVs) that might encounter trouble on the road, even in the absence of hardware or software malfunctions.

Indeed, even ADAS and AV systems otherwise deemed safe — because their hardware complies with ISO 26262 and their software is bug-free — could still fail in some instances. “We don’t think ISO 26262 is enough” to guarantee safety, Riccardo Mariani, Intel Fellow and chief functional safety technologist, told EE Times. The industry has seen the “Uber accident and other events in which autonomous-driving technologies were misbehaving.”

There are many examples of vehicles with “perfect software and hardware compliant with ISO 26262” failing due to performance limitations of sensors or systems, unexpected changes in the road environment, or the foreseeable misuse of the functions by the driver. Or, simply, machine-learning algorithms might not interpret reality correctly.

EE Times talked to several experts in the automotive field about SOTIF. They all agreed that SOTIF is indeed a “very ambitious standard.” They also observed that it has been a bumpy road for the two camps in the AV industry — traditional car OEMs/Tier Ones and disrupters (tech companies new to the autonomous vehicle market) — to come together and hash out exactly what SOTIF should cover and how it should grow.

The effort to develop SOTIF was revived last November at an ISO “pre-meeting” in Pisa, Italy, hosted by Intel. The group, a mix of national representatives and various industry players in the automotive technology field, meets again in Shanghai next week.

A host of additional input — on subjects ranging from “use cases” and “definitions” to “requirements of AI” and “HD maps” — has been sent to the group prior to the China meeting. The group will review all of this information and define a scope for the standard. “The goal is to accept no new input after China,” explained Mariani.

SOTIF isn’t ‘prescriptive’

Automotive experts consider SOTIF a hot topic — important but also fraught with challenges.

Michael Wagner, co-founder and CEO at Edge Case Research, sees SOTIF as an important development for assessing the “safety risks [of autonomous vehicles] when no components fail.”

Where SOTIF can get tricky, though, is when the group attempts to define “unknown, unsafe” scenarios. Such a debate can devolve into a cycle of theory and philosophy.

“As a practitioner of safety, not as a philosopher,” said Wagner, “we must understand where the risks exist, prioritize risks, and develop a plan to mitigate risks and verify the plan.”

Compare SOTIF with UL electrical safety certifications, noted Wagner. “UL is very prescriptive.” UL safety certification was created to assess product compliance with recognized requirements. UL can tell you that if you do this, the following (bad) things could happen. Therefore, don’t do that.

Put simply, the challenge for SOTIF, said Wagner, is the industry’s dearth of experience in autonomous driving. Making it even tougher, the group must also chase rapidly changing technologies.

‘You don’t know what you don’t know’

Asked about SOTIF, Phil Magney, founder and principal at VSI Labs, pointed out, “The saying ‘you don’t know what you don’t know’ applies to active safety systems and automated driving features. This is the premise behind SOTIF. It is all about identifying the unknown and unsafe areas of operation and containing them to an acceptable level of risk.”

The Known/Unknown and Safe/Unsafe scenario categories (Source: SOTIF)

The chances of ADAS and AVs being exposed to unsafe situations are real, even when hardware and software are operating correctly.

Magney said that some examples include:

  • The inability of an AI-based system to comprehend the situation and operate safely — i.e., the algorithm was not diverse enough to account for operating conditions
  • Insufficient robustness of the function within the existing sensor configuration — i.e., not enough sensors to handle diverse operating conditions
  • Poor HMI, leading to driver misuse of the automated function — i.e., a driver not paying attention to warnings or advisories

SOTIF is “a framework for identifying hazardous conditions and a method for verifying and validating the behavior until there is an acceptable level of risk,” Magney said. But he also acknowledged that it’s hard “to reduce the area of the unknown, unsafe conditions.” After all, “like edge cases, for example,” he said, “it is extremely difficult to say you have accounted for all edge cases. SOTIF specifically calls out the need for simulations for practical reasons.”
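Magney’s description, identifying hazardous conditions and then verifying and validating until residual risk is acceptable, can be pictured as an iterative loop over the four scenario quadrants shown above. The following is a toy illustration of that concept only, not anything from the draft standard; the class names, the discovery/mitigation steps, and the 5% acceptance threshold are all invented for the sketch:

```python
# Toy sketch of the SOTIF idea: scenarios fall into four quadrants
# (known/unknown crossed with safe/unsafe), and each verification cycle
# tries to shrink the unsafe areas until residual risk is acceptable.
# All names and thresholds here are illustrative, not from ISO 21448.

from dataclasses import dataclass


@dataclass
class Scenario:
    name: str
    known: bool   # has the scenario been identified?
    safe: bool    # does the system handle it safely?


def quadrant(s: Scenario) -> str:
    """Classify a scenario into one of the four SOTIF areas."""
    k = "known" if s.known else "unknown"
    u = "safe" if s.safe else "unsafe"
    return f"{k}/{u}"


def residual_unsafe_fraction(scenarios) -> float:
    """Fraction of scenarios still unsafe: a stand-in risk metric."""
    unsafe = [s for s in scenarios if not s.safe]
    return len(unsafe) / len(scenarios)


def sotif_cycle(scenarios, mitigate, acceptable=0.05, max_rounds=10):
    """Iterate discovery and mitigation until residual risk is low enough."""
    for _ in range(max_rounds):
        # Discovery (simulation, road testing) turns unknowns into knowns.
        for s in scenarios:
            s.known = True
        # Mitigation (functional improvement) fixes known-unsafe cases.
        for s in scenarios:
            if s.known and not s.safe:
                s.safe = mitigate(s)
        if residual_unsafe_fraction(scenarios) <= acceptable:
            break
    return scenarios
```

The point of the loop matches Magney’s caveat: discovery can only ever move scenarios out of the “unknown” column one at a time, which is why it is so hard to claim all edge cases have been accounted for.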

Close vote

Kurt Shuler, vice president of marketing at Arteris, said that it was a “close vote” at the ISO 26262 meeting when the group decided to develop SOTIF as a separate standard. Skeptics questioned the need, he noted. Citing “known unknowns” and “unknown unknowns,” Shuler acknowledged, “We are getting into the realm of Donald Rumsfeld,” the former United States Secretary of Defense.

Calling SOTIF a framework, Shuler explained that SOTIF provides “a way to think about safety” and to “suss out” how certain things could hurt autonomous driving.

In his view, SOTIF is a starting point. “Is SOTIF useful? Yes. Is SOTIF necessary? Yes. But is it sufficient? No.”

Pending issues of SOTIF

As Shuler noted, industry groups like those behind ISO 26262 and SOTIF include delegates from various geographies and engineers from companies with diverse automotive experience. Some have done pioneering work on automotive ECUs. Some are disrupters like Tesla and Waymo, barging afresh into automotive. Sitting in the middle are traditional car OEMs and Tier Ones from countries accustomed to a slower, highly regulated environment, he explained.

The art of managing standards groups with diverse participants is to find common ground. The group developing SOTIF, for example, has already managed a few critical decisions, Mariani noted.

First, they decided to make SOTIF not part of ISO 26262 but to move it to a separate standard as ISO 21448.

Second, although the group split on whether SOTIF should cover only L2 cars (preferred by traditional OEMs) or should include L3, L4, and L5 cars (heavily lobbied by tech companies), the group opted to include L3 through L5 cars.

Third, there has been hot debate over whether SOTIF should apply only to final AV systems in production. Some advocates insisted that AV test vehicles already on public roads should be exempt from SOTIF. The OEMs’ fear was palpable: many test vehicles are not yet fully industrialized, and their technologies are certainly not aligned with an unfinished standard. The group compromised by applying only “part of ISO 21448” to test vehicles.

Cybersecurity will be outside the scope of SOTIF (Source:

What about machine learning?

Then there’s the big issue of machine learning. The topic is currently being handled in an annex of the SOTIF proposal.

As the SOTIF draft says:

Autonomous vehicle technology typically involves some type of machine learning, especially for object detection and classification. Machine learning training has the potential to introduce systematic faults. As this process can be of critical importance to the safe operation of the vehicle, this can lead to the need for the data collection and learning system to be developed according to safety standards, with attention given to reducing hazards such as unintended bias or distortion in the collected data.

Algorithms aside, the make-or-break factor in AI is the database from which it learns. Wagner cited the example of an open-source neural network applied to an autonomous vehicle. The AI system learned from video collected by a dash cam installed in a car. The “student vehicle” ended up mowing down every green-vested construction worker that it saw (or apparently did not see). The AI car’s instructional database, Wagner said, did not include any sightings of day-glo green vests, so the machine did not recognize the workers as human.
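The kind of gap Wagner describes can, in principle, be caught before training by auditing label coverage. The sketch below is a hypothetical illustration of such an audit; the label names, the `coverage_report` helper, and the threshold are invented, and real perception datasets are audited with far richer attribute taxonomies:

```python
# Hypothetical pre-training audit: flag object classes or appearance
# attributes that are absent or rare in a labeled dataset -- the kind of
# gap (no day-glo vests) behind the failure described above.
# All label names and the threshold are invented for illustration.

from collections import Counter


def coverage_report(labels, required, min_count=50):
    """Return required attributes whose example count is below min_count."""
    counts = Counter(labels)
    return {attr: counts.get(attr, 0)
            for attr in required
            if counts.get(attr, 0) < min_count}


# Labels as they might be drawn from dash-cam annotations.
labels = (["pedestrian/plain-clothes"] * 900
          + ["pedestrian/rain-gear"] * 120)  # no hi-vis vests at all

required = ["pedestrian/plain-clothes",
            "pedestrian/rain-gear",
            "pedestrian/hi-vis-vest"]

gaps = coverage_report(labels, required)
# gaps == {"pedestrian/hi-vis-vest": 0}: the dataset cannot teach the
# network to recognize hi-vis workers, so the gap must be filled before
# training -- the systematic fault the SOTIF annex warns about.
```

This is exactly the “unintended bias or distortion in the collected data” that the annex quoted above says the data-collection and learning process must guard against.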

Machine learning is the elephant on the loveseat in any discussion of safety in autonomous driving. The problem is that the non-deterministic nature of machine learning can produce different results, Shuler noted. The industry isn’t quite sure how to deal with it.

As Mariani explained, “Some members [in SOTIF] want to discuss machine learning at later dates, while others wanted to discuss safety ‘with an elephant present.’” More specifically, some group members insist that machine learning is still too new and too early for standardization. Rigid guidelines could eventually hinder innovation, they say. On the other hand, there are those who want immediate guidelines for verification and validation of AI-driven vehicles.

While there is a big divide between the two camps, “All agreed that it is necessary to use AI as a safety monitor to detect any anomaly,” added Mariani.

For now, the group has had only informal discussions — with no decisions — on how to handle safety-critical AI.

Mariani said that it is imperative for the SOTIF group to collect incidents in which AI broke the [autonomous] system. A database that details and archives all such AI-driven vehicle incidents should be shared among AV developers.

Within the group, however, there are those who are presumably way ahead of others in autonomous-driving experience and would rather keep such juicy data to themselves.

Rather than dealing with this impasse, Mariani said that SOTIF may want a separate standard for AI. One option includes working on the AI issue in collaboration with an independent entity like Partnership on AI.

Mariani, just appointed as the IEEE Computer Society vice president for Standards Activities for 2019, added, “The IEEE Computer Society is also discussing some new standards on AI for autonomous driving, but it is still to be decided.”

Speaking of other elements of SOTIF, Mariani pointed out that although verifying and validating highly automated vehicles will need many hours of testing, “infinite testing is impossible.” The [AV] community needs to agree that the new standard must embrace a combination of “testing” and “a formal approach” — a set of rules that vehicles must obey. One of the first such proposals is responsibility-sensitive safety (RSS), put forward by Intel/Mobileye, said Mariani.
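To make the “formal approach” concrete: RSS, as published by Intel/Mobileye (Shalev-Shwartz et al.), defines among other rules a minimum safe longitudinal following distance. The formula below follows that publication; the parameter values chosen here are purely illustrative, not normative:

```python
# Minimum safe longitudinal following distance under RSS, per the formula
# published by Intel/Mobileye (Shalev-Shwartz et al.): the rear car may
# accelerate at up to a_accel_max during its response time rho, then must
# be able to stop braking at only a_brake_min while the front car brakes
# as hard as a_brake_max. Parameter values below are illustrative.

def rss_min_distance(v_rear, v_front, rho=1.0,
                     a_accel_max=3.0, a_brake_min=4.0, a_brake_max=8.0):
    """Speeds in m/s, accelerations in m/s^2, rho in seconds."""
    v_rho = v_rear + rho * a_accel_max   # rear speed after response time
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_rho ** 2 / (2 * a_brake_min)
         - v_front ** 2 / (2 * a_brake_max))
    return max(d, 0.0)

# Two cars both at 30 m/s (~108 km/h): with these conservative
# parameters the rear car must keep a gap of about 111 m.
gap = rss_min_distance(30.0, 30.0)  # -> 111.375
```

Because such a rule is deterministic and checkable, it complements finite testing: a vehicle can be verified to obey the rule in every situation, rather than tested against an endless list of scenarios.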

Overview of safety-relevant topics addressed by different ISO standards (Source: ISO 21448)

Know the limitations

The bottom line for SOTIF is that the industry must know the limitations of ADAS and AV systems with respect to vehicles’ understanding of reality. “A combination of sensors,” for example, could result in AV misbehavior, even if neither software nor hardware failed.

Asked what he expects SOTIF to do, Edge Case Research’s Wagner said, “First, establish a way to actively manage uncertainty. You have to watch carefully how the system is being used and how technology is being applied. Second, I want SOTIF to focus on perception features.”

In his opinion, “Of course, you can talk about driving behavior, discuss how it should follow common sense and how accurately low-level actuators must function. But none of these can mitigate risks if perception is blinded.”

VSI Labs’ Magney believes that SOTIF, like functional safety, is an expensive proposition. “Practicing functional safety to achieve an ASIL D is an order of magnitude more expensive than ASIL B. Similarly, practicing SOTIF to contain most unknown unsafe conditions for a robo-taxi would be tedious and expensive.”

SOTIF could become an effective standard. But the SOTIF process could also present “a high barrier to adoption,” Wagner said.

Nonetheless, SOTIF is “about best practices,” Magney noted. “Why would you not want to practice state of the art when it comes to safety? This also lends itself to simulation and calls out the importance of testing under the most diverse matrix of operating conditions and scenarios.”

Mariani is one of the authors of the ISO 26262 standard and particularly of the ISO 26262-11 chapter about application of ISO 26262 to semiconductor technologies.

A long-time participant in ISO 26262 and now in ISO 21448, Mariani can testify that the standards group is not unfamiliar with tension among members. Some want to keep secret what they have learned in autonomous driving. Others are eager to share. But Mariani is confident that the group can establish common ground at its upcoming meeting in Shanghai.

Asked which specific topics would be covered at the Shanghai meeting, Mariani listed:

  • Functional improvements to reduce SOTIF risk
  • Definition of a SOTIF “metric”/acceptance criteria
  • Verification and validation (V&V) strategy in SOTIF
  • Use of fuzzing for V&V
  • Guidance on simulation and scenario-based testing
  • SOTIF extensions for machine learning
  • SOTIF impact of off-line creation of HD maps
  • Qualification of the simulated environment
