Uber Accident: Preliminary NTSB Report Released

Article by: Junko Yoshida, EE Times

The National Transportation Safety Board highlights issues with Uber's autonomous program

The National Transportation Safety Board’s preliminary report on Uber’s fatal crash in Arizona gave us a few insights, and surprises, about what went wrong with Uber’s AV.

Least surprising was Uber’s decision to disable Volvo’s factory-equipped ADAS features, including the Automatic Emergency Brake (AEB). Phil Magney, Founder of VSI Labs, told us, “This may be somewhat routine for robo-taxi development, as you don’t want the OEM ADAS systems to conflict with the Uber AV Stack.”

What puzzled me, and others as well, is why Uber disabled its own AEB during testing on public roads. Of course, this assumes Uber’s own AV stack had an AEB that worked properly. According to the NTSB report, Uber stated that its “developmental self-driving system relies on an attentive operator to intervene if the system fails to perform appropriately during testing.”

As we went through the NTSB preliminary report, two issues stood out. One is the immaturity of Uber’s AV software stack. The other is the absence of a safety strategy in the design of Uber’s AV testing platform.

First, let’s talk about Uber’s AV software stack.

We call it “immature” because it appears that too many false positives from its sensors worried Uber to the point that the company trusted a human driver more than its own robotic car. Uber told the NTSB, “emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action.”

Mike Demler, a senior analyst at The Linley Group, suspects that Uber’s sensor fusion, or more accurately its software, “was broken.”

But the key information everyone had been looking for in the report was the data obtained from Uber’s self-driving system. The NTSB report said:

…the system first registered radar and LIDAR observations of the pedestrian about 6 seconds before impact, when the vehicle was traveling at 43 mph. As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path. At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision.

View of the self-driving system data playback at about 1.3 seconds before impact, when the system determined an emergency braking maneuver would be needed to mitigate a collision. Yellow bands are shown in meters ahead. Orange lines show the center of mapped travel lanes. The purple shaded area shows the path the vehicle traveled, with the green line showing the center of that path. (Source: NTSB)

This robocar procrastination — from six seconds to 1.3 seconds — is an eternity in an automotive emergency. The question is why the Uber vehicle waited for so long to react. Demler believes, “There was something there, so the software should have taken action at six seconds to avoid hitting it.” But it didn’t.
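Some rough arithmetic shows what those seconds were worth. At 43 mph the car covers about 19 meters per second, so it was roughly 115 meters from Herzberg when the sensors first registered her, but only about 25 meters away at the 1.3-second mark, less than the distance needed to stop from that speed even under hard braking. The back-of-the-envelope check below is mine, not the NTSB's, and the deceleration figure is an assumption the report does not supply.

```python
# Rough check of the timeline in the NTSB report. The 7 m/s^2 deceleration is
# an assumed hard-braking figure, not a number taken from the report.
MPH_TO_MS = 0.44704

speed = 43 * MPH_TO_MS                       # ~19.2 m/s at the time of detection
decel = 7.0                                  # assumed full-braking deceleration, m/s^2

range_at_detection = speed * 6.0             # ~115 m from the pedestrian at 6 s out
range_at_braking_call = speed * 1.3          # ~25 m when braking was deemed necessary
stopping_distance = speed**2 / (2 * decel)   # ~26 m needed to stop from 43 mph

print(f"Range at first detection (6.0 s): {range_at_detection:5.1f} m")
print(f"Range at braking call (1.3 s):    {range_at_braking_call:5.1f} m")
print(f"Distance needed to stop:          {stopping_distance:5.1f} m")
```

By these numbers, there was ample room to act at six seconds and essentially none left at 1.3, which squares with the report's wording that braking at that point could only mitigate, not avoid, the collision.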

To be fair, the test vehicle’s hesitation makes a sort of robotic sense, considering the non-deterministic nature of machine learning.

As Magney pointed out, “While the sensor detected an object it had a hard time classifying what it was. A person walking a bike with bags of stuff… probably a scene in which the system was not trained.”

Further, he added, “You also had issues related to the object being outside (initially) the trajectory of the vehicle. Nevertheless, the computer did not originally see the object as a threat because the predictive movement was not accurate.”
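To see why a wobbling classification matters, it helps to remember that in a typical perception stack the class assigned to an object selects the motion model used to predict where it will go. The snippet below is purely illustrative; the class names, speeds, and threat logic are my simplification, not Uber's implementation.

```python
# Illustrative only: a toy example of how an object's class changes its
# predicted path, and therefore whether it looks like a threat. The motion
# assumptions here are hypothetical, not Uber's.

# For each class: (assumed speed in m/s, whether the motion model lets it
# cross into the car's travel lane).
MOTION_MODELS = {
    "unknown": (0.0, False),   # unknown objects often treated as static
    "vehicle": (13.0, False),  # assumed to keep to its own travel lane
    "bicycle": (4.0, True),    # may cross the roadway
}

def will_enter_path(lateral_offset_m: float, label: str, horizon_s: float) -> bool:
    """Crude threat check: does the extrapolated position reach the car's lane?"""
    speed, crosses = MOTION_MODELS[label]
    if not crosses:
        return False           # this motion model keeps the object out of the lane
    return lateral_offset_m - speed * horizon_s <= 0.0

# The same object, 5 m to the side of the car's path, looks harmless or
# dangerous depending on the label it carries at that instant.
for label in ("unknown", "vehicle", "bicycle"):
    print(f"{label:8s} threat: {will_enter_path(5.0, label, horizon_s=2.0)}")
```

Each time the label flips, the predicted path effectively starts over, which is one plausible reading of why the system did not treat the object as a threat until very late.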

Safe AV testing platform?
Uber’s software certainly had technical issues. While a software glitch in any car is unnerving, the deeper issue is how seriously Uber took the responsibility of driving yet-to-be-perfected autonomous vehicles on public streets, and what specific safeguards it built into its AV testing platform.

Demler said Uber’s software lacking proper sensor fusion is a “technical” issue. But turning off the AEB because that software is prone to false positives “reveals irresponsible management.”

Important here is not the fact that Uber didn’t have a perfect autonomous vehicle. As Phil Koopman, safety expert and professor at Carnegie Mellon University, once wrote in his blog at EE Times, “it was a test vehicle that killed Elaine Herzberg, not a fully autonomous vehicle.” Let’s face it, we all know that autonomy is immature. But that’s not to say that we don’t care about safe test vehicles.

We should ask Uber how far the company went to ensure the safety of its test vehicles. No one should die to make the world safe for robotics.

After the preliminary report came out, Koopman reiterated: “It’s important to find out what went wrong with the autonomy to learn lessons and improve safety. But this mishap isn’t really about an autonomy failure at all. It’s about a public road test safety failure.”

In short, he said, “I think it is a misconception to say that any autonomy malfunction was the reason for the fatality. Rather, the fatality happened because of a failure of the testing safety approach.” Further, he added, “Consider: Uber knew the autonomy wasn’t reliable, because it was still under development. So why should it be a surprise if the autonomy malfunctions? Malfunctions are expected. That’s why they had a safety driver.”

Make the safety case
Koopman is a big proponent of the idea that every AV company should make its safety case before getting permission from local governments to test vehicles on the unsuspecting public. The AV supplier should submit “a structured written argument, supported by evidence, justifying [that the] system is acceptably safe for intended use,” he argued.

Today, no state or city is asking AV vendors to make this safety case. Sure, Uber had to suspend testing everywhere after the accident. It even shut down AV operations in Arizona and fired 300 drivers this week. But this is a case of closing the garage door after the horsepower has already escaped.

Koopman said a written safety argument need not be onerous for AV aspirants. Without revealing technology secrets, companies should be able to say, “Here’s safety reason 1, and here’s the evidence for 1,” he noted.
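At its simplest, such an argument pairs each safety claim with the evidence behind it. The entries below are invented examples of that structure, not anything Uber submitted or Koopman has published.

```python
# Hypothetical sketch of the "reason plus evidence" structure Koopman describes.
# The claims and evidence are invented examples, not any company's actual safety case.
safety_case = [
    {
        "claim": "Safety drivers stay attentive throughout on-road testing",
        "evidence": [
            "driver training and qualification records",
            "in-cab driver-monitoring camera logs",
            "periodic review of on-road driver performance data",
        ],
    },
    {
        "claim": "A fallback exists when the developmental autonomy malfunctions",
        "evidence": [
            "documented fallback path (e.g., driver alert or OEM AEB left active)",
            "closed-course validation of the fallback before public-road testing",
        ],
    },
]

for item in safety_case:
    print(item["claim"])
    for evidence in item["evidence"]:
        print("  -", evidence)
```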

While Uber’s decision to turn off its AEB and depend solely on a “safety driver” to manage the brakes sounds flippant, even more damning was Uber’s decision to silence the warning system that alerts the human driver to take control in an emergency.

Why do that?

‘We have a safety driver’ doesn’t cut it
Last month, presenting at the Pennsylvania Automated Vehicle Summit, Koopman pointed out, “‘We have a safety driver’ doesn’t cut it.” At minimum, he explained, safety authorities must be informed of safety drivers’ training, how AV companies plan to ensure that drivers are alert and awake, and how they will monitor drivers’ on-road performance.

In the case of Uber’s accident, the issue wasn’t even the fundamental difficulty of emergency handover between machine and human driver that experts often raise when discussing L3 cars. If Uber’s statement to the NTSB is correct, that dreaded handover was a non-issue, because Uber’s “developmental” self-driving system relies on an attentive operator to intervene if the system fails to perform appropriately.

Magney said, “You would think that Uber briefs its test drivers when special things are being done and that extra attention needs to be made. Perhaps they did, we don’t know.”

Bad optics
The kindest interpretation here, as Magney pointed out, is that “Uber may have disabled the AEB in order to test other elements of the system. Perhaps the AEB was overly sensitive and leading to too many false positives…I could see where they could be testing associated features and not wanting the emergency system to intervene. Perhaps checking to see how the algorithms could handle common tasks like non-emergency obstacle avoidance.”

Koopman agreed. He called Uber disabling the AEB “bad optics for public perception, but not necessarily an engineering mistake.” He explained, “It is not necessarily a problem if the rest of the test strategy was properly designed and implemented. I’d probably want it turned on if I could, but I can imagine scenarios in which turning it off is an acceptable design choice.” He summed up: “My point isn’t really to decide whether disabling autonomy emergency braking is good or bad. The answer is it all depends upon the overarching safety approach.”

Maybe so, but for many of us, what happened looks like “Ubers out on the road navigating autonomously, but the mechanisms to ensure safety are turned off,” as Demler noted. Bad optics, indeed.

At the end of the day, though, the more important question is “why the safety driver didn’t avoid the collision,” Koopman noted. “Thus far we have no concrete reason to actually blame the driver. Rather, the factors described indicate it’s important to really take a close look at Uber’s on-road testing approach and policies.”

Safety drivers are, after all, already tasked with many jobs. As the NTSB report noted, “The operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review.”

In making a safety argument, an AV operator could make its case, for example, by documenting the training and qualification of its drivers, installing real-time driver alertness monitoring, and sharing driver performance data for review, according to Koopman. It could also present a system with a recovery margin broad enough that it doesn’t put a human driver on a collision course with jaywalkers and cyclists.

Of course, we don’t know if Uber took any of these safety measures because the company was never asked to present the safety case.

— Junko Yoshida, Chief International Correspondent, EE Times
