“Making autonomy safe” is perhaps one of the hardest nuts for designers of AI-based autonomous vehicles to crack.

Suppose your self-driving vehicle just avoided hitting a mother and a small child crossing the street. That, however, shouldn’t swell your bosom with confidence that you’re tooling around in the safest vehicle on the road. It only means that you don’t know exactly what your neural networks recognized.

For all we know, the survival of mother and child might be sheer luck. Your AV might have identified the mother, but the neural nets might have been confused about the small object bouncing around on the end of her arm. The sensors might have thought the kid was a Prada bag.

The question becomes: Do you have a tool that can efficiently identify and fix flaws in your neural nets’ algorithms, ensuring that your AV can learn the difference between a toddler and a handbag?

Aurora's Chris Urmson and Ansys invest in 'Hologram'
Edge Case Research, a Pittsburgh-based startup founded by autonomy and safety software experts from Carnegie Mellon University, has developed a safety assessment platform called Hologram to identify ‘edge cases’ in the perception stack.

Edge Case, one of the few companies tackling head-on the thorny nexus of safety and autonomy, got a major boost this week. It secured a $7 million investment led by Chris Urmson, co-founder and CEO of self-driving vehicle startup Aurora, and Ansys, a global leader in engineering simulation for rockets, airplanes and cars.

Others joining in the round include Lockheed Martin Ventures, Liberty Mutual Strategic Ventures, Trucks VC and Blue Tree Allied Angels.

Aurora’s Urmson and Ansys vice president Matt Zach will both join the Edge Case Research board of directors. Urmson, who previously worked with Alphabet on its self-driving car project, has a background in autonomy. Ansys (Canonsburg, Penn.) is known for its prowess in engineering simulation software.

Noting that his firm’s board has snagged foremost authorities in the autonomy and safety fields, Michael Wagner, CEO of Edge Case Research, said, “Our investors completely agree with us that safety of autonomous systems is paramount. We have collected investors who think a lot like what our team thinks.”

Edge Case will use the $7 million infusion, its first round of funding, to expand the development of Hologram. Wagner called Hologram “an emerging tool for the emerging world” in which AI is playing a key role in autonomous systems. While Hologram is already being tested, the company plans to deploy additional pilot programs, achieve scale, and make the software commercially available by the end of 2019, Wagner explained.

Edge Case Research's Hologram desktop. Hologram automatically finds scenarios in which autonomous vehicles fail to detect pedestrians, vehicles, and other important road users. This image shows an example of the kinds of scenarios that Hologram might show a user. The red bounding box on the screen shows objects that were not detected. The continuous risk analysis platform helps vehicle developers learn about safety hazards before accidents happen.
(Source: Edge Case Research)


Can you simulate the unknown unknowns?
Simulation is widely recognized as a way for autonomous vehicle companies to build up virtual miles to see how their perception software, AI and deep learning algorithms handle situations.

The exception is Tesla. Tesla’s Elon Musk isn’t a big fan of simulation. At Tesla’s Autonomy Day in Palo Alto in April, Musk argued that the simulation programs engineers use to run their virtual self-driving cars through millions of ‘edge case’ scenarios, racking up billions of driving miles, are good but not good enough. “We have quite a good simulation, too, but it just does not capture the long tail of weird things that happen in the real world,” he said.

Is Musk wrong?

Edge Case’s Wagner acknowledged Musk’s point, but said that view understates the nuances of simulation.

It could take months or years for fleets of self-driving cars to build up the millions (billions, really) of miles that AV companies will need to demonstrate their systems’ safety. Simulations are therefore critical, said Wagner, but it’s also true that “simulation won’t be able to tell us all the unknown unknowns.”

More important is the continuous monitoring and analysis of neural network reactions to real-world data, according to Wagner.

Returning to the example of the street-crossing mother and child, Wagner explained that the bounding box the neural nets created around the mother was solid, while the box assigned to the small child flickered. By showing the toddler’s box appear and disappear, Hologram helps AV developers grasp the situation’s ambiguities and see that the child is confusing the perception system.
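Edge Case has not published Hologram's internals, but the idea Wagner describes — a stable box for the mother, a flickering one for the child — can be illustrated with a minimal, hypothetical sketch: score how consistently each tracked object is detected across frames, and flag the unstable ones as perception risks. All names below are illustrative, not Hologram's API.

```python
# Hypothetical sketch of flicker analysis; not Hologram's actual algorithm.

def flicker_score(detected_per_frame):
    """Fraction of frame-to-frame transitions where the detection state
    (present/absent) toggles. 0.0 = rock solid, 1.0 = flickers every frame."""
    if len(detected_per_frame) < 2:
        return 0.0
    toggles = sum(a != b for a, b in zip(detected_per_frame, detected_per_frame[1:]))
    return toggles / (len(detected_per_frame) - 1)

def risky_tracks(tracks, threshold=0.3):
    """IDs of tracked objects whose detections flicker beyond `threshold` --
    candidates for human review and for retraining data."""
    return [tid for tid, seen in tracks.items()
            if flicker_score(seen) > threshold]

# The mother's box is stable; the child's box appears and disappears.
tracks = {
    "mother": [True] * 10,
    "child":  [True, False, True, True, False, False, True, False, True, True],
}
print(risky_tracks(tracks))  # -> ['child']
```

A real system would of course work on tracked bounding boxes from recorded drives rather than booleans, but the signal is the same: detection instability over time, not any single missed frame.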

Wagner does not describe Hologram as simulation software. Instead, by leveraging every mile recorded by an AV company, Hologram acts as a tool to analyze how neural networks react to real-world data, he explained.

Today, one could presumably do the same by comparing what neural networks saw with labels assigned to each object. But that would be hard to scale, given the sheer number of objects to be labeled and compared. Hologram, in contrast, offers a platform that “intelligently tests perception software against adversarial examples,” Edge Case said. AV companies can use their own sensor data, enabling Hologram to identify risks. Hologram, in essence, gives developers the information they need to retrain their algorithms to operate more reliably.
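The manual baseline described above — comparing network output against human-drawn labels — can be sketched in a few lines. This is a simplified illustration under assumed conventions (boxes as `(x1, y1, x2, y2)` tuples, a standard intersection-over-union match), not anything from Edge Case's codebase:

```python
# Minimal sketch of the labels-vs-detections comparison; illustrative only.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def missed_labels(labels, detections, min_iou=0.5):
    """Labeled objects that no detection overlaps by at least min_iou."""
    return [name for name, box in labels.items()
            if all(iou(box, d) < min_iou for d in detections)]

labels = {"mother": (100, 50, 160, 220), "child": (160, 150, 190, 220)}
detections = [(98, 48, 162, 222)]  # the network only found the mother
print(missed_labels(labels, detections))  # -> ['child']
```

The scaling problem the article points to is visible even here: this check is only as good as the hand-made labels, and labeling every object in millions of miles of footage is exactly the bottleneck a tool like Hologram aims to avoid.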

Where does Hologram run?
So, where exactly does Hologram run? Initially, said Wagner, Hologram will be integrated into a data center, where it will analyze how a given perception stack reacts to real-world data. That analysis will then feed future training.

In the long term, said Wagner, “We want to run Hologram in a camera, or inside a car itself, so that it can capture and detect whatever surprises real time.” That data will be uploaded to the data center once the AV is parked.

UL4600
Along with the development of Hologram, Edge Case is heavily involved with Underwriters Laboratories (UL) in developing the UL 4600 standard for the safety of autonomous products. Wagner said that Edge Case has already crafted a 200-page draft, which is being rigorously vetted and discussed by leading industry players and UL in Chicago this week.

Michael Wagner, CEO, Edge Case Research (left) and Phil Koopman, co-founder and CTO of Edge Case Research (Photo: Edge Case Research)

UL 4600 is expected to be the first comprehensive standard to address the safe deployment of autonomous vehicles and mobile robotic products.

In a previous interview with EE Times, Phil Koopman, CTO of Edge Case Research, said, “We will go after full autonomy head on.” UL 4600’s authors intend to specifically cover validation of machine-learning-based functionality and other autonomy functions used in life-critical applications.

The Underwriters Laboratories Standard Technical Panel (STP) will revise the draft to produce a consensus-based industry standard, with a planned completion date of late 2019.