Tesla fatality: Autopilot or driver error?

Article by: Stephan Ohr

We look at the Tesla as if it is supposed to perform miracles, when in fact the crash may have been a consequence of driver error.

The recent fatal crash of a Tesla Model S into a tractor trailer has raised a dozen questions and re-invoked uncertainty about the viability of autonomous vehicles. The Tesla was in a semi-autonomous control mode when it smashed into the side of the tractor trailer. The truck abruptly turned left in front of the Tesla, giving neither the driver nor the car’s camera-based collision-avoidance system much chance to react.

Tesla said in a blog post that its autopilot system did not recognise the white side of the tractor trailer against a brightly lit Florida sky, so the brakes were not applied. The blog also noted that this is the first known fatality in a Tesla vehicle with the autopilot activated.

Commentators, including EE Times’ own Junko Yoshida, were quick to assume there was a hardware or software failure, especially given Tesla’s rapid assertion that its autonomous vehicle systems had collectively logged 130 million miles. First among the failure suspects was the forward-facing vision system: Was it a CMOS image sensor, Junko asked, a high-GHz radar antenna, or a specialised vision processor?

[Tesla 01]
__Figure 1:__ *The Tesla did not actually broadside the tractor trailer; it passed UNDER the tractor trailer, where its windshield, roof and windows were sheared off. The remainder of the Tesla continued to drive itself for about a half-mile, losing speed and coming to a stop against a chain link fence. Picture Source: The New York Times (from Florida Police Report)*

Fingers were quick to point to Tesla’s vision system provider, Israel-based Mobileye, whose vision systems are now widespread among car system suppliers and whose presence projects optimism about the future of autonomous vehicles wherever it goes. Automotive suppliers believe that self-driving cars will be commonplace within the next five years. Google, a supporting voice, says it has already tested its self-driving vehicles over 1.7 million road miles.

Followers of machine vision technology, including me, came away from May’s Embedded Vision Summit impressed with the demonstrated systems’ ability to distinguish cyclists and pedestrians on a contested roadway.

If the automotive vision system could spot individual pedestrians and cyclists, how could it possibly miss a house-sized tractor trailer?

The tragedy will likely result in a lengthy lawsuit. The victim’s family has already retained a legal firm with expertise in product defect litigation. And the National Transportation Safety Board, which also investigates train and airplane crashes, is assisting the National Highway Traffic Safety Administration (NHTSA) in the investigation of Tesla’s autopilot system.

But it is not entirely certain that the crash was the result of a product failure.

Causes for the accident are entirely speculative: We look at the Tesla as if it is supposed to perform miracles, when in fact the crash may have been a consequence of driver error—i.e., a freak accident—for which there could be no preparation. Had the driver of the tractor trailer, for example, turned left before spotting the approaching Tesla? Or had he seen the approaching vehicle and assumed the driver would likely slow down or stop, giving the 18-wheeler right of way? (Junko’s commentary was wise to identify the crash as a “corner case” for Tesla, a rare confluence of extreme conditions.)

But if it is not human error we’re searching for, then the problem is a bit more complex. It is not a matter of identifying what component failed. It is a matter (I believe) of what the system failed to offer. It is entirely possible that the Tesla’s forward-facing vision system failed to prevent the collision because it was looking at the wrong thing(s). The Tesla did not actually broadside the tractor trailer; it passed UNDER the tractor trailer, where its windshield, roof and windows were sheared off. The remainder of the Tesla vehicle continued to drive itself for about a half-mile, losing speed and coming to a stop against a chain link fence.

What do we know about where the Tesla’s radar/vision system was pointed? Conjecture suggests the Tesla’s camera sensors were pointed horizontally and tilted slightly toward the ground. If that is the case, the radar might have been aimed at the gap on the road beneath the trailer, identified as a big white space, and failed to recognise it as an obstacle. According to Tesla, the sedan did not automatically brake to prevent the crash because of the high ground clearance of the trailer and its white reflection against a brightly lit sky. The Model S, Tesla acknowledged, tunes out what looks like overhead road signs to avoid braking unnecessarily.
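To make that conjecture concrete, here is a minimal Python sketch of a naive height-based filter that tunes out "overhead" returns. It is purely illustrative and is not Tesla's or Mobileye's actual logic; the clearance threshold and detection values are invented for this example. The point is simply that a filter designed to ignore road signs and bridges could also ignore a trailer bed riding high above the pavement.

```python
# Illustrative sketch only: NOT Tesla's or Mobileye's actual code.
# A simple height-based filter that ignores "overhead" returns (signs,
# bridges) could also discard a high-clearance trailer bed.

from dataclasses import dataclass

# Hypothetical threshold: anything whose lowest point clears this height
# is treated as an overhead structure the car can pass under.
OVERHEAD_CLEARANCE_M = 1.4

@dataclass
class Detection:
    label: str
    bottom_edge_m: float  # height of the object's lowest point above the road

def is_braking_target(d: Detection) -> bool:
    """Return True if this detection should trigger automatic braking."""
    # Objects that begin above the clearance threshold look like signs or
    # bridges to this naive filter, so they are tuned out.
    return d.bottom_edge_m < OVERHEAD_CLEARANCE_M

detections = [
    Detection("overhead road sign", bottom_edge_m=5.0),
    Detection("passenger car ahead", bottom_edge_m=0.2),
    Detection("trailer bed with high ground clearance", bottom_edge_m=1.5),
]

for d in detections:
    action = "BRAKE" if is_braking_target(d) else "ignore (treated as overhead)"
    print(f"{d.label}: {action}")
```

Run it and the trailer bed falls into the same bucket as the road sign: ignored. That, roughly, is the failure mode Tesla's own explanation describes.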

Failure to capture overhead data does not necessarily constitute a component failure: It means the front-facing radar must be aimed upward as well as forward and to the sides. The technical problem the Tesla would need to resolve is the contrast between the metal of the trailer and the whiteness of the sun-lit sky (not an easy one, as anyone whose pupils have been dilated by an eye doctor for a vision check can attest). The bigger problem would be a massive data processing requirement, in memory and processors, if in fact you needed to capture and analyse sensor data along the vertical Y-axis. In terms of the moral decisions the autonomous car has to make, you don’t have the dramatic child-runs-into-the-street dilemma. It’s more like: What do you do when a broken tree limb is overhanging the roadway ahead?
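As a rough illustration of that data-processing point, the back-of-envelope sketch below uses entirely assumed figures (they are not specifications of any Tesla sensor) to show how widening the analysed vertical field multiplies the per-second pixel load the vision processor has to handle.

```python
# Back-of-envelope sketch of the added processing load if the forward
# sensor suite also had to analyse overhead (vertical Y-axis) space.
# All numbers are assumed for illustration, not Tesla specifications.

frame_rate_hz = 30            # assumed camera frame rate
horizontal_px = 1280          # assumed sensor width
rows_road_only = 400          # rows analysed when watching mainly the road
rows_with_overhead = 960      # rows analysed if overhead space is included
bytes_per_px = 2              # assumed, e.g. 16-bit luminance

def data_rate_mb_s(rows: int) -> float:
    """Raw pixel data per second, in megabytes, for the given row count."""
    return frame_rate_hz * horizontal_px * rows * bytes_per_px / 1e6

baseline = data_rate_mb_s(rows_road_only)
extended = data_rate_mb_s(rows_with_overhead)

print(f"road-level analysis:    {baseline:.0f} MB/s")
print(f"with overhead coverage: {extended:.0f} MB/s "
      f"({extended / baseline:.1f}x the pixel-processing load)")
```

Under these assumptions the load more than doubles, and that is before any of the heavier downstream work of classifying what the extra pixels contain.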

One solution to the problem of white space under a truck bed could come from the trucking industry itself: The installation of metal side skirts on the tractor trailer would have presented a more radar-detectable image to the Tesla’s forward-facing image processors. Side skirts are not required by state laws, but truckers believe they promote better gas mileage through more efficient airflow under the trailer.
