Unanticipated use-case scenarios and counter-intuitive man-machine interfaces often lead to the premature death of new products.

It’s not unusual for system designers to release their products into the wild, only to have users play with a product in ways that were never intended, resulting in accidents. Sometimes consumers send thousands of units of a new product back to manufacturers simply because they couldn’t figure out how to use it.

Predicting the future is hard. It’s easy to see where things went wrong after they go wrong, but 20/20 hindsight is rarely a comfort.

Call it a “foresight gap.” The engineering community can be blindsided when it fails to anticipate how the completed system will be used and what could happen once it’s installed in the real world.

The person who reminded me of this lesson — and summed it up succinctly — is Phil Koopman, CTO of Edge Case Research and professor at Carnegie Mellon University. He makes his case simply by asking one question: #DidYouThinkOfThat?


Recently, Koopman wrote a blog post, “Car Drivers Do More than Drive.” In his post, he asked: “How will self-driving cars handle all the non-driving tasks that drivers also perform?” More specifically, he asked: “How will [self-driving cars] make unaccompanied kids stop sticking their heads out the window?”

Koopman is neither a Luddite nor a naysayer on autonomous vehicles (AVs). If anything, he is an avid advocate for safe AVs.

Take a school bus for example. No one who has ever been on a school bus really believes that loading one with kids, unsupervised by an adult, is such a great idea. The movie version would be called “Kids Going Wild.”

Yet, at the Geneva Motor Show in 2018, the Volkswagen Group unveiled its own self-driving “school bus” concept, called SEDRIC (SElf-DRIving Car). Maybe German kids behave differently from all other kids? Seriously, what were they thinking?


Self-driving school bus, SEDRIC (Photo: Volkswagen Group)

This sort of engineering naiveté is the tip of the iceberg. There are many AV use cases that can go horribly wrong if they aren’t thought through.

A robotaxi, for instance, must be prepared to deal with all kinds of passengers – adults, kids, the elderly and handicapped, men behaving badly, obnoxious customers with unreasonable demands, or just people who get suddenly sick in the back seat.

Further, self-driving cars, with no human operator, must be able to communicate with the external world. How do pedestrians, unable to make eye contact with a driver, discern the robocar’s next move? Will the robocar be able to understand, let alone hear, what a human driver in the next lane is yelling about?

I asked Koopman to lay out specific self-driving use-case scenarios for which AV system designers should be asked the fundamental question: “Did you think of that?”

In our interview, Koopman limited his scope to AV/human interaction. “To be clear, not every car needs to handle all of these situations. But at least some self-driving cars will encounter at least some,” he added.

So, the question is whether designers have anticipated how to handle these. In other words: #DidYouThinkOfThat?

Communicating inside the vehicle

Koopman first laid out certain complications that could arise from basic passenger-robotaxi interaction.

  1. Basic operations – entering and leaving a robotaxi
  • How do passengers know it is safe to get out?
  • Will the robotaxi stay put, and not take off with luggage still in the trunk?
  • What about passengers who need help with their luggage?
  2. Selecting — or changing — a destination

Setting aside basic operations, the first #DidYouThinkOfThat moment hit me in 2016 when I was interviewing retired pilot Richard Hartman. I was asking him about differences between flying and driving.

He told me then that pilots usually spend a lot of time on flight plans before taking off. Once a plan is loaded into a computer, another pilot double-checks the information. One pilot reads it aloud, the other confirms. Then, the air traffic control system sets course. If rough weather appears en route, controllers might alter the course away from the more direct path charted by GPS.

But when a person drives a vehicle, Hartman said, the trip is rarely so orderly. Assume an ultimate destination — home. En route, you might stop for coffee, meet a friend, pick up groceries. “Is it likely people will spend time pre-planning and doing data entry to chart the course?” Probably not. Besides, human drivers change their minds all the time along the way.

The most likely scenario in the real world is that people, more often than not, will “disengage the autopilot mode,” Hartman predicted.

This conversation took place well before the “robotaxi” became a popular concept.

Even today, Koopman believes, “how do you select a destination” is a basic system requirement for AV designers. That basic operation — destination choice — must be flexible enough to accommodate a passenger’s change of mind. People should be able to change their plans after they’ve ordered the cab.
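To make that concrete, here is a minimal sketch of a ride service that treats a mid-trip destination change as a normal event rather than an error. All of the names here (`Ride`, `_plan_route`, the state strings) are illustrative assumptions, not any vendor's actual API.

```python
# Hypothetical sketch: a robotaxi ride that accepts destination changes
# at any point before the ride ends. Names and data shapes are assumptions.

class Ride:
    def __init__(self, destination):
        self.destination = destination
        self.state = "en_route"
        self.route = self._plan_route(destination)

    def _plan_route(self, destination):
        # Stand-in for a real routing engine.
        return ["current_position", destination]

    def change_destination(self, new_destination):
        """Passengers change their minds; treat that as a normal event,
        not an error, as long as the ride hasn't already ended."""
        if self.state in ("completed", "cancelled"):
            raise ValueError("ride already ended")
        self.destination = new_destination
        self.route = self._plan_route(new_destination)
        return self.route

ride = Ride("home")
ride.change_destination("coffee shop")    # stop for coffee on the way
ride.change_destination("grocery store")  # changed mind again
```

The design point is simply that "change destination" is a first-class operation available mid-trip, not something handled only at booking time.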

  3. Passenger orders AVs to take certain actions (sometimes illegal) in an emergency

Koopman is particularly concerned by emergency situations in which passengers might issue unexpected orders. What does an AV do?

  • “I feel sick, you need to stop NOW”
  • Passenger has a medical emergency such as a heart attack but does not communicate this verbally (a human driver is more likely to notice)
  • Passenger ordering the AV to violate traffic laws, such as proceeding through a red light late at night because of a credible and immediate fear of carjacking in a bad neighborhood
  • Passenger ordering the AV to forgo safety rules and/or operational design domain (ODD) limits due to extreme situations (e.g., cars that drove through the California wildfires to try to escape).
  • Whether the AV should accept potentially dangerous orders [from passengers] such as driving through a flooded road surface
  • The AV must decide what to do when ordered to "follow that police car providing an escort to the hospital emergency room"
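One way to reason about this list is that each unusual order needs a policy decision, not just a yes/no. The sketch below triages commands into buckets: execute, confirm with the passenger first, or hand off to a remote operator. The categories and example phrases are assumptions drawn from the scenarios above, not an actual AV policy engine.

```python
# Hedged sketch: triage unusual passenger commands into policy buckets
# instead of blindly executing or flatly refusing them.

ALWAYS_HONOR = {"pull over safely"}           # ordinary safety requests
CONFIRM_FIRST = {"run the red light",         # illegal but possibly justified
                 "leave the geofence",
                 "drive through the flood"}
ESCALATE = {"follow that police car"}         # needs remote-operator judgment

def triage(command):
    if command in ALWAYS_HONOR:
        return "execute"
    if command in CONFIRM_FIRST:
        return "confirm_with_passenger_and_log"
    if command in ESCALATE:
        return "hand_off_to_remote_operator"
    return "refuse_and_explain"

print(triage("run the red light"))  # confirm_with_passenger_and_log
```

The point of the extra buckets is auditability: a confirmed, logged red-light crossing during a credible carjacking threat is a very different event from a silent rule violation.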
  4. Do robocars understand what I said?


The emergency orders posed above assume that AVs can comprehend passengers’ verbal input. But do AVs really get me? “Just think of all the Siri and other voice command system malfunction stories,” said Koopman, “…except now it’s controlling a car.”

Koopman said, “what about mistaken verbal commands to the vehicle due to misunderstanding a human, misinterpreting music being played by a passenger as a command, advertising hack commercials that order the vehicle to take the occupants for drive-through food at a specific restaurant chain, or even the odd parrot that has learned a problematic vocabulary?”

OK. I’m not sure about the backseat-driving parrot scenario, but the drift is clear. Nobody can be sure that voice control technology will ever operate at 100 percent accuracy. People don’t do nearly that well. AVs must be taught to ask, “Say what?”
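A minimal sketch of that “Say what?” behavior: act only on high-confidence transcriptions, and ask the passenger to repeat anything ambiguous. The recognizer interface and the 0.9 threshold are illustrative assumptions; real systems would also gate on speaker identity and context.

```python
# Sketch: never act on a command the recognizer itself isn't sure about.
# A parrot, a song lyric, or an advertising jingle should fail this gate too.

CONFIRM_THRESHOLD = 0.9  # assumed cutoff, for illustration only

def handle_utterance(text, confidence):
    if confidence < CONFIRM_THRESHOLD:
        return ("ask_to_repeat", "Say what?")
    return ("execute", text)

print(handle_utterance("stop now", 0.97))  # ('execute', 'stop now')
print(handle_utterance("stop cow", 0.42))  # ('ask_to_repeat', 'Say what?')
```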

Communicating outside the vehicle


(Photo: Daimler Corp.)

We’ve all heard this question: “How do pedestrians know that the car sees them in a crosswalk?” Acknowledging that this has been discussed before, Koopman cautioned: “Where things get a lot more complicated are at the next level down.” Consider, for example:

  • Signage is in English. The pedestrian can't read English.
  • Visual indicators. The pedestrian is vision impaired.
  • The vehicle somehow indicates "I see you" but there are multiple pedestrians, and it sees all but one. Squish.
  • Does the vehicle assume a pedestrian locked into a cell phone trance will see the signal? Can the car recognize that its indications haven't been seen?
  • How do people outside the vehicle know it is "live" and might potentially move? This includes other drivers. Is the car parked, or just waiting? Normally we look for a human inside the car to provide hints, but an AV could be running empty.
  • This also includes pedestrians and others who need to know if it is safe to walk out in front of a vehicle that might or might not be parked. Electric vehicles, which run silently, offer no aural clues. In such instances, Koopman said, “Possibly consistent use of running lights might be enough to handle this, but again #DidYouThinkOfThat?”
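The “sees all but one” hazard above suggests a simple invariant: before moving through a crosswalk, the vehicle should require an acknowledgment for *every* pedestrian its perception stack is tracking, not just most of them. The data shapes below are assumptions for illustration.

```python
# Sketch: proceed only if every tracked pedestrian has been signaled
# and the signal confirmed as seen. Otherwise, stay stopped.

def safe_to_proceed(tracked, acknowledged):
    """tracked and acknowledged are sets of pedestrian track IDs."""
    unacknowledged = tracked - acknowledged
    return len(unacknowledged) == 0

tracked = {"ped_1", "ped_2", "ped_3"}
acknowledged = {"ped_1", "ped_2"}   # ped_3 is locked in a phone trance
assert not safe_to_proceed(tracked, acknowledged)  # stay stopped
```

The hard part, of course, is the `acknowledged` set itself: as Koopman notes, the car must somehow recognize that its indications haven’t been seen.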

Passengers’ expectations

Koopman added, “We'll have to think about what expectations there are of passengers.” Even for a completely self-driving vehicle, he asked, “Will there need to be a ‘rider's license’ to ensure some form of adult supervision is present inside any vehicle?”

Consider, for example:

  • Making sure all passengers and cargo are properly restrained for safe motion
  • Making sure all passengers do not remove restraints improperly while in motion
  • Avoiding hazardous use (e.g., not carrying hazardous cargo, ensuring child restraints are used properly, not leaving pets/children in hot or cold car)
  • Making sure nobody sticks heads, arms, etc. out the window or sunroof
  • Managing passengers in the event of a vehicle fault (e.g., battery fire)
  • Detecting the need for evacuation in a dangerous situation (e.g., stranded on railroad grade crossing, stuck on road with rising flood waters, stuck in tunnel with fire/smoke, handling interactions with law enforcement if stopped)
  • Don't forget: Not all passengers will be able-bodied responsible adults.
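Several of the items above amount to a pre-motion interlock: the vehicle refuses to move until the supervision checks pass, and tells the riders why. The check names below are assumptions, loosely mirroring the list.

```python
# Hedged sketch of a pre-motion interlock built from the checklist above.

PRE_MOTION_CHECKS = {
    "all_restraints_fastened",
    "no_limbs_outside_windows",
    "no_unattended_child_or_pet",
    "cargo_secured",
}

def may_depart(passed_checks):
    """Return (ok, failures): hold still and report anything unmet."""
    failures = PRE_MOTION_CHECKS - set(passed_checks)
    if failures:
        return (False, sorted(failures))
    return (True, [])

ok, why_not = may_depart({"all_restraints_fastened", "cargo_secured"})
# ok is False; why_not lists the unmet checks to announce to the riders
```

An interlock like this only covers departure; the harder cases Koopman lists (restraints removed mid-trip, evacuation, vehicle faults) need continuous monitoring, not a one-time gate.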

Dealing with foreseeable potential misuse

Designers should never underestimate consumers’ ability to misuse a new product. Koopman asked:

  • Should a parent put a child alone in a robotaxi to go to day care, school, a friend's house, or grandma's house, a four-hour ride away?
  • Should the car notice if a skateboarder has grabbed hold to hitch a tow?
  • Is the ever-popular movie command "Follow that car!" an acceptable command?

In collaboration with Edge Case Research, Underwriters Laboratories is currently developing a UL standard, dubbed UL 4600, to cover autonomous product safety. Koopman told us that much of UL 4600’s content will address the laundry list above. And a lot more.