People Do It, Why Not Machines? Driverless Cars, and What Happens When They Kill
Humans die at the hands of other humans every day. Often, people don’t seem to notice unless they are affected personally. However, a recent death involving a driverless vehicle and an Arizona pedestrian is causing serious turmoil across the country. But why? Is it worse when computer error causes death than when human error does?
That is precisely the question that federal transportation regulators, state governments, and car-tech companies are asking themselves after a driverless Uber vehicle struck and killed a pedestrian in Tempe, Arizona, on March 18, 2018.1 Elaine Herzberg, a forty-nine-year-old woman, was pushing her bicycle across a poorly lit road when she was hit by one of the many driverless vehicles roaming the city’s streets. She died from her injuries.2
To make matters worse, the Uber vehicle that struck Herzberg was not completely “driverless”: a “backup driver” was sitting in the driver’s seat when she was killed. A video depicting the interior of the vehicle in the moments leading up to the accident reveals exactly what one imagines the “driver” of a driverless car to be doing: not watching the road.3 However, the vehicle’s exterior camera captured a scene that calls into question whether a human driver would have made a difference. The road was oppressively dark, and Herzberg seemed to appear out of thin air.
A week after the incident, Arizona’s Governor, Doug Ducey, forbade Uber from operating its driverless vehicles in the state, claiming that Uber was out of compliance with the safety standards outlined in its agreement with Arizona.4 His decision is somewhat paradoxical, considering that, in 2015, Arizona regulators held the state out as a “regulation free zone” to attract car-tech companies like Uber to the region.5 In fact, Uber is not the only company operating driverless vehicles in the state. Waymo, Alphabet’s self-driving-car subsidiary, and Lyft also operate driverless vehicles in Arizona, as well as in a handful of other states.6
This raises the question: How closely are federal and state governments monitoring driverless vehicle programs? Are systems in place to protect pedestrians and drivers from injury, or are governments simply allowing these companies to roam freely on public roads?
The National Highway Traffic Safety Administration (NHTSA) does not currently require companies operating driverless vehicles to divulge their safety measures or to file safety reports, although it invites them to do so.7 In fact, Waymo is the only company in the country to have published such a report.8 It is also the only company that operates completely driverless vehicles on public roads, in large part due to a self-study which found that it takes a “backup driver” an average of seventeen seconds to retake control of a vehicle, or roughly a quarter-mile of travel at highway speeds.9 The study, if accurate, makes clear that human backup drivers are unreliable monitors of autonomous vehicles, as evidenced most recently in Arizona.10
Although safety reports remain voluntary, the National Transportation Safety Board (NTSB) does conduct after-the-fact investigations into accidents involving driverless cars. Following the accident in Arizona, the agency posted a notice on its website announcing that it was conducting a thorough investigation into Uber’s responsibility for the accident.11 And although the NTSB’s findings have yet to be issued, Tempe police have already stated that Uber was not likely at fault and that a human driver probably could not have avoided the accident.12
Barring a contrary finding by the NTSB, it seems at best premature, and at worst unfair, for Arizona to give Uber the boot. Based on the available facts, it appears that even a human driver would not have been liable for the accident. Is Arizona’s ban, then, simply a knee-jerk reaction in the face of bad publicity? Or an overcorrection for failing to enforce regulations on experimental technology? Either way, federal, state, and local governments must determine what level of risk they are willing to accept when embracing driverless technology.
Of equal importance is the way society views harm resulting from machine and computer error. It is perhaps telling that in 2015 a pedestrian was killed by a human driver every 1.6 hours in the United States, yet only one has to die at the hands of a machine for us to raise an eyebrow.13 How are the two different? And how will the law choose to handle those differences? Will the law shape society’s view, or will it be shaped by society?
In the end, if it is true that machines err less often than humans, should we not be doing everything in our power to fast-track and embrace technologies that take humans out of the driver’s seat? Are we ready to accept the strikingly backwards logic that we must allow more machines to kill so that more humans can live? Recent events suggest the answer is no, but who knows about tomorrow.
GLTR Staff Member, Georgetown Law, J.D. expected 2018; University of Maryland, B.S. 2008. ©2018, Joseph Simpson