Damon Ferrara

Self-Driving Cars: Whose Fault Is It?



The burgeoning development of automated vehicle technology has been an increasing focus of journalists1 and scholars2 since major technology companies such as Google announced plans to develop the systems several years ago.3 Coupled with the excitement over the technology’s potential advantages has come a slew of controversies pertaining to the safety of such technology,4 and to its potential to supplant a number of human jobs.5 Receiving somewhat less attention are the technology’s legal idiosyncrasies, including questions regarding accident liability, insurance claims, vehicle and driver licensing, and even the application of existing vehicle laws to autonomous vehicles (self-driving cars appear to be much more observant of speed limits than their human counterparts, causing some to be cited by traffic police for driving too slowly).6

Perhaps the most significant legal question relates to vehicle accidents and liability: which party is at fault in an accident resulting from the features of self-driving technologies? The vast majority of automobile accidents are caused by driver error, a fact that underlies one of the essential attractions of using artificial intelligence in personal vehicles.7

Consequently, it has become generally accepted that employing artificial intelligence (especially accident avoidance systems) in automobiles will help reduce traffic accidents in the aggregate.8 With that in mind, some of the liability for a vehicle’s operations (and accidents) will shift from the driver to the vehicle itself, and thus to the vehicle’s manufacturers and distributors.9 This means that when accidents do occur, manufacturers will be held liable for a greater proportion of overall accidents than they are currently.

The question of liability is at the center of recent controversies, such as the unfortunate Tesla Motors fatality in Florida last May10, and the bizarre circumstances under which Uber asked test passengers to sign waivers freeing Uber of liability in the event of injury.11

The May 2016 Tesla Motors accident marked the first fatality involving a self-driving car, and some initial media reports seemed to attribute fault to the vehicle immediately, with one article describing the incident as the first known death “caused by a self-driving car.”12 Yet Tesla’s press release in response indicated that fault had not yet been determined in the accident, citing both the artificial intelligence and the driver as having failed to notice a tractor-trailer that had pulled in front of the vehicle moments before the collision.13 The investigation has not formally concluded, but later reports indicated that the driver was using a portable DVD player moments before the accident, leading some commentators to suggest that the driver was perhaps more at fault than the vehicle.14

To be clear, Tesla’s 2015 Model S is not a fully autonomous vehicle; the Model S is equipped with a system called “Autopilot,” which Tesla describes as a “driver assistance system.” Autopilot is designed primarily to maintain “a vehicle’s position in [a] lane and adjust the vehicle’s speed to match surrounding traffic.”15 Still, the feature (even if in name alone) might give the impression that the vehicle can operate itself without human management under specific circumstances, leading many drivers to focus on things other than road conditions.16 In those situations, tort liability can become a tricky topic: did the driver fail to employ the system properly, or did Tesla market an unreasonably dangerous product?

In that sense, the legal boundaries for semi-autonomous vehicles might be even less certain than those for fully autonomous vehicles: for semi-autonomous cars, liability seems to be nebulously shared between the driver and the system’s manufacturer, rather than being apportioned almost entirely to the vehicle. Tesla, for example, chose to label Autopilot as a “driver assistance” system, and not as an “accident avoidance” feature. Within the list of Autopilot’s driver assistance systems are features such as the “Automatic Emergency Braking” function, which Tesla describes as a “collision avoidance assist” feature.17 Labeling the Autopilot system as an “assist” feature is perhaps one way that Tesla can keep liability focused on the driver and thereby mitigate the potential for lawsuits when the system fails to fully “avoid accidents.” In the event of an accident, Tesla might then argue that the system is there only to “assist” the driver, not to drive the car autonomously. Nonetheless, it is difficult to say where the feature’s responsibilities end and where the driver’s begin.

To make matters worse for Tesla, in September a Chinese consumer initiated the first-ever lawsuit against the company, alleging that a failure of the Autopilot system caused a traffic fatality.18 According to the lawsuit documents, the Tesla vehicle crashed into the rear end of a road-sweeping vehicle while under Autopilot control. The documents further allege that “the autopilot programme’s slow response failed to accurately gauge the road conditions ahead and provide instructions.”19 Tesla is reportedly still investigating the incident, which occurred in January in China’s Hebei province, but has made little progress in obtaining vehicle data from the plaintiffs.20

For its part, the National Highway Traffic Safety Administration (NHTSA) has been reviewing autonomous vehicles’ safety features with greater scrutiny, asking Tesla for a detailed list of information pertaining to the Model S’s Autopilot functions.21 NHTSA has not reached final conclusions with respect to the artificial intelligence system’s role in the May 2016 fatality, but the inquiry is part of a larger trend toward federal oversight of automated vehicles.22

Informed regulation in this area could prevent similar accidents before tort litigation becomes necessary, but the same regulation might bring about its own slew of legal issues, as manufacturers lobby aggressively against stringent proposed rulemaking by local, state, and federal regulators. California lawmakers have already faced strong opposition from industry, with some companies characterizing the regulatory agencies as “overly restrictive and stifling innovation.”23

Still, in cases similar to Tesla’s, questions will remain as to whether the systems were being employed properly and whether, in spite of the driver’s actions, the system bears some portion of the liability. If these issues are raised in tort lawsuits, the answers may also depend on whether the manufacturer provided adequate warnings or instructions for proper usage, whether those warnings would be heeded or understood by the average consumer, whether the risk of the dangers was justified given possible alternative designs, and what a reasonable consumer would expect from such a vehicle or product feature. Given that reasonable expectations regarding autonomous or semi-autonomous vehicles remain a fairly unexplored region of law, liability could be quite tricky indeed.24

Tesla, for example, has pointed to abundant warnings about Autopilot’s proper use and the driver’s continued responsibility to keep the vehicle under their control.25 In Tesla’s press release following the May fatality, the company cautioned that “every time that Autopilot is engaged, the car reminds the driver to ‘Always keep your hands on the wheel. Be prepared to take over at any time.’”26 Given Tesla’s assertion that the feature is simply an “assist” that requires the driver to maintain control of the vehicle at all times, one might ask what value the system provides at all. That is, if proper use of Autopilot does not allow the driver to remove their hands from the wheel, nor to cede “control” of their vehicle to Autopilot, is the experience substantially different from that of a non-autonomous vehicle?

Federal courts and products liability experts have long recognized that manufacturers cannot immunize themselves by simply “slapping warning labels” on dangerous products.27 The Restatement (Third) of Torts likewise provides that “instructions and warnings may be ineffective because users of the product… may be likely to be inattentive, or may be insufficiently motivated to follow the instructions or heed the warnings.”28 If one accepts the notion that an average Autopilot user might not “always keep [their] hands on the wheel,” perhaps some courts could find the warnings ineffective, as Tesla users would likely be “insufficiently motivated” to follow the instructions. After all, strict adherence to Tesla’s instructions would seem to defeat the attraction of self-driving technology in the first place.

In any event, the battle to determine liability is far from over, as highlighted by Tesla’s most recent announcement that the Autopilot feature will be temporarily disabled for drivers who “ignore repeated warnings” to take back control of the vehicle.29 Some commentators viewed this as a response to “widespread concerns that the system lulled users into a false sense of security through its ‘hands-off’ driving capability.”30 Indeed, on September 30, California regulators introduced draft regulations prohibiting manufacturers from using terms such as “autonomous” and “self-driving” when describing systems that are merely “semi-autonomous.”31 The future of self-driving cars will likely see a mix of more stringent regulations, oversight, statutory measures, and perhaps some high-profile lawsuits, at least until the technology is a little more commonplace and lawmakers better understand its complexities.

* GLTR Staff Member; Georgetown Law, J.D. expected 2018; University of Southern California, B.A. 2008.