The Future of Automation in the Workplace: Potential Criminal Liability Issues
Robert Williams was a twenty-five-year-old assembly line worker at a Ford Motor plant when he died in 1979. Williams was collecting parts when a robotic arm, also attempting to gather parts, struck and killed him instantly. It was the first recorded death of a human caused by the actions of a robot.1 Ultimately, his death was determined to be an accident attributable to a lack of safety measures.2 However, as technology creeps closer to true artificial intelligence, incidents like Williams’s may no longer be treated as industrial accidents that result in product liability reviews. In the near future, harm caused by artificial intelligence may require an in-depth examination of criminal liability, and our current framework is not equipped to hold artificial intelligence systems criminally liable. It is critical that the legal community further define the relationship between criminal liability and artificial intelligence, as this technology has the potential to become what Elon Musk has called the “biggest existential threat” to humanity.3
Machine automation and artificial intelligence are affecting commercial production in a multitude of ways and are drastically transforming the workplace. Amazon is one of the companies embracing this change; in early September, the New York Times became the first news outlet to report on Amazon’s recent implementation of an advanced robotic arm that is integral to a segment of its warehouse operations.4 Amazon has integrated over 100,000 of these robotic systems into its arsenal, a primary reason for its ability to deliver millions of items to customers within a two-day period.5 Intelligent robots operate at nearly every stage of warehouse operations, assisting employees with organizing, sorting, packing, and restocking inventory.6
Amazon is “on the forefront of automation, finding new ways of getting robots to do the work once handled by employees”; current automation optimizes supply chain processes and also relieves employees who previously performed more physically laborious, tedious jobs.7 Companies like Amazon are constantly searching for ways to improve operations,8 and startups such as Kindred are working to provide artificial intelligence to warehouse robots.9 Amazon’s thirst for optimized logistics is undeniable, as evidenced by its annual innovation competition.10 It is only a matter of time before the company has sophisticated, artificially intelligent robots on its manufacturing floors. Although the benefits of implementing these systems are obvious and vast, the possibility of unintended consequences exists, which raises the question: what happens when one of these robots acts unpredictably and inflicts harm on someone?
Jack Balkin, a Yale Law School professor, explains that robots have the ability to inflict physical harm “because of their programming, or more precisely, the cumulative effect of their hardware, operating system, and software.”11 Artificial intelligence systems have the potential to develop solutions and make decisions that their human counterparts may have failed to consider or would have rejected in favor of more appealing outcomes. A key component of artificial intelligence is probabilistic reasoning: algorithms generally select the most favorable outcome based on their environmental parameters.12 This inherently means that these systems may act unpredictably, because the outcomes are not determined beforehand. But humans are unpredictable too. The difference is that humans have an awareness of their actions and can possess a “guilty mind,” whereas robots cannot.
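To make the source of this unpredictability concrete, the following is a minimal, hypothetical sketch in Python of probabilistic action selection; the action names, baseline utilities, and noise model are illustrative assumptions, not a description of any actual warehouse-robot software. Because the utility estimates shift with noisy readings of the environment, the action the system selects is not fixed in advance.

    import random

    # Minimal, hypothetical sketch of probabilistic action selection.
    # Action names and baseline utilities are illustrative assumptions only.
    ACTIONS = ["grip_part", "reposition_arm", "wait_for_clearance"]

    def estimate_utility(action, sensor_noise=0.1):
        """Return a noisy estimate of how favorable an action appears.

        The noise term stands in for imperfect sensor readings of the
        environment; the baselines are assumed values for illustration.
        """
        baseline = {"grip_part": 0.8, "reposition_arm": 0.6, "wait_for_clearance": 0.3}
        return baseline[action] + random.gauss(0, sensor_noise)

    def choose_action():
        """Select whichever action currently scores highest.

        Because the estimates vary with the noisy environmental parameters,
        the chosen action is not determined beforehand.
        """
        return max(ACTIONS, key=estimate_utility)

    if __name__ == "__main__":
        print(choose_action())

Run repeatedly, this toy agent will usually grip the part but will occasionally reposition or wait instead, illustrating how even a simple utility-maximizing system produces behavior its designers did not fix in advance.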
The difference in awareness presents a problem for our current understanding of criminal liability. Existing criminal liability analysis turns on two elements: mens rea (guilty mind) and actus reus (guilty act). The Model Penal Code separates mental culpability into four distinct levels: purposely, knowingly, recklessly, and negligently.13 Strict liability offenses also exist, for which intent is not required for a criminal conviction. This framework exists to determine the mental awareness of the defendant, but it is not applicable to artificial intelligence systems because they are not yet truly capable of intent or knowledge of wrongdoing, largely because the cognitive spectrum of these systems does not yet include consciousness.14
Within our current framework, there are three possible solutions to this unique legal challenge. First, the judicial system could employ strict liability, but as Ryan Calo warns, this potentially would mean that “society could witness a barrage of activity that would be illegal were it carried out or even sanctioned by people.”15 Second, robotic systems could be held to a reasonable person standard, just as someone fit to stand trial would be, but this would not allow for any cross-examination or inquiry into the mental culpability of the system at the time of the decision.16 Third, we could hold the entity that employed the artificial intelligence responsible, but this is likely unfair, as anticipating the future behavior of these systems is extremely difficult due to their complex programming.17 This approach would also likely have a chilling effect on artificial intelligence, halting the development and deployment of these systems.
These solutions could all prove inadequate, and a separate and distinct legal liability framework may need to be developed for criminal matters involving artificial intelligence systems. Regardless, these questions require immediate attention, as artificial intelligence is developing rapidly and its progress has surprised even leaders in the field, such as Sergey Brin, the co-founder of Google.18
GLTR Staff Member, Georgetown Law, J.D. expected 2018; West Point, B.S. 2004. ©2018, John W. Christie