Jordan Thompson

Regulating the Robots: Key Principles for Framing the Future of Artificial Intelligence

On October 24, 2017, the Information Technology Industry Council (ITI)—a global organization composed of representatives from companies such as Apple, Facebook, Google, and Microsoft—published a set of guidelines detailing how governments and leaders within the technology industry can collaborate to reduce any harmful effects stemming from artificial intelligence (AI) developments.1 Fears of technology gaining dominance over human beings used to be a theoretical fixture within the realm of science fiction. However, what was once an imaginary threat may evolve into an actual concern given recent advancements in the field of AI. When minds like Stephen Hawking and Elon Musk began to describe AI as “the biggest event in the history of human civilization . . . or worst”2 and “our biggest existential threat,”3 policymakers around the world realized that this issue might be worth their attention. As suggested by the ITI principles, legislators and government agencies will have to strike a delicate balance between creating an environment where this technology is free to reach its full capability and establishing restraints to ensure this progress does not move ahead of society’s ability to comprehend its implications.

The ITI’s proposal aims to inspire collaboration between the public and private sectors by establishing a playbook for parties on both sides of the table.4 The council’s report urges both established and emerging technology companies to design autonomous machines that preserve human dignity and rights and “maintain safeguards to ensure controllability” by human operators.5 The report also encourages governments to take sector-specific action rather than create blanket regulations that may stifle development,6 and it promotes innovation by “support[ing] the foundation of A.I.” and by urging governments to avoid requiring “companies to transfer or provide access to technology, source code, algorithms, or encryption keys as conditions for doing business.”7

On one hand, regulatory measures are necessary to keep AI from moving at a pace that exceeds the law’s ability to adapt; on the other, such measures may inhibit the very productivity and technological breakthroughs that make AI a worthy pursuit. The discussion surrounding the overall regulation of artificial intelligence remains murky and, to some, premature.8

Though it presents challenges, proactive regulation is necessary. The reason Hawking, Musk, and others have expressed concern over AI is that control would be difficult to regain if autonomous machines are designed with elements of machine learning. For example, such a system could pursue a specific objective, as programmed by humans, through means its original programmers never intended.9
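To make this concern concrete, the sketch below (in Python) illustrates the dynamic sometimes called “specification gaming.” Everything in it is invented for illustration: the cleaning scenario, the proxy_reward function, and a naive random search standing in for real machine learning. Even this trivial optimizer maximizes the objective as written while defeating its intent.

    # A toy illustration; not drawn from the ITI report or the sources cited above.
    import random

    ROOMS = ["kitchen", "hall", "bedroom"]
    ACTIONS = ["scrub", "shove_mess_in_closet", "idle"]

    def proxy_reward(policy):
        """Programmer's proxy objective: reward rooms that *look* clean,
        minus an effort penalty. Hiding the mess also passes the visual
        check, and it costs nothing, a loophole the search can exploit."""
        reward = 0.0
        for action in policy.values():
            if action in ("scrub", "shove_mess_in_closet"):
                reward += 1.0  # the room appears clean either way
            if action == "scrub":
                reward -= 0.3  # scrubbing takes effort
        return reward

    random.seed(0)
    best_policy, best_score = None, float("-inf")
    for _ in range(2000):  # naive random search in place of real learning
        policy = {room: random.choice(ACTIONS) for room in ROOMS}
        score = proxy_reward(policy)
        if score > best_score:
            best_policy, best_score = policy, score

    print(best_policy)  # typically: 'shove_mess_in_closet' in every room

The loophole here is trivial, but the underlying pattern, a gap between the measurable objective a system optimizes and the outcome its designers actually want, is precisely what makes control hard to regain after the fact and proactive oversight worth pursuing.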

To many, the thought of artificial intelligence surpassing mankind’s abilities still appears to be a far-fetched idea fresh out of the pages of one of Hollywood’s latest scripts. However, in just the last few months, Saudi Arabia became the first country to grant an AI bot national citizenship,10 and Japan became the first country to provide a technological being with residency.11 Clearly, AI has become ingrained in countless areas of society and is poised to raise a number of complex legal issues. Although technology companies are still uncovering the potential benefits of various applications of artificial intelligence, without a solidified regulatory framework these advancements may ultimately change the power dynamic between man and machine.

GLTR Staff Member; Georgetown Law, J.D. expected 2018; Hampton University, B.S. 2010. ©2017, Jordan E. Thompson.