Flickr photo by Many Wonderful Artists, https://bit.ly/2SaJc8V

Big Tech, Artificial Intelligence, and the Duty to Protect Human Rights

The rapid expansion and evolution of artificial intelligence (AI) has highlighted both the vast range of new possibilities the technology may achieve and the perils its use may cause. This duality is not news to the technology community. At the end of October, Google announced that it would be “launching a global competition to help spur the development of applications and research that have positive impacts on the field [of AI] and society at large,” after the company faced backlash earlier this year when employees objected to its contract with the US government on a drone AI initiative called “Project Maven.” In addition to funding the new competition, Google has pledged to “never develop AI weaponry and that its AI research and product development would be guided by a set of ethical principles.” Those principles will align with “internationally accepted norms,” and Google has stated that its research will not “contravene[] widely accepted principles of international law and human rights.”

Google is not the only company to face backlash over concerns about AI. Amazon’s facial recognition software recently prompted 450 Amazon employees to sign a letter “asking CEO Jeff Bezos to stop selling its facial recognition software, Rekognition, to law enforcement agencies.” The software allegedly misidentifies people with darker skin at a “disproportionately higher” rate than those with lighter skin, yet it is already in use in both Florida and Oregon. Amazon’s hosting of the software firm Palantir, “which helps immigration authorities track and deport immigrants,” has also drawn criticism. Amazon employees have been outspoken on these issues, warning that these uses of AI must stop or the company risks violating human rights.

Just as the tension between AI and basic human rights is not new to those in the field, part of the solution is also not new. Scholars now suggest that technology companies and governments look for guidance to established frameworks of international legal doctrine, such as the Universal Declaration of Human Rights, adopted by the United Nations in 1948. Frameworks like the Declaration are considered good starting points because they “represent shared global values.” A “purely legal, regulatory or compliance framework,” by contrast, is thought unable to keep pace with the constant evolution of AI; a more general set of international human rights norms, combined with regulation and “hard” laws, would allow flexibility in addressing these concerns. Furthermore, a legal framework grounded in international human rights would hold every country that uses AI to the same standard, protecting against an “AI system’s negative social impact on even one individual in places like Myanmar and the most powerful companies in Silicon Valley.”

Such efforts have already begun. In May 2018, Amnesty International and Access Now drafted the “Toronto Declaration: Protecting the rights to equality and non-discrimination in machine learning systems.” This declaration, which other international rules for AI would presumably resemble, “outlines the responsibilities of both states and private sector actors . . . including mitigating discriminatory effects, transparency, and provision of effective remedies to those harms.” While endorsements from the AI community are still being sought, it marks the first instance in which the traditional international legal framework of human rights has been applied to ethical conflicts in AI.

Questions remain about the viability of this legal approach for preserving human rights in the new landscape of AI. At this juncture, pairing established human rights concepts and existing international legal doctrines with domestic regulation may be just the kind of preventative action needed to address these serious concerns in the ever-changing world of AI.

Alexandra Day Coyle

GLTR Staff Member; Georgetown Law, J.D. expected 2020; Boston College, B.A. 2015. © 2018, Alexandra Day Coyle.