
Moving Beyond The Devil You Know: The Need For Substantive Regulation of Algorithmic Decision-Making Systems
The willingness to be exposed to risk—to trust another—enables individuals to cooperate and accomplish more together than would be possible individually.1 The effects of this trust resonate throughout society even when they are not outwardly obvious:
We trust that architects and builders have created bridges that will support us when we cross them. We trust that merchants will accept the small, green pieces of paper (or digital code) we’ve earned in exchange for goods and services. We trust that airplanes will arrive safely and to the correct airport. We trust that professionals in our service will act in our best interest, and we trust that our friends will support us and look out for us. Without trust, our modern systems of government, commerce, and society itself would crumble.2
Fostering this trust is not a static pursuit. As modern interconnectivity becomes increasingly digital, entities ranging from local grocery stores to world governments collect, store, and use information about us.3 In turn, regulatory efforts have focused on promoting trust in these information-based relationships.4 In the context of privacy and data security, the Fair Information Practice Principles, developed by the U.S. federal government in the 1970s, provided a theoretical foundation for trust-promoting regulations applicable to information relationships between consumers and data collectors.5
Harsimar Dhanoa, Associate, Hogan Lovells US LLP; Georgetown University Law Center, J.D. 2020.