Regulating Privacy Dark Patterns in Practice—Drawing Inspiration from the California Privacy Rights Act
Scholars define dark patterns as user interface design choices that benefit an online service by coercing, manipulating, or deceiving users into making unintended and potentially harmful decisions. Although dark patterns are a subject of interest to policymakers around the globe, to date, little legislation has been enacted to regulate them. This Article analyzes the definition of dark patterns introduced by the California Privacy Rights Act (CPRA), the first legislation to explicitly regulate dark patterns in the United States. We discuss the factors that make defining and regulating privacy-focused dark patterns challenging, review current regulatory approaches, consider the challenges of measuring and evaluating dark patterns, and provide recommendations for policymakers. We argue that California’s model offers the state an opportunity to take a leadership role in regulating dark patterns generally and privacy dark patterns specifically, and that the CPRA’s definition of dark patterns, which relies on outcomes and avoids targeting questions of designer intent, presents a potential model for others to follow. Because many dark patterns do not explicitly rely upon deception, we argue that regulating dark patterns necessitates expanding the Federal Trade Commission’s regulatory authority to include coercion and manipulation, as well as an embrace of performance-based standards, a change that state authorities may also wish to adopt. We conclude by suggesting that in regulating dark patterns, policymakers may also have the opportunity to introduce a paradigm shift in how to measure, evaluate, and enforce consumer privacy protections by drawing on human-centered design as a model.
Jennifer King & Adriana Stephan
Ph.D., Stanford Institute for Human-Centered AI.