According to conventional wisdom, legislative efforts to limit platform-based electoral manipulation—especially laws that go beyond simply mandating additional disclosure about advertising expenditures—are most likely doomed to swift judicial invalidation, for two reasons. First, although one might wonder whether the data-driven, algorithmic activities that enable and invite such manipulation ought to count as protected speech at all, the Court’s emerging jurisprudence on the baseline coverage of constitutional protection for speech seems poised to sweep many such information-processing activities within the First Amendment’s ambit. Second, assuming First Amendment coverage, regulation of such activities will likely trigger strict scrutiny. In this Essay, I bracket questions about baseline coverage and focus on the prediction of inevitable fatality.
Legislation aimed at electoral manipulation rightly confronts serious concerns about censorship and chilling effects, but the ways that both legislators and courts approach such legislation will also be powerfully shaped by framing choices that inform assessments of whether challenged legislation is responsive to claimed harms and appropriately tailored to the interests it assertedly serves. In Part II of this Essay, I identify three frames conventionally employed in evaluating the design of speech regulation — the distribution bottleneck, the rational listener, and the intentional facilitator — and explain why each is ill-suited to the platform-based information environment, which presents different incentives and failure modes. In their place, I offer the platform itself as a new frame. Part III defines that frame more precisely, identifies the harms and interests it brings into focus, and offers some preliminary thoughts on the kinds of legislation it might permit.