Extraordinary Meaning: Judge Newsom’s A.I. Experiments in Textualist Interpretation
This Note analyzes two concurring opinions by Judge Kevin Newsom of the United States Court of Appeals for the Eleventh Circuit. In both concurrences, Judge Newsom advances the idea that large language models (LLMs) are useful in determining ordinary meaning for purposes of textual interpretation. This Note takes the position that practical issues with LLMs currently make their widespread adoption for textual interpretation undesirable. It assumes, however, that the use of artificial intelligence (AI) in determining ordinary meaning will become more common following Judge Newsom’s efforts, and it provides recommendations for best practices in applying this technology. This Note also builds on Judge Newsom’s theory that similarities among the responses generated when an LLM is prompted repeatedly for the ordinary meaning of a given term might provide evidence of that meaning as a conceptual “common core.” Specifically, this Note offers a novel approach by which interpreters might use LLMs to identify conceptual patterns among generated responses, potentially revealing previously unconsidered analytical dimensions of interpretive questions. This approach involves generating large numbers of ordinary-meaning responses and identifying patterns through two levels of forced categorization.
Finally, this Note explores how LLMs, and by extension the recommended best practices, might serve to advance other purported motivations of textualist interpretation, including decisional predictability, fair notice, and legal stability.
Parker Miller
Special thanks to Professor Kevin Tobia for his expertise and guidance in developing this Note. My sincere appreciation to Georgetown University Law Center and the many wonderful editors at the Georgetown Law Technology Review for making publication of this Note possible. Thanks also to my colleagues who provided editorial assistance, including Retired Judicial Secretary Jeannette Miller.