Machines Ascendant: Robots and the Rules of Evidence
Once, courts eschewed “the spector [sic] of trial by machine” and the possibility that “each man’s sworn testimony may be put to the electronic test.”1 Judges worried “jurors w[ould] abdicate their responsibility for determining credibility, and rely instead upon the assessment of a machine.”2 Forty years later, that fear has metamorphosed into trusting, even welcoming, machine evidence in place of human accusers.3 But these “machine accusers,” as creations of imperfect humans, are fallible. And as tools operated by imperfect human agents, even an otherwise neutral machine can be made to advance an ulterior agenda.4 Machines warrant no blind faith, and whatever trust they receive must be earned through the crucible of the rules of evidence.
Today’s robotic offerings look increasingly like the science fiction of years past, and their ascendance has only just begun. Trial by machine is no longer a specter but a present reality. In a world where machines increasingly assume the “accuser” roles previously filled primarily by human actors in criminal trials, how do the rules of evidence apply? What rights does a criminal defendant have against robotic accusers? Who must testify to authenticate machine-generated testimony? What are the consequences of defining statements made by machines as non-hearsay accusatory statements? This article analyzes these questions in the criminal context.
Associate Professor, Barry University Law School; LL.M., Columbia University Law School; J.D., Florida State University College of Law. I am deeply thankful to: Barry University Law School for supporting this article with a research grant; Michael McGinniss, Michael Morley, and Mark Summers, who previously offered helpful discussion on this topic; my research assistants, Jennifer Barron and Richard Pallas; the forensic analysts who spoke with me about this article; and the Georgetown Law Technology Review for their helpful edits. Any mistakes herein are my own.