AI Is Coming, But the Rules Aren’t Ready

Rule 901 of the Federal Rules of Evidence governs the authentication of evidence in federal courtrooms, requiring the proponent to produce evidence sufficient to support a finding that an item is what the proponent claims it is. In April 2024, the Advisory Committee on Evidence Rules considered amendments to Rule 901 that would address potentially AI-generated evidence, but it declined to adopt them. Daniel J. Capra, the Fordham Law professor who serves as reporter to the Committee, believes a cautious approach is preferable while the technology is advancing so rapidly: “It surely makes sense to monitor the case law for (at least) a year to see how the courts handle AI-related evidence under the existing, flexible, Federal Rules.”

The Committee’s “cautious approach” of doing nothing is, in fact, a reckless one. The Committee itself acknowledged as much when discussing amendments to Rule 702 that dealt with evaluating AI as an “expert witness,” during which Capra recognized the importance of having a rule “in the bullpen” to deal with AI in the courtroom. The rapid advance of AI technology is not a reason to postpone new rules but a reason to create them as soon as possible. Given the unique challenges that AI-generated content presents, the Committee must change course and amend Rule 901 in 2025 or risk leaving courts to wade through a deluge of AI-related evidentiary questions unguided.

AI-Generated Evidence Presents Unique Challenges in the Courtroom
AI presents unprecedented evidentiary challenges, and it is imprudent to leave courts without rules for evaluating the authenticity of evidence. The proliferation of AI tools enables almost anyone to create false video, audio, or photographs that can later surface in court. Despite the best efforts of the tech community, there are no consistently reliable tools for detecting AI-generated media. As it stands, questions about the authenticity of evidence will often have to be decided by the jury. This creates two risks: juries may believe that false evidence is real, and they may believe that real evidence is false.

By and large, people have difficulty identifying deepfakes. Recently, a Baltimore County principal was the subject of a social media firestorm over a purported recording of him engaging in a racist rant about Black students. A police investigation later determined that the recording was a fake created by a school employee with whom the principal had a payment dispute. While the police in that case were able to determine the origin of the recording, defendants will not always benefit from a thorough investigation into whether the evidence against them is faked.

On the other hand, as public awareness of AI grows, it will become easier for anyone to claim that any piece of evidence is a machine-generated fake. High-profile defendants are already alleging that evidence against them is AI-generated. While courts have so far been unconvinced by such claims, the opportunity to make them presents what digital forensics expert Hany Farid calls a classic “Liar’s Dividend”: as photos, videos, and recordings become easier to fake, the public places less faith in them, and bad actors can exploit that skepticism to discredit genuine evidence against them.

Rule 901 Needs a New Subsection for Authentication of AI Evidence
In light of the difficulties juries may face in determining the authenticity of evidence, the Committee must approve changes to Rule 901 as soon as possible. These changes should take the form of a new subsection, 901(c), specifically addressing the authentication of AI evidence. Two proposals for such a subsection were brought before the Committee this year: one by Paul W. Grimm and Maura R. Grossman, and the other by Professor Rebecca Delfino. While the Committee did not move forward with either, each contains ideas the Committee should consider next year.

Grimm and Grossman’s proposed 901(c) centers on what to do when the authenticity of evidence is uncertain. The rule would allow a judge to take the determination of authenticity of computer-generated evidence away from the jury when there is a genuine controversy between the parties, so long as doing so does not unduly burden the proponent’s case. While this could mitigate some of the prejudicial effects that AI-generated evidence has on juries, it does little to counter the “Liar’s Dividend,” and it does nothing to assure the jury that the evidence introduced is, in fact, authentic.

Professor Delfino’s proposal is significantly stricter. Her proposed 901(c) would take judgments about the authenticity of all audiovisual evidence out of the jury’s hands altogether: judges would conduct mandatory evaluations of authenticity outside the presence of the jury, and, as Delfino explains, jurors would then be instructed that evidence the court deems authentic must be treated as authentic. This would nearly eliminate the “Liar’s Dividend,” since parties could no longer exploit jurors’ skepticism about the authenticity of evidence. It is a more effective rule than Grimm and Grossman’s, but it is likely overbroad, as it applies to all audiovisual evidence rather than specifically to electronic evidence. The rule would cover both modern digital video recordings and video cassettes from the 1980s, and it could bog courts down in authenticity determinations beyond what is necessary to guard against AI-manipulated evidence. Moreover, instructing a jury to set aside its own judgment and skepticism is both hard for a court to enforce and hard for a juror to obey.

Conclusion
To craft a maximally effective rule, the Committee must draw on both proposals to address the problems with AI evidence. The new rule should be more lenient than Delfino’s and stricter than Grimm and Grossman’s. This can be achieved by taking the basic structure of Delfino’s proposal and limiting its scope to electronic evidence, as in Grimm and Grossman’s. Additionally, rather than prohibiting juries from considering whether authenticated evidence might be false, it would be more effective to prohibit the parties from alleging that evidence is false after it has been authenticated. This would significantly blunt the “Liar’s Dividend” without being so overbroad as to become unduly burdensome.

Regardless of what rule the Committee ultimately adopts, courts need a rule “in the bullpen” for authenticating AI evidence, and they need it before AI-generated images and video become so lifelike as to be entirely indistinguishable from the real thing.

James Bickford

GLTR Staff Editor; Georgetown University Law Center, J.D. expected 2027; Lafayette College B.A. 2018.