AI-Generated Evidence: Challenges and Evolving Standards Under the Federal Rules of Evidence
February 25, 2025

As artificial intelligence becomes increasingly integrated into various industries, its outputs are poised to play a significant role as evidence in litigation. However, according to a Bloomberg Law article by attorneys from DLA Piper, the admissibility of AI-generated evidence under the Federal Rules of Evidence (FRE) presents novel legal challenges, particularly concerning authenticity and hearsay.
Authenticating AI-generated evidence under FRE 901 requires demonstrating that the output is what its proponent claims it to be. Yet because AI systems generate content independently, often through opaque processes, even experts may struggle to verify how a specific output was produced. Courts have already recognized these concerns: a New York court recently held that, given AI's rapid evolution and inherent reliability issues, a hearing should be required before admitting AI-generated evidence.
The article notes that the Judicial Conference's Advisory Committee on Evidence Rules has proposed amendments to address these issues. One key proposal would expand Rule 901(b)(9) to require a showing of reliability, not merely accuracy, obligating proponents of AI-generated outputs to provide evidence describing the system's training data and how it functions. The committee also seeks to tackle deepfake concerns through a burden-shifting framework: an objecting party must first show that a jury could reasonably find the evidence was fabricated or altered; only then does the burden shift back to the proponent to establish authenticity.
Recognizing that AI-generated evidence often straddles the line between expert testimony and traditional evidence, the committee has also proposed a new Rule 707. This rule would subject AI-generated outputs to the same admissibility standards that govern expert testimony under Rule 702, requiring courts to scrutinize the AI system's inputs, how it was validated, and whether it is accessible for assessment by the opposing party.
While AI's autonomous operation complicates authentication, the article says it may also help proponents overcome hearsay objections. Under FRE 801, a hearsay statement must come from a human declarant, a requirement AI outputs do not meet. Courts have already held that machine-generated statements, such as diagnostic machine results and automated transaction records, fall outside the scope of hearsay. Absent human intervention, AI-generated outputs can likely be offered for their truth without implicating the hearsay rules.
As AI technology advances, so will evidentiary standards governing its use in litigation. Law firms must stay ahead of these developments to litigate admissibility challenges effectively. Understanding the nuances of authentication and hearsay is critical, especially as AI-generated evidence faces increasing scrutiny akin to expert testimony.
Managing partners should ensure that their litigators remain informed about evolving AI capabilities, relevant case law, and proposed amendments to the FRE. Investing in legal technology expertise will be essential for navigating the complexities of AI-generated evidence in high-stakes litigation.