The Rising Challenge of Deepfakes and Shallow Fakes in Litigation

By Dan Regard
April 25, 2025

Dan Regard is the President and CEO of Intelligent Discovery Solutions, Inc. (iDS). He helps companies solve legal disputes through the smart use of digital evidence. He is the author of “Fact Crashing™ Methodology” and is a contributing author to multiple other books on discovery and eDiscovery.
This is the second article in a 10-part series on how technology is transforming evidence, litigation, and dispute resolution. You can find the first article here. In this installment, we’ll explore how digital evidence can be faked or modified—including the phenomenon of “deepfakes”—and how this could potentially shape the future of litigation.
The courtroom is no longer just a battleground for facts—it’s now a proving ground for digital truth. With the rise of Generative AI, litigators face an unprecedented challenge: how to separate genuine evidence from sophisticated forgeries.
False Evidence is Not New
The submission of false evidence in court is not new; it is as old as trials themselves. But over the last 20 years, the overwhelming majority of evidence submitted in court has been digital in origin, and almost all litigants have access to digital tools. This combination has made it easier than ever for people to create synthetic documents and to assume that courts won't scrutinize that evidence (because it comes from "technology").
I know this personally because we are often hired to test and validate, or dispute, the authenticity of evidence. What used to happen occasionally is now a regular occurrence.
And now we have GenAI. Even if GenAI has not made the underlying situation worse, it has certainly worsened the perception of the situation and heightened awareness of the potential for abuse.
Before digging into this further, some vocabulary review is in order. I consider "synthetic media" to be any digital file that is created artificially. Not all synthetic media is bad; there are myriad reasons for creating synthetic content, including artistic purposes, modeling, testing, or just plain amusement. But when the intent (the mens rea) behind the fabrication is to deceive, the content moves from merely synthetic to "fake."
When a file is fabricated from scratch, we call it a "deepfake." The term is most often applied to video, but it applies equally to images, audio files, and even documents such as emails, texts, or social media posts.
When a file is made by modifying an existing one, we call it a "shallow fake." This can mean pulling a video clip out of context, editing an audio file, or modifying the addresses or timestamp on an email. (A minimal sketch of how easily timestamps can be altered appears below.)
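To make the point concrete, here is a minimal sketch in Python showing how trivially a file's dates can be backdated. The filename is hypothetical, chosen only for illustration; the point is that filesystem timestamps, on their own, prove very little.

```python
import os
import time
from datetime import datetime

# Hypothetical file standing in for a piece of documentary evidence.
path = "contract_draft.docx"

print("before:", datetime.fromtimestamp(os.path.getmtime(path)))

# Backdate the file's access and modification times by one year.
# os.utime takes (atime, mtime) in seconds since the epoch.
backdated = time.time() - 365 * 24 * 60 * 60
os.utime(path, (backdated, backdated))

print("after: ", datetime.fromtimestamp(os.path.getmtime(path)))
```

This is exactly why forensic examiners look beyond any single date field to corroborating artifacts such as filesystem journals, application metadata, and server logs.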
We have seen both, but in most of the cases we work on, shallow fakes are far more pervasive than deepfakes. And while we have worked on audio and video cases, the majority of cases we see are document-based: contracts, emails, and text messages predominate.
The Liar’s Dividend
The rise in actual cases involving fabricated digital evidence, combined with heightened awareness driven by the rapid advancement and marketing of GenAI, has intensified concerns about misinformation. In an era where "fake news" has become a rallying cry, a troubling phenomenon has emerged: the Liar's Dividend. This concept refers to the advantage that individuals or parties gain by disparaging any unfavorable evidence as false, regardless of its authenticity. By casting doubt on legitimate evidence, bad actors can erode trust in the judicial process, forcing litigants and courts to expend significant resources proving that even valid evidence is real.
With this background, what is the solution to this increase in false evidence, and what is the answer to the Liar’s Dividend?
Three Steps to Detect and Verify Suspicious Evidence
After handling hundreds of disputes, we’ve identified three key steps to detect false evidence.
- Trust Your Gut: Many attorneys sense when something is off but hesitate to challenge digital evidence due to technical uncertainty. Trusting your instincts is the first step: if something feels wrong, investigate further. If it seems too good to be true, it may well be.
- Apply the Regard 3-Part Test. I call this the Regard Test because I developed it. Ask yourself three simple questions:
- Is this a key piece of evidence?
- Is there no original document or file?
- Is there a complicated story behind its existence?
If the answer to all three is yes, there's a high likelihood the evidence is falsified.
- Seek a Second Opinion: Find a way to verify or invalidate the evidence by cross-checking it against other documents for inconsistencies. If gaps remain, conduct additional discovery. Finally, consult a forensic expert, who can often spot telltale digital irregularities or metadata anomalies that signal fabrication (a simple example of one such metadata check follows this list).
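As an illustration of the kind of metadata anomaly an examiner looks for, here is a minimal sketch in Python, using only the standard library, that compares an email's sender-supplied Date header against the server-stamped Received headers. The filename is hypothetical, and this simplified check is no substitute for a full forensic examination.

```python
import email
from email.utils import parsedate_to_datetime

# Load a raw email from disk; the filename is hypothetical.
with open("disputed_message.eml", "rb") as f:
    msg = email.message_from_binary_file(f)

# The Date header is written by the sender's client and is trivially
# editable; Received headers are stamped by each relaying server and
# are much harder to forge consistently.
claimed = parsedate_to_datetime(msg["Date"])

for hop in msg.get_all("Received", []):
    # The timestamp follows the last semicolon of each Received header.
    stamp = hop.rsplit(";", 1)[-1].strip()
    try:
        received = parsedate_to_datetime(stamp)
    except (TypeError, ValueError):
        continue  # skip malformed hops in this simple sketch
    # A server stamp well before the claimed send time is a red flag
    # (small differences may be nothing more than clock skew).
    if received < claimed:
        print(f"Anomaly: hop stamped {received} precedes Date header {claimed}")
```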
As the legal system faces increasing challenges from false evidence, pre-trial evidentiary motions and stronger sanctions (under our existing rules) will become essential tools in combating digital falsification.
Rule 902(13) & 902(14): A Step Forward, But Not a Complete Solution
The 2017 amendments to the Federal Rules of Evidence (FRE) introduced Rules 902(13) and 902(14), expanding self-authentication for digital evidence. Rule 902(13) allows records generated by an electronic process or system, such as logs or transaction data, to be admitted on a written certification from a qualified person rather than live testimony. Rule 902(14) does the same for data copied from a device, storage medium, or file, such as forensic images of mobile devices or copies of mailboxes, when the copy is authenticated by a process of digital identification. These amendments streamline authentication and reduce the burden of proving the integrity of large volumes of electronically stored information (ESI) in litigation.
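In practice, Rule 902(14)'s "process of digital identification" typically means hash values: the certifying examiner shows that the copy's hash matches the original's. A minimal sketch of that comparison in Python follows; the file paths are hypothetical.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths: the image acquired at collection time and the
# working copy produced for review.
original = sha256_of("acquisition/mailbox_export.pst")
copy = sha256_of("production/mailbox_export.pst")

# Matching digests support a certification that the copy is identical
# to the source; a mismatch means the copy has been altered.
print("verified" if original == copy else "MISMATCH: investigate")
```

Note the limitation: a matching hash proves the copy is identical to the source as collected; it says nothing about whether the source itself was fabricated before collection.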
While these rules improve efficiency, they do not address the rising risks of falsified evidence, including deepfakes and synthetic documents. They work well for structured, auditable data, but may fall short when a single, critical piece of evidence is disputed. Other rules of civil procedure can supplement them, including FRCP Rule 11(b) (representations to the court), Rule 26(g) (certification of discovery responses), and Rule 34 (document production).
New amendments to the evidence rules are also being considered to address AI-generated materials.
Digital deception is evolving rapidly, and legal professionals must evolve with it. The rules of evidence are also in flux, and those who adapt now will be best positioned to navigate the future of litigation. Let’s work together to ensure authenticity is not just assumed—but provable when necessary.
Closing Thoughts: Join the Conversation
This is just one piece of the bigger conversation on the future of evidence. As legal professionals, we are relied upon to stay ahead of how emerging technologies impact investigations, case strategy, and courtroom advocacy. It’s important to partner with people who specialize in helping law firms, corporations, and regulators navigate the challenges of digital evidence—whether it’s mobile forensics, AI-generated data, or system logs that rewrite how we establish facts. Let’s continue the discussion.