In the January 7, 2026 edition of The Legal Intelligencer, Edward Kang writes, “Authenticity Under Pressure: Rethinking Rule 901 in the Age of AI.”
For decades, Federal Rule of Evidence 901 has been a pillar of evidentiary stability. Its intentionally flexible standard—requiring only “evidence sufficient to support a finding that the item is what the proponent claims it is”—easily accommodated evidence from handwritten letters to surveillance footage. Authentication disputes were rarely dispositive, typically resolved through minimal foundation and common sense.
Enter the age of artificial intelligence (AI). The advent of generative artificial intelligence—specifically, tools capable of producing hyper-realistic audio, video, images and text (deepfakes)—has fractured the rule’s foundational assumptions. When synthetic media can replicate a person’s likeness or voice with near-perfect fidelity, traditional authenticity markers become unreliable. Evidence can be entirely fabricated yet appear genuine, leaving opposing parties without meaningful tools to prove manipulation.
This technological shift has triggered a parallel evolution in law. The conversation now spans from reforming Rule 901 to proposing a new Federal Rule of Evidence 707 specifically for AI-generated evidence. Simultaneously, ethics regulators are clarifying that lawyer competence requires understanding these technologies. The result is a critical convergence: the mechanics of authentication are now inseparable from counsel’s duty of competence under ABA Model Rule 1.1.
Rule 901’s Analog Assumption in a Digital World
Rule 901 was conceived in an era when forgery was typically crude, detectable, and difficult to scale. Its illustrative examples—testimony based on personal knowledge, distinctive characteristics, chain of custody—presume authenticity can be assessed through human perception and circumstantial context.
Generative AI dismantles this logic. Modern models produce videos of individuals saying things they never uttered, audio clips indistinguishable from real recordings, and documents that mimic unique writing styles—often complete with consistent metadata and artifacts that evade casual scrutiny. In this context, a witness’ assurance that evidence “looks real” offers scant probative value.
Courts are sensing this mismatch. While few opinions directly confront deepfakes, judicial unease with superficial digital authentication is growing. See, e.g., Alford v. Commonwealth, No. 2022-SC-0278-MR, 2024 WL 313431, at *6 (Ky. Jan. 18, 2024) (stating that the emergence of artificial intelligence, with the capacity and initiative to manipulate digital media, “will only serve to further compromise our determinations of authenticity unless such advancements are both recognized and addressed by our courts”). Some courts are “mindful” that the rules of evidence “may need to adapt,” yet still apply the same “low threshold for authentication” of electronic evidence as instructed by the current rules. See State v. Ziolkowski, 329 A.3d 939, 950 (Conn. 2025). The result is unpredictability: similar evidence may be admitted in one courtroom and excluded in another, based largely on a judge’s comfort with the technology rather than a consistent doctrinal framework. “As artificial intelligence progresses, battles over the accuracy of computer images and manipulation of deepfakes can be expected to intensify.” See Pegasystems v. Appian, 904 S.E.2d 247, 279 (Va. App. 2024), appeal granted (Mar. 7, 2025); see also People v. Smith, 969 N.W.2d 548, 565 (Mich. App. 2021) (“we are mindful that in the age of … so-called deep fakes, a trial court faced with the question whether a social-media account is authentic must itself be mindful of these concerns”).
The Case for Reforming Rule 901: Asymmetry and Reliability
The push for reform centers on two flaws exposed by AI: asymmetry and compromised reliability. A proponent of synthetic evidence needs only to clear Rule 901’s low bar of plausibility. The opponent, however, may bear an impossible burden to prove falsity without access to proprietary tools, training data, or generation logs. The current rule relies on adversarial testing, but the technology itself can obscure any meaningful test.
Proposed reforms vary. Some suggest amending Rule 901’s examples to explicitly reference AI-generated evidence, signaling that courts may demand technical proof. Others advocate a more structural shift, placing an affirmative burden on the proponent of AI-susceptible evidence to demonstrate authenticity through forensic analysis.
Skeptics warn that rewriting Rule 901 risks complexity and could disadvantage less-resourced litigants. They argue that judicial discretion and evolving common law—which have adapted the rule to include email and digital photos—can suffice. Yet even skeptics acknowledge deepfakes present a qualitative leap: unlike prior technologies, generative AI is designed to evade detection. This reality fuels the argument for a more targeted solution.
Proposed Rule 707: A Surgical Response to Synthetic Evidence
The most direct proposal is a new Federal Rule of Evidence 707. While formulations vary, a leading draft proposes: “Rule 707. Machine-Generated Evidence. When machine-generated evidence is offered without an expert witness and would be subject to Rule 702 if testified to by a witness, the court may admit the evidence only if it satisfies the requirements of Rule 702(a)-(d).”
In essence, Rule 707 would do three things: (1) mandate disclosure when a party offers AI-generated or altered evidence; (2) authorize courts to demand technical proof of authenticity (e.g., expert testimony, metadata analysis, system documentation); and (3) empower judges to exclude evidence if the risk of deception substantially outweighs its probative value, even if Rule 901 is nominally satisfied.
Proponents argue that Rule 707 restores balance by aligning standards with technological risk, providing courts with a clear doctrinal framework. Opponents fear line-drawing problems (what counts as “AI-generated”?) and worry it could chill legitimate technological uses. Regardless of its adoption, the proposal signals a consensus: deepfake evidence cannot be treated as just another digital exhibit.
The New Authentication Battleground: From Lay Perception to Expert Analysis
A practical consequence is the forensic turn in authentication. Detecting AI-generated media often requires specialized tools and knowledge that extend beyond a lay understanding. Following trends under Daubert and Rule 702, courts are increasingly treating authenticity as a gatekeeping question, resolved at the time of admissibility, rather than a weight question for the jury.
This raises the stakes for practitioners. Authentication is no longer a box-checking exercise but an architectural component of case strategy. It demands early investigation, strategic expert selection, and targeted discovery (e.g., requests for native files, generation logs, model training data). Practitioners who fail to meet these requirements risk exclusion or adverse credibility findings. Conversely, unsupported accusations of “deepfake” manipulation may be seen as speculative or dilatory.
Model Rule 1.1: The Ethical Imperative of AI Literacy
These evidentiary shifts directly implicate ABA Model Rule 1.1 (Competence). Comment 8 explicitly requires understanding the “benefits and risks associated with relevant technology.” While once applied to e-discovery, this now encompasses generative AI.
Competence does not demand that lawyers become engineers. It does require baseline AI literacy: understanding the capabilities and limits of these tools, recognizing red flags for synthetic evidence, knowing when to consult an expert, and appreciating the limits of detection technologies. A lawyer who introduces digital media without considering manipulation risk, or who fails to investigate credible challenges, may breach the duty of thorough preparation.
Similarly, a lawyer who reflexively cries “deepfake!” without factual grounding risks violating duties of candor and professionalism. Courts show diminishing tolerance for unfounded technological skepticism. Bar ethics opinions now emphasize the duty to supervise litigation technology, vet experts, and advise clients on associated risks. AI competence is transitioning from a niche specialty to a core component of general litigation practice.
Strategic Imperatives for the Modern Litigator
The convergence of Rule 901, proposed Rule 707, and Model Rule 1.1 demands a strategic recalibration. Lawyers must treat potentially AI-generated evidence with the rigor applied to expert testimony: early planning, documented methodology and a clear explanatory narrative.
Key questions must guide the process:
Provenance: Who created the evidence, and using what tools/models?
Process: What data trained the system? What safeguards were in place?
Verification: What independent, technical verification exists (metadata, hash values, audit logs)?
Discovery should be tailored to answer these questions. Equally crucial is judicial education. Many judges are still developing fluency with AI. Clear, restrained explanations of the technology and its uncertainties are more persuasive than alarmism. The goal is to provide courts with a principled framework for decision-making, not to sow indiscriminate doubt.
Conclusion: Competence Redefined
Whether Rule 901 is amended or Rule 707 adopted remains uncertain. (The public comment period for proposed Rule 707 is open until February 16, 2026.) Yet the trajectory is clear. Authentication doctrine is being reshaped by a technology that challenges bedrock assumptions about truth and reliability. In parallel, professional norms are evolving to mandate a responsible and informed understanding of the same technology.
In this new environment, legal competence comprises three components: technical, strategic and ethical. Practitioners who integrate AI literacy into their practice and treat authentication as a substantive battleground will protect both their clients and their credibility. Those who do not may find that the most persuasive evidence in their case is also the most vulnerable to attack.
Edward T. Kang is the managing member of Kang Haggerty. He devotes the majority of his practice to business litigation and other litigation involving business entities. Contact him at ekang@kanghaggerty.com.
Reprinted with permission from the January 7, 2026 edition of “The Legal Intelligencer” © 2026 ALM Global, LLC. All rights reserved. Further duplication without permission is prohibited. Request academic re-use from www.copyright.com. All other uses, submit a request to asset-and-logo-licensing@alm.com. For more information visit Asset & Logo Licensing.