
AI on trial: How courts are litigating the GenAI boom

Dorna Moini  CEO & Founder / Gavel

· 7 minute read


With AI moving faster than ever, courts have begun to grapple with the foundational elements of how artificial intelligence is treated differently from humans and how it impacts existing laws

We’re no longer looking at metallic robots in some distant future. Artificial intelligence has not only arrived but has infiltrated almost every industry. With such disruption, of course, come many questions.

While there are early signs that regulation may not move as fast as the technology, courts have already started tackling these issues, with varying degrees of depth.

With the rise in deepfakes, can courts trust video evidence?

Last year, a California judge observed that the ubiquity of AI-created deepfakes does not give celebrities carte blanche immunity for their public statements.

In Huang v. Tesla, the bereaved family of Walter Huang, a man who died in 2018 while using Tesla’s Autopilot feature, sued the company for wrongful death in Santa Clara County Superior Court. The complaint alleged that the Tesla vehicle’s “defective state” led to Huang’s death. The plaintiffs also sought to depose Elon Musk, Tesla’s co-founder and CEO, regarding public statements he made in a 2016 recording touting Tesla’s self-driving capabilities and safety.

Tesla opposed the request, arguing that because Musk is a public figure, he is the subject of many deepfake videos, and that the authenticity of the 2016 recording was therefore in question. The Santa Clara judge found Tesla’s arguments in opposition to the deposition “deeply troubling,” commenting that Tesla’s “position is that because Mr. Musk is famous and might be more of a target for deepfakes, his public statements are immune,” and that such reasoning would allow public figures “to avoid taking ownership of what they actually say and do.”


“Right now, people talk about being an AI company. There was a time after the iPhone App Store launch where people talked about being a mobile company. But no software company says they’re a mobile company now because it’d be unthinkable to not have a mobile app. And it’ll be unthinkable not to have intelligence integrated into every product and service. It’ll just be an expected, obvious thing.”

— Sam Altman, co-founder and CEO, OpenAI


Before the case was eventually settled, the judge tentatively ordered a limited, three-hour deposition in which Musk could be asked whether he actually made the statements on the recordings.

It seems that, in California discovery at least, parties will need more solid footing than an allegation of AI-generated misinformation to fend off a discovery request.

The rights of inventors in a world of AI creators

In another legal battle, music labels filed suits in two federal district courts, the District of Massachusetts and the Southern District of New York, on June 24, 2024, suing online AI music generators for copyright infringement over their AI-generated audio content. The labels allege that an online AI music generator “can only work the way it does by [first] copying vast quantities of sound recordings from artists across every genre, style, and era,” many of which are owned or exclusively controlled by the suing labels.

Relatedly, almost exactly one year earlier, authors filed a class action against OpenAI in federal court in the Northern District of California, alleging multiple causes of action, including direct and vicarious copyright infringement, violation of the Digital Millennium Copyright Act, unfair competition, negligence, and unjust enrichment. The authors argued that OpenAI was illegally using their written works to train ChatGPT. After the judge scrutinized, among other things, the plaintiffs’ failure to allege substantial similarity between their works and ChatGPT’s output, the complaint was amended to leave only the direct copyright infringement claim. The case is still in progress.

In the patent corner of the intellectual property world, the U.S. Supreme Court in April declined to hear an appeal of the U.S. Patent and Trademark Office’s refusal to issue patents for inventions created by AI. Computer scientist Stephen Thaler had filed patent applications for two inventions that his AI system generated. The Patent and Trademark Office and a federal judge in Virginia rejected the applications on the grounds that the inventor listed on each application was not a natural person, as required by federal patent law. The Supreme Court’s refusal to take up Thaler’s appeal signals a hard boundary: inventors, for patent purposes, must be human.

For the flood of AI-related cases alleging copyright infringement of creative work that is sure to come, a large part of judges’ analysis will likely focus on whether an AI tool’s output is substantially similar to the original works fed into it. On the patent side, judges will have to supply nuance and definition about what makes an invention AI-created, including how much AI can take part in the inventing process before the end product is deemed AI-created.

AI-enhanced video evidence in criminal court

In State of Washington v. Puloka, a Washington State superior court rejected the admission of AI-enhanced video evidence in a criminal trial because, among other reasons, the forensic video analysis community does not consider the technique a reliable source of evidence.

Here, the defense sought to admit an AI-enhanced version of a smartphone video, arguing that the original was low-resolution, had substantial motion blur, and contained fuzzy images. In deciding the recording’s admissibility, the court heard testimony that the AI tool added and changed material from the original video and that, while the enhancements made the video a “more attractive product for a user,” they did not maintain the integrity of the image. As such, the forensic video analysis community would not accept the technique for evaluating video in a legal context.

The court found that using AI to enhance video at a criminal trial was a novel technique, and as such, it would have to pass the Frye test, under which “[t]he standard for admitting evidence utilizing a novel scientific theory or principle is whether it has achieved general acceptance in the relevant scientific community.” The relevant scientific community here was the forensic video analysis community, and the defense did not offer sufficient proof that AI enhancement of video evidence is generally accepted by that group.



Despite the Washington court’s decision, there is an indication that AI-enhanced audio evidence may be more welcome in the courts. While courts will still likely apply the Frye test and measure the proffered evidence against the standards codified in the relevant legislation, AI-enhanced audio, such as use of the so-called cocktail party effect to dim background noise, differs from its video counterpart in one significant way. In much AI audio enhancement, AI is used to separate material so the listener can focus on content that already exists; in the AI-enhanced video in Puloka, by contrast, AI added content.

What’s next for AI in the courts?

As state courts witness the beginning of AI-related litigation, it won’t be long before the highest court in the nation is pulled into the discussion. Given an increasingly interstate and globalized economy and the pace and depth at which AI is being incorporated into business models, social media, healthcare, and entertainment, it is not a question of if but a matter of when the Supreme Court will hear similar issues.

One thing is for sure: AI is moving faster than ever, and sooner or later, all courts will have to grapple with the foundational elements of how AI is treated differently from humans and how it impacts existing laws.


You can find out more about establishing GenAI literacy in the courts here.
