
Deepfakes: Federal and state regulation aims to curb a growing threat

Michelle M. Graham, Senior Legal Editor / Law Department / Thomson Reuters Practical Law

· 6 minute read

From Pope Francis to Taylor Swift, deepfakes have already caused an uproar in the public sphere. While there is no federal law specifically overseeing the technology, a patchwork of federal and state legislation aims to govern its use.

Is it real, or is it a deepfake? Deepfakes are simulated images, audio recordings, or videos that have been convincingly altered or manipulated to misrepresent someone as saying or doing something that the person did not actually say or do.

In March 2023, for example, a deepfake image of Pope Francis wearing a white puffer coat went viral on social media, confusing millions of viewers. In January 2024, sexually explicit deepfake images of Taylor Swift circulated on social media, causing an uproar among her millions of fans and in the news media.

Those images and the artificial intelligence (AI) tools that create deepfakes raised public awareness about the significant risks posed by the unauthorized creation, disclosure, and dissemination of these digital forgeries, which can result in defamation, intellectual property (IP) infringement, breach of publicity rights, harassment, fraud, blackmail, election interference, and incitement to violence and social and civil unrest.

Deepfake images and recordings are typically created with generative AI (GenAI) using two artificial neural networks that compete within a generative adversarial network (GAN), a deep learning technique. One network, the generator, creates the image or recording; the other, the discriminator, tries to detect whether that output is fake. The generator analyzes training data, extracts its key features, and produces synthetic outputs, which are sent to the discriminator to be classified as real or artificial, such as a manipulated voice recording.

The generator and discriminator form a feedback loop: the generator produces increasingly convincing artificial outputs, and the discriminator becomes increasingly adept at detecting them. The loop repeats until a deepfake recording or image of the desired quality is produced.
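
To make this feedback loop concrete, the sketch below trains a toy GAN on one-dimensional data in Python using PyTorch. It is a minimal illustration only: the network sizes, learning rates, and the real_batch helper are assumptions chosen for demonstration, not components of any actual deepfake tool.

import torch
import torch.nn as nn

# "Real" data the generator must learn to imitate: a shifted Gaussian.
# (Illustrative stand-in for real images or voice recordings.)
def real_batch(n):
    return torch.randn(n, 1) * 1.5 + 4.0

# Generator maps random noise to a synthetic sample; discriminator
# outputs the probability that a sample is real.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # Discriminator step: learn to score real samples 1 and fakes 0.
    fakes = generator(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(discriminator(real_batch(64)), torch.ones(64, 1))
              + loss_fn(discriminator(fakes), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: adjust weights so the discriminator scores
    # freshly generated fakes as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(64, 8))),
                     torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

After a few thousand iterations the generator's samples approximate the target distribution; the same adversarial dynamic drives deepfake quality upward in far larger systems.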

Federal legislation to combat deepfakes

Currently, no federal legislation has been enacted in the United States that comprehensively bans or regulates deepfakes. However, the Identifying Outputs of Generative Adversarial Networks Act requires the Director of the National Science Foundation to support research on GAN outputs, including the development of measurements and standards needed to examine outputs of GANs and any other comparable techniques developed in the future.

Congress is considering additional legislation that, if passed, would regulate the creation, disclosure, and dissemination of deepfakes, including:

    • the Deepfake Report Act of 2019, which would require the Science and Technology Directorate of the U.S. Department of Homeland Security to report at specified intervals on the state of digital content forgery technology;
    • the DEEPFAKES Accountability Act, which aims to protect national security against the threats posed by deepfake technology and to provide legal recourse to victims of harmful deepfakes;
    • the DEFIANCE Act of 2024, which would improve rights to relief for individuals affected by non-consensual intimate digital forgeries; and
    • the Protecting Consumers from Deceptive AI Act, which would require the National Institute of Standards and Technology to establish task forces to facilitate and inform the development of technical standards and guidelines for identifying content created by generative AI, and would require audio or visual content created or substantially modified by GenAI to include a disclosure acknowledging its GenAI origin.

States pursue deepfake legislation

In addition, several states have enacted legislation to regulate deepfakes, including:

    • Texas SB 751 — makes it a criminal offense to fabricate a deceptive video with intent to injure a candidate or influence the outcome of an election.
    • Florida SB 1798 — criminalizes images created, altered, adapted, or modified by electronic, mechanical, or other means to portray an identifiable minor engaged in sexual conduct.
    • Louisiana Act 457 — criminalizes deepfakes involving minors engaging in sexual conduct.
    • South Dakota SB 79 — revises laws related to the possession, distribution, and manufacture of child pornography to include computer-generated child pornography, defined as any visual depiction of an actual minor that has been created, adapted, or modified to depict that minor engaged in a prohibited sexual act; an actual adult that has been created, adapted, or modified to depict that adult as a minor engaged in a prohibited sexual act; or an individual indistinguishable from an actual minor created using AI or other computer technology capable of processing and interpreting specific data inputs to create a visual depiction.
    • New Mexico HB 182 — amends and enacts sections of New Mexico’s Campaign Reporting Act by adding disclaimer requirements for advertisements containing materially deceptive media and creates the crime of distributing or entering into an agreement with another person to distribute materially deceptive media.
    • Indiana HB 1133 — requires certain election campaign communications that contain fabricated media to include a disclaimer. The legislation also permits a candidate depicted in fabricated media that does not include a required disclaimer to bring a civil action against specified persons.
    • Washington HB 1999 — relates to fabricated intimate or sexually explicit images and depictions. The law creates civil and criminal legal remedies for victims of sexually explicit deepfakes.
    • Tennessee Ensuring Likeness, Voice, and Image Security (ELVIS) Act — updates and replaces the state’s Personal Rights Protection Act of 1984 to protect an individual’s name, photograph, voice, or likeness; provides for liability in a civil action for the unauthorized creation and distribution of a person’s photograph, voice, or likeness; and imposes liability on persons who distribute, transmit, or otherwise make available technology whose primary purpose is the unauthorized use of a person’s photograph, voice, or likeness.
    • Oregon SB 1571 — requires a disclosure of the use of synthetic media in election campaign communications.
    • Mississippi SB 2577 (effective July 1, 2024) — creates criminal penalties for the wrongful dissemination of digitizations, commonly called deepfakes, defined as the alteration of an image or audio in a realistic manner utilizing an image or audio of a person other than the person depicted, or computer-generated images or audio; or the creation of an image or audio through the use of software, machine-learning AI, or any other computer-generated or technological means.

Additional state bills regulating deepfakes are pending in Florida, Virginia, California, and Ohio, and are being considered in other states.

Additional steps to mitigate deepfake risks

Beyond relying on the government to enact comprehensive legislation regulating deepfakes, and on law enforcement and the courts to enforce it, businesses can take several steps of their own to reduce their exposure to the risks posed by deepfakes.

These steps include:

    • knowing how to defend against the increasingly sophisticated use of AI-enabled phishing and social engineering attacks;
    • preventing AI-enabled harassment and impersonation by using social media responsibly;
    • ensuring that the company has comprehensive employee and vendor policies in place to guard against AI and social media risks; and
    • educating employees about how to properly use social media and AI tools.
