Deepfake technology has reached a level of sophistication that makes it an immediate and ongoing threat to enterprises, financial institutions, and national security. While lawmakers are attempting to regulate its most harmful uses, the current legislative landscape remains inconsistent and incomplete, leaving significant vulnerabilities that cybercriminals are already exploiting. The result: a staggering 2,100% surge in AI-based fraud attempts (Signicat).
Federal Efforts Address Only Part of the Threat Landscape
At the federal level, several targeted bills against deepfakes have emerged. While they address some of the most important categories of AI harm, these bills fall short of comprehensive protection. Among the most notable efforts in Congress:
- The DEEPFAKES Accountability Act introduces content labeling requirements, but it lacks robust enforcement mechanisms and applies primarily to consumer-facing content.
- The Preventing Deepfakes of Intimate Images Act and the TAKE IT DOWN Act provide crucial protections for individuals against non-consensual intimate deepfakes. These protections are critical: the psychological trauma and reputational damage caused by such content can be devastating for victims, with effects lasting years.
- The Deepfake Task Force Act would study the issue, but it defers the immediate protective measures that organizations need today.
While these protections are absolutely necessary, they don't address the broader deepfake ecosystem or the risks that AI-generated deception poses to businesses, financial transactions, and critical infrastructure. With organizations losing an average of $450,000 per AI fraud incident (Regula) and 75% of businesses targeted in the past year (Ironscales), federal laws regulating AI-generated content and its use at large are long overdue.
State-Level Responses Create a Regulatory Patchwork
Several states have introduced legislation to regulate malicious deepfakes, but as with the federal bills, the approach remains fragmented. The most notable state efforts include:
- California and Texas have passed laws targeting election-related deepfakes and non-consensual intimate media.
- Virginia's law targets the unlawful dissemination or sale of images of another person, including AI-generated or altered images.
- Louisiana made it a crime to unlawfully disseminate or sell AI-generated images of another individual, targeting harmful uses of synthetic media.
- Tennessee replaced its Personal Rights Protection Act with the Ensuring Likeness, Voice, and Image Security Act of 2024, which grants individuals a property right over the use of their name, photograph, voice, or likeness in any medium, including AI-manipulated media.
- Utah expanded the definition of counterfeit intimate images to include AI-generated depictions.
These laws are essential, but the incomplete regulatory patchwork creates jurisdictional gaps that threat actors readily exploit. While some legislation targets deepfakes in specific contexts, AI-driven fraud, from voice impersonation scams to synthetic identity attacks on supply chains, remains largely unregulated in most states. If state efforts were meant to compensate for the absence of federal action, that comprehensive regulatory framework has yet to materialize.
The Enterprise Security Gap
By largely ignoring enterprise-level fraud and cyber threats, the selective regulation of deepfakes creates dangerous blind spots in organizational security postures. Threat actors are not waiting for legislation to catch up: 92% of companies have already experienced losses due to deepfakes (Regula), and 6 in 10 executives say their firms have no meaningful protocols for deepfake risks (Business.com).
That last statistic is particularly concerning. For security leaders, the absence of comprehensive regulation means that defensive measures must outpace both the technology and the regulatory environment. In the absence of legislation, organizations need layered defensive approaches that preemptively identify and neutralize malicious AI-enabled attacks before they breach operations and cause devastating losses, as in the simplified sketch below.
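By way of illustration only, here is a minimal sketch of what such layering can look like for one high-risk scenario: a wire transfer authorized over a video call. Every function, field, and threshold here is a hypothetical placeholder, not a real product API; an actual deployment would plug in a detection service and tune policy to its own risk tolerance.

```python
# Illustrative sketch only: a layered screen for a high-risk request.
# All names and thresholds are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Request:
    media_score: float          # deepfake likelihood from a detection model, 0.0-1.0
    amount_usd: float           # transaction value at stake
    verified_out_of_band: bool  # callback on a known-good channel succeeded

def screen(req: Request) -> str:
    # Layer 1: model-based detection. High-confidence fakes are rejected outright.
    if req.media_score >= 0.9:
        return "block"
    # Layer 2: risk-based policy. Ambiguous scores or large amounts require
    # a second factor that synthetic media cannot satisfy.
    if req.media_score >= 0.5 or req.amount_usd >= 100_000:
        return "allow" if req.verified_out_of_band else "escalate"
    # Layer 3: low-risk requests proceed (a real system would also log them for audit).
    return "allow"

print(screen(Request(media_score=0.72, amount_usd=250_000,
                     verified_out_of_band=False)))  # -> "escalate"
```

The point is not the specific thresholds but the structure: no single layer is trusted on its own, and anything ambiguous falls through to an out-of-band check or a human reviewer rather than defaulting to approval.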
Moving Toward Comprehensive Protections
At Reality Defender, we see firsthand how organizations that implement robust detection capabilities can effectively mitigate these emerging risks, often stopping attacks that would have bypassed traditional authentication measures.
But organizations should not have to fend for themselves. At the policy level, lawmakers must shift from a reactive to a proactive stance, closing regulatory gaps that leave businesses exposed. A fragmented approach to deepfake regulation ensures that cybercriminals will continue to exploit inconsistencies, increasing fraud losses and undermining trust in digital communications.
The most effective legislative framework will address deepfakes not as isolated content issues but as sophisticated vectors for fraud, impersonation, and information warfare that threaten both individuals and organizations. Until such comprehensive regulation emerges, enterprises must rely on technical safeguards that protect their communications, authentication systems, and digital transactions from increasingly convincing AI-enabled threats.