How the Take It Down Act tackles nonconsensual deepfake porn — and how it falls short

President Donald Trump holds the signed Take It Down Act in the Rose Garden of the White House in Washington, D.C., on Monday. The bill will enact stricter penalties for the distribution of nonconsensual intimate imagery, including AI-generated deepfakes and “revenge pornography.” Photo by Bonnie Cash/UPI

President Donald Trump signed the Take It Down Act into law Monday. The U.S. House of Representatives passed the bill 409-2 on April 28 after the U.S. Senate passed it by unanimous consent Feb. 13.

The law is an effort to confront one of the internet’s most appalling abuses: the viral spread of nonconsensual sexual imagery. The Take It Down Act targets “non-consensual intimate visual depictions” — a legal term that encompasses what most people call revenge porn and deepfake porn. These are sexual images or videos, often digitally manipulated or entirely fabricated, circulated online without the depicted person’s consent.

The law offers victims a mechanism to force platforms to remove intimate content shared without their permission — and to hold those responsible for distributing it to account. The law compels online platforms to build a user-friendly takedown process. When a victim submits a valid request, the platform must act within 48 hours.

Failure to do so may trigger enforcement actions by the Federal Trade Commission, which can treat the violation as an unfair or deceptive act or practice. Criminal penalties also apply to those who publish the images: Offenders may be fined and face up to three years in prison if anyone under 18 is involved, and up to two years if the subject is an adult.

As a scholar focused on AI and digital harms, I see this law as a critical milestone. Yet, it leaves troubling gaps. Without stronger protections and a more robust legal framework, the act may end up offering a promise it cannot keep. Enforcement issues and privacy blind spots could leave victims just as vulnerable.

A growing problem

Deepfake porn is not just a niche problem. It is a metastasizing crisis. With increasingly powerful and accessible AI tools, anyone can fabricate a hyper-realistic sexual image in minutes. Public figures, ex-partners and especially minors have become regular targets. Women, disproportionately, are the ones harmed.

These attacks dismantle lives. Victims of nonconsensual intimate image abuse suffer harassment, online stalking, ruined job prospects, public shaming and emotional trauma. Some are driven off the internet. Others are haunted repeatedly by resurfacing content. Once online, these images replicate uncontrollably — they don’t simply disappear.

In that context, a swift and standardized takedown process can offer critical relief. The Take It Down Act’s 48-hour window for response has the potential to reclaim a fragment of control for those whose dignity and privacy were invaded by a click. Despite its promise, unresolved legal and procedural gaps can hinder its effectiveness.

Blind spots and shortfalls

The law targets only public-facing interactive platforms that primarily host user-generated content, such as social media sites. It may not reach the countless hidden private forums or encrypted peer-to-peer networks where such content often first appears.

This creates a critical legal gap: When nonconsensual sexual images are shared on closed or anonymous platforms, victims may never even know — or know in time — that the content exists, much less have a chance to request its removal.

Even on platforms covered by the law, implementation is likely to be challenging. Determining whether online content depicts the person in question, lacks consent and implicates hard-to-define privacy interests requires careful judgment. That judgment demands legal understanding, technical expertise and time. But platforms must reach a decision within 48 hours.

Meanwhile, time is a luxury victims do not have. Even within the 48-hour removal window, the content can spread widely before it is taken down. The law does not include meaningful incentives for platforms to detect and remove such content proactively. And it provides no deterrent strong enough to discourage most malicious creators from generating these images in the first place.

This takedown mechanism is also open to abuse. Critics warn that the law’s broad language and lack of safeguards could lead to censorship, potentially affecting journalistic and other legitimate content. Platforms may be flooded with a mix of genuine and bad-faith takedown requests, some filed to suppress speech or art. In response, they may resort to poorly designed, privacy-invasive automated filters that issue blanket rejections or err on the side of removing content that falls outside the scope of the law.

Without clear standards, platforms may act improperly. How — and even whether — the FTC will hold platforms accountable under the act is another open question.

Burden on the victims

The Take It Down Act also places the burden of action on victims, who must locate the content, complete the paperwork, explain that it was nonconsensual and submit personal contact information — often while still reeling from the emotional toll.

Moreover, while the law targets both AI-generated deepfakes and revenge porn involving real images, it fails to account for the complex realities victims face. Many are trapped in unequal relationships and may have “consented,” under pressure, manipulation or fear, to having intimate content about them posted online.

Situations like this fall outside the law’s legal framing. The act bars consent obtained through overt threats and coercion, yet it overlooks more insidious forms of manipulation.

Even for those who do engage the takedown process, the risks remain. Victims must submit contact information and a statement explaining that the image was nonconsensual, without legal guarantees that this sensitive data will be protected. This exposure could invite new waves of harassment and exploitation.

Loopholes for offenders

The Take It Down Act includes conditions and exceptions that could allow distributors to escape liability. If the content was initially shared with the subject’s consent, involved a matter of public concern, was shared unintentionally or caused no demonstrable harm, they may avoid consequences under the law.

If offenders deny causing harm, victims face an uphill battle. Emotional distress, reputational damage and career setbacks are real, but they rarely come with clear documentation or a straightforward chain of cause and effect.

Equally concerning, the law allows exceptions for publication of such content for legitimate medical, educational or scientific purposes. Though well-intentioned, this language creates a confusing and potentially dangerous loophole. It risks becoming a shield for exploitation masquerading as research or education.

Getting ahead of the problem

The notice and takedown mechanism is fundamentally reactive. It intervenes only after the damage has begun. But deepfake pornography is designed for rapid proliferation. By the time a takedown request is filed, the content may have already been saved, reposted or embedded across dozens of sites — some hosted overseas or buried in decentralized networks. The law provides a system that treats the symptoms, while leaving the harms to spread.

In my research on algorithmic and AI harms, I have argued that legal responses should move beyond reactive actions. I have proposed a framework that anticipates harm before it occurs — not one that merely responds after the fact.

That means putting incentives in place for platforms to take proactive steps to protect the privacy, autonomy, equality and safety of users exposed to harms caused by AI-generated images and tools. It also means broadening accountability to cover more perpetrators and platforms, supported by stronger safeguards and enforcement systems.

The Take It Down Act is a meaningful first step. But to truly protect the vulnerable, I believe that lawmakers should build stronger systems — ones that prevent harm before it happens and treat victims’ privacy and dignity not as afterthoughts, but as fundamental rights.

Sylvia Lu is a faculty fellow and visiting assistant professor of law at the University of Michigan. This article is republished from The Conversation under a Creative Commons license. Read the original article. The views and opinions expressed in this commentary are solely the views of the author.
