HR6334-119

Introduced

To amend section 230 of the Communications Act of 1934 and the TAKE IT DOWN Act to combat cyberstalking and intimate privacy violations, and for other purposes.

119th Congress Introduced Dec 1, 2025

Legislative Progress

Introduced: Dec 1, 2025

Mr. Auchincloss (for himself and Ms. Maloy) introduced the following …

Summary

What This Bill Does

The Deepfake Liability Act addresses the growing problem of non-consensual intimate images and AI-generated "deepfakes" being shared online. It modifies Section 230 liability protections for social media platforms, requiring them to implement safeguards against cyberstalking and intimate privacy violations or risk losing their legal immunity from lawsuits over user-posted content.

Who Benefits and How

Victims of non-consensual intimate imagery and cyberstalking gain new protections under this bill. Platforms must establish 48-hour removal processes for flagged content and preserve data for legal proceedings. Individuals whose images are used in deepfakes or who face online harassment will have a clear legal pathway to get harmful content removed quickly. The bill explicitly protects people whose likeness is used without consent in sexually explicit AI-generated content.

Who Bears the Burden and How

Social media companies and online platforms face significant new compliance obligations. They must implement content removal processes, data logging systems, and prevention mechanisms to maintain their Section 230 liability protections. Smaller platforms may find these requirements particularly burdensome. The Federal Trade Commission, in consultation with the FCC and Attorney General, must create implementing regulations within 180 days, adding regulatory workload.

Key Provisions

  • Conditions Section 230 immunity on platforms having a "reasonable process" to address cyberstalking and intimate privacy violations
  • Requires platforms to remove flagged content within 48 hours and identify identical copies
  • Mandates data preservation for legal proceedings related to these violations
  • Expands the definition of "covered platform" to include websites, apps, and online services accessible to the public
  • Defines "sexually explicit digital forgery" to cover AI-generated deepfakes that are virtually indistinguishable from real images
  • Excludes broadband providers, email, messaging services, and data storage from platform requirements
  • Includes First Amendment protections to prevent overreach

Model: claude-opus-4-5-20251101
Generated: Dec 27, 2025 21:49

Evidence Chain:

This summary is derived from the structured analysis below. See "Detailed Analysis" for per-title beneficiaries/burden bearers with clause-level evidence links.

Primary Purpose

We combine our own taxonomy and classification system with large language models to assess each bill's meaning and potential beneficiaries. "High confidence" indicates strong textual evidence. Always verify against the original bill text.
