To amend section 230 of the Communications Act of 1934 to limit liability protection under that section for certain social media platforms, and for other purposes.
Sponsors
John R. Curtis
R-UT | Primary Sponsor
Legislative Progress
Introduced: Mr. Curtis (for himself and Mr. Kelly) introduced the following …
Summary
What This Bill Does
The Algorithm Accountability Act creates new legal liability for large social media companies whose recommendation algorithms cause physical harm. It amends Section 230 of the Communications Act to require platforms with over 1 million users to exercise "reasonable care" when designing algorithms that recommend, rank, or amplify content. If their algorithms foreseeably lead to bodily injury or death, they can be sued.
Who Benefits and How
Individuals harmed by algorithmic content—such as teens exposed to self-harm content, people radicalized into violence, or victims of dangerous viral challenges—gain the right to sue social media companies for compensatory and punitive damages in federal court. Parents and legal guardians of affected minors can also bring lawsuits. Plaintiffs' attorneys benefit from a new category of product liability cases with potentially high damages. Smaller social media platforms (under 1M users) and alternative platforms using simple chronological feeds (instead of personalized algorithms) are fully exempt, giving them a competitive advantage. AI safety consulting firms and algorithm auditors stand to gain new business from platforms seeking to demonstrate compliance.
Who Bears the Burden and How
Major social media platforms—including Meta/Facebook, TikTok, YouTube, X/Twitter, Instagram, Reddit, and Snapchat—face significant new legal exposure. They must redesign their recommendation systems to prevent foreseeable harm, invest in safety testing and monitoring, and defend against civil lawsuits. Platforms lose their Section 230 immunity shield if they violate the duty of care. Insurance carriers covering these platforms face higher costs and risks, likely leading to premium increases. The bill also bans mandatory arbitration clauses for these claims, reducing revenue for arbitration service providers like AAA and JAMS.
Key Provisions
- Creates a legal duty of care for social media platforms to prevent algorithm-driven bodily injury or death that is reasonably foreseeable and attributable to algorithm design
- Removes Section 230 liability protection for platforms that violate this duty, exposing them to lawsuits
- Establishes a private right of action allowing injured users (or their legal representatives) to sue for compensatory and punitive damages in federal court
- Exempts chronological feeds, reverse-chronological feeds, and initial search results from the duty of care requirement
- Bans predispute arbitration agreements and class-action waivers for these claims, requiring court adjudication
- Excludes platforms with fewer than 1 million users, email services, messaging apps, videoconferencing, e-commerce sites, streaming services, and news platforms
- Includes First Amendment safeguards preventing enforcement based on viewpoint or protected speech content
Evidence Chain
This summary is derived from the structured analysis below. See "Detailed Analysis" for per-title beneficiaries/burden bearers with clause-level evidence links.
Primary Purpose
Limits Section 230 liability protection for social media platforms that use recommendation algorithms causing bodily injury or death
Policy Domains
Legislative Strategy
"Create civil liability exposure for social media platforms whose recommendation algorithms cause foreseeable bodily harm, while preserving protections for chronological feeds and search results"
Likely Beneficiaries
- Plaintiffs' attorneys specializing in product liability
- Users harmed by algorithm-driven content
- Alternative social media platforms using chronological feeds
- Parents and guardians of minors affected by harmful content
Likely Burden Bearers
- Major social media platforms (Meta/Facebook, TikTok, YouTube, X/Twitter, Instagram)
- Technology companies using recommendation algorithms
- Platform insurers and risk management providers
Bill Structure & Actor Mappings
Who is "the Commission" in each section?
- "the_commission" → Federal Communications Commission
Key Definitions
Terms defined in this bill
A fully or partially automated system used to rank, order, promote, recommend, amplify, or similarly curate content based on a user's personal data, including preferences, interests, behavior, or characteristics
A for-profit interactive computer service with 1M+ users that permits account creation for content sharing and interaction; excludes email, messaging services, teleconferencing, product-review sites, e-commerce, streaming services, and news coverage
We use a combination of our own taxonomy and classification system, along with large language models, to assess a bill's meaning and potential beneficiaries. High confidence indicates strong textual evidence. Always verify against the original bill text.