H.R. 6356 (119th Congress)

Artificial Intelligence Civil Rights Act of 2025

At a Glance

Status: In Committee
Introduced: Dec 2, 2025
Sponsor: Ms. Clarke of New York (for herself, Ms. Lee of …

Summary

What This Bill Does

The Artificial Intelligence Civil Rights Act of 2025 establishes federal civil rights protections against discrimination by AI algorithms used in high-stakes decisions. It requires companies that develop or use AI systems to conduct bias testing before deployment, perform annual impact assessments, provide transparency disclosures to the public, and ensure individuals can appeal AI decisions to a human reviewer. The bill gives the Federal Trade Commission (FTC) broad authority to enforce these requirements and creates a private right of action for individuals harmed by discriminatory AI.

Who Benefits and How

Members of protected classes (including racial minorities, women, people with disabilities, and others) gain legal protections against AI systems that discriminate in employment, housing, credit, healthcare, insurance, and other consequential life decisions. They can sue for treble damages (with a floor of $15,000 per violation), can request human review of AI decisions affecting them, and cannot be forced into pre-dispute arbitration. AI auditing and compliance consulting firms will see substantial new business opportunities, as will plaintiffs' attorneys specializing in civil rights and consumer protection. The FTC is authorized to hire up to 500 new staff for enforcement.

Who Bears the Burden and How

Technology companies developing AI algorithms face extensive new compliance requirements, including mandatory pre-deployment bias evaluations, independent audits, detailed documentation, and 10-year record retention. Businesses deploying AI for hiring, lending, insurance underwriting, or other consequential decisions must conduct annual impact assessments, publish transparency disclosures in multiple languages, and maintain human appeal processes. Banks, airlines, telecommunications carriers, and nonprofits (all typically exempt from FTC jurisdiction) are explicitly covered. Violators face penalties of $15,000 per violation or 4% of annual revenue, whichever is greater, enforceable by the FTC, state attorneys general, and private lawsuits. Pre-dispute arbitration agreements and class action waivers are prohibited.
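
The dollar figures above reduce to simple "greater of" computations. A minimal sketch in Python, assuming the $15,000 floor scales per violation and that "treble damages" means three times actual damages (function names and example figures are illustrative, not from the bill):

    def private_damages(actual_damages: float, violations: int) -> float:
        # Private right of action: treble (3x) actual damages, with an
        # assumed floor of $15,000 per violation per the summary above.
        return max(3 * actual_damages, 15_000 * violations)

    def civil_penalty(violations: int, annual_revenue: float) -> float:
        # FTC / state AG enforcement: $15,000 per violation or 4% of
        # annual revenue, whichever is greater.
        return max(15_000 * violations, 0.04 * annual_revenue)

    # Example: 10 violations by a firm with $200M in annual revenue.
    print(civil_penalty(10, 200_000_000))  # 8000000.0 (4% of revenue dominates)
    print(private_damages(20_000, 10))     # 150000 (per-violation floor binds)

Under this reading, the 4% revenue prong will usually dominate for large deployers; the per-violation prong matters mainly for smaller firms.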

Key Provisions

  • Prohibits AI algorithms from causing disparate impact discrimination based on race, sex, disability, age, religion, and other protected characteristics in employment, housing, credit, insurance, education, healthcare, and government benefits
  • Requires developers and deployers to conduct pre-deployment bias evaluations with independent auditors and submit findings to the FTC
  • Mandates annual post-deployment impact assessments to monitor for discriminatory effects
  • Requires public disclosure of AI practices, including short-form notices, accessible in 10 languages and to individuals with disabilities
  • Creates right to opt out of AI decisions and appeal to human reviewers
  • Establishes private right of action with treble damages, punitive damages, and attorneys' fees
  • Bans pre-dispute arbitration agreements and class action waivers for AI discrimination claims
  • Authorizes state attorneys general to bring enforcement actions, with penalties of $15,000 per violation or 4% of revenue
  • Extends FTC enforcement authority to cover banks, airlines, telecom carriers, and nonprofits
  • Creates new federal occupational series for algorithm auditors and authorizes 500 new FTC positions

Model: claude-opus-4
Generated: Dec 28, 2025 06:57

Evidence Chain:

This summary is derived from the structured analysis below. See "Detailed Analysis" for per-title beneficiaries and burden bearers with clause-level evidence links.

Primary Purpose

Establishes civil rights protections against discrimination by AI algorithms, requiring developers and deployers of AI systems to conduct bias evaluations, provide transparency disclosures, and ensure human alternatives for consequential decisions affecting employment, housing, credit, healthcare, and other critical life areas.

Policy Domains

Civil Rights, Artificial Intelligence, Consumer Protection, Technology Regulation, Data Privacy

Legislative Strategy

"Create comprehensive federal regulation of AI systems to prevent algorithmic discrimination by requiring pre-deployment bias testing, ongoing impact assessments, mandatory disclosures, human alternatives, and strong enforcement mechanisms including private right of action"

Likely Beneficiaries

  • Individuals subject to AI-based decisions in employment, housing, credit, healthcare, and other critical areas
  • Civil rights organizations
  • Consumer protection advocates
  • AI audit and compliance consulting firms
  • Plaintiffs' attorneys specializing in civil rights and consumer protection

Likely Burden Bearers

  • AI/ML developers and technology companies
  • Businesses deploying AI for hiring, lending, insurance, or other consequential decisions
  • Banks and financial institutions using algorithmic credit decisions
  • Insurance companies using AI for underwriting and claims
  • Employers using AI for hiring and worker management
  • Healthcare organizations using AI for treatment decisions

Bill Structure & Actor Mappings

Who is each referenced actor in each title of the bill?

Domains: Civil Rights, Artificial Intelligence, Anti-Discrimination
Actor Mappings:
  • "deployer" → Person that uses a covered algorithm for a commercial act
  • "developer" → Person that designs, codes, customizes, produces, or substantially modifies a covered algorithm
  • "the_commission" → Federal Trade Commission
  • "independent_auditor" → Individual that conducts pre-deployment evaluations or impact assessments with objective and impartial judgment

Domains: Consumer Protection, Technology Standards, Whistleblower Protection
Actor Mappings:
  • "deployer" → Person that uses a covered algorithm for a commercial act
  • "developer" → Person that designs, codes, customizes, produces, or substantially modifies a covered algorithm
  • "the_commission" → Federal Trade Commission

Domains: Transparency, Consumer Awareness, Disclosure Requirements
Actor Mappings:
  • "deployer" → Person that uses a covered algorithm for a commercial act
  • "developer" → Person that designs, codes, customizes, produces, or substantially modifies a covered algorithm
  • "the_commission" → Federal Trade Commission

Domains: Enforcement, Private Right of Action, State Enforcement
Actor Mappings:
  • "individuals" → Natural persons in the United States affected by covered algorithms
  • "the_commission" → Federal Trade Commission
  • "state_attorney_general" → State attorney general or State data protection authority

Domains: Government Administration, Appropriations
Actor Mappings:
  • "director_opm" → Director of the Office of Personnel Management
  • "the_commission" → Federal Trade Commission
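
The mapping blocks above share a regular shape: a set of domains plus term-to-referent pairs for each title. A minimal sketch of one way to represent them as data, assuming nothing about the site's actual schema (type and variable names are illustrative):

    from dataclasses import dataclass

    @dataclass
    class TitleMapping:
        domains: list[str]      # policy domains tagged for the title
        actors: dict[str, str]  # bill term -> plain-language referent

    # The enforcement-focused block from the listing above.
    enforcement = TitleMapping(
        domains=["Enforcement", "Private Right of Action", "State Enforcement"],
        actors={
            "individuals": "Natural persons in the United States affected by covered algorithms",
            "the_commission": "Federal Trade Commission",
            "state_attorney_general": "State attorney general or State data protection authority",
        },
    )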

Key Definitions

Terms defined in this bill (9 terms)

"covered algorithm" (§2)

A computational process derived from machine learning, natural language processing, AI techniques, or other techniques of similar complexity that creates products or information, promotes, recommends, or ranks information, makes decisions, or facilitates human decision making with respect to consequential actions

"harm" (§2)

A non-de minimis adverse effect on an individual or group that is based on a protected characteristic, involves force, coercion, harassment, intimidation, or detention, or infringes a constitutional right

"deployer" (§2)

Any person that uses a covered algorithm for a commercial act

"developer" (§2)

Any person that designs, codes, customizes, produces, or substantially modifies an algorithm intended or reasonably likely to be used as a covered algorithm for commercial acts or government use

"personal data" (§2)

Information that identifies or is linked or reasonably linkable to an individual or an individual's device, including derived data and unique persistent identifiers

"disparate impact" (§2)

An unjustified differential effect on an individual or group based on an actual or perceived protected characteristic

"independent auditor" (§2)

An individual that conducts pre-deployment evaluations or impact assessments with objective and impartial judgment, having no employment or financial interest in the developer or deployer

"consequential action" (§2)

An act likely to have a material effect on access to, eligibility for, the cost of, or conditions related to employment, education, housing, essential utilities, health care, credit or banking, insurance, criminal justice, elections, government benefits, or public accommodations

"protected characteristic" (§2)

Race, color, ethnicity, national origin, religion, sex (including pregnancy, sexual orientation, gender identity), disability, limited English proficiency, biometric information, familial status, income source/level, age, veteran status, genetic information, or any other classification protected by Federal law

We combine our own taxonomy and classification with large language models to assess meaning and identify potential beneficiaries. "High confidence" indicates strong textual evidence. Always verify against the original bill text.
