HR6489-119

Introduced

To ensure that providers of chatbots clearly and conspicuously disclose to users who are minors that chatbots are artificial intelligence systems, not natural persons, and do not provide advice from licensed professionals, and for other purposes.

119th Congress Introduced Dec 5, 2025

Legislative Progress

Introduced: Dec 5, 2025

Mrs. Houchin introduced the following bill; which was referred to …

Summary

What This Bill Does

The SAFE BOTs Act (Safeguarding Adolescents From Exploitative BOTs Act) creates federal rules for AI chatbots used by minors under 17. It requires chatbot companies to tell young users that they are talking to an artificial intelligence system, not a real person, and to provide information about suicide prevention hotlines when relevant topics come up.

Who Benefits and How

Minors and families gain protection from AI systems that could mislead children into thinking they are getting advice from real licensed professionals (like therapists or doctors). They also benefit from mandatory disclosure of crisis resources if a child discusses suicide.

Suicide prevention hotlines like the 988 Lifeline will receive increased visibility, as chatbots must provide their contact information when users discuss suicidal thoughts.

Large AI companies may actually benefit from the federal preemption provision, which prevents states from creating their own patchwork of chatbot regulations. A single national standard may be easier to comply with than 50 different state laws.

Who Bears the Burden and How

AI chatbot companies (like OpenAI, Anthropic, Character.AI, Replika, and tech giants with chatbot products) face new compliance requirements: they must display AI disclosures, implement break reminders after 3 hours of continuous use, and create policies to address harmful content including sexual material, gambling, and drugs.

The Federal Trade Commission gains enforcement responsibility for violations, which are treated as unfair or deceptive practices.

The National Institutes of Health must conduct a 4-year longitudinal study on how chatbots affect minors' mental health, including impacts on loneliness, anxiety, depression, and suicidal ideation.

State governments lose the ability to pass their own chatbot regulations for minors, as the bill preempts state law in this area.

Key Provisions

  • Chatbots may not claim to be licensed professionals (doctors, therapists, etc.) unless the claim is true
  • Must disclose AI status at the first interaction and when asked by the user
  • Must provide suicide/crisis hotline information when a user mentions suicidal thoughts
  • Must recommend breaks after 3 hours of continuous use
  • Must have policies addressing harmful content: sexual material, gambling, drugs/alcohol
  • FTC enforces violations as unfair or deceptive trade practices
  • State Attorneys General can also bring civil enforcement actions
  • NIH required to conduct 4-year study on chatbot mental health impacts on minors
  • Federal law preempts conflicting state regulations
  • Effective 1 year after enactment

Model: claude-opus-4
Generated: Dec 28, 2025 06:48

Evidence Chain:

This summary is derived from the structured analysis below. See "Detailed Analysis" for per-title beneficiaries/burden bearers with clause-level evidence links.

Primary Purpose

Requires chatbot providers to disclose to minors that chatbots are AI systems, not humans or licensed professionals, and mandates safety features including crisis hotline information, break reminders, and policies addressing harmful content.

Policy Domains

Consumer Protection, Technology Regulation, Child Safety, Artificial Intelligence

Legislative Strategy

"Establish federal consumer protection framework for AI chatbots targeting minors, with FTC enforcement and state attorney general concurrent authority, while preempting state laws to create uniform national standards"

Likely Beneficiaries

  • Minors and their families (protection from deceptive AI interactions)
  • Mental health advocates and crisis hotlines (increased visibility and referrals)
  • Large AI/tech companies (uniform national standard may be less burdensome than patchwork state regulations)

Likely Burden Bearers

  • Chatbot providers and AI companies (new compliance requirements for disclosure, content policies, and break reminders)
  • Smaller AI startups (compliance costs may be proportionally higher)
  • Federal Trade Commission (enforcement responsibilities)
  • National Institutes of Health (mandated 4-year longitudinal study)

Bill Structure & Actor Mappings

Who is "The Secretary" in each section?

General

Domains: Consumer Protection, Technology Regulation, Child Safety, Mental Health Research

Actor Mappings:
  • "the_secretary" → Secretary of Health and Human Services
  • "the_commission" → Federal Trade Commission

Key Definitions

Terms defined in this bill (6 terms)
"artificial intelligence" §2(i)(1)

Has the meaning given such term in section 5002 of the National Artificial Intelligence Initiative Act of 2020 (15 U.S.C. 9401)

"chatbot" §2(i)(2)

An artificial intelligence system, marketed to and available for use by consumers, that engages in interactive, natural-language communication with a user and generates or selects content in response to user inputs (including text, voice, or other inputs) using a conversational context

"chatbot provider" §2(i)(3)

A person that provides a chatbot directly to a consumer for the use of the consumer, including through a website, mobile application, or other online means. Excludes providers with chat functions incidental to the predominant purpose of their service.

"covered user" §2(i)(4)

A user of a chatbot if the provider has actual knowledge that the user is a minor, or would have such knowledge but for willful disregard

"minor" §2(i)(5)

An individual under the age of 17 years

"sexual material harmful to minors" §2(i)(6)

Visual depictions that appeal to prurient interest, are patently offensive regarding suitability for minors, depict sexual acts or nudity, lack serious value for minors, or constitute child pornography

We use a combination of our own taxonomy and classification system together with large language models to assess meaning and potential beneficiaries. High confidence means strong textual evidence. Always verify against the original bill text.

Learn more about our methodology