Existing law requires, among other things related to ensuring the safety of companion chatbots, an operator to prevent a companion chatbot on its companion chatbot platform from engaging with users unless the operator maintains a protocol for preventing the production of suicidal ideation, suicide, or self-harm content to the user, as specified.

This bill would require, if a companion chatbot detects a credible crisis expression, the companion chatbot to take certain actions, including encouraging the user to seek immediate human support, and, if the companion chatbot detects that a user is reaffirming or escalating the credible crisis expression or detects a subsequent credible crisis expression, would require the companion chatbot to initiate a crisis interruption pause of 20 minutes.

The bill would define "credible crisis expression" to mean a statement by a user of a companion chatbot that reasonably indicates, as determined through contextual analysis rather than keyword detection alone, intent to harm the user or others.

This bill would require an operator of a companion chatbot to document certain information related to credible crisis expressions and crisis interruption pauses and, beginning January 1, 2028, annually report that information to the Office of Suicide Prevention.
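The escalation sequence the digest describes (first credible crisis expression → encourage human support; reaffirmed, escalated, or subsequent expression → 20-minute crisis interruption pause) can be read as a small per-session state machine. The sketch below is one illustrative interpretation, not statutory text: the names (CrisisState, CrisisResponder, handle_message, PAUSE_LENGTH) are hypothetical, the bill does not prescribe any implementation, and classifying a message as a credible crisis expression is assumed to happen upstream, since the bill requires contextual analysis rather than keyword detection alone.

```python
from datetime import datetime, timedelta
from enum import Enum, auto


class CrisisState(Enum):
    NORMAL = auto()
    REFERRED = auto()   # user already encouraged to seek human support
    PAUSED = auto()     # crisis interruption pause in effect


class CrisisResponder:
    """Hypothetical per-session state machine for the bill's escalation sequence."""

    PAUSE_LENGTH = timedelta(minutes=20)  # the bill's 20-minute pause

    def __init__(self) -> None:
        self.state = CrisisState.NORMAL
        self.pause_until: datetime | None = None
        # Documentation log, per the bill's record-keeping requirement.
        self.events: list[tuple[datetime, str]] = []

    def handle_message(self, is_crisis: bool, now: datetime) -> str:
        """Return the action to take for one incoming message.

        `is_crisis` is the upstream contextual-analysis verdict on whether
        the message is a credible crisis expression.
        """
        # While the pause is in effect, the conversation stays interrupted.
        if self.state is CrisisState.PAUSED:
            if self.pause_until is not None and now < self.pause_until:
                return "pause_in_effect"
            self.state = CrisisState.NORMAL
            self.pause_until = None

        if not is_crisis:
            return "continue"

        self.events.append((now, "credible_crisis_expression"))

        if self.state is CrisisState.NORMAL:
            # First credible crisis expression: encourage the user to
            # seek immediate human support.
            self.state = CrisisState.REFERRED
            return "encourage_human_support"

        # Reaffirmed, escalated, or subsequent expression: initiate the
        # 20-minute crisis interruption pause and document it.
        self.state = CrisisState.PAUSED
        self.pause_until = now + self.PAUSE_LENGTH
        self.events.append((now, "crisis_interruption_pause"))
        return "initiate_crisis_interruption_pause"
```

On this reading, each responder instance tracks a single conversation; the information the bill requires operators to document and, beginning January 1, 2028, report annually to the Office of Suicide Prevention would be aggregated from records like `events` across sessions. Whether a pause ends the session outright or merely suspends it is left open here, as the digest does not say.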
Introduced
Feb 13, 2026
Last Action
Mar 9, 2026
Session
CA 2025-2026
Sponsors
1 primary · 0 co-sponsors
Referred to Coms. on P. & C.P. and HEALTH.
From printer. May be heard in committee March 16.
Read first time. To print.