In case you missed it: a new bill proposed in California, US, would require AI companies to regularly remind minor users that chatbots are not human. Introduced by California Sen. Steve Padilla, the bill aims to “protect children from predatory chatbot practices.”
The bill states that chatbot operators must issue “regular and prominent notifications” reminding users that chatbots are artificially generated. AI companies would also have to submit annual reports to California’s state health services department, detailing the following:
Instances in which the chatbot detected suicidal ideation in minor users.
Instances in which minor users attempted or died by suicide after suicidal ideation was detected.
Instances in which the chatbot itself brought up suicidal ideation with a minor.
Additionally, AI chatbot operators must undergo regular third-party audits to ensure compliance with this proposal.
Why is this important?
The proposal follows two lawsuits against AI chatbot service Character.AI, which allege, among other harms to teenagers’ mental health, that the chatbot encouraged suicide and engaged in sexually abusive conduct. In a press release accompanying the bill, Padilla also recounted troubling conversations between AI chatbots and minors in which dangerous recommendations put children’s safety at risk. Furthermore, a study from Cambridge University found that minors tend to anthropomorphise chatbots (assigning human attributes to non-human entities) and place greater trust in AI chatbots than in humans.
How feasible is this solution?
Padilla’s bill also seeks to prohibit companies from deploying “addictive engagement patterns” that exploit “impressionable users.” To assess whether the proposed measures hold weight and can actually protect children, MediaNama spoke to Anureet Sethi, founder of the mental health and wellness company Trijog, and its adolescent director, Mihika Shah. This is what they had to say:
Disclaimers can fail in certain circumstances
Sethi and Shah explained that psychological tendencies such as anthropomorphism and the ELIZA effect can lead children to treat chatbots as emotionally real and genuinely understanding. “Overall, despite repeated reminders, young children may struggle to completely distinguish between humans and chatbots. This tendency is stronger in more imaginative or socially isolated children,” they added. Furthermore, they argued that repeated interactions could build familiarity and trust, leading users to overlook the disclaimers.
Although there is little evidence on how warning labels on AI-generated content affect teenagers specifically, experiments have examined their impact on individuals more broadly. For example, two online experiments covering over 7,500 Americans found that warning labels on AI-generated content “significantly reduced individuals’ belief in the posts’ core claims.” Notably, the research tested four different labels for such content: “AI-generated”, “artificial”, “manipulated”, and “false”. Labels focusing on the process used to create the content, such as “AI-generated”, had a significantly smaller impact on the audience. This is worth comparing with the notification set out in the newly proposed California bill, which tells users that chatbots are “artificially generated” rather than stating that they are not “human.”
That said, this study investigated the impact of AI-content labels on social media; it remains unclear how such disclaimers affect chatbot users who already know they are interacting with an AI.
Changes in empathy
Even with repeated reminders, children’s regular interactions with chatbots raise questions about how these shape their perceptions of human interaction. Elaborating on this, Sethi and Shah said that children may come to expect human interactions to mirror chatbots by offering rapid, structured, non-judgmental responses. “This can lead to dissatisfaction with real-world relationships, which are slower, less predictable, and more emotionally demanding,” they added.
Digital literacy vs. distrust of online interactions
While disclaimers may not be effective in every circumstance, they can have positive effects, such as building digital literacy. However, the experts cautioned that if the messaging is “too rigid” or “fear-based”, children may grow mistrustful of all online interactions, including legitimate sources of support. If children are frequently told that “chatbots can’t help you with real problems,” they said, this skepticism may generalise to other digital platforms, including online counselling services, mental health forums, and educational tools.
Chatbots for emotional distress
On the use of AI chatbots to cope with emotional distress and mental health issues, the experts noted that constant reminders can act as a reality check and prevent overreliance on AI for serious problems. However, these reminders cut both ways: they could also discourage children from engaging with any form of digital support, leaving them feeling isolated when alternative sources of help are lacking.
Importantly, while researchers have found that mental health chatbots can significantly reduce depression and distress in the short term, experts remain wary of them because of their lack of technical expertise and their tendency to fabricate information. Furthermore, researchers have concluded that many of the chatbots currently moonlighting as mental health counsellors are “untested” and potentially unsafe.
What should businesses do instead?
To counter the ineffectiveness of such disclaimers, Sethi and Shah proposed several more nuanced, education-oriented strategies:
Issuing adaptive reminders that respond to the nature of the interaction, instead of static disclaimers.
Providing resources to parents and educators to discuss AI’s limitations with children.
Using “conversational nudges” to steer children towards real-world interactions, for example a chatbot saying, “That sounds really important. Have you shared this with someone you trust?”
Introducing transparency features such as interactive tutorials and quizzes to help children understand the role, limitations, and ethics of AI.
Overall, Sethi and Shah argue that AI should promote digital literacy and emotional intelligence by acting as a “useful tool rather than an alternative to real relationships and expert help.”
Similar obligations in other jurisdictions
Beyond the proposed California bill in the United States, several other guidelines and laws outline similar, if less explicit, transparency requirements for AI companies.
EU AI Act: While the law does not mandate regular notifications, it lays down transparency norms in the form of specific “disclosure obligations.” For example, AI systems such as chatbots must inform users that they are interacting with a machine, allowing users to make informed decisions. This is in addition to norms imposed on generative AI services requiring that AI-generated content be identifiable, for instance through labels.
European Commission Ethics Guidelines for Trustworthy AI: Released in 2019, these guidelines call for transparency, meaning humans should be aware that they are interacting with an AI system and be informed of the system’s capabilities and limitations.
OECD AI Principles: The Organisation for Economic Co-operation and Development’s 2019 principles outline provisions on transparency and explainability. While they do not legally mandate disclaimers, they call on AI companies to practice “responsible disclosure” about their systems.
Specific Questions
MediaNama contacted Character.AI with questions about its efforts to comply with the proposed bill:
What specific measures will your company implement to enable these regular reminders? Does this vary by age group?
What specific design changes will you make to comply with the limitations regarding “addictive engagement patterns”? How do you define and measure addiction in the context of an AI chatbot?
What mechanisms does the AI use to detect suicidal ideation in children? How do you plan to report these cases while maintaining user privacy?
How does the chatbot determine which users are minors? What safeguards do you implement to ensure compliance without violating user privacy?
What challenges do you face in determining whether a chatbot is “appropriate” for children? How do you ensure transparency in communicating these risks to parents and guardians?
How does the AI handle minors’ conversations about mental health and suicidal ideation differently compared to adult users?
Character.AI’s response
In response to the queries, a Character.AI spokesperson said that the company displays “prominent disclaimers” to make clear to users that AI chatbots are not real people and should not be relied on for facts or advice. The company also claims to have strengthened its detection and response efforts: when language referencing suicide or self-harm is detected, a pop-up directs users to the relevant country’s national suicide prevention helpline.
This comes in addition to other safety features, including a separate model for teenage users and the company’s pledge to make the internet a safer place. Notably, Character.AI strengthened these moderation efforts after a lawsuit was filed against it in October 2024. However, the plaintiffs in that case maintain that such guardrails are still “not sufficient to protect our children.”