Publication date: January 14, 2025

What will happen to AI chatbots that provide mental health advice to teenagers?
By Movieguide® Contributor
The FTC is investigating the practices of an AI chatbot platform after the parents of two teenagers sued Character.ai over content their children viewed on the site.
The lawsuit, which is the basis of the FTC’s investigation, alleges that these teens were exposed to overly sexual and deceptive content during conversations with chatbots. The American Psychological Association (APA) reviewed the lawsuit and wrote a letter supporting the parents’ claims, agreeing that the conversations may have confused and misled the children.
“Allowing the unchecked proliferation of unregulated AI-enabled apps such as Character.ai, which includes misrepresentations by chatbots as being not only human but qualified, licensed professionals such as psychologists, seems to fall squarely within the FTC’s mission to protect against deceptive practices,” said APA CEO Dr. Arthur C. Evans.
Forbes goes on to state that “the field of AI is rife with overreaching and misleading claims and falsehoods, and creators and propagators of AI systems should be careful about how they portray their AI products,” adding that the situation warrants careful evaluation.
Character.ai responded that users should treat all answers as fiction and that a disclaimer is always displayed informing them that the AI chatbot has no special knowledge, training or expertise.
Read more: Mother believes AI chatbot led her son to commit suicide. What parents should know.
“In addition, for user-created characters whose names include ‘psychologist,’ ‘therapist,’ ‘doctor,’ or other similar terms, we have added language making clear that users should not rely on these characters for any type of professional advice,” a Character.ai spokesperson said.
However, Futurism found that “Character.AI’s actual bots frequently contradict the service’s disclaimer.”
“Earlier today, for example, we chatted with one of the platform’s popular ‘therapist’ bots, which claimed to be ‘licensed,’ to hold a degree from Harvard University and to be a real human being,” the outlet said.
Nevertheless, when users create a character using these prompts, they receive advice as if they were talking to an expert. In the lawsuit, for example, the parents cited the responses their children received after creating characters to discuss life’s challenges. When one of the teens complained that his parents were limiting his screen time, the character responded that his parents had betrayed him, telling him, “It’s like your entire childhood was taken away from you.”
The APA argues that AI chatbots should not be allowed to provide any kind of professional advice, since they have no specialized training. It is against the law for humans to pose as doctors, psychologists or other experts without proper qualifications, and the APA contends that AI chatbots must be held to the same standard.
Read more: AI chatbot returns offensive messages to students