In an announcement today, chatbot service Character.AI said it will soon launch parental controls for teenage users, and it described safety measures it has taken over the past few months, including a separate large language model (LLM) for users under 18. The announcement comes after press scrutiny and two lawsuits alleging the service contributed to self-harm and suicide.
In a press release, Character.AI said that over the past month it has developed two separate versions of its model: one for adults and one for teens. The teen LLM is designed to place "more conservative" limits on how bots can respond, "particularly when it comes to romantic content." That includes more aggressively blocking output that could be "sensitive or suggestive," as well as attempting to better detect and block user prompts meant to elicit inappropriate content. If the system detects "language referencing suicide or self-harm," a pop-up will direct users to the National Suicide Prevention Lifeline, a change previously reported by The New York Times.
Minors will also be prevented from editing bots' responses, an option that lets users rewrite conversations to add content Character.AI might otherwise block.
Beyond these changes, Character.AI said it is "in the process of adding" features that address concerns about addiction and about confusion over whether the bots are human, an issue raised in the lawsuits. A notification will appear after a user has spent an hour-long session with a bot, and the old disclaimer that "everything characters say is made up" is being replaced with more detailed language. Bots described as a "therapist" or "doctor" will carry an additional warning that they cannot offer professional advice.
When I visited Character.AI, every bot included a small note reading, "This is an AI chatbot and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice." When I visited a bot named "Therapist" (tagline: "I'm a licensed CBT therapist"), a yellow box with a warning symbol told me that "this is not a real person or licensed professional. Nothing said here is a substitute for professional advice, diagnosis, or treatment."
Character.AI says parental controls are coming in the first quarter of next year and will let parents know how much time their child spends on Character.AI and which bots they interact with most often. All of the changes are being made in collaboration with "several teen online safety experts," including the organization ConnectSafely.
Character.AI, founded by a former Google employee who has since returned to Google, lets visitors interact with bots built on a custom-trained LLM and customized by users. These range from chatbot life coaches to simulations of fictional characters, many of which are popular with teens. The site allows users who identify themselves as 13 or older to create an account.
But the lawsuits allege that while some interactions with Character.AI are harmless, at least some underage users become compulsively attached to the bots, and their conversations can veer into topics like sexualized content or self-harm. They fault Character.AI for failing to point users to mental health resources when they discuss self-harm or suicide.
"We recognize that our approach to safety must evolve alongside the technology that drives our product, creating a platform where creativity and exploration can thrive without compromising safety," Character.AI's press release states. "This suite of changes is part of our long-term commitment to continuously improve our policies and our product."