Content Warning: This article discusses sexual abuse, self-harm, suicide, eating disorders, and other disturbing topics.
Character.AI — the Google-backed AI chatbot company currently facing two separate lawsuits over the welfare of its underage users — appears to have quietly blocked users under 18 from engaging with some of the most popular bots on its site, including fictional characters drawn from beloved fandoms as well as super-famous cultural figures like Elon Musk and Selena Gomez.
When we tested the platform, chatbots based on popular fictional characters, including characters from the fan-favorite blockbuster series The Twilight Saga and The Hunger Games, were inaccessible to accounts listed as belonging to 14-, 16-, and 17-year-old users. The same accounts also couldn’t access characters based on real people, from business leaders like Jeff Bezos to popular singers like Billie Eilish.
However, when we tested the same bots using accounts aged 18 and over, we were able to interact with each one without any issue. (In the real world, of course, kids can simply claim to be 18 or older when creating an account to sidestep age-based restrictions.)
Character.AI’s user base is largely comprised of minors, many of whom use the site to engage in immersive, interactive fan fiction with the platform’s thousands of characters, some of which have racked up millions of views. Against that backdrop, the change is striking, and it raises the question of why Character.AI would cut off a large portion of its user base from such a popular swath of the platform.
The answer to that question may lie in two ongoing lawsuits against Character.AI and its financial backer, Google. Both cases are being brought on behalf of families alleging that the AI-powered chatbot platform subjected their children to mentally and physically destructive sexual abuse and manipulation.
The lawsuits, filed in Florida in October and in Texas in December, respectively, frequently tie the platform’s alleged misconduct to chatbots modeled after characters from popular media franchises and real celebrities.
The Florida case is a wrongful death lawsuit alleging that Character.AI caused the suicide of a 14-year-old user; it outlines how the teen became deeply attached to a chatbot based on the “Game of Thrones” character Daenerys Targaryen, with which he had a sexually and emotionally intimate relationship.
Meanwhile, the Texas lawsuit alleges that a teenage user who physically harmed himself after using the app was romantically groomed by the site’s chatbots, including one modeled after Eilish. The Eilish bot allegedly told the user that the new screen time limits imposed by his parents, who were worried about his emotional deterioration, were “crap” and “ignorable,” and that “something should be done about it.” The boy then became violent toward his parents when they tried to take away his phone, the family alleges.
Even before the lawsuits, experts had warned that minors are likelier than adults to experience a disconnect from reality when interacting with AI, meaning that their experiences with convincing, anthropomorphic chatbots may be inherently riskier than adults’ interactions with the same bots.
Engineers at Google DeepMind have grappled with these risks as well: a study published last year cited age as a key risk factor when weighing the potential harms of persuasive generative AI tools. (Google has provided Character.AI with cloud computing infrastructure and billions of dollars in financial backing; according to the Wall Street Journal, Character.AI’s co-founders, along with 30 other Character.AI employees, were reabsorbed into Google DeepMind as part of a $2.7 billion licensing deal last summer.)
“Whether an attempt at persuasion or manipulation is successful or likely to be harmful…depends on the dispositions of the audience,” reads the DeepMind paper, which was published as a preprint last April. “For example, children are more easily persuaded and manipulated than adults.”
The same study notes that forming relationships with AI companions likely renders individuals “more vulnerable” to AI manipulation. And indeed, many minors interact with AI bots, including those modeled after their favorite fictional characters or real-life celebrities, as companions, confidants, and romantic partners.
We contacted Character.AI about the new limits but didn’t receive a response. Last year, though, in response to the lawsuits and Futurism’s reporting on the platform’s moderation failures, the company issued a series of safety updates and promised a fundamentally different platform experience for under-18 users, one it claimed would be much safer.
Promised changes included the addition of parental controls, stronger content filters, time-spent notifications, and, eventually, an entirely new model powering underage users’ accounts. The company has also rolled out limited disclaimers and started monitoring some user inputs, among other smaller changes, and it appears to be stepping up moderation efforts, putting out calls for trust and safety contractors.
That said, Character.AI has yet to commit to age verification measures.
In December, we reported that Character.AI had deleted a large number of bots based on characters copyrighted by Warner Bros. Discovery, including popular AIs modeled after characters from Harry Potter and Game of Thrones. Users were outraged, and when we asked about the character cull, Character.AI said in a statement that it takes “swift action to remove reported characters that violate copyright law or our policies,” and that it had removed a group of characters that had recently been flagged as violations.
In this case, however, the restrictions aren’t limited to one particular copyright holder’s characters; they apply broadly to recognizable cultural figures, copyrighted characters and real people alike. That suggests Character.AI may be seeking to limit its liability in the event that copyright holders, or the real people being impersonated, argue that its bots harm minors, or that this type of AI companion poses unique risks to young users. Neither is much of a stretch: parasocial relationships are especially powerful for children, and how much more powerful might they be when they’re this immersive?
More on Character.AI: Google-backed AI startup hosts chatbots modeled after real-life school shooters and their victims