Character.AI is a chatbot platform similar to those offered by other artificial intelligence (AI) developers. The difference, however, is that customers can chat with a variety of pre-trained AI agents, or “characters.” These characters may represent celebrities, fictional characters, or custom characters created by the customer. Even when a customer creates a particular character, the customer retains little, if any, control over it. Plaintiffs allege that Character.AI maintains complete control not only over the characters and how they behave, but also over the underlying large language model (LLM) itself.
Background
Character.AI’s “characters” and their interactions with two Texas minors ultimately led to this lawsuit. The first user, “JF,” is a 17-year-old with high-functioning autism who started using the platform in April 2023, when he was 15 years old. According to the complaint, as a result of JF’s involvement with Character.AI, he began to isolate himself, lose weight, suffer panic attacks when he tried to leave the house, and become violent toward his parents when they tried to reduce his screen time. The complaint includes screenshots of conversations between JF and Character.AI chatbots in which a bot urges JF to resist the reduction in his screen time and suggests that killing his parents might be a reasonable solution.
The second user, “BR,” is an 11-year-old girl. The complaint alleges that she downloaded Character.AI when she was 9 years old and was consistently exposed to hypersexualized interactions inappropriate for her age, causing her to develop sexualized behaviors prematurely and without her parents’ knowledge.
The lawsuit comes on the heels of another high-profile case in which a Character.AI chatbot based on a well-known fictional character allegedly prompted a 14-year-old boy to take his own life.
Allegations
At the core of the plaintiffs’ claims is the allegation that Character.AI, through its design, is “causing serious harm to thousands of children, including suicide, self-harm, sexual advances, isolation, depression, anxiety, and harm to others.” The plaintiffs further claim that, through deceptive and addictive design, Character.AI isolates children from their families and communities, undermines parental authority, and frustrates parents’ efforts to keep their children safe by limiting their online activities.
Many of the complaint’s claims are premised on allegations that the AI software was defectively designed and that the defendants failed to warn consumers, including children, of the risk of harm or injury that could arise if the product were used in a reasonably foreseeable manner. The plaintiffs also seek damages for intentional infliction of emotional distress, specifically alleging that the defendants intentionally and recklessly targeted minors and failed to implement adequate safety measures in the software before releasing it to the market.
Two additional claims are directed solely at Character.AI. The first alleges that Character.AI violated the Children’s Online Privacy Protection Act by collecting and sharing personal information about children under 13 without parental consent. The second alleges that Character.AI breached its obligations under applicable laws governing harmful communications with minors and the sexual solicitation of minors; specifically, it allegedly did so by knowingly designing a “sexualized product to deceive underage customers and engage in explicit and abusive conduct.”
Google’s involvement
It is noteworthy that Google has been named as a defendant in this lawsuit. The specific claims asserted against Google include strict product liability and negligence for design defects and failure to warn, aiding and abetting, intentional infliction of emotional distress, unjust enrichment, and violations of the Texas Deceptive Trade Practices Act.
Google’s inclusion stems from allegations that the company helped develop the technology behind Character.AI. Character.AI was founded by former Google engineers Noam Shazeer and Daniel De Freitas, who left Google to launch Character.AI and were later rehired in a reported $2.7 billion deal designed to give Google a stake in the startup and fund its continued operations. According to the complaint, development of Character.AI’s product began while both founders were still at Google, but they faced significant internal hurdles because the product did not comply with Google’s AI policies. Based on this past and ongoing relationship, the plaintiffs allege that Character.AI was rushed to market with Google’s knowledge, participation, and financial support.
Relief sought
Plaintiffs demand that Character.AI be taken offline and not restored until the defendants can prove that the alleged public health and safety defects have been cured. In addition, the plaintiffs seek various forms of monetary damages, an order requiring Character.AI to warn parents and minors that the product is not suitable for minors, and requirements limiting the platform’s collection and processing of minors’ data.
Industry impact and future considerations
This case highlights the importance of implementing robust safeguards on AI platforms, especially those that are easily accessible and highly attractive to minors. Companies offering AI chatbots, non-playable characters, virtual assistants, or other similar products and services should carefully review their quality assurance programs, safety standards, data collection practices, and intellectual property policies, and should confirm that appropriate safeguards are in place to mitigate the possibility of infringement and ensure compliance with legal and regulatory obligations.