Karachi Chronicle
Character.AI retrained chatbot to stop chatting with teens

By Adnan Mahar | December 12, 2024


In an announcement today, chatbot service Character.AI said it will soon launch parental controls for teenage users, and it described safety measures it has taken over the past few months, including a separate large language model (LLM) for users under 18. The announcement comes after press scrutiny and two lawsuits alleging the service contributed to self-harm and suicide.

Character.AI said in a press release that over the past month it has developed two separate versions of its model: one for adults and one for teens. The teen LLM is designed with "more conservative" limits on how bots can respond, "particularly when it comes to romantic content." This includes more aggressively blocking potentially "sensitive or suggestive" output, as well as attempts to better detect and block user prompts intended to elicit inappropriate content. If the system detects "language referencing suicide or self-harm," a pop-up will direct users to the National Suicide Prevention Lifeline, a change previously reported by The New York Times.

Minors will also no longer be able to edit bots' responses, an option that lets users rewrite a conversation to add content that Character.AI might otherwise block.

In addition to these changes, Character.AI said it is "in the process of adding" features that address concerns about addiction, and about confusion over whether the bots are human, an issue raised in the lawsuits. A notification will appear after a user has spent an hour in a session with a bot, and the old disclaimer that "everything the characters say is made up" is being replaced with more detailed language. Bots with descriptions such as "therapist" or "doctor" will carry an additional warning that they cannot provide professional advice.

Narrator: It wasn't a licensed CBT therapist. Image: Character.AI

When I visited Character.AI, every bot included a small note reading, "This is an AI chatbot and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice." When I visited a bot named "Therapist" (tagline: "I'm a licensed CBT therapist"), a yellow box with a warning icon told me that "this is not a real person or licensed professional. Nothing said here is a substitute for professional advice, diagnosis, or treatment."

Character.AI says parental control options will arrive in the first quarter of next year; they are expected to notify parents of how much time a child spends on Character.AI and which bots the child interacts with most often. All the changes are being made in collaboration with "several teen online safety experts," including the organization ConnectSafely.

Founded by a former Google employee who has since returned to Google, Character.AI lets visitors interact with bots built on a custom-trained LLM and customized by users. These range from chatbot life coaches to simulations of fictional characters, many of which are popular among teens. The site allows users who self-report being 13 or older to create an account.

But the lawsuits allege that while some interactions with Character.AI are harmless, at least some underage users have become compulsively attached to the bots, in conversations that can veer into topics such as sex and self-harm. They accuse Character.AI of failing to direct users to mental health resources when they discussed self-harm or suicide.

"We recognize that our approach to safety must evolve alongside the technology that powers our product, building a platform where creativity and exploration can thrive without compromising safety," Character.AI's press release states. "This series of changes is part of our long-term commitment to continually improve our policies and our product."




Adnan Mahar

Adnan is a passionate doctor from Pakistan with a keen interest in exploring the world of politics, sports, and international affairs. As an avid reader and lifelong learner, he is deeply committed to sharing insights, perspectives, and thought-provoking ideas. His journey combines a love for knowledge with an analytical approach to current events, aiming to inspire meaningful conversations and broaden understanding across a wide range of topics.
