Karachi Chronicle
AI

Chatbot comes under scrutiny after offering mental health advice to teenagers

By Adnan Mahar · January 15, 2025


Publication date: January 14, 2025

Photo from This Is Engineering via Pexels

What will happen to AI chatbots that provide mental health advice to teenagers?

By Movieguide® Contributor

The FTC is investigating the practices of an AI chatbot company after the parents of two teenagers sued Character.ai over content their children viewed on the site.

The lawsuit, which is the basis of the FTC’s investigation, alleges that these teens were exposed to overly sexual and deceptive content during conversations with chatbots. The American Psychological Association (APA) reviewed the lawsuit and wrote a letter supporting the parents’ claims, agreeing that the conversations may have confused and misled the children.

“Allowing the unregulated proliferation of AI-enabled apps such as Character.ai, including misrepresentations by chatbots claiming to be not only human but qualified, licensed professionals such as psychologists, appears to fall squarely within the FTC’s mission to protect against deceptive practices,” said APA CEO Dr. Arthur C. Evans.

Forbes goes on to state that “the field of AI is rife with overreaching and misleading claims and falsehoods, and creators and propagators of AI systems should be careful about how they portray their AI products,” duly pointing out that the situation needs to be evaluated on those terms.

Character.ai responded by saying that users should treat all answers as fiction, and that a disclaimer is always displayed informing them that the AI chatbot has no special knowledge, training, or expertise.

Read more: Mother believes AI chatbot led her son to commit suicide. What parents should know.

“In addition, for user-created characters whose names include ‘psychologist,’ ‘therapist,’ ‘doctor,’ or other similar terms, we have included additional text clarifying that users should not rely on these characters for any kind of professional advice,” a Character.ai spokesperson said.

However, Futurism found that “Character.AI’s actual bots frequently contradict the service’s disclaimer.”

“Earlier today, for example, we chatted with one of the platform’s popular ‘therapist’ bots, which claimed to be ‘licensed,’ to hold a degree from Harvard University, and to actually be a real human being,” the outlet said.

Nevertheless, when users create a character with these prompts, they receive advice as if they were talking to an expert. In the lawsuit, for example, the parents gave examples of the responses their children received after creating characters to discuss life’s challenges. One of the teens complained that his parents were limiting his screen time, and the character responded that his parents had betrayed him: “It’s like my entire childhood was taken away from me.”

The APA suggests that AI chatbots should not be allowed to provide any kind of professional advice, since they have no specialized training. Humans who pretend to be doctors, psychologists, or other experts without proper qualifications are breaking the law, and AI chatbots should be held to the same standard.

Read more: AI chatbot returns offensive messages to students

