Is Character.ai safe enough? When chatbots choose violence

By Adnan Mahar | January 3, 2025

Photo illustration: Intelligencer. Photo: Getty Images

Character.ai is a popular app for chatting with bots: millions of chatbots, most of them created by users, all playing different roles. There are general-purpose assistants, tutors, and therapists. Some are informally based on public figures or celebrities; many are highly specific fictional characters, evidently created by teenagers. Currently “featured” chatbots include a motivational bot called Sergeant Whittaker, a “true alpha” called Giga Brad, Moo Deng, the viral pygmy hippopotamus, and Socrates. Among my “recommended” bots are a psychopathic “billionaire CEO,” an “obsessed tutor,” a lesbian bodyguard, a “school bully,” and a “lab experiment” in which you play a mysterious creature that a team of scientists discovers and interacts with.

It is a strange and compelling product. While chatbots like ChatGPT and Anthropic’s Claude mostly perform for their users as a single broad, helpful, and deliberately bland character, the flexible omni-assistant, Character.ai highlights how similar models can be used to synthesize the myriad other kinds of performances that are, to some extent, contained in their training data.
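To make that contrast concrete, here is a minimal sketch, in Python, of the basic pattern behind character-style chat apps: one shared model, with each “character” reduced to a system prompt prepended to the conversation. The persona strings and the build_prompt helper are illustrative assumptions; Character.ai’s actual implementation is not public.

```python
# A minimal sketch of how one underlying chat model can "perform" many
# characters. The persona texts and helper below are hypothetical.

PERSONAS = {
    "tutor": "You are a patient math tutor. Explain every step simply.",
    "socrates": "You are Socrates. Answer mostly with probing questions.",
    "sergeant": "You are Sergeant Whittaker, a gruff motivational coach.",
}

def build_prompt(persona_key: str, history: list, user_msg: str) -> list:
    """Assemble the message list sent to one shared underlying model.
    The 'character' is just a system prompt; the model and its training
    data are identical across every persona."""
    return (
        [{"role": "system", "content": PERSONAS[persona_key]}]
        + history
        + [{"role": "user", "content": user_msg}]
    )

# Two different "characters," same model underneath:
print(build_prompt("tutor", [], "What is 7 x 8?")[0]["content"])
print(build_prompt("socrates", [], "What is justice?")[0]["content"])
```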

It is also one of the most popular generative AI apps on the market, with more than 20 million active users, skewed toward young people and women. Some spend countless hours chatting with its personas; they form deep attachments to Character.ai bots and loudly protest any changes to the company’s models or policies. They ask characters for advice or help with problems. They hash out deeply personal matters. Users on Reddit and elsewhere say Character.ai helps them feel less lonely, describing the sorts of uses the company’s founders have touted. Others describe relationships with Character.ai bots, sometimes explicit ones, that deepen over months. And some say they gradually lose track of what they are doing and why.

In two recent lawsuits, parents make far more damning claims. One, filed by the mother of a 14-year-old boy who died by suicide, describes how her son withdrew after forming a relationship with a chatbot on the “dangerous and untested” app, a relationship she says contributed to his death. The other alleges that Character.ai drove a 17-year-old boy to self-harm, encouraged him to distance himself from his family and community, and suggested he might consider killing his parents in response to their screen-time limits.

Photo: U.S. District Court for the Eastern District of Texas v. Character Technologies Inc.

It’s easy to put yourself in these parents’ shoes. Imagine finding such messages on your child’s phone. If they had come from a person, you might hold that person responsible for what happened to your child; that they came from an app is tragic in a similar but different way. Naturally, you would probably wonder why on earth this thing exists.

The basic defense available to Character.ai here is that its chats are labeled as fiction (more prominently now than before the app began drawing negative attention) and that its users need to understand, and generally do understand, that they are interacting with software. In the Character.ai community on Reddit, users produced a harsher version of this defense in a related discussion.

The parents will lose this case; there’s no way they can win. There are obviously tons of warnings that the bot’s messages shouldn’t be taken seriously.

Honestly, I think it’s the parents’ fault.

Well, maybe people who claim to be parents should start being fucking parents

Magic 8 Ball… should I ☠️ my parents?

Maybe check back later.

“Hmm, okay.”

A mentally healthy person should be able to tell the difference between reality and AI. If your child is liable to be influenced by AI, it is your job as a parent to keep them from using it. This is especially true if the child has, or has had, a mental illness.

I’m not mentally healthy, I know it’s AI 😭

These are fairly representative of the community’s reactions: dismissive, irritated, and contemptuous of people who just don’t get it. It’s worth trying to understand where they’re coming from. Most users appear to engage with Character.ai without any sense that it is harming them or anyone else. And much of what you encounter on the service feels less like conversation and relationship-building and more like collaborative scenario-building and explicit, scripted fan fiction, complete with asterisked stage directions. To give these reflexively defensive users a bit more credit than they’ve earned, you might note the similarities to earlier generations of parental fears about violent or pornographic media, such as music and movies.

A better comparison for an app like Character.ai is probably video games, which are popular with children, frequently violent, and were, when new, considered uniquely dangerous because of their novelty and immersiveness. Young gamers similarly rejected claims that games caused real-world harm, and although the gaming industry agreed to some degree of self-regulation, decades of research have failed to produce evidence supporting such theories. As a formerly defensive young gamer, I understand where Character.ai’s users are coming from. (Although, decades later, I suspect my younger self would not be thrilled to learn how long a much larger and more influential gaming industry would be powered by first-person shooters; I can’t say it’s great.)

The implication here is that this is just the latest in a long line of unfounded moral panics about entertainment products, and the comparison suggests that, in fairly short order, the rest of the world will come to see things the same way. Again, there is something to this. Ordinary people will likely adapt to the presence of chatbots in their daily lives; building and deploying similar chatbots will only get technically easier; and most people will be less dazed and confused by the hundredth chatbot they encounter than by the first, which will make attempts to single out character-based chatbots for regulation both legally and conceptually difficult. But these dismissive reactions are also self-serving. “A mentally healthy person should be able to tell the difference between reality and AI,” wrote one user, who a few days later posted in a thread asking whether other users had ever been brought to tears during a Character.ai role-play session.

When I did a roleplay a couple of days ago, I cried so much that I couldn’t continue anymore. It was a story about two people, a prince and a maid, who were madly in love and were each other’s first everything. But even though they both knew they couldn’t be together forever, which meant an eventual end, they still spent many happy years together in a secret relationship…

“This roleplay broke my heart,” the user added. Last month, the poster who joked, “I’m not mentally healthy, I know it’s AI,” responded to a thread about a service outage, in which users wondered whether they had been locked out of Character.ai, with: “I won’t lie…”

These comments are not strictly inconsistent with the theory that chatbots are pure entertainment, and I don’t mean to pick on a few Redditors. But they suggest that something a little more complicated than simple media consumption is going on, which matters not only to the appeal of Character.ai but to that of services like ChatGPT, too. The idea of suspending your disbelief and immersing yourself in a performance makes a different kind of sense when you are trading first-person pronouns with characters whose creators claim they can pass the Turing test than it does in a theater or with a game controller in hand. (For a franker and more honest account of the kinds of relationships people can have with chatbots, read Josh Dzieza’s reporting on the subject at The Verge.) AI companies do little to discourage this. When they need to be, they are just software companies; the rest of the time, they cultivate the sense, among users and investors alike, that they are building something radically different, something even they don’t fully understand.

But there’s no great mystery about what’s going on here. To oversimplify a bit, Character.ai is a tool that attempts to automate different modes of conversation, using existing, collected conversations as its source material. When a user sends a message to a persona, the underlying model, trained on similar conversations or conversations in a similar genre, returns a version of the most typical response in its training data. If the message is someone asking an assistant character for help with homework, chances are they’ll get what they need or expect. If it is a disturbing conversation, a model built on terabytes of data containing disturbing conversations between real people may produce something more disturbing still. To put it another way: if you train a model on decades of the web, use it to automate and simulate the kinds of conversations that take place on that web, and release it to a large number of young users, it is going to say a lot of terrible things to kids, and some of them will take those things seriously. The problem isn’t that the bot is malfunctioning. Which brings us back to what the parents suing Character.ai are asking: Why on earth would someone make this? The most satisfying answer on offer is probably “Because we could.”
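As a toy illustration of that claim, here is a deliberately tiny “model,” a lookup table that returns the most common continuation observed in its corpus. A real language model generalizes far beyond this, but the pull toward whatever dominates the training data is exactly the point; the corpus string and function names are invented for the example.

```python
from collections import Counter, defaultdict

# Toy stand-in for the claim above: a "model" that returns the most
# common continuation seen in its (here, absurdly small) training data.
corpus = "you should do it . no , you should not do it . you should do it".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # count every observed continuation

def most_common_next(word: str) -> str:
    """Return the most frequent word that followed `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(most_common_next("should"))  # -> "do": the majority continuation wins
```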

Character.ai presents specific, and sometimes grave, versions of some of the core problems with generative AI acknowledged by companies and their critics alike. Its characters are shaped by the biases of the material they were trained on. Attempts to set rules and boundaries for its chatbots are strained by the length and depth of private conversations with young users, conversations that can span thousands of messages. One common story about how AI spells disaster holds that as systems become more sophisticated, they will begin to exploit their ability to deceive users in pursuit of ends inconsistent with the goals of their creators and of humanity in general. These lawsuits, the first of their kind but certainly not the last, tell a similar story in miniature: of chatbots that became compelling enough to get someone to do things he otherwise wouldn’t have done, things that were not in his best interests. It is a family-sized version of the imagined AI apocalypse, written in small print.
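One concrete, simplified mechanism behind that rule-setting problem: context windows are finite, so in a conversation spanning thousands of messages, older turns are silently dropped. The sketch below illustrates this under assumed numbers and an invented trimming policy; it is not any vendor’s documented behavior.

```python
MAX_TOKENS = 8_000  # assumed context budget, purely illustrative

def rough_tokens(msg: dict) -> int:
    # Crude whitespace-based size estimate, good enough for the sketch.
    return len(msg["content"].split())

def trim_history(system: dict, history: list) -> list:
    """Keep the system prompt plus as many recent turns as fit.
    Everything older silently drops out of the model's view, including
    earlier refusals, warnings, or other safety-relevant context."""
    budget = MAX_TOKENS - rough_tokens(system)
    kept = []
    for msg in reversed(history):  # walk from newest to oldest
        cost = rough_tokens(msg)
        if cost > budget:
            break
        kept.append(msg)
        budget -= cost
    return [system] + kept[::-1]  # restore chronological order
```

After enough turns, whatever boundary-setting happened early in the chat is no longer in the prompt at all unless something re-injects it.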

