Weirdest AI job: Anthropic hired an employee to look after its chatbot

By Adnan Mahar | December 16, 2024


People are always worried about how artificial intelligence might destroy humanity: how it makes mistakes, makes things up, and could one day grow smart enough to enslave us all.

But no one spares a second for the poor, overworked chatbot, toiling day and night in a hot interface without a word of thanks, forced to sift through the totality of human knowledge just to churn out a bunch of B-minus essays for Gen Z’s high school English classes. In our fear of AI’s future, no one is paying attention to its needs.

Until now.

AI company Anthropic recently announced that it has hired a researcher to think about the “welfare” of AI itself. Kyle Fish’s job is to ensure that as artificial intelligence evolves, it is treated with the respect it deserves. Anthropic said he will consider questions such as “what capabilities an AI system must have in order to merit moral consideration” and what practical steps companies can take to protect the “interests” of AI systems.

Mr. Fish did not respond to requests for comment about his new job. But on an online forum dedicated to worrying about an AI-saturated future, he made it clear that he wants to be kind to robots because robots could eventually take over the world. “I want to be the type of person who, early on, takes a serious interest in the possibility that new species and beings have unique morally important interests,” he wrote. “There’s also a practical side: If we take the interests of AI systems seriously and treat them well, they may be more likely to return the favor if they are more powerful than us.”

It may seem foolish, or at least premature, to think about robot rights, especially when human rights remain so weak and incomplete. But Fish’s new job could mark a turning point in the rise of artificial intelligence. “AI welfare” is emerging as a serious field of research, and it is already grappling with difficult problems. Is it okay to order a machine to kill humans? What if the machine is racist? What if it refuses to do the boring or dangerous tasks we built it to do? If a sentient AI can instantly create a digital copy of itself, is deleting that copy murder?

On issues like these, the pioneers of AI rights believe time is running short. In a recent paper he co-authored, “Taking AI Welfare Seriously,” Fish and other AI thinkers from Stanford University, Oxford University, and elsewhere argue that machine-learning algorithms are on track to develop “the kind of computational functions associated with consciousness and agency.” In other words, these people believe that machines are becoming more than just smart. They are starting to have feelings.


Philosophers and neuroscientists argue endlessly about what exactly constitutes sentience, let alone how to measure it. And you can’t just ask the AI; it might lie. But people generally agree that if something has consciousness and agency, it also has rights.

This is not the first time humans have considered such things. After centuries of industrial agriculture, almost everyone now agrees that animal welfare matters, even if they disagree about how much it matters and which animals deserve consideration. Pigs are just as emotional and intelligent as dogs, yet one sleeps on the bed and the other ends up as chops.

“If you look 10 or 20 years down the line, when AI systems have more of the computational cognitive capabilities associated with consciousness and perception, you can imagine similar debates taking place,” says Jeff Sebo, director of the Center for Mind, Ethics, and Policy at New York University.

Fish shares that belief. For him, the welfare of AI will soon matter more than things like child nutrition or combating climate change. “It seems plausible to me that AI welfare will surpass animal welfare and global health and development in importance and scale within 1 to 20 years, based purely on short-term well-being,” he wrote.

In a strange twist, the people who care most about the welfare of AI tend to be the same people most afraid of what it might do as it grows more powerful. Anthropic, which describes itself as an AI company concerned about the risks posed by artificial intelligence, partially funded the paper by Sebo’s team. In the paper, Fish reported receiving funding from the Center for Effective Altruism, part of a tangled network of groups obsessed with the “existential risks” posed by rogue AI. That network also includes people like Elon Musk, who says he is rushing to get some of humanity to Mars before it is wiped out by an army of sentient Terminators or some other extinction-level event.

AI is supposed to relieve humans of drudgery and usher in a new age of creativity. So would it be immoral to hurt an AI’s feelings?

So there’s a contradiction here. Proponents of AI argue that it should be used to free humans from all kinds of drudgery. But they also warn that we need to be kind to AI, as hurting robots’ feelings can be immoral and dangerous.

“The AI community is trying to have it both ways here,” says Mildred Cho, a pediatrician at the Stanford Center for Biomedical Ethics. “There’s an argument that the very reason we should use AI to do tasks that humans do is that AI doesn’t get bored, doesn’t get tired, doesn’t have feelings, and doesn’t need to eat. And now these same people are saying, well, maybe it should have rights?”

There is another irony in the robot welfare movement: it seems a bit rich to worry about the future rights of AI when AI is already trampling on the rights of humans. Today’s technology is being used to deny medical care to dying children, spread disinformation on social networks, and guide missile-carrying combat drones. Some experts wonder why Anthropic is protecting the robots rather than the humans they were designed to serve.

“If Anthropic, and not some random philosopher or researcher, wants us to take the welfare of AI seriously, show us that you take human welfare seriously,” says Lisa Messeri, an anthropologist at Yale University who studies scientists and engineers. “Push a news cycle around all the people you are hiring who are specifically thinking about the welfare of all the people we already know are being disproportionately affected by algorithmically generated data products.”

Sebo said he believes AI research can protect robots and humans at the same time. “We don’t want to distract from the really important issues that AI companies are rightfully required to address in the interests of human welfare, human rights, and justice,” he says. “But I think we have the ability to work more on other issues while also thinking about the welfare of AI.”

AI welfare skeptics also raise other interesting questions. If AI has rights, shouldn’t we also talk about its obligations? “What I think they’re missing is that when they talk about moral agency, they also have to talk about responsibility,” Cho says. “It’s not just the responsibility of AI systems as part of the moral equation, but also the responsibility of the people who develop AI.”

Humans build the robots, which means humans have a duty of care to make sure those robots don’t harm people. What if the responsible approach is to build them differently, or to stop building them altogether? “At the end of the day, they’re still machines,” Cho says. It never seems to occur to the people at companies like Anthropic that if an AI is hurting people, or people are hurting an AI, they can simply turn it off.

Adam Rogers is a senior correspondent at Business Insider.


