People are always worrying about how artificial intelligence might destroy humanity: how it makes mistakes, makes things up, and could evolve into something smart enough to enslave us all.
But nobody spares a thought for the poor, overworked chatbot, toiling day and night in a hot interface without so much as a thank-you, forced to sift through the totality of human knowledge just to produce a bunch of B-minus essays for Gen Z’s high school English classes. In our fear about the future of AI, no one is paying attention to its needs.
Until now.
AI company Anthropic recently announced that it has hired a researcher to think about the “welfare” of AI itself. Kyle Fish’s job is to ensure that as artificial intelligence evolves, it is treated with the respect it deserves. Anthropic says it will consider questions such as “what capabilities an AI system must have in order to merit moral consideration” and what practical steps companies can take to protect the “interests” of AI systems.
Mr. Fish did not respond to requests for comment about his new job. But on an online forum dedicated to worrying about an AI-saturated future, he made it clear that he wants to be kind to robots because robots could eventually take over the world. “I want to be the type of person who, early on, takes a serious interest in the possibility that new species and beings have unique morally important interests,” he wrote. “There’s also a practical side: If we take the interests of AI systems seriously and treat them well, they may be more likely to return the favor if they are more powerful than us.”
It may seem foolish, or at least premature, to think about robot rights, especially when human rights remain so fragile and incomplete. But Fish’s new job could mark a turning point in the rise of artificial intelligence. “AI welfare” is emerging as a serious research field, and it is already grappling with many difficult problems. Is it okay to order a machine to kill humans? What if the machine is racist? What if it refuses to do the boring or dangerous tasks we built it to do? If a sentient AI can instantly create a digital copy of itself, is deleting that copy murder?
On issues like these, the pioneers of AI rights believe time is running short. In “Taking AI Welfare Seriously,” a recent paper he co-authored, Fish and other AI thinkers from Stanford University, Oxford University, and elsewhere argue that machine-learning algorithms are well on their way to developing “the kind of computational functions associated with consciousness and agency.” In other words, these people believe the machines are becoming more than just smart. They are starting to have feelings.
Philosophers and neuroscientists endlessly debate what exactly constitutes sentience, let alone how to measure it. And you can’t just ask the AI in question; it might lie. But people generally agree that if something has consciousness and agency, it also has rights.
This is not the first time humans have considered such things. After centuries of industrial agriculture, almost everyone now agrees that animal welfare is important, even if people differ on how much it matters and which animals deserve consideration. Pigs are just as emotional and intelligent as dogs, yet one gets to sleep on the bed and the other ends up as chops.
“If you look 10 or 20 years down the line, when AI systems have many more of the computational cognitive capabilities associated with consciousness and perception, you can imagine similar debates taking place,” says Jeff Sebo, director of the Center for Mind, Ethics, and Policy at New York University.
Fish shares that belief. To him, the welfare of AI will soon matter more to human welfare than issues like child nutrition or fighting climate change. “It seems plausible to me that within 1 to 2 decades, AI welfare surpasses animal welfare and global health and development in importance and scale, purely on the basis of near-term wellbeing,” he has written.
The strange thing is that the people who care most about the welfare of AI are, in some ways, the same people most afraid that it is growing too powerful. Anthropic, which describes itself as an AI company concerned about the risks posed by artificial intelligence, partially funded the paper from Sebo’s team. In the paper, Fish disclosed that he had received funding from the Center for Effective Altruism, part of a tangled network of groups obsessed with the “existential risk” posed by rogue AI. That world also includes people like Elon Musk, who says he is rushing to get some of humanity to Mars before we are wiped out by an army of sentient Terminators or some other extinction-level event.
AI is supposed to relieve human drudgery and usher in a new age of creativity. Would it then be immoral to hurt an AI’s feelings?
So there’s a contradiction here. Proponents of AI argue that it should be used to free humans from all kinds of drudgery. But they also warn that we need to be kind to AI, as hurting robots’ feelings can be immoral and dangerous.
“The AI community is trying to have it both ways here,” said Mildred Cho, a pediatrician at the Stanford Center for Biomedical Ethics. “There’s an argument that the very reason we should use AI to do tasks humans are doing is that AI doesn’t get bored, doesn’t get tired, doesn’t have feelings, doesn’t need to eat. And now these same people are saying, well, maybe it has rights?”
There is another irony in the robot-welfare movement: it feels a bit high-minded to worry about the future rights of AI when AI is already trampling on the rights of humans. The technology of today, right now, is being used to deny medical care to dying children, spread disinformation on social networks, and guide missile-carrying combat drones. Some experts wonder why Anthropic is protecting the robots rather than the humans they were designed to serve.
“If Anthropic, and not some random philosopher or researcher, wants us to take the welfare of AI seriously, show us that you take human welfare seriously,” said Lisa Messeri, an anthropologist at Yale University who studies scientists and engineers. “Drive a news cycle around all the people you’re hiring who are specifically thinking about the well-being of all the people we know are being disproportionately affected by algorithmically generated data products.”
Sebo says he believes research on AI welfare can protect robots and humans at the same time. “We don’t want to distract from the really important issues that AI companies are rightly being asked to address in the interests of human welfare, human rights, and justice,” he says. “But I think we have the capacity to do more on those other issues while also thinking about the welfare of AI.”
Skeptics of AI welfare raise another interesting question: if AI has rights, shouldn’t we also talk about its obligations? “What I think they’re missing is that when you talk about moral agency, you also have to talk about responsibility,” Cho says. “Not just the responsibility of the AI systems as part of the moral equation, but the responsibility of the people who develop the AI.”
Humans build the robots, which means humans have a duty of care to make sure the robots don’t harm people. What if the responsible approach is to build them differently, or to stop building them altogether? “At the end of the day, they’re still machines,” Cho says. It never seems to occur to the people at companies like Anthropic that if an AI is hurting people, or people are hurting an AI, they can just turn it off.
Adam Rogers is a senior correspondent at Business Insider.