In Davos, talk of AI agents is everywhere. AI pioneer Yoshua Bengio warned attendees that AGI-powered agents could lead to "catastrophic scenarios." Bengio is studying how to build non-agentic systems that could keep agents in check.
Yoshua Bengio, a pioneer of artificial intelligence, attended the World Economic Forum this week and warned that AI agents "could end in catastrophe."
The topic of AI agents (artificial intelligence that can act without human input) is one of the hottest at this year's gathering in snowy Switzerland. At the event, where pioneering AI researchers assemble, discussions cover where AI is headed next, how it should be governed, and the milestone known as artificial general intelligence (AGI): machines that can reason like humans.
"All of the catastrophic scenarios with AGI or superintelligence happen if we have agents," Bengio said in an interview with BI. He believes it is possible to reach AGI without building agentic systems.
"AI for science and medicine, all the things people care about, is not agentic," Bengio said. "And we can continue to build more powerful systems that are non-agentic."
Bengio, a Canadian researcher, laid the groundwork for the modern AI boom with his early research on deep learning and neural networks, and is regarded as one of the "godfathers of AI" alongside Geoffrey Hinton and Yann LeCun. Like Hinton, Bengio has warned of AI's potential harms and called for collective action to mitigate the risks.
After two years of experimenting with AI, companies have begun to see concrete returns on investment from AI agents, which could enter the workforce as early as this year. OpenAI did not attend this year's Davos conference, but this week it released an AI agent that can surf the web on a user's behalf and carry out tasks such as booking restaurant reservations and adding groceries to a shopping cart. Google has previewed a tool of its own.
The problem, as Bengio sees it, is that people will keep building agents no matter what, especially as competing companies and countries worry that rivals will achieve agentic AI before they do.
"The good news is that if we build non-agentic systems, they can be used to control agentic systems," he told BI.
One way to do this is to build more sophisticated "monitors," though that would require significant investment.
He also called for national regulation that would prohibit AI companies from building agentic models without first proving that their systems are safe.
"We can advance the science of safe and capable AI, but we need to recognize the risks, understand scientifically where they come from, and make the technological investment to get there before it's too late," he said.
"I want to raise a red flag"
Before speaking with BI, Bengio appeared in a panel discussion alongside Google DeepMind CEO Demis Hassabis.
When asked about AI agents, Bengio told the audience, "I want to raise a red flag. This is the most dangerous path." He pointed to the ways scientists could use AI for discovery, such as DeepMind's breakthrough in protein folding, as examples of non-agentic AI. Bengio believes it is possible to reach AGI without giving AI agency.
"That's a bet, I agree," he said. "But I think it's a worthwhile bet."
Hassabis agreed with Bengio that steps should be taken to mitigate risk, such as hardening cybersecurity and testing agents in simulation before release. But he added that this only works if everyone agrees to build them the same way.
"Unfortunately, I think there is an economic gradient, beyond science and research, that makes people want their systems to be agentic," Hassabis said. "If you say, 'Recommend me a restaurant,' why wouldn't you want it to take the next step, and book the table?"