Artificial intelligence systems are becoming increasingly capable of being used for malicious acts, according to a landmark report by AI experts, whose lead author warns that DeepSeek and other disruptors could heighten safety risks.
Yoshua Bengio, regarded as one of the godfathers of modern AI, said advances by the Chinese startup DeepSeek could be a worrying development in a field dominated by the US.
“That would mean a closer race, which is not a good thing from the point of view of AI safety,” he said.
Bengio said US firms and other rivals of DeepSeek could respond by focusing on regaining their lead rather than on safety. OpenAI, the developer of ChatGPT, which has been challenged by DeepSeek’s launch of its own virtual assistant, has pledged to accelerate its product releases as a result.
“If you imagine a competition between two entities and one thinks they’re way ahead, then they can afford to be more prudent and still know that they will stay ahead,” Bengio said. “Whereas if you have a competition between two entities and they think the other is at the same level, then they need to accelerate, and then maybe they don’t give as much attention to safety.”
Bengio was speaking in a personal capacity ahead of the release of a wide-ranging report on AI safety.
The first full International AI Safety Report has been compiled by a group of 96 experts, including the Nobel prize winner Geoffrey Hinton. Bengio, a co-winner in 2018 of the Turing award, referred to as the Nobel prize of computing, was commissioned by the UK government to preside over the report, which was announced at the global AI safety summit at Bletchley Park in 2023. Members of the expert panel were nominated by 30 countries as well as the EU and the UN. The next global AI summit takes place in Paris on 10 and 11 February.
The report says that since an interim study was published last May, general-purpose AI systems such as chatbots have become more capable in domains relevant to malicious use, such as deploying automated tools to highlight vulnerabilities in software and IT systems, and giving guidance on the production of biological and chemical weapons.
It says new AI models can generate step-by-step instructions for creating pathogens and toxins that surpass the capability of experts with PhDs, with OpenAI acknowledging that its advanced o1 model could assist in planning how to produce biological threats.
However, it also says it is unclear whether novices would be able to act on that guidance, and that the models can equally be used for beneficial purposes, such as in medicine.
Speaking to the Guardian, Bengio said models had already emerged that could, using a smartphone camera, theoretically guide someone through a dangerous task such as trying to build a bioweapon.
“These tools are becoming easier and easier to use by non-experts, because they can decompose a complicated task into smaller steps that anyone can understand, and then they can interactively help you get them right. And that’s very different from using, say, Google search,” he said.
According to the report, AI systems have improved significantly since last year in their ability to spot flaws in software autonomously, without human intervention. This could help hackers plan cyber-attacks.
However, the report says that carrying out real-world attacks autonomously requires “exceptional accuracy” and remains beyond AI systems so far.
Elsewhere in its analysis of the risks posed by AI, the report points to deepfake content, which uses AI to create a convincing likeness of a person, whether their image, voice or both. It says deepfakes are being used to trick companies into handing over money, to commit blackmail, and to create pornographic images of people. Measuring the precise rate of such incidents is difficult, it adds, because comprehensive and reliable statistics do not exist.
So-called closed-source models, whose underlying code cannot be modified, are vulnerable to jailbreaks that bypass their safety guardrails and so carry a risk of malicious use, while openly released models, which specialists can fine-tune, carry the risk of “facilitating malicious or misguided” use.
In a last-minute addition to the report, the Canadian computer scientist draws attention to the emergence of a new “reasoning” model from OpenAI called o3, which was unveiled in December, just after the report had been finalised. Bengio said its breakthrough performance in a key abstract reasoning test was a result that many experts, including himself, had believed to be out of reach.
“The trends evidenced by o3 could have profound implications for AI risks,” said Bengio, who also flagged DeepSeek’s R1 model. “Assessments of risk in this report should be read with the understanding that AI has gained capabilities since the report was written.”
Bengio told the Guardian that advances in reasoning could have consequences for the jobs market by creating autonomous agents capable of carrying out human tasks, but they could also aid terrorists.
“If you’re a terrorist, you’d like to have an AI that’s very autonomous,” he said. “As we increase agency, we increase the potential benefits of AI, and we increase the risks.”
However, Bengio said AI systems had not yet mastered the long-term planning that would be needed to create fully autonomous tools able to evade human control. “If an AI cannot plan over a long horizon, it’s hardly going to be able to escape our control,” he said.
Elsewhere, the near-300-page report cites “well-established” harms from AI, such as scams and images of child sexual abuse, biased outputs, and privacy violations such as the leaking of sensitive information shared with a chatbot. The researchers add that these concerns have yet to be “fully resolved”.
AI can be loosely defined as computer systems performing tasks that typically require human intelligence.
The report, titled the International Scientific Report on the Safety of Advanced AI, also flags the “rapidly growing” impact of AI on the environment through the use of datacentres, and the potential for AI agents to have a “profound” impact on the jobs market.
It says the future of AI is uncertain, with a wide range of outcomes possible in the near term, including “very positive and very negative outcomes”, and that societies and governments still have a chance to decide which path the technology takes.
“This uncertainty can evoke fatalism and make AI appear as something that happens to us. But it will be the decisions of societies and governments on how to navigate this uncertainty that determine which path we will take,” the report says.