The first international AI Safety Report will be used to inform upcoming diplomatic discussions on how to mitigate the various dangers associated with artificial intelligence (AI), but it notes there is still a high degree of uncertainty around the exact nature of many threats and how best to deal with them.
Commissioned by the UK government after the world’s first AI Safety Summit at Bletchley Park in November 2023, and led by AI academic Yoshua Bengio, the report covers a wide range of threats posed by the technology, including its impacts on employment and the environment, its potential to proliferate cyber attacks and deepfakes, and how it can amplify social biases.
It also examines risks associated with growing market concentration in AI and the “AI research and development (R&D) divide”, but confines all of these risks to the context of systems that can perform a wide variety of tasks, known as general-purpose AI.
For each of the many risks assessed, the report stopped short of drawing firm conclusions, emphasising the high degree of uncertainty around how the fast-moving technology will develop, and called for further monitoring and evidence gathering in each area.
“Current evidence points to two central challenges in general-purpose AI risk management,” it said. “First, it is difficult to prioritise risks due to uncertainty about their severity and likelihood of occurrence. Second, it can be complex to determine roles and responsibilities across the AI value chain, and to incentivise effective action.”
However, the report is clear in its conclusion that the potential future impacts of AI are primarily a political question, to be determined by the choices societies and governments make today.
“How general-purpose AI is developed, which problems it is designed to solve, whether societies will be able to reap its full economic potential, who benefits from it, and the types of risks we are exposed to – the answers to these and many other questions depend on the choices societies and governments make today and in the future to shape the development of general-purpose AI,” it added.
“Constructive scientific and public discussion will be essential for societies and policymakers to make the right choices.”
Building on the findings of the interim AI Safety Report published in May 2024 – which noted a lack of expert agreement on the biggest risks – the report is intended to inform discussions at the upcoming AI Action Summit in France, scheduled for early February 2025. It follows the two previous summits at Bletchley and in Seoul, South Korea.
“Artificial intelligence is a central topic of our time, and its safety is a crucial foundation for building trust and fostering adoption,” said Clara Chappaz, the French minister delegate for AI and digital technology.
“This first comprehensive scientific assessment provides the evidence base needed for societies and governments to shape AI’s future direction responsibly. These insights will inform crucial discussions at the upcoming AI Action Summit in Paris.”
Systemic risks
Examining the wider societal risks of AI deployment – beyond the capabilities of any individual model – the report said the impact on labour markets in particular is likely to be “profound”.
While there is considerable uncertainty over exactly how AI will affect labour markets, the report said the productivity gains created by the technology are likely to “lead to a mixed impact on wages across different sectors, increasing the wages of some workers while decreasing the wages of others”, with the most significant near-term impacts falling mainly on jobs made up of cognitive tasks.
It highlighted that improved general-purpose AI capabilities are also likely to increase existing risks to workers’ autonomy and wellbeing, noting that “constant monitoring and AI-driven workloads” are particularly harmful to logistics workers.
In line with an assessment of AI made by the International Monetary Fund (IMF) in January 2024, the report stated that AI is likely to worsen inequality without political intervention, potentially reducing “the share of all income that goes to workers relative to capital owners”.
It added that these inequalities could be deepened further by what it calls the “AI R&D divide”, in which development of the technology is highly concentrated in the hands of large companies based in countries with strong digital infrastructure.
“For example, in 2023, the majority of notable general-purpose AI models (56%) were developed in the US,” it said, warning that this could worsen existing inequalities, and that rising development costs are only exacerbating the disparity.
The report also highlighted the growing trend of “ghost work”, which refers to the largely hidden labour – often performed by workers in low-income countries – that supports the development of AI models. While this work can provide people with economic opportunities, the report noted that “the contract-style nature of this work often provides fewer protections and less work stability for workers, as platforms rotate around the world in search of cheaper labour”.
It said all of this is linked to the “high degree” of market concentration around AI, which allows a small handful of powerful companies to dominate the technology’s development and use.
On the environmental impacts, the report noted that while datacentre operators are increasingly turning to renewable energy sources, a significant portion of AI training still relies on high-carbon energy sources such as coal and natural gas, and consumes significant amounts of water.
It added that improvements in AI-related hardware alone are “unlikely to negate the overall growth in AI’s energy use, and could even accelerate it due to ‘rebound effects’”, and that “current figures largely rely on estimates, which are variable and unreliable given the rapid pace of development in the field”.
Risks from malfunction
Highlighting the “concrete harms” AI can cause as a result of its potential to amplify existing political and social biases, the report said this could lead to discriminatory outcomes, including “unequal resource allocation, reinforcement of stereotypes, and systematic neglect of certain groups or viewpoints”.
It specifically noted that most AI systems are trained on language and image datasets that disproportionately represent English-speaking and Western cultures, that many design choices align with some worldviews at the expense of others, and that current bias-mitigation techniques are unreliable.
“Holistic and participatory approaches that include diverse perspectives and stakeholders are essential for mitigating bias,” it said.
Reflecting the interim report’s findings on the risk of humans losing control of AI systems – something some worry could lead to an extinction-level event – the report acknowledged such fears, but noted that opinion on the matter varies greatly.
“Some consider it implausible, some consider it likely to occur, and some see it as a modest-likelihood risk that warrants attention due to its high severity,” it said, adding that competitive pressure between companies or countries may partly determine the risk of losing control.
Risks from malicious use
On the malicious use of AI, the report highlighted issues around cyber security, deepfakes, and the technology’s use in the development of biological or chemical weapons.
On deepfakes, it noted the particular harm done to children and women, who face distinct threats of sexual abuse and violence.
“Current detection methods and watermarking techniques, while progressing, show mixed results and face persistent technical challenges,” it said. “This means there is currently no single robust solution for detecting and reducing the spread of harmful AI-generated content. Finally, the pace of advancement in AI technology often outstrips the ability to detect it, highlighting the potential limitations of relying solely on technical and reactive interventions.”
On cyber security, the report said that while AI systems have shown significant progress in autonomously identifying and exploiting cyber vulnerabilities, these risks are, in principle, manageable, as AI can also be used defensively.
“Rapid advancements in capabilities make it difficult to rule out large-scale risks in the near term, thus highlighting the need to evaluate and monitor these risks,” it said. “Better metrics are needed to understand real-world attack scenarios, particularly where humans and AIs work together. A key challenge is mitigating offensive capabilities without compromising defensive applications.”
It added that while new AI models can create step-by-step guides for creating pathogens and toxins that surpass PhD-level expertise – potentially lowering the barriers to developing biological or chemical weapons – this remains a “technically complex” process, meaning the “practical utility for novices remains uncertain”.