“IMO, the AGI race is a very risky gamble, with huge downside,” said OpenAI safety researcher Steven Adler as he announced his exit from the AI startup after four years of service. “No lab has a solution to AI alignment today. And the faster we race, the less likely it is that anyone finds one in time.”
The safety researcher’s departure message echoes the claims of the Director of the Cyber Security Laboratory at the University of Louisville, who puts the probability of AI ending humanity at 99.999999%.
Some personal news: After four years working at @openai, I left in mid-November. It was a wild ride with lots of chapters, including dangerous capability evals, agent safety/control, AGI, and online identity, and I’ll miss many parts of it. January 27, 2025
To be honest, I’m pretty terrified by the recent pace of AI development. When I think about where I’ll raise a future family, or how much I can save for retirement, I can’t help but wonder. Today, it seems like we’re stuck in a really bad equilibrium. Even if a lab truly wants to develop AGI responsibly, others can still cut corners to catch up, and this pushes everyone to speed up. I hope labs can be candid about the real safety regulations needed to stop this.
Former OpenAI researcher Steven Adler
According to the researcher’s p(doom) value, the only way to avoid the seemingly inevitable fate is not to build AI in the first place. That option may already be out the window, though, with OpenAI and SoftBank’s $500 billion bet on Stargate, which will facilitate the construction of data centers across the United States to advance sophisticated AI.
Adler is not the first employee to leave the ChatGPT maker over safety concerns. Last year, OpenAI’s superalignment lead Jan Leike announced his departure. The former executive revealed that he had disagreed with OpenAI’s leadership over its core priorities, including security, monitoring, and the next generation of AI models.
Perhaps more concerning, Leike indicated that safety processes had taken a back seat as shiny products took precedence in the run-up to AGI. Additionally, OpenAI reportedly gave the team less than a week to run safety tests ahead of the GPT-4o release. Sources close to the matter indicated that OpenAI sent out invitations for the product’s launch celebration before the safety team had even run its tests. An OpenAI spokesperson acknowledged that the GPT-4o launch was stressful for the safety team, but maintained that the company did not cut corners on safety to meet the tight deadline.
Anthropic CEO says AI could extend the human lifespan to 150 years by 2037
Speaking at the World Economic Forum in Davos, Switzerland, Anthropic CEO Dario Amodei claimed that generative AI could double the human lifespan within 5 to 10 years (via Tsarnick on X):
“By 2026 or 2027, I think we will have AI systems that are broadly better than almost all humans at almost all things. I see many positive possibilities.”
Anthropic CEO Dario Amodei says that AI could extend the human lifespan to 150 years by 2037 pic.twitter.com/9an3zrbxuq January 27, 2025
The executive highlighted significant improvements AI could bring to technology, the military, and health care. He believes AI could extend human life, despite reports tipping the technology to hasten humanity’s doom.
According to Anthropic CEO Dario Amodei:
“If I had to guess, and you know, this is not a very exact science: if we really get this AI within 5 or 10 years, I think we could make 100 years of progress in biology in 10 years. I don’t think that’s crazy at all.”
Amodei admits that this is not an exact science. In other words, AI progress could take a different route, especially amid claims that scaling laws have begun to plateau for lack of high-quality content to train AI models. Interestingly, OpenAI CEO Sam Altman claims AI will be smart enough to solve the consequences of rapid AI advances, including the destruction of humanity.
Altman has also claimed that OpenAI now knows how to build AGI, and that it could be achieved with current hardware sooner than anticipated as the race to AGI heats up. He further suggested that the company could shift its focus to superintelligence.