
The former Chinese official poked gentle fun at a major international AI safety report led by Professor Yoshua Bengio and co-authored by 96 global experts.
Fu Ying, a former vice-minister of foreign affairs and one-time Chinese ambassador to the UK, is now an academic at Tsinghua University in Beijing.
The pair were speaking at a panel discussion ahead of the two-day global AI summit, which begins in Paris on Monday.
The summit aims to bring together world leaders, technology executives and academics to examine AI’s impact on society, governance and the environment.
Fu Ying thanked the Canadian professor for the “very long” document, adding that the Chinese translation stretches to around 400 pages and that she has not finished reading it.
She also took a dig at the title of the AI Safety Institute, of which Professor Bengio is a member.
China now has an equivalent body, but they decided to call it the AI Development and Safety Network, she said, because there are already many AI labs and the name underscores the importance of collaboration.
The AI Action Summit is welcoming guests from 80 countries, with OpenAI chief executive Sam Altman, Microsoft president Brad Smith and Google chief executive Sundar Pichai among those attending from the US tech sector.
Elon Musk is not on the guest list, but it is currently unclear whether he will decide to join them.
A key focus will be regulating AI in an increasingly fractured world. The summit takes place just weeks after a seismic shift in the industry, as China’s DeepSeek unveiled a powerful, low-cost AI model that challenged US dominance.
The pair’s heated exchange was symbolic of the global political jostling in the high-stakes AI arms race, but Fu Ying also expressed regret at the negative impact of the current US-China hostilities on progress in AI safety.
“The science is on an upward trajectory, but the relationship is heading in the wrong direction, and that affects the unity and collaboration needed to manage the risks,” she said.
“It’s very unfortunate.”
Offering a careful, polished glimpse behind the curtain of China’s AI scene, she said the country first unveiled its AI development plan in 2017, five years before ChatGPT became a viral sensation in the West, and described an “explosive period” of innovation since then.
She added that “when something (in development) is fast-paced, it is also dangerous”, but did not elaborate on what might have happened.
“The Chinese move faster (than the West), but they’re full of problems,” she said.
Fu Ying argued that building AI tools on open-source foundations, meaning anyone can see how they work and therefore help improve them, is the most effective way to ensure the technology does not cause harm.
Most of the US tech giants do not share the technology that drives their products.

Open source offers humanity “a better opportunity to detect and solve problems,” she said, adding that “the lack of transparency among the giants makes people nervous.”
However, Professor Bengio disagreed.
In his view, open source also leaves the technology wide open for criminals to misuse.
However, he conceded that, “from a safety standpoint,” it is easier to understand what is going on with the viral Chinese AI assistant DeepSeek, which was built using an open-source architecture, than with ChatGPT, whose code has not been shared by its creator OpenAI.
World leaders, including French President Emmanuel Macron, Indian Prime Minister Narendra Modi and US Vice President JD Vance, will hold talks at the summit on Tuesday.
The discussions will include how AI affects the world of work, how it can be used for the public good, and how its risks can be mitigated.
A new $400 million partnership between several countries has also been announced, aimed at supporting AI initiatives that serve public interests such as healthcare.
In an interview with the BBC, UK Technology Secretary Peter Kyle said he thought it would be dangerous for the UK to fall behind in adopting the technology.
Dr. Laura Gilbert, who advises the government on AI, said she believes the efficiencies it promises are essential to sustaining the NHS. “How do you fund the NHS without grabbing AI?” she asked.
Matt Clifford, who wrote the UK’s AI action plan, which the government has adopted in full, warned that AI will be more “radical” than when computers first entered the workplace and typewriters were replaced by word processors.
“The Industrial Revolution was the automation of physical labor. AI is the automation of cognitive labor,” said Marc Warner, boss of the AI firm Faculty. He added that he does not believe a two-year-old today will ever “work the way we know today.”