The AI Action Summit in Paris brought together top AI experts, global leaders and senior tech executives. It also sparked a notable exchange between Fu Ying, a former British ambassador... rather, former Chinese vice-minister of foreign affairs and ambassador to the UK, and Professor Yoshua Bengio, often referred to as one of the "Godfathers of AI." The point of contention? The role of transparency in AI safety and risk mitigation.
A heated exchange over AI safety
During a panel discussion ahead of the two-day summit, Fu Ying couldn't resist teasing Professor Bengio about the international AI safety report he co-authored. She thanked him for the "very long" document, pointing out that the Chinese translation runs to around 400 pages and was a challenge to read in full.
But the real tension arose when she took a jab at the name of the body Bengio is closely associated with, the AI Safety Institute. She pointed out that China had taken a different approach: rather than adopting a similar title, it chose to call its organization the "AI Development and Safety Network." The name emphasized collaboration over restriction. The subtle dig also highlighted the philosophical differences between China and Western countries on AI governance.
Industry heavyweights in attendance
The AI Action Summit featured major players from 80 countries. Notable figures included OpenAI CEO Sam Altman, Microsoft president Brad Smith and Google CEO Sundar Pichai. Not on the guest list? Elon Musk, though it was unclear whether he would make a surprise appearance.
The summit came at an important time. Just a few weeks earlier, China's DeepSeek had unveiled a powerful, low-cost AI model, shaking up the AI landscape and challenging US dominance. The discussion between Fu Ying and Professor Bengio highlighted the broader geopolitical tensions over AI.
Is transparency good or bad?
Fu Ying argued that open-source AI promotes safety. When AI models are accessible, more eyes can scrutinize their workings, making it easier to detect and fix problems. She criticized US tech giants for keeping their AI models closed, warning that secrecy creates uncertainty and fear.
Professor Bengio pushed back. He acknowledged the benefits of open-source models, but warned that unrestricted access also opens the door to bad actors who could abuse AI for malicious purposes, making regulation essential.
AI regulations
On Tuesday, global leaders such as French president Emmanuel Macron, Indian Prime Minister Narendra Modi and US Vice President JD Vance joined the debate over the impact of AI on employment, public services and risk management. The summit also announced a $400 million partnership to fund AI ventures aimed at public welfare, particularly in healthcare.
In an interview with the BBC, UK technology secretary Peter Kyle emphasized that the UK cannot afford to lag behind in AI adoption. Dr. Laura Gilbert, an AI advisor to the UK government, echoed this view, highlighting AI's potential to improve healthcare. "How do you fund the NHS without using AI?" she asked.
Matt Clifford, an architect of the UK's AI Action Plan, predicted that the impact of AI will be even more disruptive than the transition from typewriters to word processors. Marc Warner, CEO of the AI company Faculty, went a step further, arguing that AI will automate cognitive labour. He speculated that by the time his two-year-old grows up, traditional work might look completely different.