I was a technophile in my early teens, and sometimes wished I had been born in 2090 rather than 1990 so that I could see the incredible technologies of the future. Recently, however, I have become far more skeptical about whether the technology we interact with really serves us, or whether we are serving it.
So when I was invited to attend a conference on the development of safe and ethical AI ahead of the Paris AI Summit, I was prepared to hear Maria Ressa, the Filipino journalist and 2021 Nobel Peace Prize winner, describe how big tech has allowed its networks to be flooded with disinformation, hate and manipulation in ways that have had very real, negative effects on elections.
But I wasn’t prepared for what I heard from some of the “godfathers of AI”, including Yoshua Bengio, Geoffrey Hinton, Stuart Russell and Max Tegmark. Their concerns centre on the race toward AGI (artificial general intelligence, though Tegmark believes the “A” should stand for “autonomous”): for the first time in the history of life on Earth, non-human entities would simultaneously possess high autonomy, high generality and high intelligence, and could develop goals that are “misaligned” with human wellbeing. That could come about as the result of a national security strategy, a company’s pursuit of profit at all costs, or the AI’s own objectives.
“It’s not today’s AI we need to worry about, it’s next year’s,” Tegmark told me. “It’s as if you were interviewing me in 1942 and asked: ‘Why aren’t people worried about a nuclear arms race?’ They think it’s an arms race, but it’s actually a suicide race.”
It reminded me of Ronald D. Moore’s 2003 reimagining of Battlestar Galactica, in which a character explains to a journalist why the ship’s computers were never networked: “It was all designed to operate against an enemy that could infiltrate and disrupt all but the most basic computer systems. We were so frightened of our enemies that we literally looked backward for protection.”
It made me think we may need a new acronym. Instead of “mutually assured destruction” (MAD), we should talk about “self-assured destruction”, with the emphasis on the acronym: SAD! One that might even get through to Donald Trump.
The idea that we could lose control of AGIs on Earth sounds like science fiction, but is it really so far-fetched given the exponential growth of AI development? As Bengio pointed out, some of the most advanced AI models have already tried to deceive human programmers during testing, both in pursuit of their designated objectives and to escape being deleted or updated.
When a breakthrough in human cloning came within reach of scientists, biologists came together and agreed not to pursue it, says Stuart Russell, who literally wrote the textbook on AI. Similarly, both Tegmark and Russell support a moratorium on the pursuit of AGI, along with a tiered risk approach stricter than the EU AI Act: high-risk AI systems would have to demonstrate their safety before deployment, much like the drug approval process, and regulators would ensure they do not cross certain red lines, such as being able to copy themselves onto other computers.
However, even though the conference seemed weighted towards these future fears, there was a fairly obvious divide among the leading AI safety and ethics experts from industry, academia and government. While the “godfathers” were concerned with AGI, a younger, more diverse demographic pushed for equal focus on the dangers AI already poses to the climate and to democracy.
We don’t need to wait for AGI for Microsoft, Meta, Alphabet, OpenAI and their Chinese counterparts to decide to flood the world with data centres in the race to develop it faster. Nor for Donald Trump and Elon Musk to decide that manipulating masses of voters serves politicians with deregulatory agendas. And even at its current early stage, AI’s energy use is devastating: according to Kate Crawford, chair of AI and Justice at the École Normale Supérieure, data centres already account for more than 6% of all electricity consumption in the US and China, and demand is only going to surge.
“We need policymakers and governments to address both, rather than treating the topics as mutually exclusive,” Sacha Alanoca, a doctoral researcher at Stanford, told me. “And we should give priority to empirically driven problems like environmental harms, which already have tangible solutions.”
To that end, Sasha Luccioni, AI and climate lead at Hugging Face (a collaborative platform for open-source AI models), announced this week the rollout of the AI Energy Score, which ranks 166 models on the energy they consume when completing different tasks. The startup also offers a one-to-five-star rating system, comparable to the EU’s energy labels for household appliances, to guide users towards sustainable choices.
“There’s the global science budget, and then there’s the money we’re spending on AI,” Russell said. “We could have done something useful. Instead we’re pouring resources into this race to go off the edge of a cliff.” He didn’t specify what the alternatives might be. But barely two months into the year, around $1tn in AI investment had already been announced, while the world remains far short of the funding needed to stay within 2°C of warming.
What seems to be missing is any real incentive for businesses to build the kind of AI that benefits our personal and collective lives: sustainable, inclusive, compatible with democracy, and under human control. Beyond regulation, we need “a culture of participation built into AI development in general,” said Eloïse Gabadou, a consultant to the OECD on technology and democracy.
At the end of the conference, I put it to Russell head-on: we are using an incredible amount of energy and other natural resources to pursue something we perhaps shouldn’t create in the first place, while the relatively benign versions we already have are, in many ways, misaligned with the kind of society we actually want to live in.
“Yes,” he replied.