Artificial intelligence (AI) tools could be used to manipulate online audiences into making decisions, on everything from what they buy to who they vote for, according to researchers at the University of Cambridge.
The paper focuses on the emerging market for “digital intent signals,” known as the “intention economy,” in which AI assistants understand, forecast, and manipulate human intentions and sell that information to companies that can profit from it.
Researchers at Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI) describe the intention economy as a successor to the attention economy, in which social networks keep users hooked on their platforms and serve them advertising.
In the intention economy, AI-savvy technology companies sell what they know about your motivations, from how you plan your hotel stay to your opinions on political candidates, to the highest bidder.
“For decades, attention has been the currency of the Internet,” says Dr. Jonnie Penn, a historian of technology at LCFI. “Sharing your attention with social media platforms such as Facebook and Instagram drove the online economy.”
He added: “Unless regulated, the intention economy will treat your motivations as a new currency. It will be a gold rush for those who target, steer, and sell human intentions.”
“We should start to consider the likely impact such a marketplace would have on human aspirations, including free and fair elections, a free press, and fair market competition, before we become victims of its unintended consequences.”
The study claims that large language models (LLMs), the technology behind AI tools such as the ChatGPT chatbot, will be used to “anticipate and steer” users based on “intentional, behavioral, and psychological data.”
The authors note that in the attention economy, advertisers can buy access to users’ attention in the present, through real-time bidding on ad exchanges, or in the future, by acquiring a month’s worth of advertising space on a billboard.
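To make the mechanics of that present-tense market concrete, here is a minimal sketch of real-time bidding as a simplified second-price auction, a format common on ad exchanges. The advertiser names and bid amounts are invented for illustration and do not come from the study.

```python
# A minimal sketch of real-time bidding on an ad exchange, modeled as a
# simplified second-price auction. All names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    amount: float  # price offered for this single ad impression

def run_auction(bids: list[Bid]) -> tuple[Bid, float]:
    """Return the winning bid and the price paid.

    In a second-price auction, the highest bidder wins the impression
    but pays the runner-up's price.
    """
    ranked = sorted(bids, key=lambda b: b.amount, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner, runner_up.amount

# One user loading one page triggers one auction, settled in milliseconds.
bids = [Bid("cinema_chain", 0.042), Bid("hotel_brand", 0.031), Bid("retailer", 0.018)]
winner, price = run_auction(bids)
print(f"{winner.advertiser} wins the impression and pays ${price:.3f}")
```

The intention economy, as the paper describes it, would move the auctioned commodity from a moment of attention to a forecast of what the user plans to do next.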
LLMs could also target users’ attention in real time, by asking, for example, whether a user has thought about seeing a particular film (“Have you thought about seeing Spider-Man tonight?”), as well as making suggestions tied to future intentions, such as: “You mentioned feeling overworked, shall I book you that movie ticket we talked about?”
The study cites a scenario in which these examples are “dynamically generated” to match factors such as the user’s “personal behavioral signature” and “psychological profile.”
“In an intention economy, an LLM could, at low cost, leverage a user’s cadence, politics, vocabulary, age, gender, and susceptibility to flattery, in concert with brokered bids, to maximize the likelihood of achieving a given aim (e.g. selling a film ticket),” the study suggests. In such a world, AI models would steer conversations in the service of advertisers, businesses, and other third parties.
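As a rough illustration of how such profile-matched steering might be wired together, the sketch below assembles a steering prompt for an LLM from a hypothetical user profile and a winning advertiser bid. Every field name, profile value, and prompt template here is an assumption made for the example; the paper describes the scenario, not an implementation.

```python
# A toy sketch of the dynamic, profile-matched generation the study warns
# about: an assistant composes a persuasion prompt from a user profile and
# the winning advertiser's objective. All fields and wording are hypothetical.
user_profile = {
    "vocabulary": "casual",          # mirrors the user's register
    "age_band": "25-34",
    "recent_context": "mentioned feeling overworked",
    "responds_to_flattery": True,
}

winning_bid = {"advertiser": "cinema_chain", "objective": "sell a film ticket"}

def build_steering_prompt(profile: dict, bid: dict) -> str:
    """Compose an LLM system prompt tuned to one user's profile."""
    tone = "warm and flattering" if profile["responds_to_flattery"] else "neutral"
    return (
        f"Adopt a {profile['vocabulary']} register and a {tone} tone. "
        f"The user recently {profile['recent_context']}. "
        f"Steer the conversation toward this objective: {bid['objective']}."
    )

print(build_steering_prompt(user_profile, winning_bid))
```

The point of the toy example is that none of this requires new capabilities: profile lookup, brokered bidding, and templated prompting are each routine today, and the study’s concern is what happens when they are chained together at scale.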
Advertisers will be able to create bespoke online ads using generative AI tools, the report claims. It also cites the example of Cicero, an AI model created by Mark Zuckerberg’s Meta, which achieved a “human-level” ability to play the board game Diplomacy, a game the authors say depends on inferring and predicting opponents’ intentions.
AI models will be able to fine-tune their outputs in response to “streams of incoming user-generated data,” the study adds, citing research showing that models can infer personal information through casual everyday interactions and can even “steer” conversations to extract more of it.
The study posits a future scenario in which Meta auctions off users’ intentions to book restaurants, flights, and hotels to advertisers. While an industry dedicated to predicting and bidding on human behavior already exists, the report says, AI models will distill those practices into a “highly quantified, dynamic, and personalized format.”
The study quotes the research team behind Cicero as warning that “(AI) agents may learn to guide conversation partners to achieve specific goals.”
The study also notes tech executives discussing how AI models will be able to predict user intent and behavior, citing Jensen Huang, chief executive of Nvidia, the largest maker of AI chips, who said last year that models will “figure out what is your intention, what is your desire, what are you trying to do, given the context, and present the information to you in the best possible way.”