Researchers have discovered that they can create a close replica of a person’s personality just by having that person talk to an artificial intelligence (AI) model for two hours.
In a new study published Nov. 15 on the preprint database arXiv, researchers from Google and Stanford University created 1,052 “simulation agents” — essentially AI replicas — of 1,052 individuals, based on a two-hour interview with each participant. These interviews were used to train a generative AI model designed to mimic human behavior.
To assess the accuracy of the AI replicas, each participant completed two rounds of personality tests, social surveys, and logic games, taken two weeks apart. When the AI replicas were put through the same tests, they matched their corresponding humans’ responses with 85% accuracy.
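The study’s headline figure is a *normalized* accuracy: a replica’s raw agreement with its human is compared against how consistently that same human answers the identical questions two weeks later. The sketch below is a hypothetical illustration of that scoring idea, not code from the study; the function name and the toy survey data are invented for this example.

```python
def replica_accuracy(human_round1, human_round2, replica_answers):
    """Illustrative (hypothetical) normalized-accuracy score:
    the replica's raw agreement with the participant, divided by
    the participant's own test-retest consistency across two weeks."""
    n = len(human_round1)
    raw = sum(h == r for h, r in zip(human_round1, replica_answers)) / n
    consistency = sum(a == b for a, b in zip(human_round1, human_round2)) / n
    return raw / consistency

# Toy data: 10 yes/no survey answers (not from the study).
h1 = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]  # participant, round 1
h2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]  # participant, two weeks later
ai = [1, 0, 1, 1, 0, 0, 0, 1, 1, 1]  # AI replica's answers
print(round(replica_accuracy(h1, h2, ai), 2))  # → 1.0
```

In this toy case the replica disagrees with the participant exactly as often as the participant disagrees with their own earlier answers, so the normalized score is 1.0; the study’s reported 85% is a score of this kind on the General Social Survey.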
The researchers propose that AI models that emulate human behavior could be useful in a variety of research scenarios, such as evaluating the effectiveness of public health policies, understanding responses to product launches, and modeling reactions to major societal events that might otherwise be too costly, difficult, or ethically complex to study with human participants.
“General-purpose simulation of human attitudes and behavior — where each simulated individual can engage across a range of social, political, or informational contexts — could enable researchers to test a broad set of interventions and theories,” the researchers wrote in the paper. Simulations could also help pilot new public interventions, develop theories about causal and contextual interactions, and increase our understanding of how institutions and networks influence people, they added.
To create the simulation agents, the researchers conducted in-depth interviews covering participants’ life stories, values, and opinions on social issues. This, the researchers explained, allows the AI to pick up on nuances that typical surveys or demographic data might miss. Most importantly, the open structure of these interviews gave participants the freedom to emphasize what they personally felt was most important.
The scientists used these interviews to generate personalized AI models that could predict how individuals would respond to survey questions, social experiments, and behavioral games. These included the General Social Survey, a well-established tool for measuring social attitudes and behaviors; the Big Five personality inventory; and economic games such as the dictator game and the trust game.
The AI agents closely mirrored their human counterparts in many areas, but their accuracy varied by task. They performed particularly well at reproducing responses to personality surveys and gauging social attitudes, but were less accurate at predicting behavior in interactive games involving economic decision-making. The researchers noted that AI typically struggles with tasks involving social dynamics and contextual nuance.
The researchers also acknowledged that the technology could be misused. AI and “deepfake” technologies are already being used by malicious actors to deceive, impersonate, abuse, and manipulate people online, and simulation agents could be exploited in the same way.
However, they said, the technology also has the potential to let scientists study aspects of human behavior in ways that were previously impractical, by providing a highly controlled testing environment free of the ethical, logistical, and interpersonal challenges of working with human subjects.
In a statement to MIT Technology Review, Joon Sung Park, a doctoral student in computer science at Stanford University and lead author of the study, said: “If you can have a bunch of small ‘yous’ running around and actually making the decisions that you would have made, that, I think, is ultimately the future.”