Karachi Chronicle
AI

It takes just 2 hours for an AI agent to recreate your personality with 85% accuracy

By Adnan Mahar · January 4, 2025


Researchers have found that talking to an artificial intelligence (AI) model for just two hours is enough to create a close replica of a person’s personality.

In a new study published Nov. 15 on the preprint database arXiv, researchers from Google and Stanford University created “simulation agents” (essentially AI replicas) of 1,052 individuals based on two-hour interviews with each participant. These interviews were used to train a generative AI model designed to mimic human behavior.

To assess the accuracy of the AI replicas, each participant was asked to complete two rounds of personality tests, social surveys, and logic games, and to repeat the process two weeks later. When the AI replicas were put through the same tests, they matched their corresponding humans’ responses with 85% accuracy.
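The article does not detail how that 85% figure was scored, but a minimal sketch of one plausible approach is shown below: count the items on which the replica’s answers match the participant’s, and (as an assumption about why the tests were repeated two weeks later) normalize by how consistently the participant agrees with their own earlier answers.

```python
# Illustrative sketch only: one plausible way an agreement score between an AI
# replica and its human participant could be computed. Normalizing by the
# participant's own two-week retest consistency is an assumption made for
# illustration, not a procedure the article spells out.

def agreement(answers_a: list[str], answers_b: list[str]) -> float:
    """Fraction of items on which two answer sets match exactly."""
    assert len(answers_a) == len(answers_b)
    return sum(a == b for a, b in zip(answers_a, answers_b)) / len(answers_a)

# Hypothetical responses to the same multiple-choice survey items.
human_week_0 = ["agree", "disagree", "neutral", "agree", "agree"]
human_week_2 = ["agree", "disagree", "agree",   "agree", "agree"]
replica      = ["agree", "neutral",  "neutral", "agree", "agree"]

raw_accuracy = agreement(human_week_0, replica)        # replica vs. human
consistency  = agreement(human_week_0, human_week_2)   # human vs. themselves
normalized   = raw_accuracy / consistency              # accuracy relative to the
print(raw_accuracy, consistency, normalized)           # human's own stability
```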

In the paper, the researchers propose that AI models that emulate human behavior could be useful in a variety of research scenarios, such as evaluating the effectiveness of public health policies, understanding reactions to product launches, and modeling responses to major social events that would be too costly, difficult, or ethically complex to study with human participants.

Related article: AI voice generator will ‘reach human parity’ – but too dangerous to release, scientists say

“General-purpose simulations of human attitudes and behavior, in which each simulated person can engage across a range of social, political, and informational contexts, could enable researchers to test a wide range of interventions and theories,” the researchers wrote in the paper. They added that simulations could also help pilot new public interventions, develop theories about causal and contextual interactions, and improve our understanding of how institutions and networks influence people.

To create the simulation agents, the researchers conducted in-depth interviews covering participants’ life stories, values, and opinions on social issues. The researchers explained that this allows the AI to pick up on nuances that typical surveys or demographic data might miss. Most importantly, the structure of these interviews gave participants the freedom to emphasize what they personally felt was most important.
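As a rough illustration of what such an interview-conditioned agent might look like in code, the sketch below feeds a participant’s transcript to a language model before asking it a survey question. The `call_llm` helper and the file name are hypothetical stand-ins; the paper’s actual prompting and model setup may differ.

```python
# Minimal sketch of an interview-conditioned "simulation agent", assuming a
# generic text-generation backend. call_llm is a hypothetical placeholder,
# not the API the researchers actually used.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with a real language-model call")

def build_agent(interview_transcript: str):
    """Return a function that answers questions as the interviewed person."""
    def agent(question: str) -> str:
        prompt = (
            "Below is a two-hour interview with a study participant covering "
            "their life story, values, and opinions on social issues.\n\n"
            f"{interview_transcript}\n\n"
            "Answer the following question exactly as this person would:\n"
            f"{question}"
        )
        return call_llm(prompt)
    return agent

# Hypothetical usage:
# agent = build_agent(open("participant_0042_interview.txt").read())
# agent("Generally speaking, would you say most people can be trusted?")
```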


The scientists used these interviews to generate personalized AI models that could predict how individuals would respond to survey questions, social experiments, and behavioral games. These included the General Social Survey, a well-established tool for measuring social attitudes and behaviors; the Big Five personality inventory; and economic games such as the dictator game and the trust game.
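For the behavioral games, agreement is about matching decisions rather than multiple-choice answers. Below is a hedged sketch, with invented numbers, of comparing how much of a fixed endowment each participant and their replica give away in a dictator game; the study’s actual scoring may differ.

```python
# Illustrative comparison of dictator-game behavior: how much of a fixed
# endowment each player gives to an anonymous recipient. All figures are
# invented for illustration.

endowment = 10  # hypothetical units each player can split

human_gives   = {"p1": 3, "p2": 5, "p3": 0, "p4": 2}
replica_gives = {"p1": 4, "p2": 5, "p3": 2, "p4": 2}

# Mean absolute gap between each person's allocation and their replica's,
# expressed as a fraction of the endowment.
gaps = [abs(human_gives[p] - replica_gives[p]) / endowment for p in human_gives]
mean_gap = sum(gaps) / len(gaps)
print(f"average allocation gap: {mean_gap:.0%} of the endowment")
```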

The AI agents closely mirrored their human counterparts in many areas, but their accuracy varied by task. They performed particularly well at reproducing responses to personality surveys and judgments of social attitudes, but they were less accurate at predicting behavior in interactive games involving economic decision-making. The researchers explained that AI typically struggles with tasks that involve social dynamics and contextual nuance.

The researchers also acknowledged that the technology could be misused. AI and “deepfake” technologies are already being used by malicious actors to deceive, impersonate, exploit, and manipulate other people online, and the researchers noted that simulation agents could be exploited in the same way.

However, they said the technology could also let scientists study aspects of human behavior in ways that were previously impractical, by providing a highly controlled testing environment without the ethical, logistical, and interpersonal challenges of working with human participants.

In a statement to MIT Technology Review, Joon Sung Park, a doctoral student in computer science at Stanford University and lead author of the study, said: “If you can have a bunch of little ‘yous’ running around and actually making the decisions that you would have made, that, I think, is ultimately the future.”




Adnan Mahar

Adnan is a passionate doctor from Pakistan with a keen interest in exploring the world of politics, sports, and international affairs. As an avid reader and lifelong learner, he is deeply committed to sharing insights, perspectives, and thought-provoking ideas. His journey combines a love for knowledge with an analytical approach to current events, aiming to inspire meaningful conversations and broaden understanding across a wide range of topics.
