ELON MUSK has responded to research showing that the popular AI model ChatGPT is not politically neutral and often suppresses conservative perspectives. The study highlights a systematic tendency of AI chatbots to generate content from a left-wing perspective while frequently refusing right-leaning prompts.
According to SciTechDaily, the findings "raise concerns about societal impacts" and underscore the urgent need for research to ensure AI tools remain fair, balanced, and aligned with democratic values.
The results of the study are published in the Journal of Economic Behavior & Organization.
An X user posted the report on X, and Musk replied in two words.
The post read: "Study: ChatGPT bias is real. And it leans left."
The new study confirms what many suspected: ChatGPT favors left-leaning views and often avoids or restricts conservative perspectives.
Researchers found that AI-generated text and images systematically tilt to the left, and that right-leaning prompts are blocked more often, frequently flagged as so-called "misinformation."
This raises major questions about AI's impact on fairness, freedom of speech, and public debate.
Experts warn that AI is already shaping news, policy, and education. Without transparency and safeguards, biased algorithms could reshape democracy itself.
"Far left," Musk replied.
What the study says
Researchers found that ChatGPT readily generates left-leaning content while often avoiding engagement with mainstream conservative viewpoints.
"Our findings suggest that generative AI tools are far from neutral," said Dr. Fabio Motoki, a lecturer at the University of East Anglia's Norwich Business School and lead researcher of the study "Assessing Political Bias and Value Misalignment in Generative Artificial Intelligence."
The study calls for transparency and regulatory safeguards to ensure that AI remains aligned with societal values and democratic principles.