Karachi Chronicle
AI

Meta announces AI models that convert brain activity into text with unparalleled accuracy

By Adnan Mahar · February 11, 2025 · 3 Mins Read


What happened? In collaboration with international researchers, Meta has presented major milestones in understanding human intelligence through two groundbreaking studies. The company built AI models that can decode and reconstruct typed sentences from brain activity, and mapped the exact neural processes that translate thoughts into spoken or written words.

The first study, conducted by Meta's Fundamental AI Research (FAIR) lab in Paris in collaboration with the Basque Center on Cognition, Brain and Language in San Sebastián, Spain, demonstrates the ability to decode the production of sentences from non-invasive brain recordings. Using magnetoencephalography (MEG) and electroencephalography (EEG), the researchers recorded brain activity from 35 healthy volunteers while they typed sentences.
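
To give a concrete sense of what working with such recordings involves, here is a minimal sketch of segmenting an MEG recording into windows around each keystroke using the open-source MNE-Python toolkit. This is illustrative only, not the study's actual pipeline; the file name, stimulus channel, filter band, and window lengths are assumptions.

```python
# Hypothetical sketch: epoch a non-invasive MEG recording around keystroke events.
# File path, stim channel, filter band, and window lengths are assumptions.
import mne

raw = mne.io.read_raw_fif("sub-01_typing_meg.fif", preload=True)  # one volunteer's recording
raw.filter(l_freq=0.5, h_freq=40.0)  # band-pass filter to a range commonly used for decoding

events = mne.find_events(raw, stim_channel="STI 014")  # one event per keypress
epochs = mne.Epochs(
    raw,
    events,
    tmin=-0.2,           # 200 ms before each keypress
    tmax=0.8,            # 800 ms after it
    baseline=(None, 0),
    preload=True,
)

X = epochs.get_data()    # shape: (n_keystrokes, n_sensors, n_times)
print(X.shape)
```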

The system employs a three-part architecture consisting of an image encoder, a brain encoder, and an image decoder. The image encoder builds a rich set of image representations independently of the brain data. The brain encoder then learns to align MEG signals with these image embeddings. Finally, the image decoder generates plausible images from the brain-derived representations.
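
As a rough illustration of the middle step, here is a minimal sketch of how a brain encoder could be trained to align MEG windows with precomputed image embeddings using a CLIP-style contrastive loss. This is not Meta's published model; the network, layer sizes, and input shapes are assumptions chosen for the example.

```python
# Illustrative sketch: align a brain encoder's output with fixed image embeddings
# via a contrastive loss. All sizes and shapes below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BrainEncoder(nn.Module):
    """Maps an MEG window (sensors x time points) into the image-embedding space."""
    def __init__(self, n_sensors=270, n_times=181, embed_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_sensors * n_times, 1024),
            nn.GELU(),
            nn.Linear(1024, embed_dim),
        )

    def forward(self, meg):
        return F.normalize(self.net(meg), dim=-1)

def contrastive_loss(brain_emb, image_emb, temperature=0.07):
    """Pull each brain embedding toward the embedding of the image the subject saw."""
    logits = brain_emb @ image_emb.t() / temperature
    targets = torch.arange(len(brain_emb))
    return F.cross_entropy(logits, targets)

# Toy batch: 8 MEG windows paired with 8 precomputed (frozen) image embeddings.
meg_batch = torch.randn(8, 270, 181)
image_emb = F.normalize(torch.randn(8, 512), dim=-1)

encoder = BrainEncoder()
loss = contrastive_loss(encoder(meg_batch), image_emb)
loss.backward()
```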

The results are impressive: the AI models decode up to 80% of the characters typed by participants whose brain activity was recorded with MEG, at least twice as accurate as decoding from traditional EEG. The study opens up new possibilities for non-invasive brain-computer interfaces that could help restore communication for people who have lost the ability to speak.
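
For a sense of how a figure like "80% of characters" can be checked, here is a trivial character-level accuracy function comparing a decoded sentence with the one actually typed. The study's own evaluation metric (such as character error rate) may be computed differently.

```python
# Illustrative character-level accuracy between a decoded sentence and what was typed.
# The study's actual metric may be defined differently (e.g. via edit distance).
def char_accuracy(decoded: str, typed: str) -> float:
    """Fraction of positions where the decoded character matches the typed one."""
    if not typed:
        return 0.0
    matches = sum(d == t for d, t in zip(decoded, typed))
    return matches / len(typed)

print(char_accuracy("the quick brown fax", "the quick brown fox"))  # ~0.95
```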

The second study focuses on how the brain transforms thoughts into language. By interpreting MEG signals with AI, the researchers could pinpoint the precise moments at which thoughts are turned into words, syllables, and individual letters as participants typed sentences.

The study reveals that the brain starts at the most abstract level, the meaning of a sentence, and progressively generates a series of representations that transform into concrete actions, such as finger movements on the keyboard. It also shows that the brain uses a "dynamic neural code" to chain successive representations together while keeping each one available over an extended period.
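
Findings of this kind are typically probed with a temporal-generalization analysis: a classifier trained at one time point is tested at every other time point, so representations that persist show up as scores that generalize across time. The sketch below runs that analysis on purely synthetic data and is illustrative only; it is not the study's pipeline.

```python
# Illustrative temporal-generalization analysis on synthetic data: train a classifier
# at each time point, test it at every other time point. Scores that stay high away
# from the diagonal indicate a representation that is maintained over time.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 32, 50
X = rng.normal(size=(n_trials, n_sensors, n_times))   # stand-in for MEG epochs
y = rng.integers(0, 2, size=n_trials)                  # stand-in for a linguistic label
X[y == 1, 0, 20:35] += 1.0                             # inject a sustained signal (t=20..35)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

scores = np.zeros((n_times, n_times))
for t_train in range(n_times):
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:, :, t_train], y_tr)
    for t_test in range(n_times):
        scores[t_train, t_test] = clf.score(X_te[:, :, t_test], y_te)

print("same-time score  (train t=25, test t=25):", scores[25, 25].round(2))
print("cross-time score (train t=25, test t=32):", scores[25, 32].round(2))
```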

While the technology is promising, several challenges remain before it can be used in a clinical setting. Decoding performance is still imperfect, and MEG requires subjects to sit still inside a magnetically shielded room: the scanner is large and expensive, and because the Earth's magnetic field is roughly a trillion times stronger than the brain's, recordings cannot be made outside such a room.

Meta intends to address these limitations in future research by improving the accuracy and reliability of the decoding process, exploring alternative non-invasive brain imaging techniques that are more practical for everyday use, and developing more sophisticated AI models that can better interpret complex brain signals. The company also aims to broaden its research to a wider range of cognitive processes and to explore potential applications in areas such as healthcare, education, and human-computer interaction.

Further research is needed before these developments can help people with brain damage, but they bring us closer to building AI systems that can learn and reason like humans.




Adnan Mahar

Adnan is a passionate doctor from Pakistan with a keen interest in exploring the world of politics, sports, and international affairs. As an avid reader and lifelong learner, he is deeply committed to sharing insights, perspectives, and thought-provoking ideas. His journey combines a love for knowledge with an analytical approach to current events, aiming to inspire meaningful conversations and broaden understanding across a wide range of topics.
