What happened? In collaboration with international researchers, Meta has announced major milestones in understanding human intelligence through two groundbreaking studies. The researchers built AI models that can read brain activity to reconstruct typed sentences, and mapped the neural processes that translate thoughts into spoken or written words.
The first study, conducted by Meta's Fundamental AI Research (FAIR) lab in Paris in collaboration with the Basque Center on Cognition, Brain and Language in San Sebastián, Spain, demonstrates that sentence production can be decoded from non-invasive brain recordings. Using magnetoencephalography (MEG) and electroencephalography (EEG), the researchers recorded brain activity while 35 healthy volunteers typed sentences.
The system employs a three-part architecture consisting of an image encoder, a brain encoder, and an image decoder. The image encoder builds a rich set of representations of an image independently of any brain data. The brain encoder then learns to align MEG signals with these image embeddings. Finally, the image decoder generates plausible images from these brain representations.
The results are impressive: the AI model decoded up to 80% of the characters typed by participants whose brain activity was recorded with MEG, at least twice the accuracy of traditional EEG-based systems. This study opens up new possibilities for non-invasive brain-computer interfaces that could help restore communication for individuals who have lost the ability to speak.
The second study focuses on how the brain transforms thoughts into language. By interpreting MEG signals with AI, the researchers identified the precise moments at which thoughts are converted into words, syllables, and individual letters while participants typed sentences.
The study reveals that the brain starts at the most abstract level of representation (the meaning of a sentence) and progressively generates a series of representations that culminate in specific actions, such as finger movements on a keyboard. It also shows that the brain uses a “dynamic neural code” to chain successive representations together while maintaining each one over an extended period.
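The hierarchy described above can be caricatured in a few lines of Python. This is purely illustrative and not the study's model: it just expands an abstract sentence plan into words, then letters, then keystroke actions, tagging each action with the higher-level units that remain "active" alongside it.

```python
def plan_keystrokes(sentence):
    """Expand an abstract sentence plan into ordered keystroke actions.

    Each keystroke is tagged with its word and sentence context, loosely
    mirroring a "dynamic neural code" in which successive low-level
    representations are chained while higher-level ones are maintained.
    """
    actions = []
    for word in sentence.split():
        for letter in word:
            actions.append({"sentence": sentence, "word": word, "key": letter})
        # A word boundary still carries the sentence- and word-level context.
        actions.append({"sentence": sentence, "word": word, "key": "SPACE"})
    return actions

plan = plan_keystrokes("hello world")
# Every concrete action co-occurs with the abstract plan that produced it.
```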
While this technology is promising, several challenges remain before it can be applied in a clinical setting. Decoding performance is still incomplete, and MEG requires subjects to remain stationary inside a magnetically shielded room: the scanner itself is large and expensive, and because the Earth's magnetic field is roughly a trillion times stronger than the brain's, recordings must be shielded from it.
To address these limitations, future research at Meta will focus on improving the accuracy and reliability of the decoding process, exploring alternative non-invasive brain-imaging techniques that are more practical for everyday use, and developing more sophisticated AI models that can better interpret complex brain signals. The company also aims to expand its research to a wider range of cognitive processes and to explore potential applications in areas such as healthcare, education, and human-computer interaction.
Further research is needed before these developments can help people with brain damage, but they bring us closer to building AI systems that can learn and reason like humans.