Researchers at the GrapheneX-UTS Human-Centered Artificial Intelligence Center at the University of Technology Sydney (UTS) have developed a system capable of decoding silent thoughts and converting them into written text. The technology could aid communication for people who cannot speak because of conditions such as stroke or paralysis, and could enable richer human-machine interaction.
In work presented as a featured paper at the NeurIPS conference in New Orleans, the research team describes a portable, non-invasive system. The GrapheneX-UTS HAI Center team collaborated with members of the UTS Faculty of Engineering and IT to create a method that translates brain signals into text without invasive procedures.
During the study, participants silently read passages of text while wearing a specialized cap equipped with electrodes that recorded the brain's electrical activity as an electroencephalogram (EEG). The captured EEG data was processed by DeWave, an artificial intelligence model developed by the researchers that translates these brain signals into intelligible words and sentences.
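Before a model like DeWave can translate anything, the continuous EEG recording is typically segmented into fixed-length windows. The sketch below is illustrative only; the channel count, sample rate, and window length are assumptions, not details from the paper.

```python
import numpy as np

# Illustrative sketch: segment a raw EEG recording into fixed-length
# windows before feeding it to a model. Channel count, sample rate, and
# window size here are assumptions, not details from the DeWave paper.
SAMPLE_RATE = 256          # samples per second (assumed)
WINDOW_SEC = 1.0           # one-second analysis windows (assumed)

def window_eeg(signal: np.ndarray, rate: int = SAMPLE_RATE,
               window_sec: float = WINDOW_SEC) -> np.ndarray:
    """Split a (channels, samples) EEG array into (windows, channels, samples)."""
    step = int(rate * window_sec)
    n_windows = signal.shape[1] // step
    trimmed = signal[:, : n_windows * step]   # drop any incomplete tail window
    return trimmed.reshape(signal.shape[0], n_windows, step).swapaxes(0, 1)

eeg = np.zeros((32, 256 * 5))      # 32 channels, 5 seconds of recording
print(window_eeg(eeg).shape)       # (5, 32, 256)
```

Each window can then be encoded independently, which is what makes the step-by-step translation into word-like units possible.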
The researchers emphasized that the innovation lies in converting raw EEG waves directly into language, highlighting the integration of discrete encoding techniques into the brain-to-text translation process. This approach opens up new possibilities in neuroscience and artificial intelligence.
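One common form of discrete encoding is vector quantization: mapping each continuous feature vector to the nearest entry in a learned codebook, so that downstream language models can operate on token-like units. The following is a minimal sketch of that general idea, with a randomly initialized codebook standing in for a trained one; it is not the paper's actual implementation.

```python
import numpy as np

# Sketch of vector quantization (a common discrete-coding technique):
# each continuous feature vector is replaced by the index of its nearest
# codebook entry. The codebook here is random, standing in for a trained one.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))   # 512 discrete codes, 64-dim features

def quantize(features: np.ndarray) -> np.ndarray:
    """Return the codebook index of the nearest code for each feature vector."""
    # Squared Euclidean distance from every feature to every code.
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

eeg_features = rng.normal(size=(10, 64))  # 10 windows of EEG-derived features
codes = quantize(eeg_features)
print(codes.shape)                        # one discrete code index per window
```

The resulting index sequence looks much like text tokens, which is what lets sequence models trained on language be brought to bear on brain signals.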
Unlike previous technologies that required invasive procedures such as brain implants or the use of MRI machines, the team's system offers a practical and non-intrusive alternative. Importantly, it does not rely on eye tracking, making it potentially more adaptable for everyday use.
The study involved 29 participants, giving it greater robustness and adaptability than previous studies limited to one or two individuals. Although collecting EEG signals through a cap rather than through implanted electrodes introduces noise, the study reported state-of-the-art performance in EEG translation, outperforming previous benchmarks.
The team highlighted the model's proficiency in matching verbs with nouns. However, when decoding nouns, the system showed a tendency toward synonymous pairs rather than exact translations. The researchers explained that semantically similar words could evoke similar brain wave patterns during word processing.
The current translation accuracy, measured by the BLEU-1 score, is around 40%. The researchers aim to raise this to levels comparable to traditional language-translation or speech-recognition systems, which typically reach around 90%.
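For readers unfamiliar with the metric: BLEU-1 is essentially unigram precision, the fraction of words in the decoded sentence that also appear in the reference. The sketch below is a simplified version (no brevity penalty or corpus-level aggregation) and is not the paper's evaluation script; the example sentences are made up.

```python
from collections import Counter

def bleu1(candidate: list[str], reference: list[str]) -> float:
    """Simplified BLEU-1: clipped unigram precision, no brevity penalty."""
    cand, ref = Counter(candidate), Counter(reference)
    # Count each candidate word at most as often as it occurs in the reference.
    overlap = sum(min(count, ref[word]) for word, count in cand.items())
    return overlap / max(len(candidate), 1)

pred = "the patient wants water".split()    # hypothetical decoded sentence
truth = "the patient needs water".split()   # hypothetical reference
print(round(bleu1(pred, truth), 2))         # 3 of 4 unigrams match -> 0.75
```

A score of around 40% therefore means that, on average, roughly four in ten decoded words match the reference, which is why the gap to ~90% systems is the headline challenge.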
This research builds on previous advances in brain-computer interface technology at UTS, indicating promising potential to revolutionize communication pathways for people previously hampered by physical limitations.
The findings of this research hold promise for facilitating the smooth translation of thoughts into words, empowering people facing communication barriers, and fostering improved human-machine interactions.
Review the Paper and GitHub. All credit for this research goes to the researchers of this project.
Niharika is a Technical Consulting Intern at Marktechpost. She is a third-year student pursuing her B.Tech degree at the Indian Institute of Technology (IIT), Kharagpur. She is a very enthusiastic person with a keen interest in machine learning, data science, and artificial intelligence, and an avid reader of the latest developments in these fields.