Can a Brain-Computer Interface Convert your Thoughts to Text?

Ever wonder what it would be like if a device could decode your thoughts into actual speech or written words? Beyond enhancing existing voice interfaces, this could be a game-changer for people with speech pathologies, and even more so for "locked-in" patients who lack any speech or motor function.

"So instead of saying 'Siri, what is the weather like today?' or 'Ok Google, where can I go for lunch?' I just imagine saying these things," explains Christian Herff, author of a review recently published in the journal Frontiers in Neuroscience.

While reading one's thoughts might still belong to the realms of science fiction, scientists are already decoding speech from signals generated in our brains when we speak or listen to speech.

In their review, Herff and co-author Dr. Tanja Schultz compare the pros and cons of various brain imaging techniques for capturing neural signals and decoding them into text.

The technologies range from functional MRI and near-infrared imaging, which detect neural signals based on the metabolic activity of neurons, to methods such as EEG and magnetoencephalography (MEG), which detect the electromagnetic activity of neurons responding to speech. One method in particular, electrocorticography (ECoG), showed promise in Herff's own study.

That study presented the Brain-to-text system, in which epilepsy patients who already had electrode grids implanted to treat their condition read aloud texts presented on a screen while their brain activity was recorded. These recordings formed a database of neural signal patterns that could then be matched to speech elements, or "phones".

When the researchers also included language and dictionary models in their algorithms, they were able to decode neural signals to text with a high degree of accuracy. "For the first time, we could show that brain activity can be decoded specifically enough to use ASR technology on brain signals," says Herff. "However, the current need for implanted electrodes renders it far from usable in day-to-day life."
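The combination described above, matching neural evidence for phones against a dictionary and a language model, is the core of ASR-style decoding. The toy sketch below illustrates the idea only; the phone scores, two-word lexicon, and language-model priors are invented numbers, not the authors' data or code.

```python
import math

# Hypothetical per-frame phone likelihoods standing in for ECoG-derived
# scores: each row is one time frame, scored against four phones.
frame_scores = [
    {"k": 0.7, "ae": 0.1, "t": 0.1, "d": 0.1},
    {"k": 0.1, "ae": 0.8, "t": 0.05, "d": 0.05},
    {"k": 0.05, "ae": 0.1, "t": 0.6, "d": 0.25},
]

# Tiny illustrative dictionary mapping words to phone sequences,
# plus made-up unigram language-model priors.
lexicon = {"cat": ["k", "ae", "t"], "cad": ["k", "ae", "d"]}
lm_prior = {"cat": 0.8, "cad": 0.2}

def score(word):
    """Log-score of a word: neural (acoustic-style) evidence + language model."""
    phones = lexicon[word]
    neural = sum(math.log(frame_scores[i][p]) for i, p in enumerate(phones))
    return neural + math.log(lm_prior[word])

best = max(lexicon, key=score)
print(best)  # -> cat
```

Here the language model reinforces the neural evidence in favour of "cat"; in a real system it also breaks ties when the neural signal alone is ambiguous between similar-sounding words.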

So, where does the field go from here toward a functioning thought-detection device? "A first milestone would be to actually decode imagined phrases from brain activity, but a lot of technical issues need to be solved for that," concedes Herff.

Their study results, while exciting, are still only a preliminary step towards this type of brain-computer interface.

Herff C, Schultz T.
Automatic Speech Recognition from Neural Signals: A Focused Review.
Front Neurosci. 2016 Sep 27;10:429. doi: 10.3389/fnins.2016.00429
