Computer Models of Neuronal Sound Processing in the Brain Lead to Cochlear Implant Improvements

Children learning to speak depend on functional hearing. So-called cochlear implants allow deaf people to hear again by stimulating the auditory nerve directly. Researchers at the Technische Universitaet Muenchen (TUM) are working to overcome the current limits of the technology. They are investigating how sound information is coded in the auditory nerve and subsequently processed by neurons in the brain. Using computer models developed at TUM, manufacturers of cochlear implants can improve their devices.

Intact hearing is a prerequisite for learning to speak. This is why children who are born deaf are fitted with so-called cochlear implants as early as possible. Cochlear implants consist of a speech processor and a transmitter coil worn behind the ear, together with the actual implant, an encapsulated microprocessor placed under the skin to directly stimulate the auditory nerve via an electrode with up to 22 contacts.

Adults who have lost their hearing can also benefit from cochlear implants. The devices have become the most successful neuroprostheses to date. They allow patients to understand speech quite well again. But the technology reaches its limits when listening to music, for example, or when many people speak at once. Initial improvements have been achieved by fitting cochlear implants in both ears.

A further major leap forward would ensue if spatial hearing could be restored. Since our ears are located a few centimeters apart, sound waves from a given source generally reach one ear before the other. The difference is only a few millionths of a second, but that is enough for the brain to localize the sound source. Modern microprocessors can react sufficiently fast, but a nerve impulse takes around one hundred times longer. Achieving a perfect interplay will therefore require new coding strategies.
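The interaural time difference exploited by the brain can be estimated with a simple geometric model. Below is a minimal sketch, assuming a typical ear separation of about 21 cm and the speed of sound in air; the function name and constants are illustrative assumptions, not taken from the researchers' work.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C
EAR_DISTANCE = 0.21     # m, assumed typical distance between human ears

def interaural_time_difference(azimuth_deg: float) -> float:
    """Approximate interaural time difference (seconds) for a far-field
    source at the given azimuth, using the simple path-length model
    ITD = d * sin(theta) / c."""
    theta = math.radians(azimuth_deg)
    return EAR_DISTANCE * math.sin(theta) / SPEED_OF_SOUND

# A source directly to one side (90 degrees) gives the maximum delay,
# on the order of a few hundred microseconds:
max_itd = interaural_time_difference(90.0)
```

Even this maximum delay is far shorter than the duration of a single nerve impulse, which illustrates why restoring spatial hearing with implants is so demanding.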

Modeling the auditory system
The perception of sound information begins in the inner ear. There, hair cells translate the mechanical vibrations into so-called action potentials, the language of nerve cells. Neural circuitry in the brain stem, mesencephalon and diencephalon transmits the signals to the auditory cortex, where around 100 million nerve cells are responsible for creating our perception of sound. Unfortunately, this "coding" is still poorly understood by science.

"Getting implants to operate more precisely will require strategies that are better geared to the information processing of the neuronal circuits in the brain. The prerequisite for this is a better understanding of the auditory system," explains Professor Werner Hemmert, director of the Department for Bio-Inspired Information Processing at the TUM Institute of Medical Engineering (IMETUM).

Based on physiological measurements of neurons, his working group successfully built a computer model of acoustic coding in the inner ear and the neuronal information processing by the brain stem. This model will allow the researchers to further develop coding strategies and test them in experiments on people with normal hearing, as well as people carrying implants.
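To give a rough flavor of the kind of transformation such a model must capture, the toy sketch below is not the TUM group's model: it half-wave rectifies and smooths a pure tone as a crude stand-in for hair-cell transduction, then drives a leaky integrate-and-fire neuron that emits action potentials. All parameter values are invented for illustration.

```python
import numpy as np

FS = 44_100                          # sample rate, Hz
t = np.arange(0, 0.05, 1 / FS)       # 50 ms of signal
sound = np.sin(2 * np.pi * 500 * t)  # 500 Hz pure tone

# Crude "hair cell": half-wave rectify, then first-order low-pass
# (exponential smoothing) to mimic the receptor potential.
rectified = np.maximum(sound, 0.0)
alpha = 0.05
receptor = np.zeros_like(rectified)
for i in range(1, len(rectified)):
    receptor[i] = receptor[i - 1] + alpha * (rectified[i] - receptor[i - 1])

# Crude "auditory nerve fibre": leaky integrate-and-fire neuron.
v = 0.0            # membrane state (arbitrary units)
tau = 0.002        # membrane time constant, s
threshold = 0.5    # spike threshold
gain = 1500.0      # input scaling (invented)
spike_times = []
for i in range(len(receptor)):
    v += (1 / FS) * (-v / tau + gain * receptor[i])
    if v >= threshold:
        spike_times.append(t[i])  # an action potential is fired
        v = 0.0                   # reset the membrane
```

A realistic model of the inner ear and brain stem is vastly more detailed, but the same principle applies: continuous sound pressure is converted into discrete spike trains, and it is these spike trains that an implant's coding strategy must reproduce.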

The fast track to better hearing aids
For manufacturers of cochlear implants collaborating with the TUM researchers, these models are valuable evaluation tools. Preliminary testing on the computer translates into enormous time and cost savings. "Many ideas can now be tested significantly faster. Then only the most promising processes need to be evaluated in cumbersome patient trials," says Werner Hemmert. The new models thus have the potential to significantly shorten development cycles. "In this way, patients will benefit from better devices sooner."

The working group reports on its work in the newly published book, "The Technology of Binaural Listening," which will be presented at the 166th conference of the Acoustical Society of America in San Francisco (2nd - 6th December 2013).

M. Nicoletti, C. Wirtz, W. Hemmert: Modeling Sound Localization with Cochlear Implants, The Technology of Binaural Listening, Springer-Verlag Berlin Heidelberg, 2013.
