Hearing aids using split-processing technology make it easier for wearers to focus on what they really want to hear.
When you watch a movie, consider this: Cinematic audio has been specially mixed so you’re able to focus on what’s important. For example, when Hollywood sound engineers want to draw your attention to dialogue, they’ll add more contrast to the speech track so it stands out from background noise. Today, new hearing aid technology can achieve the same effect for people with hearing loss.
One of the biggest challenges facing many of the 466 million people worldwide living with hearing loss isn’t that they can’t hear much at all. It’s that they struggle to hear what they want to hear when they’re surrounded by ambient noise. For example, it’s not that they can’t hear in a restaurant—many can. It’s that they can’t make out what their companion across the table is saying over the restaurant’s din. Or they have trouble understanding the waiter as he approaches from behind and announces the evening’s specials. Missing out on conversations year after year can be detrimental to health.
To address this issue, many hearing aid manufacturers have incorporated bilateral beam-forming, full audio transfer between hearing aids, and multiple types of digital noise reduction – all in a single device. While these technologies can work well, they have their limitations.
When a typical hearing aid is used in noisy situations, all the sound is processed, amplified or attenuated the same way at the same time, and noise reduction is applied to the entire sound stream. However, advances in digital signal processing have enabled hearing aids to separate speech from background noise, much as the Hollywood sound engineer does at a mixing table, improving hearing and lives.
For years, audio engineers have used multitrack recording to split the various members of a musical group into their own separate tracks, then apply different amounts of gain to each track to achieve a good contrast or mix between the different instruments. Now, thanks to advances in chip miniaturization and digital signal processing, this type of split processing is available in hearing aids. Among the most important benefits is the ability to separate and enhance the speech that wearers want to hear while diminishing the background noise they don’t.
Here’s how it works: with split-processing technology, sound detected by the hearing aid is split into two distinct audio streams. A beam-forming directional microphone captures sounds coming primarily from in front of the wearer (focused sound, including speech), while another captures sounds coming primarily from behind the wearer (surrounding sound, or background noise).
With split processing, a hearing aid for the first time includes two different processors. Each stream—front and back—enters its own processor for analysis before being recombined into a single, augmented stream, allowing the wearer to hear better.
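The two-stream architecture described above can be sketched in a few lines. This is a toy illustration, not the hearing aid’s actual algorithm: the function names and the fixed gains are placeholders chosen only to show each stream passing through its own processing before the streams are summed back together.

```python
def process_front(stream):
    # Focused (front-facing) stream: modest boost.
    # The 1.5x gain is an illustrative placeholder.
    return [s * 1.5 for s in stream]

def process_back(stream):
    # Surrounding (rear-facing) stream: attenuated before recombination.
    # The 0.5x gain is likewise illustrative.
    return [s * 0.5 for s in stream]

def split_process(front_stream, back_stream):
    # Each stream runs through its own processing chain,
    # then the two are recombined into a single output stream.
    return [f + b for f, b in zip(process_front(front_stream),
                                  process_back(back_stream))]
```

With equal-level inputs, the recombined output favors the front stream three to one, the kind of speech-over-noise contrast the article describes.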
Along the way, each processor breaks incoming sound into 48 channels, enabling detailed analysis. Like all modern hearing aids, the processors examine an input signal’s amplitude modulation to gauge what it is and how to handle it. The most important thing to discern is speech (and specifically, the focused speech the wearer wants to hear) rather than noise. But the processors can also identify sudden sounds, like plates dropped in the restaurant or a nearby table breaking out in laughter. Either noise could hinder the hearing-aid wearer’s ability to focus on a conversation, so the processor attenuates the unwanted sound while turning up the gain on the desired speech.
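The amplitude-modulation cue mentioned above can be illustrated with a toy measure. Speech energy swings strongly at syllable rates, while steady noise stays relatively flat, so the depth of modulation in a channel’s frame-by-frame energy crudely separates the two. In a real device this analysis would run per channel across all 48 bands; the measure and threshold here are illustrative assumptions, not the actual classifier.

```python
def modulation_depth(frame_energies):
    # Depth = (max - min) / (max + min): near 1 for strongly
    # modulated signals such as speech, near 0 for steady noise.
    hi, lo = max(frame_energies), min(frame_energies)
    return (hi - lo) / (hi + lo) if (hi + lo) > 0 else 0.0

def looks_like_speech(frame_energies, threshold=0.5):
    # Hypothetical decision rule: deep modulation suggests speech.
    return modulation_depth(frame_energies) > threshold

speechy = [0.1, 0.9, 0.2, 0.8, 0.1]    # strong syllable-like swings
noisy   = [0.5, 0.55, 0.5, 0.52, 0.5]  # steady restaurant din
```

A channel flagged as speech would get more gain; a channel dominated by steady noise, or by a sudden burst like dropped plates, would be attenuated.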
More than noise reduction
Mind you, split processing isn’t just about noise reduction—a technology that has been around for years but does little for speech intelligibility. In split processing, the two streams are independently “shaped” for greater contrast between wanted and unwanted sound.
Think back to the audio engineer’s mixing console as he records a musical group: the speech stream in a split-processing hearing aid is made clearer, crisper and more detailed, much the way a sound engineer shapes a vocal track. The processors do this by identifying background noise and heightening its contrast with the speech sound. Less compression and noise reduction are applied to the speech stream, so speech sounds nearer to the wearer and is easier to understand.
The background stream is still processed for excellent sound quality, but it’s made less prominent: more compression and noise reduction are applied to incoming sounds in this stream, so certain sounds are turned down or only minimally amplified. The wearer can still, for example, hear other diners, but they sound more distant, making the focused speech more intelligible and improving communication in noisy settings.
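The different compression settings per stream can be sketched with a toy static compressor. Nothing here reflects the device’s real parameters: the threshold, the ratios and the envelope values are all illustrative, chosen only to show the speech stream keeping its dynamics while the background stream is squeezed down.

```python
def compress(envelope, ratio, threshold=0.1):
    # Toy static compressor: levels above `threshold` are reduced
    # by `ratio`; levels below it pass through unchanged.
    return [min(e, threshold) + max(e - threshold, 0.0) / ratio
            for e in envelope]

passage = [0.05, 0.4, 0.8]               # quiet, medium, loud samples
front = compress(passage, ratio=1.5)     # gentle: speech stays prominent
back  = compress(passage, ratio=4.0)     # strong: background recedes
```

Quiet sounds come through identically in both streams, but the louder the sound, the further the heavily compressed background stream falls behind the speech stream, producing the near-versus-distant contrast described above.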
Split processing benefits
There are many reasons those who could benefit from hearing aids don’t wear them, including cost and stigma. Another big reason is that hearing aids don’t always sound good. With two processors working in parallel, however, sound artifacts are minimized, and the hearing aids create a cleaner, more natural sound, whether it’s speech or background noise.
In comparison, other hearing aids process sound in a serial manner, meaning that compression, noise reduction, etc., are performed one after the other. Hence, features like compression and noise reduction can sometimes work against each other, ultimately degrading sound quality for the wearer.
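The serial-versus-split contrast can be made concrete with two toy pipelines. The stages and numbers are illustrative assumptions: each stage is reduced to a simple scaling so the structural difference stands out.

```python
def noise_reduce(x, amount=0.5):
    # Toy noise-reduction stage: attenuate the signal.
    return x * (1.0 - amount)

def amplify(x, gain=2.0):
    # Toy compression/gain stage: boost the signal.
    return x * gain

def serial_pipeline(x):
    # Serial processing: every sound, wanted or not, passes through
    # both stages in turn, so the stages partly cancel each other.
    return amplify(noise_reduce(x))

def split_pipeline(front, back):
    # Split processing: each stream gets only the stage suited to it,
    # and the results are mixed afterward.
    return amplify(front) + noise_reduce(back)
```

In the serial chain, speech and noise receive identical net treatment; in the split version, the front stream is boosted and the back stream reduced, preserving the contrast between them.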
Why is this important? Studies have shown that people with hearing loss often grow fatigued from over-concentration. They’re so focused on trying to hear what people are saying that they quickly tire. As a result, many withdraw from social situations, which in turn leads to isolation and cognitive decline.
Amplifying sounds in quiet places can help, but improved hearing and understanding in the presence of noise is the goal. Many hearing-aid technologies attempt to cut through the noise to create a better soundtrack for wearers. Split processing is the first to really deliver.
This article was originally published on EE Times.
Brian Taylor is an audiologist and senior director of audiology at Signia.