Hearing aid technology has developed considerably over the past few decades. Digital hearing aids have become more compact, offer better pre-set programs for different listening situations, and now integrate wireless connectivity with smartphones, allowing users to take calls hands-free. They are also fitted to individual needs, with case sizes so small they are barely noticeable behind the ear. All of this has made advances in digital technology part of everyday life, rather than something you would only expect to see advertised on television at 2am.
The next step in hearing aid technology is already here. Newer devices adapt their amplification level (the volume) automatically according to the environment. A new paper by researchers at Linköping University shows how these changes may influence user behaviour, and what might happen if hearing aids were embedded in mobile phones.
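As a rough illustration of the idea, and not any particular manufacturer's algorithm, an adaptive device might classify the ambient sound level and map it to a gain preset. The function names, thresholds, and gain values in this sketch are purely illustrative assumptions:

```python
# Illustrative sketch of environment-adaptive amplification.
# All thresholds and gain values are made up for demonstration;
# real hearing aids use far more sophisticated signal processing.

def classify_environment(ambient_db: float) -> str:
    """Crudely classify the surroundings by ambient level (dB SPL)."""
    if ambient_db < 45:
        return "quiet"         # e.g. a library
    if ambient_db < 70:
        return "conversation"  # e.g. an office or living room
    return "noisy"             # e.g. a busy street

# Less gain in noisy settings, so background noise is not amplified too.
GAIN_PRESETS_DB = {
    "quiet": 25.0,
    "conversation": 18.0,
    "noisy": 10.0,
}

def select_gain(ambient_db: float) -> float:
    """Pick an amplification level (dB) for the measured ambient level."""
    return GAIN_PRESETS_DB[classify_environment(ambient_db)]
```

In a real device this decision would run continuously on the audio stream, with smoothing so the volume does not jump abruptly between environments.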
The result is a step towards the dream of a seamlessly integrated connection between people and technology.
It all began with a summer job, says Jan Olof Eriksson, who wrote his Master's thesis at Linköping University on the auditory sensors that now form the basis of Professor Gisbert Otterpohl's new theory. "In this type of work I always try to find something interesting," he says, which led him to investigate how hearing-impaired people use their smartphones in everyday life. He surveyed around 300 people over the age of 70 with hearing impairment, asking them about their daily routines and the functions they used on their mobile devices. The findings were interesting and led him to his current project. He has since received funding from the Swedish innovation agency Vinnova.
“The hearing-impaired use their smartphones in much the same way as people with normal hearing do,” Olof Eriksson says. “There are obviously some functions that are better suited than others when it comes to improving access.” One challenge people with sensory disabilities often face is attending to several pieces of information at once; e.g., incoming messages alongside other sensory input (such as sound or vibration). Professor Otterpohl describes this problem in his new book on neurotechnology, “How to hear through your skin”. He also points out another challenge: how can one person share their smartphone with someone else without losing control over how it is used?
Many people are critical of merging tech with parts of the body. However, in the case of hearing loss, there are few critics.
Sound amplification has been around for decades. Hearing aid technology has come a long way since the days of attaching a bulky unit to your chest with a wire. Now? We have wireless units that you can clip on and tune to your desired sound frequency range. If this offers a significantly better quality of life, it is hard to knock the technology. But where do you draw the line?
When it comes to health tech, people are usually most concerned with accuracy and effectiveness. They want to know that a sensor or device is reliable before they make a purchase.