
Machine learning challenges to revolutionise hearing device processing

Graetzer, SN; Cox, TJ; Barker, J; Akeroyd, MA; Culling, J; Naylor, G




Abstract

In this project, we will run a series of machine learning challenges to revolutionise speech processing for hearing devices. Over five years, there will be three paired challenges. Each pair will consist of one challenge focussed on hearing-device processing and another focussed on speech perception modelling. The series of processing challenges will help to develop new and improved approaches to hearing-device signal processing for speech. The parallel series of perception challenges will develop and improve methods for predicting speech intelligibility and quality for hearing-impaired listeners.
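As a rough illustration of what a perception-challenge entry does, the sketch below maps a simple objective measure (segmental SNR) through a logistic psychometric function to a predicted proportion of words recognised. This is a minimal Python sketch under our own assumptions: the function names, the choice of metric and the psychometric parameters are illustrative and are not the project's baseline perception model.

import numpy as np

def segmental_snr(clean, processed, frame_len=512):
    # Mean per-frame SNR (dB) between the clean reference and the processed signal.
    snrs = []
    for start in range(0, len(clean) - frame_len + 1, frame_len):
        c = clean[start:start + frame_len]
        e = processed[start:start + frame_len] - c
        snrs.append(10 * np.log10((np.sum(c ** 2) + 1e-10) / (np.sum(e ** 2) + 1e-10)))
    return float(np.mean(snrs))

def predicted_intelligibility(snr_db, midpoint=-3.0, slope=0.4):
    # Logistic psychometric function: predicted proportion of words correct.
    # The midpoint and slope are placeholder values; in practice they would be
    # fitted to listening-test data.
    return 1.0 / (1.0 + np.exp(-slope * (snr_db - midpoint)))

In a real perception-challenge entry, such a predictor would also account for the individual listener's hearing ability rather than depending on the signals alone.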
To facilitate the challenges, we will generate open-access datasets, models and infrastructure. These will include: (1) open-source tools for generating realistic test/training materials for different listening scenarios; (2) baseline models of hearing impairment; (3) baseline models of hearing-device speech processing; (4) baseline models of speech perception; and (5) databases of speech perception in noise. The databases will include the results of listening tests that characterise how real people, including those who are hearing impaired, perceive speech in noise, along with a comprehensive characterisation of each test subject's hearing ability. This will allow us to improve on existing knowledge about how best to characterise listeners individually for the purpose of predicting their speech perception in noise.
The data, models and tools we generate will form a test-bed to allow other researchers to develop their own algorithms for speech and hearing aid processing in different listening scenarios. Providing open access to these resources will lower barriers that prevent researchers from considering hearing impairment. Through this, we aim to increase the number of researchers including hearing impairment in their work.
In round one, speech will occur in the context of a ‘living room’, i.e., a person speaking in a moderately reverberant room with minimal background noise. Entries can be submitted to either the processing challenge or the perception challenge, or to both. We expect round one to open in October 2020, with a closing date in June 2021 and results announced in October 2021.
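For concreteness, a ‘living room’ style test signal of the kind described above can be approximated by convolving anechoic speech with a room impulse response and adding background noise at a chosen signal-to-noise ratio. The Python sketch below assumes the three signals share a sample rate and that the noise recording is at least as long as the speech; it is illustrative only and is not the project's scene-generation tool.

import numpy as np
from scipy.signal import fftconvolve

def mix_living_room_scene(speech, rir, noise, snr_db=20.0):
    # Reverberate the anechoic speech with a room impulse response (rir),
    # then add background noise scaled to the requested SNR in dB.
    reverberant = fftconvolve(speech, rir)[:len(speech)]
    noise = noise[:len(reverberant)]
    speech_power = np.mean(reverberant ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    gain = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10.0)))
    return reverberant + gain * noise

A high default SNR (here 20 dB) reflects the round-one scenario of minimal background noise; later rounds with more adverse scenes would use lower values.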
This project involves researchers from the Universities of Sheffield, Salford, Nottingham and Cardiff in conjunction with the Hearing Industry Research Consortium, Action on Hearing Loss, Amazon, and Honda. It is funded by EPSRC. For more information, go to www.claritychallenge.org.

Citation

Graetzer, S., Cox, T., Barker, J., Akeroyd, M., Culling, J., & Naylor, G. (2020). Machine learning challenges to revolutionise hearing device processing. Poster presented at Speech in Noise (SPiN) 2020, Toulouse, France.

Presentation Conference Type Poster
Conference Name Speech in Noise (SPiN) 2020
Conference Location Toulouse, France
End Date Jan 10, 2020
Publication Date Jan 9, 2020
Deposit Date Mar 5, 2021
Publisher URL https://2020.speech-in-noise.eu/?p=program&id=85
Related Public URLs https://2020.speech-in-noise.eu/
Additional Information Event Type: Conference