Cognitive Control of a Hearing Aid: lectures at DTU


Date/time
Date(s) - 03/10/2019
15:00 - 18:00

Venue
DTU Building 341


The COCOHA project was an EU-funded consortium exploring the possibilities of building a hearing aid that can be controlled by the brain ('cognitive control'). At this event, a number of researchers will present various results from the project. Venue: DTU, Building 341, Auditorium 22, 2800 Lyngby. A detailed programme will follow later. Registration will take place here on this page (you need to have or create a login), which is a new feature we are trying out.

The COCOHA project revolves around a need, an opportunity, and a challenge. Millions of people struggle to communicate in noisy environments, particularly the elderly: 7% of the European population are classified as hearing impaired. Hearing aids can effectively deal with a simple loss in sensitivity, but they do not restore the ability of a healthy pair of young ears to pick out a weak voice among many, which is needed for effective social communication. That is the need.

The opportunity is that decisive technological progress has been made in acoustic scene analysis: arrays of microphones with beamforming algorithms, or distributed networks of handheld devices such as smartphones, can be recruited to vastly improve the signal-to-noise ratio of weak sound sources. Some of these techniques have been around for a while and are even integrated into commercially available hearing aids. However, their uptake is limited for one very simple reason: there is no easy way to steer the device, no way to tell it to direct the processing to the one source among many that the user wishes to attend to.

The COCOHA project proposes to use brain signals (EEG) to help steer the acoustic scene analysis hardware, in effect extending the efferent neural pathways that control all stages of processing from the cortex down to the cochlea to govern the external device as well. To succeed, we must overcome major technical hurdles, drawing on methods from acoustic signal processing and machine learning borrowed from the field of brain-computer interfaces. Along the way we will probe interesting scientific problems related to attention, electrophysiological correlates of sensory input and brain state, and the structure of sound and brain signals. This is the challenge.
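As a rough illustration of the attention-decoding idea, the literature commonly uses a "backward" (stimulus-reconstruction) model: a linear decoder is trained to reconstruct a speech envelope from multichannel EEG, and the attended speaker is taken to be the one whose envelope correlates best with the reconstruction. The sketch below assumes this standard approach; it is not the project's actual pipeline, and the function names and the ridge parameter are illustrative.

```python
import numpy as np

def fit_linear_decoder(eeg, envelope, reg=1e-3):
    """Fit a ridge-regularised least-squares decoder mapping
    multichannel EEG (samples x channels) to a speech envelope (samples,)."""
    gram = eeg.T @ eeg + reg * np.eye(eeg.shape[1])
    return np.linalg.solve(gram, eeg.T @ envelope)

def decode_attention(eeg, env_a, env_b, weights):
    """Reconstruct the envelope from EEG and pick the candidate speaker
    whose envelope has the higher Pearson correlation with it."""
    recon = eeg @ weights
    corr = lambda x, y: np.corrcoef(x, y)[0, 1]
    return 'A' if corr(recon, env_a) > corr(recon, env_b) else 'B'
```

In practice the decoder would include time lags to capture the delayed cortical response, and decisions would be made over sliding windows; this minimal version only conveys the core correlate-and-compare logic.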

Program (all lectures will be in English)

15:00 – 15:05  Welcome (AJMS)
15:05 – 15:35  Introduction + Towards intention-controlled hearing aids: experiences from eye-controlled hearing aids (THLU)
15:35 – 16:15  Speech separation with microphones and brains (EC)
16:15 – 16:30  Break (coffee/tea/water)
16:30 – 17:15  Experiences with real-time attention decoding (JH)
17:15 – 17:45  Recent trends in auditory attention decoding (EALI)
17:45 – 18:00  Discussion & wrap-up (All, LABW, AJMS)


Speakers:

AJMS   Josefine Munch Sørensen, DTU Hearing Systems & DAS

THLU   Thomas Lunner, Eriksholm Research Centre & Linköping University

EC     Enea Ceolini, Institute of Neuroinformatics, University of Zürich and ETH Zürich

JH     Jens Hjortkjær, DTU Hearing Systems

EALI   Emina Alkovic, Eriksholm Research Centre

LABW   Lars Bramsløw, Eriksholm Research Centre & DAS

Registration

Registration for this event is closed.