The COCOHA project has been an EU-funded consortium aiming to explore the possibilities of building a hearing aid that can be controlled by the brain ('cognitive control'). At this event, a number of researchers will present various results from the project. Venue: DTU, Building 341, Auditorium 22, 2800 Lyngby. A detailed programme will follow later. Registration will take place here on this page (you will need to have or create a login), which is a new feature we are trying out.
The COCOHA project revolves around a need, an opportunity, and a challenge. Millions of people struggle to communicate in noisy environments, particularly the elderly: 7% of the European population are classified as hearing impaired. Hearing aids can effectively deal with a simple loss in sensitivity, but they do not restore the ability of a healthy pair of young ears to pick out a weak voice among many, an ability that is needed for effective social communication. That is the need.

The opportunity is that decisive technological progress has been made in the area of acoustic scene analysis: arrays of microphones with beamforming algorithms, or distributed networks of handheld devices such as smartphones, can be recruited to vastly improve the signal-to-noise ratio of weak sound sources. Some of these techniques have been around for a while, and some are even integrated into commercially available hearing aids. However, their uptake is limited for one very simple reason: there is no easy way to steer the device, no way to tell it to direct its processing to the one source among many that the user wishes to attend to.

The COCOHA project proposes to use brain signals (EEG) to help steer the acoustic scene analysis hardware, in effect extending the efferent neural pathways that control all stages of auditory processing, from cortex down to the cochlea, to govern the external device as well. To succeed we must overcome major technical hurdles, drawing on methods from acoustic signal processing and on machine learning techniques borrowed from the field of Brain-Computer Interfaces. Along the way we will probe interesting scientific problems related to attention, to the electrophysiological correlates of sensory input and brain state, and to the structure of sound and brain signals. This is the challenge.
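The "attention decoding" featured in several of the talks below is commonly approached by stimulus reconstruction: a linear backward model maps time-lagged EEG to an estimate of the attended speech envelope, and the competing talker whose envelope correlates best with that estimate is taken to be the attended one. The sketch below illustrates the idea on synthetic data; it is a minimal illustration under simplifying assumptions (ridge-regression decoder, made-up signals, illustrative parameter values), not the COCOHA pipeline.

```python
# Minimal sketch of stimulus-reconstruction auditory attention decoding.
# Everything here is synthetic and illustrative, not the COCOHA code base.
import numpy as np

rng = np.random.default_rng(0)
fs = 64                    # sampling rate (Hz) after downsampling
n_samples = fs * 60        # one minute of data
n_channels = 16            # EEG channels
n_lags = 16                # decoder lags: 0-15 samples (about 0-235 ms)

# Speech envelopes of two competing talkers (synthetic stand-ins).
env_a = np.abs(rng.standard_normal(n_samples))
env_b = np.abs(rng.standard_normal(n_samples))

# Synthetic EEG: a noisy linear mixture driven by the attended talker A.
mixing = rng.standard_normal(n_channels)
eeg = np.outer(env_a, mixing) + 2.0 * rng.standard_normal((n_samples, n_channels))

def lagged(x, n_lags):
    """Stack time-lagged copies of each channel as regression features."""
    X = np.zeros((len(x), x.shape[1] * n_lags))
    for lag in range(n_lags):
        X[lag:, lag::n_lags] = x[: len(x) - lag]
    return X

X = lagged(eeg, n_lags)
split = n_samples // 2     # train on the first half, decode on the second

# Ridge regression: train a backward model to reconstruct talker A's envelope.
lam = 1e2                  # regularisation strength (illustrative value)
Xtr = X[:split]
w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(X.shape[1]), Xtr.T @ env_a[:split])

# Decode attention on held-out data: correlate the reconstruction with
# each talker's envelope and pick the better match.
recon = X[split:] @ w
r_a = np.corrcoef(recon, env_a[split:])[0, 1]
r_b = np.corrcoef(recon, env_b[split:])[0, 1]
print(f"correlation with talker A: {r_a:.3f}, talker B: {r_b:.3f}")
print("decoded attended talker:", "A" if r_a > r_b else "B")
```

In a real system the decoder would be trained per listener on EEG recorded while attending a known talker, and the decoded label would then steer the beamformer or source-separation stage toward the attended source.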
Program (all lectures will be in English)
15:00 – 15:05 | Welcome | AJMS
15:05 – 15:35 | Introduction + Towards intention-controlled hearing aids: experiences from eye-controlled hearing aids | THLU
15:35 – 16:15 | Speech separation with microphones and brains | EC
16:15 – 16:30 | Break (coffee/tea/water)
16:30 – 17:15 | Experiences with real-time attention decoding | JH
17:15 – 17:45 | Recent trends in auditory attention decoding | EALI
17:45 – 18:00 | Discussion & wrap-up | All, LABW, AJMS
Speakers:
AJMS Josefine Munch Sørensen, DTU Hearing Systems & DAS
THLU Thomas Lunner, Eriksholm Research Centre & Linköping University
EC Enea Ceolini, Institute of Neuroinformatics, University of Zürich and ETH Zürich
JH Jens Hjortkjær, DTU Hearing Systems
EALI Emina Alickovic, Eriksholm Research Centre
LABW Lars Bramsløw, Eriksholm Research Centre & DAS
Registration
Registration for this event is closed.