Listener behaviour in an audio-visual scene analysis task
* Presenting author
Abstract: Normal-hearing listeners are able to localize and identify sound sources in complex multi-talker environments. Such an auditory environment can be simulated in the laboratory using a loudspeaker-based virtual sound environment, and visual information can be reproduced using a virtual reality headset. However, it remains unclear how listeners behave in such virtual audio-visual multi-talker scenarios when performing a realistic task. In the current study, we investigated the ability of normal-hearing listeners to localize a single speech source in a mixture of simultaneously presented talkers. At the same time, the reaction time as well as the head- and eye-movement behaviour of the listeners were measured using the virtual reality headset. Preliminary results showed that the reaction time to identify and localize a target talker increased with the number of simultaneous talkers. Furthermore, the extent of the listeners' head and eye movements was also found to increase with the number of simultaneous talkers.