Research at the Computational Auditory Perception Lab seeks to understand how human listeners and performers perceive, understand and appreciate complex auditory signals such as music, speech and environmental scenes. This research spans a wide range of psychological domains, from low-level processes such as auditory stream segregation, through top-down mechanisms of expectation and attention, to high-level experiences of affect and pleasure. The lab makes use of a wide range of research paradigms, including computational modelling, neuroimaging, and cognitive psychology.

The Computational Auditory Perception Lab is part of the School of Electronic Engineering and Computer Science (EECS), the Centre for Digital Music (C4DM), the Cognitive Science (CogSci) Research Group, the Centre for Multimodal Artificial Intelligence, and the Centre for Human-Centred Computing, all at Queen Mary University of London. The lab is led by Marcus Pearce and Iran Roman.

School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, United Kingdom