Josh McDermott, Ph.D.
Our lab studies how people hear. Sound is produced by events in the world, travels through the air as pressure waves, and is measured by two sensors (the ears). The brain uses the signals from these sensors to infer a vast number of important things – what someone said, their emotional state when they said it, and the whereabouts and nature of events we cannot see, to name but a few. Humans make such auditory judgments hundreds of times a day, but their basis in our acoustic sensory input is often not obvious, and reflects many stages of sophisticated processing that remain poorly characterized.
We seek to understand the computational basis of these impressive yet routine perceptual inferences. We hope to use our research to improve devices for assisting those whose hearing is impaired, and to design more effective machine systems for recognizing and interpreting sound, which at present perform dramatically worse in real-world conditions than do normal human listeners.
Our work combines behavioral experiments with computational modeling and tools for analyzing, manipulating, and synthesizing sounds. We draw particular inspiration from machine hearing research: we aim to conduct experiments that reveal how humans succeed where machine algorithms fail, and to use approaches from machine hearing to motivate new experimental work. We also have strong ties to auditory neuroscience. Models of the auditory system provide the backbone of our perceptual theories, and we collaborate actively with neurophysiologists and cognitive neuroscientists. The lab thus functions at the intersection of psychology, neuroscience, and engineering.
Current research in our lab explores how humans recognize real-world sound sources, segregate particular sounds from the mixture that enters the ear (the cocktail party problem), separate the acoustic contribution of the environment (e.g. room reverberation) from that of the sound source, and remember and/or attend to particular sounds of interest. We also study music perception and cognition, both for their intrinsic interest, and because music often provides revealing examples of basic hearing mechanisms at work.
McDermott, J.H., Simoncelli, E.P. (2011) Sound texture perception via statistics of the auditory periphery: Evidence from sound synthesis. Neuron, 71, 926-940.
McDermott, J.H., Wrobleski, D., Oxenham, A.J. (2011) Recovering sound sources from embedded repetition. Proceedings of the National Academy of Sciences, 108, 1188-1193.
McDermott, J.H., Lehr, A.J., Oxenham, A.J. (2010) Individual differences reveal the basis of consonance. Current Biology, 20, 1035-1041.
McDermott, J.H. (2009) The cocktail party problem. Current Biology, 19, R1024-R1027.
McDermott, J.H., Lehr, A.J., Oxenham, A.J. (2008) Is relative pitch specific to pitch? Psychological Science, 19, 1263-1271.