McDermott, Josh, Ph.D.
Associate Professor
Brain & Cognitive Sciences

Building: 46-4065
Email: jhm@mit.edu
Phone: (617) 253-7437
Administrative Asst: Canfield, John-Elmer
Lab website

About

Josh McDermott is a perceptual scientist studying sound and hearing in the Department of Brain and Cognitive Sciences at MIT, where he is an Associate Professor and heads the Laboratory for Computational Audition. His research addresses human and machine audition using tools from experimental psychology, engineering, and neuroscience. McDermott obtained a BA in Brain and Cognitive Science from Harvard, an MPhil in Computational Neuroscience from University College London, and a PhD in Brain and Cognitive Science from MIT, followed by postdoctoral training in psychoacoustics at the University of Minnesota and in computational neuroscience at NYU. He is the recipient of a Marshall Scholarship, a James S. McDonnell Foundation Scholar Award, and an NSF CAREER Award.

Research

Computational Audition

Our lab studies how people hear. Sound is produced by events in the world, travels through the air as pressure waves, and is measured by two sensors (the ears). The brain uses the signals from these sensors to infer a vast number of important things: what someone said, their emotional state when they said it, and the whereabouts and nature of events we cannot see, to name but a few. Humans make such auditory judgments hundreds of times a day, but their basis in our acoustic sensory input is often not obvious, and reflects many stages of sophisticated processing that remain poorly characterized.
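
To make one such inference concrete, the sketch below (a hypothetical illustration, not the lab's code; the sample rate, delay, and variable names are ours for exposition) shows perhaps the simplest localization cue: a sound off to one side arrives at the nearer ear slightly earlier, and this interaural time difference can be recovered by cross-correlating the two ears' signals.

```python
# Hypothetical sketch: recovering an interaural time difference (ITD)
# by cross-correlation. All parameters are illustrative.
import numpy as np

fs = 44100                                  # sample rate (Hz)
rng = np.random.default_rng(0)
source = rng.standard_normal(fs // 10)      # 100 ms noise burst

itd = 20                                    # true delay in samples (~0.45 ms)
left = np.concatenate([source, np.zeros(itd)])
right = np.concatenate([np.zeros(itd), source])   # right ear hears it later

# Cross-correlate the two ear signals and take the best-matching lag.
corr = np.correlate(left, right, mode="full")
lags = np.arange(-(len(right) - 1), len(left))
est = lags[np.argmax(corr)]                 # negative lag: right lags left

print(f"true delay: {itd} samples, recovered: {-est} samples")
```

Real listeners resolve such differences down to tens of microseconds, and must do so for sources that, unlike this isolated noise burst, overlap with many others.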

We seek to understand the computational basis of these impressive yet routine perceptual inferences. We hope to use our research to improve devices for assisting those whose hearing is impaired, and to design more effective machine systems for recognizing and interpreting sound, which at present perform dramatically worse in real-world conditions than do normal human listeners.

Our work combines behavioral experiments with computational modeling and tools for analyzing, manipulating and synthesizing sounds. We draw particular inspiration from machine hearing research: we aim to conduct experiments in humans that reveal how we succeed where machine algorithms fail, and to use approaches in machine hearing to motivate new experimental work. We also have strong ties to auditory neuroscience. Models of the auditory system provide the backbone of our perceptual theories, and we collaborate actively with neurophysiologists and cognitive neuroscientists. The lab thus functions at the intersection of psychology, neuroscience, and engineering.
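
As a minimal sketch of the kind of auditory front end such models build on, the code below (hypothetical; the filter design, channel count, and function name are ours, not the lab's model) passes a sound through a bank of log-spaced bandpass filters and extracts the amplitude envelope in each channel, yielding a crude "cochleagram":

```python
# Hypothetical sketch of a simple cochlear-style front end: a bandpass
# filterbank followed by envelope extraction. Parameters are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def cochleagram(signal, fs, n_channels=30, fmin=100.0, fmax=8000.0):
    """Return log-spaced center frequencies and the amplitude envelope
    of the signal in each band (shape: channels x time)."""
    centers = np.geomspace(fmin, fmax, n_channels)
    envelopes = []
    for fc in centers:
        lo, hi = fc / 2**0.25, fc * 2**0.25       # ~half-octave band
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, signal)             # zero-phase bandpass
        envelopes.append(np.abs(hilbert(band)))   # envelope via Hilbert
    return centers, np.array(envelopes)

fs = 32000
noise = np.random.default_rng(1).standard_normal(fs)  # 1 s of noise
centers, env = cochleagram(noise, fs)
print(env.shape)                                  # (30, 32000)
```

Representations of roughly this form serve as the input stage for many perceptual models and sound synthesis methods.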

Current research in our lab explores how humans recognize real-world sound sources, segregate particular sounds from the mixture that enters the ear (the cocktail party problem), separate the acoustic contribution of the environment (e.g. room reverberation) from that of the sound source, and remember and/or attend to particular sounds of interest. We also study music perception and cognition, both for their intrinsic interest, and because music often provides revealing examples of basic hearing mechanisms at work.
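
For intuition about the cocktail-party problem, the toy example below (again a hypothetical sketch, not the lab's method) uses oracle access to the component sources to build an "ideal binary mask": each time-frequency cell of the mixture is kept only where the target is stronger than the background, and the masked mixture is resynthesized.

```python
# Hypothetical sketch: segregating a target from a mixture with an
# oracle "ideal binary mask" in the time-frequency domain.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
t = np.arange(fs) / fs
target = np.sin(2 * np.pi * 440 * t)                    # stand-in "voice"
babble = np.random.default_rng(2).standard_normal(fs)   # stand-in background
mixture = target + babble

_, _, T = stft(target, fs, nperseg=512)
_, _, B = stft(babble, fs, nperseg=512)
_, _, M = stft(mixture, fs, nperseg=512)

mask = np.abs(T) > np.abs(B)          # keep cells where the target dominates
_, recovered = istft(M * mask, fs, nperseg=512)
recovered = recovered[:fs]

err = np.sum((recovered - target) ** 2)
print(f"SNR after masking: {10 * np.log10(np.sum(target**2) / err):.1f} dB")
```

The hard scientific problem, of course, is that listeners have no oracle: the brain must estimate such groupings from the mixture alone.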

Teaching

9.35 Perceptual Systems

9.285 Neural Coding and Perception of Sound

Publications

Norman-Haignere, S.V., McDermott, J.H. (2018) Neural responses to natural and model-matched stimuli reveal distinct computations in primary and nonprimary auditory cortex. PLoS Biology, 16, e2005127. 

Kell, A.J.E., Yamins, D.L.K., Shook, E.N., Norman-Haignere, S.V., McDermott, J.H. (2018) A task-optimized neural network replicates human auditory behavior, predicts brain responses, and reveals a cortical processing hierarchy. Neuron, 98, 630-644.

McWalter, R., McDermott, J.H. (2018) Adaptive and selective time-averaging of auditory scenes. Current Biology, 28, 1405-1418.

Traer, J., McDermott, J.H. (2016) Statistics of natural reverberation enable perceptual separation of sound and space. Proceedings of the National Academy of Sciences, 113, E7856-E7865.

Norman-Haignere, S., *Kanwisher, N.G., *McDermott, J.H. (2015) Distinct cortical pathways for music and speech revealed by hypothesis-free voxel decomposition. Neuron, 88, 1281-1296.
