
Understanding Auditory Cortical Computation
Description
Just by listening, humans can determine who is talking to them, whether a window in their house is open or shut, or what their child dropped on the floor in the next room. This ability to derive information from sound is enabled by a cascade of neuronal processing stages that transform the sound waveform entering the ear into cortical representations presumed to make behaviorally important sound properties explicit. Although much is known about the peripheral processing of sound, the auditory cortex is less well understood, particularly in humans, with little consensus even about its coarse-scale organization. This talk will describe our recent efforts to develop and test models of auditory cortical computation, to delineate function within auditory cortex, and to understand the role of the cortex in robust sound recognition.
Speaker Bio
Josh McDermott is a perceptual scientist studying sound and hearing in the Department of Brain and Cognitive Sciences at MIT, where he is an Assistant Professor and heads the Laboratory for Computational Audition. His research addresses human and machine audition using tools from experimental psychology, engineering, and neuroscience. McDermott obtained a BA in Brain and Cognitive Science from Harvard, an MPhil in Computational Neuroscience from University College London, and a PhD in Brain and Cognitive Sciences from MIT, followed by postdoctoral training in psychoacoustics at the University of Minnesota and in computational neuroscience at NYU. He is the recipient of a Marshall Scholarship, a James S. McDonnell Foundation Scholar Award, and an NSF CAREER Award.