Adaptive and Selective Time-Averaging of Auditory Scenes
Description
To overcome variability, estimate scene characteristics, and compress sensory input, perceptual systems pool data into statistical summaries. Despite growing evidence for statistical representations in perception, the underlying mechanisms remain poorly understood. One example of such representations occurs in auditory scenes, where background texture appears to be represented with time-averaged sound statistics. I will describe a set of experiments characterizing the averaging mechanism underlying sound texture representations. The results suggest an integration process that operates over several seconds but that adapts to stimulus characteristics, extending integration when it benefits statistical estimation of variable signals, and selectively integrating stimulus components likely to have a common cause in the world.
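The talk does not specify a model, but the core tradeoff it describes, namely that longer integration yields more stable statistical estimates while slower integration tracks change better, can be illustrated with a leaky-integrator time average. The sketch below is purely illustrative and is not the speaker's method; the function names, parameters, and the exponential-window choice are assumptions for demonstration.

```python
import numpy as np

def leaky_average(x, tau, fs):
    """Exponentially weighted running average of a signal.

    A leaky integrator with time constant `tau` (seconds) is one simple
    stand-in for time-averaging of sound statistics: larger tau gives a
    lower-variance estimate but adapts more slowly to change.
    (Illustrative assumption, not the model from the talk.)
    """
    alpha = 1.0 - np.exp(-1.0 / (tau * fs))  # per-sample update weight
    out = np.empty_like(x)
    acc = x[0]
    for i, v in enumerate(x):
        acc += alpha * (v - acc)  # move estimate toward current sample
        out[i] = acc
    return out

# Demo: a noise "texture" whose power steps up halfway through.
fs = 8000
rng = np.random.default_rng(0)
x = rng.standard_normal(2 * fs)
x[fs:] *= 3.0                 # power change at t = 1 s

power = x ** 2                # instantaneous power as a crude statistic

# Short vs. long integration windows: the long window smooths more but
# lags behind the change at t = 1 s.
for tau in (0.1, 2.0):
    est = leaky_average(power, tau, fs)
    print(f"tau={tau:>4}s  power estimate at t=1.5s: {est[int(1.5 * fs)]:.2f}")
```

Running this shows the short window converging quickly to the new power level while the long window is still catching up, which is the estimation-versus-tracking tension that an adaptive integration process, as described in the abstract, would need to resolve.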
UPCOMING COG LUNCH TALKS:
11/14/17 - Kevin Ellis (Tenenbaum Lab)
11/21/17 - Dian Yu (Rosenholtz Lab)
11/28/17 - Yang Wu (Schulz Lab)
12/05/17 - Melissa Kline, Ph.D.