Intuitive Statistics and Metacognition in Children and Adults | Auditory Texture Models Derived from Task-Optimized Deep Neural Network Representations
Description
Madeline Pelz
Intuitive Statistics and Metacognition in Children and Adults
Much of the power of human learning stems from our metacognitive abilities: we recognize when problems are difficult and can identify the contexts in which we need more information to answer our questions. Using both behavioral and computational methods, we examine how children and adults respond to different statistical discrimination problems and whether they request more data for more difficult discriminations in a graded way. The results suggest that even young children engage in metacognitive monitoring of the relative difficulty of discrimination problems and adjust their pursuit of information accordingly. These findings indicate that young children have rich intuitive statistical abilities and know how to allocate their cognitive resources for more effective learning.
Jenelle Feather
Auditory Texture Models Derived from Task-Optimized Deep Neural Network Representations
Auditory textures, such as rain, wind, or fire, are distinguished from other sound signals by their homogeneity in time. The brain is believed to take advantage of this homogeneity by representing textures with statistics that average information across time. Models of auditory texture are often evaluated by synthesizing a stimulus whose time-averaged statistics match those of a natural sound, with the logic that if the model captures the representations underlying human perception, the synthesized and natural sounds will evoke the same percept. I will discuss our recent improvements to auditory texture models, which use a set of statistics derived from the representations learned by training a convolutional neural network on a behaviorally relevant task. These texture models require only a single set of simple statistics and, unlike previous auditory texture models, do not require statistics measured from a peripheral model of the cochlea. The results suggest that the learned filters incorporate peripheral information that matters both for the task and for perception, and that texture information could be represented at a single stage of cortical representation.
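The synthesis-by-statistics-matching logic described above can be sketched in a toy form. Everything here is illustrative, not from the talk: a small random filter bank stands in for the learned CNN features, mean channel power is the only time-averaged statistic, and lowpass-filtered noise stands in for a natural texture. Synthesis then reduces to gradient descent on the statistic mismatch:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_filters, flen = 2048, 8, 31  # toy sizes, chosen arbitrarily

def channel_power(signal, filters):
    """Time-averaged statistic: mean power of each filter's response."""
    responses = np.stack([np.convolve(signal, f, mode="same") for f in filters])
    return (responses ** 2).mean(axis=1)

# Stand-in for learned CNN filters: a small random filter bank.
filters = rng.standard_normal((n_filters, flen)) / np.sqrt(flen)

# Stand-in "natural texture": lowpass-filtered noise.
target = np.convolve(rng.standard_normal(T), np.ones(16) / 16, mode="same")
target_stats = channel_power(target, filters)

# Synthesize from white noise by gradient descent on the statistic mismatch.
synth = rng.standard_normal(T)
init_loss = np.sum((channel_power(synth, filters) - target_stats) ** 2)
lr = 10.0
for _ in range(300):
    responses = np.stack([np.convolve(synth, f, mode="same") for f in filters])
    err = (responses ** 2).mean(axis=1) - target_stats
    # Gradient of 0.5 * sum(err^2) w.r.t. the signal; correlation is the
    # adjoint of "same"-mode convolution for odd-length filters.
    grad = sum(e * np.correlate(2.0 * r / T, f, mode="same")
               for e, r, f in zip(err, responses, filters))
    synth -= lr * grad

final_loss = np.sum((channel_power(synth, filters) - target_stats) ** 2)
```

Under the model-evaluation logic, if the matched statistics suffice to capture the texture, the synthesized signal should evoke the same percept as the original; this sketch only verifies that the statistic mismatch shrinks.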
Additional Info
Upcoming Cog Lunches
- October 23, 2018 - Eli Pollock & Mika Braginsky
- October 30, 2018 - Peng Qian, Jon Gauthier, & Maxwell Nye
- November 13, 2018 - Anna Ivanova, Halie Olson, & Junyi Chu
- November 20, 2018 - Mark Saddler, Jarrod Hicks, & Heather Kosakowski
- November 27, 2018 - Tuan Le Mau
- December 4, 2018 - Daniel Czegel
- December 11, 2018 - Malinda McPherson