Binaural Localization in Natural Scenes | Iconicity and the evolution of signal structure in artificial languages
Description
Binaural Localization in Natural Scenes
The ability to localize a sound source by listening is a core component of audition. Sound localization has traditionally been studied using simple sounds and listening conditions that are often at odds with what humans encounter in the world. We propose to study localization in natural scenes through three approaches. First, we seek to build computational models optimized to localize sounds in natural scenes. Second, we are collecting human behavioral data in a realistic localization task to better characterize human abilities in real-world conditions. Third, we are developing methods to measure the empirical distribution of sound sources and localization cues in real auditory scenes.
Iconicity and the evolution of signal structure in artificial languages
Human language uses a finite number of building blocks to produce a (theoretically) unbounded set of novel meaningful utterances. This combinatorial structure exists both in the vocal-auditory domain of spoken language and in the manual-visual domain of sign language. Where do these building blocks come from? In my talk, I will focus on the role of iconicity (the resemblance between signals and their meanings) at different stages in the transition from holistic to combinatorial languages. Iconicity plays an important role in the earlier stages of language evolution, when few linguistic conventions exist, and recent work has demonstrated that various forms of non-arbitrariness persist in fully combinatorial languages. I present evidence from a series of artificial language learning experiments with human participants that simulate cultural transmission processes. Participants acquire signals using a slide whistle instrument, and their reproductions are subsequently transmitted to the next generation of learners. I show that biases in signal acquisition and production lead to iconicity emerging rapidly but disappearing as signals gradually become more combinatorial. I additionally demonstrate how these biases in production are far outmatched by biases in signal interpretation, and I draw conclusions about the role of pragmatic pressures in language evolution.
Speaker Bio
Andrew is a third-year graduate student in the McDermott Lab. He is interested in building computational models of perception and is funded by the NSF Graduate Research Fellowship Program.
Matthias is broadly interested in phenomena surrounding the grounding of language (both linguistic forms and meanings) in perception and action, with current work focusing on the emergence of structured communication systems from continuous signaling spaces. Before joining MIT, he obtained an M.Sc. in Cognitive Science from the University of Vienna and worked as a research assistant in the Adaptive Behavior and Cognition group at the Max Planck Institute for Human Development in Berlin.
Additional Info
Upcoming Cog Lunches
- October 16, 2018 - Jenelle Feather & Maddie Pelz
- October 23, 2018 - Eli Pollock & Mika Braginsky
- October 30, 2018 - Peng Qian, Jon Gauthier, & Maxwell Nye
- November 13, 2018 - Anna Ivanova, Halie Olson, & Junyi Chu
- November 20, 2018 - Mark Saddler, Jarrod Hicks, & Heather Kosakowski
- November 27, 2018 - Tuan Le Mau
- December 4, 2018 - Daniel Czegel
- December 11, 2018 - Malinda McPherson