Cog Lunch: Sihan Chen "Investigating the communicative efficiency of spatial deictic adverbs" & Fernanda De La Torre Romo "Towards understanding multi-modal perception"
Description
Sihan Chen
Title: Investigating the communicative efficiency of spatial deictic adverbs
Lab(s): TedLab
Abstract: Spatial deictic adverbs are adverbs that denote spatial relations between the speaker(s) and the referent; in English, for instance, these include “here”, “there”, “from here”, and “from there”. In this work, based on the spatial deictic lexicons from five regions worldwide (Africa, Americas, Asia, Europe, Oceania) compiled by Nintemann et al. (2020), we argue from an information-theoretic perspective (Shannon, 1948, 1959) that spatial deictic adverbs in human languages generally fall on an efficiency frontier balancing informativity and complexity. We show that information theory alone cannot explain the patterns exhibited by human languages, as some theoretically efficient lexicons are unattested. We then introduce the notion of systematicity and show that real lexicons are systematic in addition to balancing informativity against complexity. Our findings add a new semantic domain to prior work (Kemp & Regier, 2012; Zaslavsky et al., 2018; Xu et al., 2020, inter alia) showing that languages encode various semantic domains efficiently.
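The informativity–complexity tradeoff the abstract refers to can be illustrated with a minimal sketch. The toy meaning space, the two example lexicons, and the scoring functions below are illustrative assumptions for exposition, not the paper's actual formalization: complexity is taken as the number of distinct word forms, and communicative cost as a literal listener's expected uncertainty about the meaning given the word, under a uniform prior.

```python
import math
from collections import defaultdict

def lexicon_tradeoff(lexicon, prior=None):
    """Score a lexicon mapping meanings -> word forms.

    Returns (complexity, cost):
      complexity = number of distinct word forms,
      cost = expected surprisal of the meaning given the word,
             i.e. E[-log2 p(meaning | word)], for a literal listener.
    (Illustrative measures only; the talk's formalization may differ.)
    """
    meanings = list(lexicon)
    p = prior or {m: 1 / len(meanings) for m in meanings}
    by_word = defaultdict(list)
    for m, w in lexicon.items():
        by_word[w].append(m)
    cost = 0.0
    for w, ms in by_word.items():
        pw = sum(p[m] for m in ms)  # probability the word w is used
        for m in ms:
            cost += p[m] * -math.log2(p[m] / pw)
    return len(by_word), cost

# Toy meaning space: {proximal, distal} x {static, ablative}.
english_like = {
    ("proximal", "static"): "here",
    ("distal", "static"): "there",
    ("proximal", "ablative"): "from here",
    ("distal", "ablative"): "from there",
}
collapsed = {m: "there" for m in english_like}  # one word for all meanings

print(lexicon_tradeoff(english_like))  # (4, 0.0): complex but fully informative
print(lexicon_tradeoff(collapsed))     # (1, 2.0): simple but maximally uncertain
```

An efficient lexicon in this sense is one not dominated on both axes by any alternative; the abstract's point is that attested lexicons lie near that frontier, but that frontier position alone does not single them out among equally efficient, unattested alternatives.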
Fernanda De La Torre Romo
Title: Towards understanding multi-modal perception
Lab(s): Josh McDermott and Robert Yang
Abstract: To make sense of our environment, we integrate signals from multiple sensory modalities, seamlessly generating a unitary perception of a world composed of objects and events. Scientists have studied multisensory integration for decades, but much of what we know derives from an era when naturalistic stimuli were difficult to create and stimulus-computable models were mostly beyond reach. In contrast to the substantial recent advances in models of individual sensory systems in vision and audition, we lack models of multi-modal perception in realistic conditions. This talk will outline the beginnings of a research program with two main goals: 1) to use modern tools to generate naturalistic audio-visual scenes and study how the two modalities influence each other in humans, and 2) to move closer to models of real-world perception by building and testing next-generation models of multisensory perception. I will present our work in progress on generating such stimuli and developing audio-visual models.