CogLunch: Gal Raz "Understanding and Accelerating Looking Time Studies"
Description
Speaker: Gal Raz
Title: Understanding and Accelerating Looking Time Studies
Abstract:
From birth, humans learn actively. Developmental psychologists have long capitalized on this fact, probing infants' mental representations through their looking behavior. Yet although looking time is a key measure, we lack a rigorous, formal framework for why infants look longer at some stimuli than at others. To address this, we developed a rational learning model that decides how long to look at sequences of stimuli based on its expected information gain. The model captures key patterns of looking time documented in the literature. By using a CNN-derived embedding space, the model can operate on raw images and generate novel predictions for previously untested stimuli. We validate these predictions by collecting two large infant looking time datasets (N = 145) and comparing model and infant behavior. We argue that our model is a general and interpretable framework for the rational analysis of looking time.
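To give a concrete sense of how an expected-information-gain account of looking might work, here is a minimal toy sketch. It is not the speaker's model: the discrete prototype hypothesis space, the Gaussian noise model, the function names, and all parameter values are assumptions made purely for illustration.

```python
"""
Toy sketch of an expected-information-gain (EIG) looking model: an ideal
observer takes noisy "perceptual samples" of a stimulus embedding and keeps
looking as long as one more sample is expected to reduce its uncertainty by
more than a fixed sampling cost. Illustrative only; not the speaker's model.
"""
import numpy as np

rng = np.random.default_rng(0)


def log_likelihoods(sample, prototypes, noise_sd):
    """log p(sample | prototype k) under isotropic Gaussian noise."""
    d = prototypes.shape[1]
    sq_dist = np.sum((prototypes - sample) ** 2, axis=1)
    return -0.5 * sq_dist / noise_sd**2 - d * np.log(noise_sd * np.sqrt(2 * np.pi))


def update_belief(log_belief, sample, prototypes, noise_sd):
    """Bayesian update of the categorical belief over which prototype is shown."""
    logp = log_belief + log_likelihoods(sample, prototypes, noise_sd)
    logp -= logp.max()
    p = np.exp(logp)
    return np.log(np.maximum(p / p.sum(), 1e-300))  # clamp to avoid log(0)


def entropy(log_p):
    return -np.sum(np.exp(log_p) * log_p)


def expected_info_gain(log_belief, prototypes, noise_sd, n_sim=300):
    """Monte Carlo estimate of how much one more noisy sample is expected
    to reduce the entropy of the current belief."""
    p = np.exp(log_belief)
    future_entropy = 0.0
    for _ in range(n_sim):
        k = rng.choice(len(p), p=p)  # imagine which prototype is really shown
        sim = prototypes[k] + noise_sd * rng.standard_normal(prototypes.shape[1])
        future_entropy += entropy(update_belief(log_belief, sim, prototypes, noise_sd))
    return entropy(log_belief) - future_entropy / n_sim


def looking_time(stimulus, prototypes, log_belief, noise_sd=2.0,
                 sample_cost=0.05, max_samples=50):
    """Sample until the EIG of one more look drops below the sampling cost.
    Returns (number of samples taken, updated belief)."""
    for t in range(1, max_samples + 1):
        obs = stimulus + noise_sd * rng.standard_normal(len(stimulus))
        log_belief = update_belief(log_belief, obs, prototypes, noise_sd)
        if expected_info_gain(log_belief, prototypes, noise_sd) < sample_cost:
            return t, log_belief
    return max_samples, log_belief


# Toy demo: two prototype vectors stand in for CNN embeddings of two images.
prototypes = np.stack([np.zeros(4), 2.0 * np.ones(4)])
familiar, novel = prototypes[0], prototypes[1]

belief = np.full(2, -np.log(2.0))  # uniform prior over prototypes
t1, belief = looking_time(familiar, prototypes, belief)  # first encounter
t2, belief = looking_time(familiar, prototypes, belief)  # repeat: typically shorter (habituation)
t3, belief = looking_time(novel, prototypes, belief)     # switch: typically longer (dishabituation)
print(f"familiar #1: {t1} samples, familiar #2: {t2} samples, novel: {t3} samples")
```

Because the simulated observer's belief sharpens with repeated exposure, its expected information gain falls and it looks less (habituation), while a novel stimulus restores uncertainty and lengthens looking (dishabituation), qualitatively matching the patterns the abstract describes.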
A central limitation in evaluating computational models of infant cognition, like ours, is the availability of data. Both data collection and data annotation are slow, so testing models' fine-grained predictions about infants' behavior is costly. To address these bottlenecks, we developed and evaluated an automated workflow that uses parent control, asynchronous testing, and automatic annotation of videos. We tested this workflow (N = 134) using a classic violation-of-expectation effect: infants look longer at agents taking inefficient actions than at agents taking efficient actions. We replicate this finding, and our method achieves 60% of the in-lab effect size and 75% of the Zoom effect size, with about 15x faster data acquisition. Beyond testing model predictions, we hope that adoption of our automated workflow will help researchers collect larger samples, conduct replications, and move the field towards a more robust science.
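For readers less familiar with comparisons like "60% of the in-lab effect size," here is a minimal worked example of the underlying arithmetic. It uses Cohen's d as the standardized effect size and entirely simulated looking times; the talk does not specify which measure or values were used, so none of these numbers reflect the study.

```python
"""
Illustrative only: compute a standardized effect size (Cohen's d) for an
inefficient-vs-efficient looking-time contrast and express it as a fraction
of a reference (e.g., in-lab) effect size. All data below are simulated.
"""
import numpy as np


def cohens_d(group_a, group_b):
    """Standardized mean difference using the pooled standard deviation."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    pooled_var = (((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                  / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)


rng = np.random.default_rng(1)
# Simulated per-infant looking times (seconds) in the two conditions.
looks_inefficient = rng.normal(loc=14.0, scale=5.0, size=67)
looks_efficient = rng.normal(loc=11.0, scale=5.0, size=67)

d_automated = cohens_d(looks_inefficient, looks_efficient)
d_reference = 1.0  # hypothetical in-lab effect size used only for comparison
print(f"automated-workflow d = {d_automated:.2f}")
print(f"fraction of reference effect recovered = {d_automated / d_reference:.0%}")
```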