
Cog Lunch: Michael Lee "Rapid object learning in humans via low dimensional perceptual representations"
Description
Speaker: Michael Lee
Lab: DiCarlo
Title: Rapid object learning in humans via low dimensional perceptual representations
Abstract: Humans are commonly assumed to be efficient object learners. I will present behavioral data supporting this claim directly, and show that a simple model family - consisting of a single layer of plasticity downstream of fixed, IT-like representations - is a strong, but imperfect, account of how humans rapidly learn novel objects. I will then show that these models have minor, but statistically significant, failures to learn as quickly as humans in the few-shot learning regime - a failure we hypothesize originates in excess representational dimensionality in current DCNN models, relative to humans.
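The model family described above can be illustrated with a minimal sketch: a frozen feature representation (standing in for fixed, IT-like DCNN activations) followed by a single plastic layer trained from a handful of examples. Everything here - the random-projection "features", the prototype readout, and the toy data - is a hypothetical stand-in for illustration, not the talk's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def fixed_features(images, proj):
    """Fixed (non-plastic) representation: a frozen linear projection + ReLU."""
    return np.maximum(images @ proj, 0.0)

def few_shot_fit(feats, labels, n_classes):
    """Plastic layer: one prototype (mean feature vector) per class."""
    return np.stack([feats[labels == c].mean(axis=0) for c in range(n_classes)])

def predict(feats, prototypes):
    """Classify by nearest prototype in feature space."""
    d = ((feats[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# Toy data: two "objects", each a noisy cluster in a 50-d "pixel" space.
proj = rng.normal(size=(50, 20))            # frozen representation weights
centers = rng.normal(size=(2, 50)) * 3.0
train_x = np.concatenate([centers[c] + rng.normal(size=(5, 50)) for c in (0, 1)])
train_y = np.repeat([0, 1], 5)              # 5 examples per class ("few-shot")
test_x = np.concatenate([centers[c] + rng.normal(size=(100, 50)) for c in (0, 1)])
test_y = np.repeat([0, 1], 100)

protos = few_shot_fit(fixed_features(train_x, proj), train_y, n_classes=2)
acc = (predict(fixed_features(test_x, proj), protos) == test_y).mean()
print(f"few-shot test accuracy: {acc:.2f}")
```

The point of the sketch is that all learning happens in the final readout; the representation itself never changes.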
To test this hypothesis, we developed statistical methods and performed online psychophysics experiments to empirically characterize the dimensions underlying human object perception, and I will present evidence that human object perception is indeed describable by a low-dimensional representational space.
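One standard way to quantify the effective dimensionality of a representation is the participation ratio of its PCA eigenvalue spectrum; the sketch below uses that measure on synthetic data as an illustration of the general idea (it is a common measure from the literature, not necessarily the statistical method developed in this work).

```python
import numpy as np

rng = np.random.default_rng(1)

def participation_ratio(X):
    """Effective dimensionality: (sum of eigenvalues)^2 / sum of squared eigenvalues."""
    X = X - X.mean(axis=0)
    eig = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    eig = np.clip(eig, 0.0, None)           # guard against tiny negative values
    return eig.sum() ** 2 / (eig ** 2).sum()

# Synthetic example: 5 latent dimensions embedded in a 100-d ambient space.
latent = rng.normal(size=(1000, 5))
embed = rng.normal(size=(5, 100))
X = latent @ embed + 0.05 * rng.normal(size=(1000, 100))  # small measurement noise

pr = participation_ratio(X)
print(f"ambient dim: {X.shape[1]}, effective dim: {pr:.1f}")
```

A representation whose variance concentrates in a few components yields a participation ratio far below the ambient dimensionality, which is the sense in which a perceptual space can be "low-dimensional" even when measured with many features.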
Finally, we created new "perceptually aligned" models by directly optimizing to these behavioral data, and I will show that this led to models that perform few-shot learning as efficiently as humans. Overall, these results support a simple view of rapid object learning in humans: learning linear projections of a fixed, low-dimensional representation.