
Taking artificial neural networks beyond the training distribution.
Description
Our minds run on biological neural networks that generalize in ways that are still out of reach for their artificial counterparts. For example, we (i) generalize to novel compositions of known mechanisms without retraining, (ii) identify and rely on key invariances, (iii) learn abstract models, and (iv) plan effectively over long horizons. Can we bridge these gaps to build intelligent agents that generalize beyond the training distribution, and in doing so gain a better understanding of how we think? My talk will focus on contributions my collaborators and I have made to the four topics above.
At the end, I will outline directions for my future research that explore further aspects of generalization: (i) neural open-ended reasoning, (ii) unexplored degrees of freedom of artificial neural networks, and (iii) improving sample efficiency by using language as a substrate for reasoning in RL.
Please use the zoom link below to attend this seminar: