Communicating generalizations: Probability, vagueness, and context
Description
Knowledge that extends beyond the present context is crucial for thriving in our open-ended, dynamic world. Yet such knowledge can be difficult to extract from the environment. Fortunately, we are not limited to acquiring generalizations on our own; language allows us to communicate generalizations to each other (e.g., “Asparagus berries are poisonous”, “John hikes”, “Drinking milk makes your bones strong”). In this talk, I’ll argue that three ingredients are necessary to understand generalizations: probability, vagueness, and context. I formalize these ingredients in an information-theoretic probabilistic model that makes quantitative predictions about human understanding of generalizations. Across diverse domains, I find that the model explains the gradience in multiple dependent measures, while simpler models fall short. This is the first formal theory to make accurate and precise predictions about human understanding of generalizations in language, and a first step toward building models that can learn abstract knowledge from language.
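To give a flavor of how the three ingredients might combine, here is a toy sketch of one family of models in this spirit (my own illustration with assumed details, not the talk's actual model): a generic like “Ks are F” is interpreted as “the prevalence of F among Ks exceeds a vague threshold,” where the threshold's uncertainty captures vagueness and a domain-specific prior over prevalence captures context.

```python
import numpy as np

# Illustrative sketch only: the prior shapes and threshold model below
# are assumptions for exposition, not the model presented in the talk.

prevalence = np.linspace(0.01, 0.99, 99)  # candidate prevalence values

def listener_posterior(prior):
    """P(prevalence | generic) ∝ P(prevalence) * P(threshold < prevalence).

    With a uniform threshold on [0, 1] (vagueness), the probability that
    a given prevalence p clears the threshold is simply p.
    """
    likelihood = prevalence
    post = prior * likelihood
    return post / post.sum()

# Context as a prior: for a striking feature (e.g., "are poisonous"),
# assume most prevalence mass sits near zero...
rare_prior = (1 - prevalence) ** 5
rare_prior /= rare_prior.sum()

# ...versus a flat prior for a feature with no strong expectations.
uniform_prior = np.ones_like(prevalence) / prevalence.size

# Expected prevalence implied by hearing the generic under each prior:
e_rare = float(prevalence @ listener_posterior(rare_prior))
e_unif = float(prevalence @ listener_posterior(uniform_prior))
```

The point of the sketch is the gradience: the same sentence form licenses different prevalence inferences depending on the contextual prior, which is the kind of behavior a simple all-or-nothing semantics cannot produce.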