Description
Like much of the perceptual input that humans experience, language is highly variable. Two speakers may produce the same phoneme with different acoustics, and a sentence can have multiple syntactic parses. How do listeners (typically) choose the correct interpretation when the same sentence can map onto several potential meanings? In this talk, I will provide evidence that comprehenders navigate this variability by tracking distributional information in the input and using it to constrain the possible interpretations. For example, listeners can rapidly learn, from exposure to co-occurrence statistics, that a verb is much more likely to be followed by one syntactic structure than by another (though both are grammatical). Thus, language representations are continuously shaped by experience even in adulthood. However, the underlying learning mechanisms and their neural underpinnings remain open questions. I will discuss recent efforts aimed at testing an error-based learning account, including work with patients with hippocampal amnesia, as well as evidence that the relevant linguistic representations may change over the lifespan.