
Memory and Locality in Natural Language
Description
When understanding a sentence, humans process it incrementally, word by word, to build a representation of its meaning. In this talk I explore how memory constraints affect incremental processing difficulty and can shape human language. First, I use the noisy-channel theory of language comprehension to derive a unified model of processing difficulty that jointly predicts the effects of probabilistic expectations and memory constraints. I show that the memory constraints in the model yield a simple prediction for language processing, which we call information locality: processing difficulty arises when words that predict one another are far apart. Next, I take the view that languages may be shaped by a pressure for processing efficiency, in which case information locality predicts that syntactically related words should be close together. I provide evidence for this prediction from parsed corpora of 37 languages. Finally, I show that this model gives the first formal explanation of a long-standing puzzle in psycholinguistics: the structural forgetting effect, in which certain ungrammatical sentences sound more acceptable than grammatical ones.
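The corpus prediction above is often tested by measuring dependency length: the linear distance between each word and its syntactic head. As a minimal sketch (the sentence, parse, and function name here are invented for illustration, not the talk's actual method), one can sum these distances over a parse:

```python
# Toy illustration of dependency length, the quantity behind the
# "syntactically related words should be close" prediction.
# heads[i] = index of word i's syntactic head (-1 marks the root).
def total_dependency_length(heads):
    return sum(abs(i - h) for i, h in enumerate(heads) if h != -1)

# Hypothetical parse of "the dog chased the cat" (0-indexed):
# the->dog, dog->chased, chased=root, the->cat, cat->chased
heads = [1, 2, -1, 4, 2]
print(total_dependency_length(heads))  # 1 + 1 + 1 + 2 = 5
```

Under an efficiency pressure, attested word orders should yield smaller totals than random reorderings of the same dependency trees.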
UPCOMING COG LUNCH TALKS
- 11/22/16 - Malinda McPherson, McDermott Lab
- 11/29/16 - Lindsey Powell, Saxe Lab
- 12/5/16 - Dimitrios Pinotsis, Miller Lab