Modeling Hierarchical Structure in Language with Neural Control | A rational model of syntactic bootstrapping | Learning to Learn Program Sketches from Examples
Description
Peng Qian
Modeling Hierarchical Structure in Language with Neural Control
Human language involves sequences of symbols, but its structure is not just linear: it has hierarchy that is well described using the symbolic grammars of linguistic theory. Despite this hierarchical structure, the leading models in natural language processing today generally involve recurrent neural network (RNN) architectures that process linguistic input on a strictly sequential basis. Although leading RNNs learn an impressive variety of dependencies, recent work (Linzen et al., 2016; Wilcox et al., 2018; Futrell et al., 2018; Marvin & Linzen, 2018) has highlighted some of their limitations in achieving important human-like syntactic generalization even when trained over a human lifetime’s worth of linguistic input. How best to combine the strengths of symbolic hierarchical structures with RNN architectures remains an open problem. Here we explore a recent hybrid model, the Recurrent Neural Network Grammar (RNNG; Dyer et al. 2016), which learns to generate a sentence jointly with its hierarchical syntactic tree structure via a control module parametrized by neural networks. We plan to 1) investigate whether the RNNG learns the structural constraints of language better than standard, sequential RNNs; and 2) explore RNNGs as cognitively plausible models of human incremental sentence processing.
Jon Gauthier
A rational model of syntactic bootstrapping
Children use the syntactic structures in which novel verbs appear in order to predict their meanings. This theory of *syntactic bootstrapping* has developed alongside other research programs in psychology and linguistics, such as theories of verb classes and construction grammar. We present a computational model which unifies these theories of the syntax–semantics link in a grounded word learning task, and share preliminary results on a synthetic dataset produced following the verb class study of Beth Levin (1993). The model jointly acquires the syntactic and semantic properties of the words in its language. It detects semantic coherence among classes of verbs and argument structures, and uses this knowledge to (1) refine its own syntactic representations and thus (2) better predict the meanings of novel words from syntactic cues. We plan to model syntactic bootstrapping at scale with this computational approach, and to test the power of the syntax–semantics link on naturalistic data without linguistic annotations.
Maxwell Nye
Learning to Learn Program Sketches from Examples
From few examples, humans are able to quickly synthesize programs which perform desired behavior in a wide variety of domains. However, researchers have not yet built systems which mimic the human ability to flexibly combine recognition of learned patterns with explicit reasoning. In this work, we describe a novel neuro-symbolic system which synthesizes programs from examples by attempting to mimic this essential human ability.
Additional Info
Upcoming Cog Lunches
- October 23, 2018 - Eli Pollock & Mika Braginsky
- October 30, 2018 - Peng Qian, Jon Gauthier, & Maxwell Nye
- November 13, 2018 - Anna Ivanova, Halie Olson, & Junyi Chu
- November 20, 2018 - Mark Saddler, Jarrod Hicks, & Heather Kosakowski
- November 27, 2018 - Kelsey Allen
- December 4, 2018 - Daniel Czegel
- December 11, 2018 - Malinda McPherson