Department of Brain and Cognitive Sciences (BCS)
Cog Lunch

Modeling Hierarchical Structure in Language with Neural Control | A rational model of syntactic bootstrapping | Learning to Learn Program Sketches from Examples

Speaker(s)
Peng Qian (Levy Lab)
Jon Gauthier (Levy/Tenenbaum Labs)
Maxwell Nye (Tenenbaum Lab)
October 30, 2018
4:00 pm - 5:00 pm
Location
McGovern Seminar Room (46-3189)
Contact
Matthew Regan
    Description

    Peng Qian

    Modeling Hierarchical Structure in Language with Neural Control

    Human language involves sequences of symbols, but its structure is not just linear: it has hierarchy that is well described by the symbolic grammars of linguistic theory. Despite this hierarchical structure, the leading models in natural language processing today generally involve recurrent neural network (RNN) architectures that process linguistic input on a strictly sequential basis. Although leading RNNs learn an impressive variety of dependencies, recent work (Linzen et al., 2016; Wilcox et al., 2018; Futrell et al., 2018; Marvin & Linzen, 2018) has highlighted some of their limitations in achieving important human-like syntactic generalization even when trained on a human lifetime's worth of linguistic input. How best to combine the strengths of symbolic hierarchical structures with RNN architectures remains an open problem. Here we explore a recent hybrid model, the Recurrent Neural Network Grammar (RNNG; Dyer et al., 2016), which learns to generate a sentence jointly with its hierarchical syntactic tree structure via a control module parametrized by neural networks. We plan to 1) investigate whether RNNGs learn the structural constraints of language better than standard, sequential RNNs; and 2) explore RNNGs as cognitively plausible models of human incremental sentence processing.
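    The joint generation of a string and its tree can be illustrated with a toy replay of an RNNG-style action sequence. This is only a sketch of the action space (NT opens a constituent, GEN emits a word, REDUCE closes one); the actual RNNG scores each action with neural networks over the stack, buffer, and action history, none of which is modeled here.

```python
def execute(actions):
    """Replay NT/GEN/REDUCE actions, returning the bracketed tree string.

    Toy illustration of the RNNG action space, not Dyer et al.'s model:
    one action sequence yields both the sentence and its syntax tree.
    """
    stack = []
    for act, arg in actions:
        if act == "NT":             # open a new constituent labeled `arg`
            stack.append(["(" + arg])
        elif act == "GEN":          # generate a terminal word
            stack[-1].append(arg)
        elif act == "REDUCE":       # close the topmost open constituent
            done = " ".join(stack.pop()) + ")"
            if stack:
                stack[-1].append(done)
            else:
                return done         # closed the root: tree is complete

actions = [("NT", "S"), ("NT", "NP"), ("GEN", "the"), ("GEN", "cat"),
           ("REDUCE", None), ("NT", "VP"), ("GEN", "sleeps"),
           ("REDUCE", None), ("REDUCE", None)]
print(execute(actions))  # (S (NP the cat) (VP sleeps))
```

    Reading off only the GEN actions recovers the word sequence, so a distribution over action sequences is simultaneously a distribution over sentences and over their trees.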


    Jon Gauthier

    A rational model of syntactic bootstrapping

    Children use the syntactic structures in which novel verbs appear to predict those verbs' meanings. This theory of *syntactic bootstrapping* has developed alongside other research programs in psychology and linguistics, such as the theories of verb classes and construction grammar. We present a computational model which unifies these theories of the syntax–semantics link in a grounded word-learning task, and share preliminary results on a synthetic dataset produced following Beth Levin's (1993) study of verb classes. The model jointly acquires the syntactic and semantic properties of words in its language. It detects semantic coherence among classes of verbs and argument structures, and uses this knowledge to (1) refine its own syntactic representations and thus (2) better predict the meanings of novel words from syntactic cues. We plan to model syntactic bootstrapping at scale with this computational approach, and to test the power of the syntax–semantics link on naturalistic data without linguistic annotations.
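    The core intuition of syntactic bootstrapping can be sketched with a toy count-based predictor. The verbs, frames, and classes below are hypothetical stand-ins, and the talk's actual model is a joint probabilistic learner, not this simple lookup.

```python
from collections import Counter, defaultdict

# Hypothetical training data: (verb, syntactic frame, semantic class).
observations = [
    ("give",  "NP V NP NP", "transfer"),
    ("send",  "NP V NP NP", "transfer"),
    ("hand",  "NP V NP NP", "transfer"),
    ("break", "NP V NP",    "change-of-state"),
    ("melt",  "NP V NP",    "change-of-state"),
]

# Count how often each frame co-occurs with each semantic class.
frame_class = defaultdict(Counter)
for verb, frame, cls in observations:
    frame_class[frame][cls] += 1

def guess_class(frame):
    """Predict a novel verb's semantic class from its frame alone."""
    counts = frame_class[frame]
    return counts.most_common(1)[0][0] if counts else None

# A novel verb heard in a ditransitive frame is guessed to denote
# transfer before any of its semantics has been observed directly.
print(guess_class("NP V NP NP"))  # transfer
```

    The syntax-to-semantics direction shown here is half the story; the model described in the talk also runs the link in reverse, using semantic coherence to refine its syntactic representations.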


    Maxwell Nye

    Learning to Learn Program Sketches from Examples

    From a few examples, humans are able to quickly synthesize programs which perform desired behavior in a wide variety of domains. However, researchers have not yet built systems which mimic the human ability to flexibly combine recognition of learned patterns with explicit reasoning. In this work, we describe a novel neuro-symbolic system which synthesizes programs from examples by attempting to mimic this essential human ability.
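    The sketch-plus-search idea behind such neuro-symbolic systems can be illustrated in miniature. Assume (hypothetically) a fixed sketch `a*x + b` with two integer holes; a learned component would propose the sketch, and symbolic enumeration fills the holes to fit the examples. This is an illustration of the general approach, not the system described in the talk.

```python
import itertools

# Input/output examples the synthesized program must satisfy.
examples = [(1, 3), (4, 9), (10, 21)]          # consistent with f(x) = 2*x + 1

def sketch(a, b):
    """A program sketch with two holes: f(x) = a*x + b."""
    return lambda x: a * x + b

def fill_holes():
    """Symbolic search: enumerate hole values, check against examples."""
    for a, b in itertools.product(range(-5, 6), repeat=2):
        f = sketch(a, b)
        if all(f(x) == y for x, y in examples):
            return a, b

print(fill_holes())  # (2, 1)
```

    The division of labor is the point: pattern recognition narrows the search to a plausible sketch, and explicit enumeration guarantees the completed program actually satisfies the examples.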

    Additional Info

    Upcoming Cog Lunches

    • October 23, 2018 - Eli Pollock & Mika Braginsky
    • October 30, 2018 - Peng Qian, Jon Gauthier, & Maxwell Nye
    • November 13, 2018 - Anna Ivanova, Halie Olson, & Junyi Chu
    • November 20, 2018 - Mark Saddler, Jarrod Hicks, & Heather Kosakowski
    • November 27, 2018 - Kelsey Allen
    • December 4, 2018 - Daniel Czegel
    • December 11, 2018 - Malinda McPherson


    MIT Department of Brain and Cognitive Sciences

    Massachusetts Institute of Technology

    77 Massachusetts Avenue, Room 46-2005

    Cambridge, MA 02139-4307 | (617) 253-5748