The highest-performing natural language processing models generally solve language tasks by learning statistical regularities from sequences of arbitrary tokens supplied as training data. Humans have a much richer notion of language, however. For one thing, they understand that language refers to objects and actions in the real world, which enables them to use language to efficiently transmit instructions on how to accomplish goals. For another, they learn to focus their attention on only those spans of text that are important for accomplishing the task at hand. In this thesis, we attempt to improve machine models of language by taking inspiration from these aspects of human language.
The first half of this thesis concerns understanding instructional ``how-to'' language, such as ``Add remaining flour. Then mix.'' The meaning is ambiguous without context: Add how much flour to what? Mix what, using what tools, until when? We show how to successfully parse this language by maintaining a distribution over the state of a theoretical kitchen that is updated as each instruction is interpreted. We also show how to aid interpretation when videos of the task are available, by training a joint vision-language model on 300,000 YouTube cooking videos.
The second half discusses how people's ability to focus on the important parts of a passage can be exploited to improve the performance of an automatic question-answering system on a multiple-choice reading comprehension task. We record the gaze locations of hundreds of subjects as they read and answer questions about newspaper articles. We then train a state-of-the-art transformer model to predict human attention as well as correct answers, and find that this leads to a substantial boost in performance over merely training the model to predict correct answers.