
About
Roger Levy joined the Department of Brain and Cognitive Sciences in 2016. Levy received his BS in mathematics from the University of Arizona in 1996, followed by a year as a Fulbright Fellow at the Inter-University Program for Chinese Language Study in Taipei, Taiwan, and a year as a research student in biological anthropology at the University of Tokyo. In 2005, he completed his doctoral work at Stanford University under the direction of Christopher Manning, and then spent a year as a UK Economic and Social Research Council Postdoctoral Fellow at the University of Edinburgh. Before his appointment at MIT he was a faculty member in the Department of Linguistics at the University of California, San Diego, where he founded the world’s first Computational Psycholinguistics Laboratory. Levy's awards include an Alfred P. Sloan Research Fellowship, an NSF Faculty Early Career Development (CAREER) Award, a fellowship at the Center for Advanced Study in the Behavioral Sciences, and a Guggenheim Fellowship. He was awarded the MIT School of Science Teaching Prize for Undergraduate Education in 2023, serves as President of the Cognitive Science Society in 2024–2025, and will serve as Chair of the MIT Faculty in 2025–2027.
Research
Roger Levy asks theoretical and applied questions about the processing and acquisition of natural language, with a focus on how linguistic communication resolves uncertainty over a potentially unbounded set of possible signals and meanings. His research sits in cognitive science at the intersection of artificial intelligence, cognitive psychology, and linguistics. Levy’s research program investigates how knowledge and cognitive resources are deployed to manage uncertainty and derive meaning in language comprehension and production, the character of the brain's representations that support these operations, and how the relevant knowledge is acquired. Combining computational modeling of large data sets with psycholinguistic experimentation, Levy’s work furthers our understanding of the cognitive underpinnings of language processing and helps us design models and algorithms that will allow machines to process human language.
Publications
Representative Publications (please see Google Scholar for a comprehensive list):
Shain, C., Meister, C., Pimentel, T., Cotterell, R., & Levy, R. P. (2024). Large-scale evidence for logarithmic effects of word predictability on reading time. Proceedings of the National Academy of Sciences, 121(10), e2307876121.
Jiang, G., Hofer, M., Mao, J., Wong, L., Tenenbaum, J. B., & Levy, R. P. (2024). Finding structure in logographic writing with library learning. Proceedings of the 46th Annual Meeting of the Cognitive Science Society.
Wilcox, E. G., Futrell, R., & Levy, R. (2023). Using Computational Models to Test Syntactic Learnability. Linguistic Inquiry, 1–44.
Hu, J., & Levy, R. (2023). Prompting is not a substitute for probability measurements in large language models. In H. Bouamor, J. Pino & K. Bali (Eds.), Proceedings of the 2023 conference on empirical methods in natural language processing (pp. 5040–5060). Association for Computational Linguistics.
Olausson, T., Gu, A., Lipkin, B., Zhang, C., Solar-Lezama, A., Tenenbaum, J., & Levy, R. (2023). LINC: A neurosymbolic approach for logical reasoning by combining language models with first-order logic provers. In H. Bouamor, J. Pino & K. Bali (Eds.), Proceedings of the 2023 conference on empirical methods in natural language processing (pp. 5153–5176). Association for Computational Linguistics.
Hahn, M., Futrell, R., Levy, R. P., & Gibson, E. (2022). A resource-rational model of human processing of recursive linguistic structure. Proceedings of the National Academy of Sciences, 119(43), e2122602119.
Meister, C., Pimentel, T., Haller, P., Jäger, L., Cotterell, R., & Levy, R. (2021). Revisiting the Uniform Information Density hypothesis. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 963–980.
Futrell, R., Gibson, E., & Levy, R. P. (2020). Lossy-context surprisal: An information-theoretic model of memory effects in sentence processing. Cognitive Science, 44, 1–54.
Boyce, V., Futrell, R., & Levy, R. (2020). Maze made easy: Better and easier measurement of incremental processing difficulty. Journal of Memory and Language, 111, 1–13.
Hu, J., Gauthier, J., Qian, P., Wilcox, E., & Levy, R. P. (2020). A systematic assessment of syntactic generalization in neural language models. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 1725–1744.
Gauthier, J., & Levy, R. P. (2019). Linking artificial and human neural representations of language. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, 529–539.
Bergen, L., Levy, R., & Goodman, N. (2016). Pragmatic reasoning through semantic inference. Semantics and Pragmatics, 9(20).
Barr, D. J., Levy, R., Scheepers, C., & Tily, H. J. (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68(3), 255–278.
Levy, R. (2008). A noisy-channel model of rational human sentence comprehension under uncertain input. Proceedings of the 13th Conference on Empirical Methods in Natural Language Processing, 234–243.
Levy, R. (2008). Expectation-based syntactic comprehension. Cognition, 106(3), 1126–1177.
Levy, R., & Jaeger, T. F. (2007). Speakers optimize information density through syntactic reduction. Advances in Neural Information Processing Systems 19 (NIPS).