The language of the law
If there is a conventional path to advanced studies in cognitive sciences, Eric Martínez certainly didn’t take it.
Martínez was in his second year at Harvard Law School in the spring of 2018 when he enrolled in his first class at MIT, Laboratory in Psycholinguistics, taught by Professor of Brain and Cognitive Sciences Ted Gibson. Martínez was struck by the way people spoke to each other in law school, and he just couldn’t shake his curiosity.
“Everyone was speaking in this lawyerly style, and this way of communicating seemed to extend to how the law was drafted and interpreted. I thought it was curious – what’s going on here?” Martínez recalled. “It drew my interest to the intersection between the fields of law and cognitive science, and to empirical questions of how law is interpreted and drafted by humans.”
In search of answers, Martínez enrolled in Gibson’s class. He continued his law studies, receiving his J.D. and passing the Massachusetts Bar in 2019. He started working in Gibson’s lab, and in 2020 he was accepted into the BCS PhD program.
His research has since been published in several journals, including one piece, “Poor writing, not specialized concepts, drives processing difficulty in legal language,” that won the satirical Ig Nobel Prize. Other works include “Even Lawyers Don’t Like Legalese” and, most recently, an analysis of ChatGPT’s bar exam performance. Martínez is expected to complete his studies this spring.
Here, Martínez discusses his research and experiences in the Department of Brain and Cognitive Sciences:
What was the journey that brought you to MIT?
The journey started when I was in law school. I was interested in legal doctrine, particularly how it was interpreted and drafted. I thought that to get more clarity on those issues, it would be useful to learn more about human cognition. I took a couple of classes here at MIT, including one with Ted Gibson, who is now my advisor. We started working on a project related to just that – how legal language is interpreted by people. We enjoyed working together, and afterward, Ted encouraged me to apply for the graduate program here.
I didn't know this at the time, but when I signed up for the class, Ted had been thinking about legal language for a long time. It was kind of serendipitous that I was also interested in studying it from an empirical perspective. At that time, though, I didn't have any sense of how I could leverage my skills and connect these areas. In Ted's class, it started to become clear how different tools could be applied to law. Ted is such an open-minded researcher. He saw the same potential that I did and was immediately a willing collaborator.
What has been the focus of your research?
The big question we've been working on surrounds the complexity of the law and legal language. How and why is it that laws are complicated for lawyers and nonlawyers alike to understand? We’ve written some papers on this topic. More recently, I've been interested in what that complexity looks like in the brain. How is legal language different from ordinary language in how it's processed? And what does it look like, cognitively and neurally, when we're reasoning about the law?
How are you going about answering these questions?
One of the first steps was to just figure out: What does legal language look like? For that, we used a method called computational corpus analysis, in which we examined a large collection of legal documents, laws, and contracts, and analyzed them for a wide variety of linguistic features. We drew on prior theoretical work by others characterizing the features of legal language and what makes it difficult to understand. The second step was to figure out which of those features actually make it hard to understand, and for that we have used behavioral experiments.
The next step was to see why legal language is written this way. There are legal doctrines that either expressly dictate or implicitly assume that laws need to be understandable to those who are required to comply with them, so this question has a lot of interesting and important legal implications.
From a policy standpoint, there have been efforts to simplify legal documents for the public at large, and one of our projects has been to figure out to what extent those efforts have been effective. It turns out that they haven't. But regardless of the results, it's important to measure their effectiveness. From a cognitive science perspective, a lot of work by Ted and others has shown that language in general is optimized for communicative efficiency. In legal language, that doesn't seem to be the case. Beyond the field of law, figuring out why can offer insight into the role of language and what it's designed for.
What are some of the potential future benefits of this research?
Empirical methods of corpus analysis are entering into legal practice. Judges at the highest levels – the appellate level and the Supreme Court – have been appealing to corpus analyses or experiments as a way to get at the meaning of legal text, both old legal texts like the Constitution and laws passed more recently. So, there’s clearly a benefit if these methods can be improved.
In terms of societal benefits, the law does not need to be so needlessly esoteric. I think there is a potential to simplify law so people understand the laws that they're supposed to comply with. Large language models and AI systems offer a potentially promising method of being able to simplify legal text, possibly even eventually in official capacities.
How has your time at BCS prepared you for the future?
I came to graduate school largely to develop an empirical toolkit to answer those questions surrounding law and cognitive science. It’s been unquestionably successful. I feel very confident I can go and apply my skills to study questions related to how lawyers and non-lawyers understand and apply the law, and the implications of that for broader questions of law and cognitive science.
Martínez was among the presenters at Brains on Brains, the Department of Brain and Cognitive Sciences' biennial symposium.