Color Decoder: Looking deeper into the brain to understand how color meshes with forms and faces

October 19, 2016

by Elizabeth Dougherty | MIT Spectrum

Rosa Lafer-Sousa studies how the brain processes complex visual images. Photo: Ken Richardson

Rosa Lafer-Sousa exhibited her first piece of artwork at age four. She’s been painting ever since—for enjoyment and for the challenge—but it wasn’t until college that she put her passion for art together with an abiding interest in science. “Color was the most difficult topic covered in my undergraduate neuroscience classes,” says Lafer-Sousa, a fourth-year MIT brain and cognitive sciences doctoral student. “It is also a challenging aspect of painting.”

Aspiring painters often start by learning to draw in black and white. Color adds complexity. There are variations in hue—the quality most people are referring to when they use the word “color”—but also in saturation and brightness. And the perception of a color changes depending on its surroundings. “The moment you take a color from palette to canvas, it changes based on the colors already there,” says Lafer-Sousa.

The brain’s visual system has specialized neurons to process these phenomena. Signals from the retina separate into parallel tracks representing lines, motion, and color. Deeper in the visual system, the compartmentalization becomes more sophisticated, with regions dedicated to recognizing faces, such as the fusiform face area (FFA) discovered by MIT neuroscientist Nancy Kanwisher ’80, PhD ’86, as well as regions for bodies, shapes, and places.

As an undergraduate at Wellesley College, Lafer-Sousa worked with her advisor Bevil Conway, a visual neuroscientist and artist, to look at how color is processed in these deeper regions of the visual pathway. Using functional magnetic resonance imaging (fMRI) in macaque monkeys, she found that the ventral visual pathway, deep in the visual cortex, is systematically organized into non-overlapping sets of regions that are specialized not only for faces and places, but also for color.

The finding of independent regions so deep in the cortex biased to process color signals was surprising, says Lafer-Sousa, because researchers historically have considered color as a low-level feature of visual input, similar to lines. “The assumption was that by the time you’re in this high-level cortex that deals with objects, color is already bound to form,” she says. “So why do we see a segregation? What are these color signals doing?”

To answer these questions, Lafer-Sousa came to the MIT McGovern Institute for Brain Research to work with Kanwisher, who is the Walter A. Rosenblith Professor of Cognitive Neuroscience. Kanwisher’s longstanding expertise in functional imaging was one draw, but her lab also is exploring novel imaging methods that will allow Lafer-Sousa to study how the brain processes complex visual images, such as faces with different skin tones that convey information about mood or health status.

First, Lafer-Sousa had to determine whether the organization she found in the macaque brain was the same in the human brain. With Kanwisher and Conway, she repeated her fMRI experiment on humans and found the same patterns. “Color regions were neatly sandwiched between face and scene regions, without overlap, mirroring what we saw in the macaque,” says Lafer-Sousa, who published the findings in February in the Journal of Neuroscience.

Her next step involves using diffusion tensor imaging (DTI) to reveal the connections between these specialized regions. The work is on the cutting edge of imaging because the regions are small and close together, but Lafer-Sousa has help. Zeynep Saygin PhD ’12, a postdoc in the Kanwisher Lab, recently had success in using this approach to look at the connectivity of the FFA and the visual word form area, which are as close spatially as Lafer-Sousa’s sandwiched regions.

“I was apprehensive, not sure we’d be able to separate face and color streams, but Zeynep showed you can separate such fine streams and see what’s connected,” says Lafer-Sousa, who will also be working with experts in DTI at Massachusetts General Hospital to do this work. “If it’s possible with these technologies, I’m in a position to do it.”