“Our study tested the extent to which this plasticity, or compensation, between seeing and hearing exists by encoding basic visual patterns into auditory patterns with the aid of a technical device we refer to as a sensory substitution device. With the use of functional magnetic resonance imaging (fMRI), we can determine where in the brain this compensatory plasticity is taking place.”
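The article does not spell out how the device encodes images as sound, but a common sensory-substitution scheme (used by devices such as The vOICe) scans image columns left to right over time, maps pixel rows to pitch, and maps brightness to loudness. The Python sketch below illustrates that general idea under those assumptions; the function name `image_to_tones` and its parameters are hypothetical stand-ins, not the study's actual device.

```python
import numpy as np

def image_to_tones(image, duration=1.0, f_min=200.0, f_max=4000.0, rate=44100):
    """Sonify a 2D grayscale image (values in [0, 1]).

    Columns are scanned left to right over `duration` seconds; each row is
    assigned a sine tone whose frequency rises toward the top of the image,
    and each tone's amplitude follows pixel brightness. This is one common
    sensory-substitution mapping, assumed here for illustration only; the
    study's device may use a different encoding.
    """
    n_rows, n_cols = image.shape
    samples_per_col = int(rate * duration / n_cols)
    t = np.arange(samples_per_col) / rate
    # Log-spaced frequencies: top image row -> highest pitch.
    freqs = np.geomspace(f_max, f_min, n_rows)
    chunks = []
    for col in range(n_cols):
        # One sine per row, weighted by that pixel's brightness, summed.
        tones = np.sin(2 * np.pi * freqs[:, None] * t)      # (n_rows, samples)
        chunks.append((image[:, col, None] * tones).sum(axis=0))
    audio = np.concatenate(chunks)
    peak = np.abs(audio).max()
    return audio / peak if peak > 0 else audio              # normalized waveform
```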
Face recognition in humans is carried out by a network of specialized cortical regions. How these regions develop has remained controversial.
Because faces are so important for social behavior, many researchers believe that the neural mechanisms for face recognition are either innate in primates or depend on early visual experience with faces.
“Our results from people who are blind imply that fusiform face area development does not depend on experience with actual visual faces but on exposure to the geometry of facial configurations, which can be conveyed by other sensory modalities,” Rauschecker adds.
Paula Plaza, Ph.D., one of the lead authors of the study, who is now at Universidad Andres Bello, Chile, says, “Our study demonstrates that the fusiform face area encodes the ‘concept’ of a face regardless of the input channel, or the visual experience, which is an important discovery.”
Six people who were blind and ten sighted people, who served as controls, underwent three rounds of functional MRI scanning to determine which parts of the brain were activated as the images were translated into sound.
The scientists found that sound-evoked activation in people who are blind occurred primarily in the left fusiform face area, whereas face processing in sighted people occurred mostly in the right fusiform face area.
Revolutionizing Perception
“We believe the left/right difference between people who are and aren’t blind may have to do with how the left and right sides of the fusiform area process faces – either as connected patterns or as separate parts, which may be an important clue in helping us refine our sensory substitution device,” says Rauschecker, who is also co-director of the Center for Neuroengineering at Georgetown University.
Currently, with their device, people who are blind can recognize a basic ‘cartoon’ face (such as an emoji happy face) when it is transcribed into sound patterns. Recognizing faces via sounds is a time-intensive process that requires many practice sessions.
Each session began with participants learning to recognize simple geometric shapes, such as horizontal and vertical lines; the complexity of the stimuli was then gradually increased, so that the lines formed shapes such as houses or faces, which in turn became more complex (tall versus wide houses, happy versus sad faces).
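As a toy illustration of that progression, bitmaps of increasing complexity could be rendered through a sonifier like the hypothetical `image_to_tones` sketched earlier; the shapes below are illustrative stand-ins, not the study's actual training stimuli.

```python
import numpy as np

def line(n=8):
    img = np.zeros((n, n))
    img[n // 2, :] = 1.0                         # a single horizontal line
    return img

def house(n=8):
    img = np.zeros((n, n))
    img[n // 2:, 1:-1] = 1.0                     # square body
    for r in range(n // 2):                      # triangular roof
        img[r, n // 2 - r - 1:n // 2 + r + 1] = 1.0
    return img

def happy_face(n=8):
    img = np.zeros((n, n))
    img[2, 2] = img[2, n - 3] = 1.0              # eyes
    img[n - 3, 2:n - 2] = 1.0                    # mouth
    img[n - 4, 1] = img[n - 4, n - 2] = 1.0      # upturned corners
    return img

# Stimuli of increasing complexity, each rendered as a one-second sound.
# Requires the hypothetical image_to_tones() from the earlier sketch.
for stimulus in (line(), house(), happy_face()):
    audio = image_to_tones(stimulus)
```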
Ultimately, the scientists would like to use pictures of real faces and houses with their device, but they note that they would first have to greatly increase its resolution.
“We would love to find out whether it is possible for people who are blind to learn to recognize individuals from their pictures. This may need a lot more practice with our device but now that we’ve pinpointed the region of the brain where the translation is taking place, we may have a better handle on how to fine-tune our processes,” Rauschecker concludes.
Reference:
- Sound-encoded faces activate the left fusiform face area in the early blind – PLOS ONE (https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0286512)
Source: Eurekalert