Learning new identities is crucial for effective social interaction. A critical aspect of this process is the integration of different images from the same face into a view-invariant representation that can be used for recognition. The representation of symmetrical viewpoints has been proposed to be a key computational step in achieving view-invariance. The aim of this study was to determine whether the representation of symmetrical viewpoints in face-selective regions is directly linked to the perception and recognition of face identity. In Experiment 1, we measured fMRI responses while male and female human participants viewed images of real faces from different viewpoints (−90, −45, 0, 45, and 90° from full-face view). Within the face regions, patterns of neural response to symmetrical views (−45 and 45° or −90 and 90°) were more similar than responses to nonsymmetrical views in the fusiform face area and superior temporal sulcus, but not in the occipital face area. In Experiment 2, participants made perceptual similarity judgements to pairs of face images. Images with symmetrical viewpoints were reported as being more similar than nonsymmetrical views. In Experiment 3, we asked whether symmetrical views also convey an advantage when learning new faces. We found that recognition was best when participants were tested with novel face images that were symmetrical to the learning viewpoint. Critically, the pattern of perceptual similarity and recognition across different viewpoints predicted the pattern of neural response in face-selective regions. Together, our results provide support for the functional value of symmetry as an intermediate step in generating view-invariant representations.
The amygdala is known to play an important role in the response to facial expressions that convey fear. However, it remains unclear whether the amygdala’s response to fear reflects its role in the interpretation of danger and threat, or whether it is to some extent activated by all facial expressions of emotion. Previous attempts to address this issue using neuroimaging have been confounded by differences in the use of control stimuli across studies. Here, we address this issue using a block-design functional magnetic resonance imaging paradigm, in which we compared the response to face images posing expressions of fear, anger, happiness, disgust, and sadness with a range of control conditions. The responses in the amygdala to different facial expressions were compared with the responses to a non-face condition (buildings), to mildly happy faces and to neutral faces. Results showed that only fear and anger elicited significantly greater responses compared with the control conditions involving faces. Overall, these findings are consistent with the role of the amygdala in processing threat, rather than in the processing of all facial expressions of emotion, and demonstrate the critical importance of the choice of comparison condition to the pattern of results.
The face-selective region of the right posterior superior temporal sulcus (pSTS) plays an important role in analysing facial expressions. However, it is less clear how facial expressions are represented in this region. In this study, we used the face composite effect to explore whether the pSTS contains a holistic or feature-based representation of facial expression. Aligned and misaligned composite images were created from the top and bottom halves of faces posing different expressions. In Experiment 1, participants performed a behavioural matching task in which they judged whether the top halves of two face images were the same or different. The ability to discriminate the top half of the face was affected by changes in the bottom half of the face when the images were aligned, but not when they were misaligned. This shows a holistic behavioural response to expression. In Experiment 2, we used fMR-adaptation to ask whether the pSTS has a corresponding holistic neural representation of expression. Aligned or misaligned images were presented in blocks that involved repeating the same image or in which the top or bottom half of the images changed. Increased neural responses were found in the right pSTS regardless of whether the change occurred in the top or bottom half of the image, showing that changes in expression were detected across all parts of the face. However, in contrast to the behavioural data, the pattern did not differ between aligned and misaligned stimuli. This suggests that the pSTS does not encode facial expressions holistically. In contrast to the pSTS, a holistic pattern of response to facial expression was found in the right inferior frontal gyrus (IFG). Together, these results suggest that pSTS reflects an early stage in the processing of facial expression in which facial features are represented independently.
Models of face processing suggest that the neural response in different face regions is selective for higher-level attributes of the face, such as identity and expression. However, it remains unclear to what extent the response in these regions can also be explained by more basic organizing principles. Here, we used functional magnetic resonance imaging multivariate pattern analysis (fMRI-MVPA) to ask whether spatial patterns of response in the core face regions (occipital face area – OFA, fusiform face area – FFA, superior temporal sulcus – STS) can be predicted across different participants by lower-level properties of the stimulus. First, we compared the neural response to face identity and viewpoint, by showing images of different identities from different viewpoints. The patterns of neural response in the core face regions were predicted by the viewpoint, but not the identity of the face. Next, we compared the neural response to viewpoint and expression, by showing images with different expressions from different viewpoints. Again, viewpoint, but not expression, predicted patterns of response in face regions. Finally, we show that the effect of viewpoint in both experiments could be explained by changes in low-level image properties. Our results suggest that a key determinant of the neural representation in these core face regions involves lower-level image properties rather than an explicit representation of higher-level attributes of the face. The advantage of a relatively image-based representation is that it can be used flexibly in the perception of faces.
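The logic of the cross-participant pattern analysis described above can be illustrated with a toy sketch. This is not the study's actual pipeline: the data below are synthetic, and the voxel count, noise level, and correlation-based classifier are illustrative assumptions. The sketch shows how a condition (here, viewpoint) can be "predicted" across participants when the pattern of response it evokes in one participant correlates most strongly with the pattern the same condition evokes in another.

```python
# Hypothetical sketch of a cross-participant MVPA analysis: response
# patterns are correlated across participants, and a viewpoint counts
# as predicted when the same-viewpoint correlation exceeds all
# different-viewpoint correlations. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 100
viewpoints = [-90, -45, 0, 45, 90]

# A shared viewpoint-driven signal, plus participant-specific noise
signal = {v: rng.normal(size=n_voxels) for v in viewpoints}
p1 = {v: signal[v] + 0.5 * rng.normal(size=n_voxels) for v in viewpoints}
p2 = {v: signal[v] + 0.5 * rng.normal(size=n_voxels) for v in viewpoints}

def corr(a, b):
    """Pearson correlation between two voxel patterns."""
    return np.corrcoef(a, b)[0, 1]

correct = 0
for v in viewpoints:
    # Predict participant 2's viewpoint from participant 1's patterns
    sims = {u: corr(p2[v], p1[u]) for u in viewpoints}
    if max(sims, key=sims.get) == v:
        correct += 1

accuracy = correct / len(viewpoints)
print(f"cross-participant viewpoint classification accuracy: {accuracy:.2f}")
```

Because the synthetic signal is shared across "participants" while identity- or expression-like information is absent, only viewpoint structure survives the cross-participant correlation, mirroring the pattern of results reported above.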