The face-selective region of the right posterior superior temporal sulcus (pSTS) plays an important role in analysing facial expressions. However, it is less clear how facial expressions are represented in this region. In this study, we used the face composite effect to explore whether the pSTS contains a holistic or feature-based representation of facial expression. Aligned and misaligned composite images were created from the top and bottom halves of faces posing different expressions. In Experiment 1, participants performed a behavioural matching task in which they judged whether the top halves of two images were the same or different. The ability to discriminate the top half of the face was affected by changes in the bottom half when the images were aligned, but not when they were misaligned. This shows a holistic behavioural response to expression. In Experiment 2, we used fMR-adaptation to ask whether the pSTS has a corresponding holistic neural representation of expression. Aligned or misaligned images were presented in blocks that involved repeating the same image or in which the top or bottom half of the images changed. Increased neural responses were found in the right pSTS regardless of whether the change occurred in the top or bottom of the image, showing that changes in expression were detected across all parts of the face. However, in contrast to the behavioural data, the pattern did not differ between aligned and misaligned stimuli. This suggests that the pSTS does not encode facial expressions holistically. In contrast to the pSTS, a holistic pattern of response to facial expression was found in the right inferior frontal gyrus (IFG). Together, these results suggest that the pSTS reflects an early stage in the processing of facial expression in which facial features are represented independently.
Existing measures of breast size dissatisfaction have poor ecological validity or have not been fully evaluated in terms of psychometric properties. Here, we report on the development of the Breast Size Rating Scale (BSRS), a novel measure of breast size dissatisfaction consisting of 14 computer-generated images varying in breast size alone. Study 1 (N=107) supported the scale’s construct validity, insofar as participants were able to correctly order the images in terms of breast size. Study 2 (N=234) provided evidence of the test-retest reliability of BSRS-derived scores after 3 months. Studies 3 (N=730) and 4 (N=234) provided evidence of the convergent validity of BSRS-derived breast size dissatisfaction scores, which were significantly associated with a range of measures of body image. The BSRS provides a useful tool for researchers examining women’s breast size dissatisfaction.
Baron-Cohen’s extreme male brain theory proposes that autism results from elevated prenatal testosterone levels. In the present study, we assessed possible correlated effects of androgen exposure on adult morphology and, in particular, the development of facial features associated with masculinity. We created composite images capturing statistical regularities in facial appearance associated with high and low Autism-Spectrum Quotient (AQ) scores. In three experiments, we assessed correlations between perceived facial masculinity and AQ scores. In Experiment 1, observers selected the high-AQ males as more masculine. We replicated this result in Experiment 2, using different photographs, composite-image methods, and observers. There was no association of masculinity and AQ scores for female faces in either study. In Experiment 3, we created high- and low-AQ male composites from the five AQ subscales. High-AQ images were rated more masculine on each of the subscales. We discuss these findings with respect to the organizational-activational hypothesis of testosterone activity during development.
A central question in natural vision research is how we allocate fixations to extract informative cues for scene perception. With high-quality images, psychological and computational studies have made significant progress in understanding and predicting human gaze allocation during scene exploration. However, it is unclear whether these findings generalise to degraded naturalistic visual inputs. In this eye-tracking and computational study, we systematically distorted both man-made and natural scenes with a Gaussian low-pass filter, a circular averaging filter, and additive Gaussian white noise, and monitored participants’ gaze behaviour while they assessed perceived image quality. Compared with the original high-quality images, distorted images attracted fewer fixations but longer fixation durations, shorter saccade distances, and a stronger central fixation bias. This impact of image noise manipulation on gaze distribution was determined mainly by noise intensity rather than noise type, and was more pronounced for natural scenes than for man-made scenes. We further compared four high-performing visual attention models in predicting human gaze allocation in degraded scenes, and found that model performance lacked human-like sensitivity to noise type and intensity, and was considerably worse than human performance measured as inter-observer variance. Furthermore, the central fixation bias was a major predictor of human gaze allocation, and became more prominent with increased noise intensity. Our results indicate a crucial role of external noise intensity in determining scene-viewing gaze behaviour, which should be considered in the development of realistic human-vision-inspired attention models.
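The three distortion types named above can be illustrated in a few lines. The sketch below assumes greyscale images with intensities in [0, 1]; the filter parameters (`sigma`, `radius`, `sd`) are arbitrary placeholders, not the levels used in the study.

```python
import numpy as np

def _filter2d(img, kernel):
    # Same-size 2-D filtering with reflective padding (kernels here are
    # symmetric, so correlation and convolution coincide)
    r = kernel.shape[0] // 2
    padded = np.pad(img, r, mode='reflect')
    windows = np.lib.stride_tricks.sliding_window_view(padded, kernel.shape)
    return np.einsum('ijkl,kl->ij', windows, kernel)

def gaussian_lowpass(img, sigma=2.0):
    # Gaussian low-pass filter: attenuates high spatial frequencies (blur)
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    g1d = np.exp(-x**2 / (2 * sigma**2))
    kernel = np.outer(g1d, g1d)
    return _filter2d(img, kernel / kernel.sum())

def circular_average(img, radius=3):
    # Mean filter with a disk-shaped footprint
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = (xx**2 + yy**2 <= radius**2).astype(float)
    return _filter2d(img, disk / disk.sum())

def add_white_noise(img, sd=0.1, seed=None):
    # Additive Gaussian white noise, clipped to the [0, 1] intensity range
    rng = np.random.default_rng(seed)
    return np.clip(img + rng.normal(0.0, sd, img.shape), 0.0, 1.0)
```

Varying `sigma`, `radius`, or `sd` gives a graded manipulation of noise intensity within each noise type, which is the key factor the study identifies.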
Why attention lapses during prolonged tasks is debated; in particular, it is unclear whether errors are a consequence of under-arousal or of exerted effort. To explore this, we investigated whether increased impulsivity is associated with effortful processing, modifying the demand of a task by presenting it at a quiet intensity. Here, we consider whether attending at low but detectable levels affects impulsivity in a population with intact hearing. A modification of the Sustained Attention to Response Task was used with auditory stimuli at two levels: the participants’ personal “lowest detectable” level and a “normal speaking” level. At the quiet intensity, participants made more impulsive responses than when listening at a normal speaking level. These errors were not due to a failure in discrimination. The findings suggest an increase in processing time for auditory stimuli at low levels that exceeds the time needed to interrupt a planned habitual motor response, leading to a more impulsive and erroneous response style. These findings have important implications for understanding the nature of impulsivity in relation to effortful processing, and may explain why a high proportion of individuals with hearing loss are also diagnosed with Attention Deficit Hyperactivity Disorder.
Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was enrolled with an individual’s ‘face-average’ – a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user’s face than when it stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not come at the cost of reduced rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as the development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
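As a rough sketch of the averaging idea (not the exact enrolment pipeline used in the study), a ‘face-average’ can be approximated as a pixelwise mean over several photographs of the same person. Published averaging methods typically warp each photograph to a common landmark shape first; that alignment step is assumed already done here.

```python
import numpy as np

def face_average(images):
    # Pixelwise mean of same-size, pre-aligned face images of one person.
    # Idiosyncratic lighting, pose, and expression tend to cancel across
    # images, leaving the stable identity-specific structure.
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    return stack.mean(axis=0)
```

The appeal of this representation is that variation specific to any one photograph is averaged away, so the stored template generalises better across everyday viewing conditions than any single enrolment image.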
Matching two different images of a face is a very easy task for familiar viewers, but much harder for unfamiliar viewers. Despite this, use of photo-ID is widespread, and people appear not to know how unreliable it is. We present a series of experiments investigating bias both when performing a matching task and when predicting other people’s performance. Participants saw pairs of faces and were asked to make a same/different judgement, after which they were asked to predict how well other people, unfamiliar with these faces, would perform. In four experiments we show different groups of participants familiar and unfamiliar faces, manipulating this in different ways: celebrities in experiments 1-3 and personally familiar faces in experiment 4. The results consistently show that people match images of familiar faces more accurately than unfamiliar faces. However, people also reliably predict that the faces they themselves know will be more accurately matched by different viewers. This bias is discussed in the context of current theoretical debates about face recognition, and we suggest that it may underlie the continued use of photo-ID, despite the availability of evidence about its unreliability.
Research has suggested that altering the perceived shape and size of the body significantly affects the perception of somatic events. The current study investigated how multisensory illusions applied to the body altered tactile perception, using the somatic signal detection task. Thirty-one healthy volunteers were asked to report the presence or absence of near-threshold tactile stimuli delivered to the index finger under three multisensory illusion conditions: stretched finger, shrunken finger and detached finger, as well as a veridical baseline condition. Both stretching and shrinking the stimulated finger enhanced correct touch detections; however, the mechanisms underlying this increase were found to be different. In contrast, the detached appearance reduced false touch reports, possibly due to reduced tactile noise as a result of attention being directed to the tip of the finger only. These findings suggest that distorted representations of the body can have different modulatory effects on attention to touch, and provide a link between perceived body representation and somatosensory decision-making.
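Performance in a signal detection task of this kind is conventionally decomposed into sensitivity and response bias from the hit and false-alarm counts. The following is a minimal sketch of that standard computation (illustrative only, not necessarily the authors’ exact analysis); the log-linear correction is one common way to avoid infinite z-scores when a rate is 0 or 1.

```python
from statistics import NormalDist

Z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    # Log-linear corrected hit and false-alarm rates
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = Z(hit_rate) - Z(fa_rate)           # sensitivity (d')
    criterion = -0.5 * (Z(hit_rate) + Z(fa_rate))  # response bias (c)
    return d_prime, criterion
```

Separating the two indices matters here: an illusion could raise correct detections by genuinely improving sensitivity or merely by shifting the response criterion, and the two explanations carry different mechanistic implications.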
There is a plethora of cross-sectional work on maternal perceptions of child weight status showing that mothers typically do not classify their overweight child as overweight according to commonly used clinical criteria. Awareness of overweight in their child is regarded as an important prerequisite for mothers to initiate appropriate action. The gap in the literature is whether, when mothers do classify their overweight child’s weight status correctly, this is associated with a positive outcome for the child’s body mass index (BMI) at a later stage.
Several types of striped patterns have been reported to cause adverse sensations described as visual discomfort. Previous research using op-art-based stimuli has demonstrated that spurious eye movement signals can cause the experience of illusory motion, or shimmering effects, which might be perceived as uncomfortable. Whilst shimmering effects are one cause of discomfort, another possible contributor is excessive neural responses: because striped patterns lack the statistical redundancy typical of natural images, they perhaps cannot be encoded efficiently. If so, this should be evident in the amplitude of the EEG response. This study found that the stimuli judged most comfortable were also those with the lowest EEG amplitude. This provides some support for the idea that excessive neural responses might also contribute to discomfort judgements in normal populations, using stimuli controlled for perceived contrast.