Low-quality images are problematic for face identification, for example when the police identify faces from CCTV images. Here, we test whether face averages, each comprising multiple poor-quality images, can improve both human and computer recognition. We created averages from multiple pixelated or nonpixelated images and compared recognition accuracy for these averages with accuracy for the individual exemplar images. To provide a broad assessment of the potential benefits of this method, we tested human observers (n = 88; Experiment 1), and also computer recognition, using a smartphone application (Experiment 2) and a commercial one-to-many face recognition system used in forensic settings (Experiment 3). The third experiment used large image databases of 900 ambient images and 7,980 passport images. In all three experiments, we found a substantial increase in performance when averaging multiple pixelated images of a person’s face. These results have implications for forensic settings in which faces are identified from poor-quality images, such as CCTV.
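The averaging manipulation at the heart of these experiments can be sketched in a few lines. This is a toy illustration, not the stimulus pipeline used in the study: a smooth array stands in for an aligned grayscale face, and `pixelate` is a crude block-averaging stand-in for real pixelation.

```python
import numpy as np

def pixelate(img, block):
    """Crudely pixelate an image by block-averaging then upsampling.
    Image dimensions are assumed to be divisible by `block`."""
    h, w = img.shape
    small = img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    return np.kron(small, np.ones((block, block)))

def average_images(images):
    """Pixel-wise mean of equally sized grayscale images: image-specific
    noise tends to cancel while shared facial structure is preserved."""
    return np.stack([img.astype(float) for img in images]).mean(axis=0)

rng = np.random.default_rng(0)
x = np.linspace(0, 3, 64)
face = np.outer(np.sin(x), np.cos(x))  # smooth stand-in for a face image

# Twenty degraded exemplars: the same face plus independent noise, pixelated.
exemplars = [pixelate(face + 1.0 * rng.standard_normal(face.shape), 8)
             for _ in range(20)]
avg = average_images(exemplars)

# The average lies closer to the original than a single degraded exemplar.
err_one = np.abs(exemplars[0] - face).mean()
err_avg = np.abs(avg - face).mean()
```

The averaging step only removes image-specific degradation; in the actual experiments each exemplar is a different photograph, so averaging also smooths over pose, lighting, and camera differences.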
Researchers have long been interested in how social evaluations are made based on first impressions of faces. It is also important to consider the level of agreement in such evaluations across raters and what this may tell us. Typically, high levels of inter-rater agreement for facial judgements are reported, but the measures used may be misleading. At present, studies commonly report Cronbach’s α as a way to quantify agreement, although there are several problems with this measure. Most importantly, because researchers treat raters as items, Cronbach’s α is inflated by larger rater samples even when the level of agreement between raters is fixed. Here, we considered several alternative measures and investigated whether these better discriminate between traits that were predicted to show low (parental resemblance), intermediate (attractiveness, dominance, trustworthiness), and high (age, gender) levels of agreement. Importantly, the level of inter-rater agreement has not previously been studied for many of these traits. In addition, we investigated whether familiar faces resulted in differing levels of agreement in comparison with unfamiliar faces. Our results suggest that alternative measures may prove more informative than Cronbach’s α when determining how well raters agree in their judgements. Further, we found no apparent influence of familiarity on levels of agreement. Finally, we show that, like attractiveness, both trustworthiness and dominance show significant levels of private taste (personal or idiosyncratic rater perceptions), although shared taste (perceptions shared with other raters) explains similar levels of variance in people’s perceptions. In conclusion, we recommend that researchers investigating social judgements of faces consider alternatives to Cronbach’s α, but should also be prepared to examine both the potential value and origin of private taste, as these might prove informative.
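The inflation of Cronbach's α with rater numbers can be demonstrated with a short simulation (a minimal sketch; variable names and the noise level are illustrative). Raters share a common signal plus independent noise, so their pairwise agreement is fixed, yet α grows towards 1 as more raters are added.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for a (faces x raters) matrix, treating raters as items."""
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
true_signal = rng.standard_normal(200)  # shared component of judgements of 200 faces

def simulate_raters(k, noise=2.0):
    """k raters who each see the same signal plus independent noise, so the
    pairwise inter-rater correlation stays fixed (~0.2) regardless of k."""
    return true_signal[:, None] + noise * rng.standard_normal((200, k))

alpha_small = cronbach_alpha(simulate_raters(5))    # ~0.56 in expectation
alpha_large = cronbach_alpha(simulate_raters(50))   # ~0.93 in expectation
```

The closed form here is the Spearman–Brown relation, α = kr / (1 + (k − 1)r), which approaches 1 for any positive pairwise correlation r as the number of raters k grows: α conflates rater numbers with agreement.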
Models of face processing suggest that the neural response in different face regions is selective for higher-level attributes of the face, such as identity and expression. However, it remains unclear to what extent the response in these regions can also be explained by more basic organizing principles. Here, we used functional magnetic resonance imaging multivariate pattern analysis (fMRI-MVPA) to ask whether spatial patterns of response in the core face regions (occipital face area – OFA, fusiform face area – FFA, superior temporal sulcus – STS) can be predicted across different participants by lower level properties of the stimulus. First, we compared the neural response to face identity and viewpoint, by showing images of different identities from different viewpoints. The patterns of neural response in the core face regions were predicted by the viewpoint, but not the identity of the face. Next, we compared the neural response to viewpoint and expression, by showing images with different expressions from different viewpoints. Again, viewpoint, but not expression, predicted patterns of response in face regions. Finally, we show that the effect of viewpoint in both experiments could be explained by changes in low-level image properties. Our results suggest that a key determinant of the neural representation in these core face regions involves lower-level image properties rather than an explicit representation of higher-level attributes in the face. The advantage of a relatively image-based representation is that it can be used flexibly in the perception of faces.
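Cross-participant pattern prediction of this kind is often implemented as a correlation-based MVPA. The sketch below is illustrative only: the function name, noise level, and toy data are assumptions, and real analyses involve preprocessing and alignment steps omitted here. It asks whether a condition's response pattern in one participant correlates best with the same condition's pattern in another participant.

```python
import numpy as np

def condition_identifiable(patterns_a, patterns_b):
    """Correlation-based cross-participant MVPA sketch.

    patterns_a, patterns_b: (conditions x voxels) response patterns from two
    participants, assumed already aligned to a common space. Returns, for
    each condition, whether its pattern in participant A correlates most
    strongly with the same condition's pattern in participant B.
    """
    n = len(patterns_a)
    corr = np.corrcoef(patterns_a, patterns_b)[:n, n:]  # A-vs-B correlations
    return corr.argmax(axis=1) == np.arange(n)

# Toy data: four conditions (e.g. viewpoints) share a pattern across
# participants, plus participant-specific noise.
rng = np.random.default_rng(2)
shared = rng.standard_normal((4, 50))
participant_a = shared + 0.3 * rng.standard_normal((4, 50))
participant_b = shared + 0.3 * rng.standard_normal((4, 50))
hits = condition_identifiable(participant_a, participant_b)
```

On this logic, the finding above corresponds to viewpoint-labelled patterns being identifiable across participants while identity- and expression-labelled patterns are not.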
Background: Infants and children travel using passports that are typically valid for five years (e.g. Canada, United Kingdom, United States and Australia). These individuals may also need to be identified using images taken from videos and other sources in forensic situations including child exploitation cases. However, few researchers have examined how useful these images are as a means of identification.
Methods: We investigated the effectiveness of photo identification for infants and children using a face matching task, where participants were presented with two images simultaneously and asked whether the images depicted the same child or two different children. In Experiment 1, both images showed an infant (<1 year old), whereas in Experiment 2, one image again showed an infant but the second image of the child was taken at 4–5 years of age. In Experiments 3a and 3b, we asked participants to complete shortened versions of both these tasks (selecting the most difficult trials) as well as the short version of the Glasgow Face Matching Test. Finally, in Experiment 4, we investigated whether information regarding the sex of the infants and children could be accurately perceived from the images.
Results: In Experiment 1, we found low levels of performance (72% accuracy) for matching two infant photos. In Experiment 2, performance was lower still (64% accuracy) when an infant image and a child image were presented, given the significant changes in appearance that occur over the first five years of life. In Experiments 3a and 3b, when participants completed both these tasks, as well as a measure of adult face matching ability, we found the lowest performance for the two infant tasks, along with mixed evidence of within-person correlations in sensitivity across the three tasks. Using only same-sex pairings on mismatch trials, in comparison with random pairings, had little effect on performance measures. In Experiment 4, accuracy when judging the sex of infants was at chance levels for one image set and above chance (although still low) for the other. As expected, participants were able to judge the sex of children (aged 4–5) from their faces.
Discussion: Identity matching with infant and child images resulted in low levels of performance, which were significantly worse than for an adult face matching task. Taken together, the results of the experiments presented here provide evidence that child facial photographs are ineffective for use in real-world identification.
A fundamental issue in testing body image perception is how to present the test stimuli. Previous studies have almost exclusively used images of bodies viewed in front-view, but this potentially obscures key visual cues used to judge adiposity, reducing the ability to make accurate judgements. A potential solution is to use a three-quarter view, which combines the visual cues to body fat that can be observed in front and profile views. To test this possibility, 20 female observers completed a 2-alternative forced choice paradigm to determine the smallest difference in body fat detectable in female bodies in front, three-quarter, and profile views. There was a significant advantage for three-quarter and profile views relative to front-view. Discrimination accuracy was predicted by the salience of stomach depth, suggesting that this is a key visual cue used to judge body mass. In future, bodies should ideally be presented in three-quarter view to accurately assess body size discrimination.
Individuals vary in perceptual accuracy when categorising facial expressions, yet it is unclear how these individual differences in non-clinical populations relate to the cognitive processing stages of facial information acquisition and interpretation. We tested 104 healthy adults in a facial expression categorisation task, and correlated their categorisation accuracy with face-viewing gaze allocation and personal traits assessed with the Autism Quotient, an anxiety inventory, and the Self-Monitoring Scale. Gaze allocation had a limited but emotion-specific impact on categorising expressions. Specifically, longer gazes at the eye and nose regions were coupled with more accurate categorisation of disgust and sad expressions, respectively. Regarding trait measurements, a higher autism score was coupled with better recognition of sad but worse recognition of anger expressions, and contributed to a categorisation bias towards sad expressions; a higher anxiety level was associated with greater categorisation accuracy across all expressions and with an increased tendency to gaze at the nose region. Both anxiety and autistic-like traits were thus associated with individual variation in expression categorisation, but this association is not necessarily mediated by variation in gaze allocation at expression-specific local facial regions. The results suggest that both facial information acquisition and interpretation capabilities contribute to individual differences in expression categorisation within non-clinical populations.
Our ability to recognize facial expressions of emotion under different viewing conditions implies the existence of an invariant expression representation. As natural visual signals are often distorted, and our perceptual strategy changes with external noise level, it is essential to understand how susceptible expression perception is to face distortion and whether the same facial cues are used to process high- and low-quality face images. We systematically manipulated face image resolution (Experiment 1) and blur (Experiment 2), and measured participants’ expression categorization accuracy, perceived expression intensity, and associated gaze patterns. Our analysis revealed a reasonable tolerance to face distortion in expression perception. Reducing image resolution to as little as 48 × 64 pixels, or blurring images to retain as few as 15 cycles/image, had little impact on expression assessment and associated gaze behaviour. Further distortion led to decreased expression categorization accuracy and intensity ratings, increased reaction times and fixation durations, and a stronger central fixation bias that was not driven by distortion-induced changes in local image saliency. Interestingly, the observed distortion effects were expression-dependent, with less deterioration for happy and surprise expressions, suggesting that this distortion-invariant facial expression perception might be achieved through a categorical model involving a non-linear configural combination of local facial features.
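The two distortions manipulated here, resolution reduction and blur, can be sketched as simple image operations. This is a minimal illustration: the hard Fourier cut-off is a crude stand-in for whatever blur kernel the experiments actually used, and the random array stands in for a grayscale face photograph.

```python
import numpy as np

def reduce_resolution(img, out_h, out_w):
    """Downsample to out_h x out_w by block-averaging, then upsample by
    pixel repetition, mimicking display at a lower resolution (dimensions
    are assumed to be integer multiples of the target size)."""
    h, w = img.shape
    bh, bw = h // out_h, w // out_w
    small = img.reshape(out_h, bh, out_w, bw).mean(axis=(1, 3))
    return np.kron(small, np.ones((bh, bw)))

def low_pass(img, cycles):
    """Remove all spatial frequencies above `cycles` cycles/image with a
    hard cut-off in the Fourier domain (a crude form of blurring)."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    keep = np.hypot(yy - h / 2, xx - w / 2) <= cycles
    return np.fft.ifft2(np.fft.ifftshift(f * keep)).real

# Example: a 96 x 128 image reduced to 48 x 64 effective pixels, and
# blurred to retain only frequencies up to 15 cycles/image.
img = np.random.default_rng(3).standard_normal((96, 128))
low_res = reduce_resolution(img, 48, 64)
blurred = low_pass(img, 15)
```

Both operations keep the output at the original array size, so the degraded stimuli can be displayed at the same physical dimensions as the intact ones, as is standard in such experiments.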
Background: Parents tend to visually assess children to determine their weight status, and typically underestimate child body size. A visual tool may help parents to assess child weight status more accurately and so support strategies to reduce childhood overweight. Body image scales (BIS) are visual images of people ranging from underweight to overweight, but none exist for children based on UK criteria. Our aim was to develop sex- and age-specific BIS for children, based on British growth reference (UK90) criteria. Methods: BIS were developed using 3D surface body scans of children, their associated weight status derived from height and weight measurements using UK90 criteria, and qualitative work with parents and health professionals. Results: Height, weight and 3D body scans were collected from 211 children aged 4-5 years and 177 children aged 10-11 years. Twelve qualitative sessions were held with 37 participants. Four BIS (4-5-year-old girls and boys, 10-11-year-old girls and boys) were developed. Conclusions: This study has created the first sex- and age-specific BIS based on UK90 criteria. The BIS have potential for use in child overweight prevention and management strategies, and in future research. This study also provides a protocol for the development of further BIS appropriate to other age groups and ethnicities.
Previous research has shown that displaying the color red can increase attractiveness. As a result, women display red more often when expecting to meet more attractive men in a laboratory context. Here, we carried out a field study by analyzing 546 daters from the “First Dates” television series. Each participant was filmed in a pre-date interview and during a real first date, allowing direct comparison of the clothing worn by each person in these two contexts. Analysis of ratings of the amount of red displayed showed that both men and women wore more red clothing during their dates. This pattern was even stronger for black clothing, while the amount of blue clothing did not differ across the two contexts. Our results provide the first real-world demonstration that people display more red and black clothing when meeting a possible mate for the first time, perhaps seeking to increase their attractiveness and/or reveal their intentions to potential partners.
Research has systematically examined how laboratory participants and real-world practitioners decide whether two frontal face photographs show the same person or not. In contrast, research has not examined face matching using profile images. In Experiment 1, we asked whether matching unfamiliar faces is easier with frontal than with profile views. Participants completed the original, frontal version of the Glasgow Face Matching Test, and also an adapted version in which all face pairs were presented in profile. There was no difference in performance across the two tasks, suggesting that both views were similarly useful for face matching. Experiments 2 and 3 examined whether matching unfamiliar faces improves when both frontal and profile views are provided. We compared face matching accuracy when both a frontal and a profile image of each face were presented with accuracy using each view alone. Surprisingly, we found no benefit of presenting both views together in either experiment. Overall, these results suggest either that frontal and profile views provide substantially overlapping information about identity, or that participants are unable to use both sources of information when making decisions. Each of these conclusions has important implications for face matching research and for the development of real-world identification procedures.