Ritchie, K. L., Mireku, M. O., & Kramer, R. S. S. (2019). Face averages and multiple images in a live matching task. British Journal of Psychology. Advance online publication.

We know from previous research that unfamiliar face matching (determining whether two simultaneously presented images show the same person or not) is very error‐prone. A small number of studies in laboratory settings have shown that the use of multiple images or a face average, rather than a single image, can improve face matching performance. Here, we tested 1,999 participants using four‐image arrays and face averages in two separate live matching tasks. Matching a single image to a live person resulted in numerous errors (79.9% accuracy across both experiments), and neither multiple images (82.4% accuracy) nor face averages (76.9% accuracy) improved performance. These results are important when considering possible alterations which could be made to photo‐ID. Although multiple images and face averages have produced measurable improvements in performance in recent laboratory studies, they do not produce benefits in a real‐world live face matching context.

Read it here

Kramer, R. S. S., Mohamed, S., & Hardy, S. C. (2019). Unfamiliar face matching with driving licence and passport photographs. Perception, 48(2), 175-184.

Matching two different images of an unfamiliar face is difficult, yet we rely on this process every day when proving our identity. Although previous work with laboratory photosets has shown that performance is error-prone, few studies have focussed on how accurately people carry out this matching task using photographs taken from official forms of identification. In Experiment 1, participants matched high-resolution, colour face photos with current UK driving licence photos of the same group of people in a sorting task. Participants averaged 19 mistaken pairings out of 30, showing that this task was both difficult and error-prone. In Experiment 2, high-resolution photographs were paired with either driving licence or passport photographs in a typical pairwise matching paradigm. We found no difference in performance levels for the two types of ID image, with both producing unacceptable levels of accuracy (around 75%–79% correct). The current work benefits from increased ecological validity and provides a clear demonstration that these forms of official identification are ineffective and alternatives should be considered.

Read it here

Kramer, R. S. S., & Prior, J. Y. (2019). Colour associations in children and adults. The Quarterly Journal of Experimental Psychology. Advance online publication.

A growing body of research has investigated how we associate colours and social traits. Specifically, studies have explored the links between red and perceptions of qualities like attractiveness and anger. Although less is known about other colours, the prevailing framework suggests that the specific context plays a significant role in determining how a particular colour might affect our perceptions of a person or item. Importantly, this factor has yet to be considered for children’s colour associations, where researchers have focused on links between colours and emotions rather than social traits. Here, we consider whether context-specific colour associations are demonstrated by 5- to 10-year-old children and compare these associations with adult data collected on the same task. We asked participants to rank-order sets of six identical images (e.g., a boy completing a test), which varied only in the colour of a single item (his T-shirt). Each question was tailored to the image set to address a specific context, for example, “Which boy do you think looks the most likely to cheat on a test?” Our findings revealed several colour associations shared by children, and many of these were also present in adults, although some had strengthened or weakened by this stage of life. Taken together, our results demonstrate the presence of both stable and changing context-specific colour associations during development, revealing a new area of study for further exploration.

Read it here

Mohamed, S., & Hunter, M. S. (2018). Transgender women’s experiences and beliefs about hormone therapy through and beyond mid-age: An exploratory UK study. International Journal of Transgenderism, 1-10.

Little is known about transgender women’s beliefs and experiences of hormone therapy (HT) as part of their transition process, particularly as they grow older. This study aimed to investigate: (i) transgender women’s experiences of and attitudes to HT, and (ii) expectations of what might occur and/or what occurred after they reached “menopausal age.” Participants were recruited through invitations to an online survey sent to 138 lesbian, gay, bisexual, transgender plus (LGBT+) support groups across the UK. Sixty-seven transgender women consented and completed the questionnaire; responses were analyzed using a mixed-methods approach. The Beliefs about Medicines Questionnaire (BMQ) was used to assess beliefs about HT, while an inductive thematic qualitative approach was used to explore participants’ personal expectations and experiences of HT and their views about the menopause. Participants were aged 49 years on average, ranging from 20 to 79 years old. Most (96%) were taking HT. BMQ scores revealed strong beliefs about the necessity of HT alongside some concerns. Positive views about HT were expressed, with themes including treatment importance and personal and mental health benefits, but concerns about long-term effects, side effects, and maintaining access to the treatment were also mentioned. Views about menopause included uncertainty and questioning of its relevance; some mentioned changes to HT dosage, but most expected to use HT indefinitely. This study provides exploratory qualitative and quantitative information about transgender women’s views about HT and menopause. Practical implications include improving access to HT and provision of evidence-based information about long-term use.

Read it here

Ritchie, K. L., White, D., Kramer, R. S. S., Noyes, E., Jenkins, R., & Burton, A. M. (2018). Enhancing CCTV: Averages improve face identification from poor-quality images. Applied Cognitive Psychology, 32(6), 671-680.

Low‐quality images are problematic for face identification, for example, when the police identify faces from CCTV images. Here, we test whether face averages, comprising multiple poor‐quality images, can improve both human and computer recognition. We created averages from multiple pixelated or nonpixelated images and compared accuracy using these images and exemplars. To provide a broad assessment of the potential benefits of this method, we tested human observers (n = 88; Experiment 1), and also computer recognition, using a smartphone application (Experiment 2) and a commercial one‐to‐many face recognition system used in forensic settings (Experiment 3). The third experiment used large image databases of 900 ambient images and 7,980 passport images. In all three experiments, we found a substantial increase in performance by averaging multiple pixelated images of a person’s face. These results have implications for forensic settings in which faces are identified from poor‐quality images, such as CCTV.
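The paper’s averages were produced with dedicated face-averaging software, but the general idea can be illustrated with a minimal sketch: a simple pixel-wise mean of several pre-aligned, same-sized photographs of one person. This is not the authors’ pipeline, and the file names below are hypothetical placeholders.

    # Rough illustration only: pixel-wise mean of pre-aligned face photos.
    # Not the software or alignment procedure used in the paper.
    import numpy as np
    from PIL import Image

    def average_faces(paths, size=(190, 285)):
        # Assumes the photos are already roughly aligned; here they are
        # simply resized to a common width and height before averaging.
        stack = []
        for p in paths:
            img = Image.open(p).convert("RGB").resize(size)
            stack.append(np.asarray(img, dtype=np.float64))
        mean = np.mean(stack, axis=0)          # average across images
        return Image.fromarray(mean.astype(np.uint8))

    # Hypothetical pixelated stills of the same person
    exemplars = ["cctv_frame1.png", "cctv_frame2.png",
                 "cctv_frame3.png", "cctv_frame4.png"]
    average_faces(exemplars).save("face_average.png")

The intuition is that pixelation noise varies from frame to frame and tends to cancel in the mean, whereas stable facial structure is reinforced.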

Read it here

Kramer, R. S. S., Mileva, M., & Ritchie, K. L. (2018). Inter-rater agreement in trait judgements from faces. PLoS ONE, 13(8), e0202655.

Researchers have long been interested in how social evaluations are made based upon first impressions of faces. It is also important to consider the level of agreement we see in such evaluations across raters and what this may tell us. Typically, high levels of inter-rater agreement for facial judgements are reported, but the measures used may be misleading. At present, studies commonly report Cronbach’s α as a way to quantify agreement, although problematically, there are various issues with the use of this measure. Most importantly, because researchers treat raters as items, Cronbach’s α is inflated by larger sample sizes even when agreement between raters is fixed. Here, we considered several alternative measures and investigated whether these better discriminate between traits that were predicted to show low (parental resemblance), intermediate (attractiveness, dominance, trustworthiness), and high (age, gender) levels of agreement. Importantly, the level of inter-rater agreement has not previously been studied for many of these traits. In addition, we investigated whether familiar faces resulted in differing levels of agreement in comparison with unfamiliar faces. Our results suggest that alternative measures may prove more informative than Cronbach’s α when determining how well raters agree in their judgements. Further, we found no apparent influence of familiarity on levels of agreement. Finally, we show that, like attractiveness, both trustworthiness and dominance show significant levels of private taste (personal or idiosyncratic rater perceptions), although shared taste (perceptions shared with other raters) explains similar levels of variance in people’s perceptions. In conclusion, we recommend that researchers investigating social judgements of faces consider alternatives to Cronbach’s α but should also be prepared to examine both the potential value and origin of private taste as these might prove informative.
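The point about Cronbach’s α can be made concrete with a small simulation (not taken from the paper): raters are treated as items and the expected correlation between any two raters is held at roughly .3, yet α climbs towards 1 as more raters are added.

    # Simulated ratings: raters treated as items, fixed pairwise agreement.
    import numpy as np

    def cronbach_alpha(ratings):
        # ratings: (n_faces, n_raters) array; raters are the "items".
        k = ratings.shape[1]
        item_vars = ratings.var(axis=0, ddof=1).sum()
        total_var = ratings.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    rng = np.random.default_rng(0)
    n_faces = 100
    shared = rng.normal(size=n_faces)      # "true" trait signal per face

    for n_raters in (5, 25, 100):
        # Each rater = shared signal + independent noise, so the expected
        # correlation between any two raters is ~.3 in every condition.
        noise = rng.normal(size=(n_faces, n_raters))
        ratings = 0.65 * shared[:, None] + noise
        print(n_raters, round(cronbach_alpha(ratings), 2))
    # Alpha rises from roughly .7 towards 1 as raters are added.

The Spearman-Brown relationship makes the same point analytically: with k raters and a fixed average inter-rater correlation r, α = kr / (1 + (k - 1)r), which approaches 1 as k grows.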

Read it here

Mattavelli, G., Sormaz, M., Flack, T., Asghar, A. U. R., Fan, S., Frey, J., Manssuer, L., Usten, D., Young, A. W., & Andrews, T. J. (2014). Neural responses to facial expressions support the role of the amygdala in processing threat. Social Cognitive and Affective Neuroscience, 9(11), 1684-1689.

The amygdala is known to play an important role in the response to facial expressions that convey fear. However, it remains unclear whether the amygdala’s response to fear reflects its role in the interpretation of danger and threat, or whether it is to some extent activated by all facial expressions of emotion. Previous attempts to address this issue using neuroimaging have been confounded by differences in the use of control stimuli across studies. Here, we address this issue using a block design functional magnetic resonance imaging paradigm, in which we compared the response to face images posing expressions of fear, anger, happiness, disgust and sadness with a range of control conditions. The responses in the amygdala to different facial expressions were compared with the responses to a non-face condition (buildings), to mildly happy faces and to neutral faces. Results showed that only fear and anger elicited significantly greater responses compared with the control conditions involving faces. Overall, these findings are consistent with the role of the amygdala in processing threat, rather than in the processing of all facial expressions of emotion, and demonstrate the critical importance of the choice of comparison condition to the pattern of results.

Read it here

Flack, T. R., Andrews, T. J., Hymers, M., Al-Mosaiwi, M., Marsden, S. P., Strachan, J. W. A., Trakulpipat, C., Wang, L., Wu, T., & Young, A. W. (2015). Responses in the right posterior superior temporal sulcus show a feature-based response to facial expression. Cortex, 69, 14-23.

The face-selective region of the right posterior superior temporal sulcus (pSTS) plays an important role in analysing facial expressions. However, it is less clear how facial expressions are represented in this region. In this study, we used the face composite effect to explore whether the pSTS contains a holistic or feature-based representation of facial expression. Aligned and misaligned composite images were created from the top and bottom halves of faces posing different expressions. In Experiment 1, participants performed a behavioural matching task in which they judged whether the top half of two images was the same or different. The ability to discriminate the top half of the face was affected by changes in the bottom half of the face when the images were aligned, but not when they were misaligned. This shows a holistic behavioural response to expression. In Experiment 2, we used fMR-adaptation to ask whether the pSTS has a corresponding holistic neural representation of expression. Aligned or misaligned images were presented in blocks that involved repeating the same image or in which the top or bottom half of the images changed. Increased neural responses were found in the right pSTS regardless of whether the change occurred in the top or bottom of the image, showing that changes in expression were detected across all parts of the face. However, in contrast to the behavioural data, the pattern did not differ between aligned and misaligned stimuli. This suggests that the pSTS does not encode facial expressions holistically. In contrast to the pSTS, a holistic pattern of response to facial expression was found in the right inferior frontal gyrus (IFG). Together, these results suggest that pSTS reflects an early stage in the processing of facial expression in which facial features are represented independently.

Read it here

Weibert, K., Flack, T. R., Young, A. W., & Andrews, T. J. (2018). Patterns of neural response in face regions are predicted by low-level image properties. Cortex, 103, 199-210.

Models of face processing suggest that the neural response in different face regions is selective for higher-level attributes of the face, such as identity and expression. However, it remains unclear to what extent the response in these regions can also be explained by more basic organizing principles. Here, we used functional magnetic resonance imaging multivariate pattern analysis (fMRI-MVPA) to ask whether spatial patterns of response in the core face regions (occipital face area – OFA, fusiform face area – FFA, superior temporal sulcus – STS) can be predicted across different participants by lower level properties of the stimulus. First, we compared the neural response to face identity and viewpoint, by showing images of different identities from different viewpoints. The patterns of neural response in the core face regions were predicted by the viewpoint, but not the identity of the face. Next, we compared the neural response to viewpoint and expression, by showing images with different expressions from different viewpoints. Again, viewpoint, but not expression, predicted patterns of response in face regions. Finally, we show that the effect of viewpoint in both experiments could be explained by changes in low-level image properties. Our results suggest that a key determinant of the neural representation in these core face regions involves lower-level image properties rather than an explicit representation of higher-level attributes in the face. The advantage of a relatively image-based representation is that it can be used flexibly in the perception of faces.

Read it here

Swami, V., Cavelti, S., Taylor, D., & Tovée, M. J. (2015). The Breast Size Rating Scale: Development and psychometric evaluation. Body Image, 14, 29-38.

Existing measures of breast size dissatisfaction have poor ecological validity or have not been fully evaluated in terms of psychometric properties. Here, we report on the development of the Breast Size Rating Scale (BSRS), a novel measure of breast size dissatisfaction consisting of 14 computer-generated images varying in breast size alone. Study 1 (N=107) supported the scale’s construct validity, insofar as participants were able to correctly order the images in terms of breast size. Study 2 (N=234) provided evidence of the test-retest reliability of BSRS-derived scores after 3 months. Studies 3 (N=730) and 4 (N=234) provided evidence of the convergent validity of BSRS-derived breast size dissatisfaction scores, which were significantly associated with a range of measures of body image. The BSRS provides a useful tool for researchers examining women’s breast size dissatisfaction.

Read it here