Medically unexplained symptoms (MUS) are increasingly thought to result from dysfunctional top-down cognitive modulation of interoceptive sensory signals. The current study investigated whether individuals with a tendency toward MUS would be more susceptible to visual illusions that suggest tactile sensation on the skin in the absence of any actual somatosensory input.
Participants viewed real-time mediated-reality video images of their own hand, either unmanipulated or digitally altered to display a moving pixelated ‘static’ effect, the crawling-skin illusion. The strength of various physical sensations during each condition was rated on a numeric scale and compared with a standard measure of somatoform dissociation (the 20-item Somatoform Dissociation Questionnaire, SDQ-20).
Participants reporting a higher degree of somatoform dissociation were found to be more susceptible to somatic sensations across all conditions. Interestingly, participants who reported more visually induced somatosensory sensations also felt less ownership over their digitally presented hands.
These findings support the proposed link between MUS and disturbances in body representation, and suggest that an over-reliance on top-down knowledge may interfere with current sensory inputs, contributing to symptom formation and maintenance in susceptible individuals.
We often feel that people’s first names suit their faces in some way. Evidence has already shown that we share common stereotypes about how people with particular names should look. Here, we investigate whether there is any accuracy to these beliefs. Simply, can we match people’s names to their faces? Across two experiments, we tested whether American (Experiment 1) and British (Experiment 2) participants were able to match the first names of strangers with photographs of their faces. Although Experiment 1 provided some initial support for accuracy in female participants, we were unable to replicate this result in Experiment 2. Therefore, we find no overall evidence to suggest that particular names and faces are associated with each other.
Research on ensemble encoding has found that viewers extract summary information from sets of similar items. When shown a set of four faces of different people, viewers merge identity information from the exemplars into a representation of the set average. Here, we presented sets containing unconstrained images of the same identity. In response to a subsequent probe, viewers recognized the exemplars accurately. However, they also reported having seen a merged average of these images. Importantly, viewers reported seeing the matching average of the set (the average of the four presented images) more often than a nonmatching average (an average of four other images of the same identity). These results were consistent for both simultaneous and sequential presentation of the sets. Our findings support previous research suggesting that viewers form representations of both the exemplars and the set average. Given the unconstrained nature of the photographs, we also provide further evidence that the average representation is invariant to several high-level characteristics.
Recent progress in understanding and modelling human visual attention allocation in scene perception is based mainly on studies with static images. However, natural vision requires us to extract visual information that constantly changes due to egocentric movements or dynamics of the world. It is unclear to what extent spatio-temporal regularity, an inherent regularity in dynamic vision, affects human gaze distribution and saliency computation in visual attention models. In this free-viewing eye-tracking study we manipulated the spatio-temporal regularity of traffic videos by presenting them in normal video sequence, reversed video sequence, normal frame sequence, and randomised frame sequence. The recorded human gaze allocation was then used as the ‘ground truth’ to examine the predictive ability of a number of state-of-the-art visual attention models. The analysis revealed high inter-observer agreement across individual human observers, but all the tested attention models performed significantly worse than humans. The inferior predictive power of the models was evident in gaze predictions that were indistinguishable across stimulus presentation sequences, and in a weak central fixation bias. Our findings suggest that a realistic visual attention model for the processing of dynamic scenes should incorporate human visual sensitivity to spatio-temporal regularity as well as a central fixation bias.
Research on non-offending heterosexual participants has indicated that men’s gaze allocation reflects their sexual preference. In this exploratory pilot study we investigated whether naturalistic gaze behaviour is sensitive to deviant sexual preferences. We compared the gaze patterns of convicted heterosexual child sex offenders with female victims (CSOs; n = 13) to those of heterosexual non-offending men (n = 13) in a task of free-viewing images of clothed male and female figures aged 10, 20 and 40 years. CSOs dedicated more fixations to the upper body of the female child figures than the male child figures. The pattern was different for the control sample, whose gaze to male and female figures could only be differentiated when viewing adult figures. CSOs also showed a significantly greater difference than non-offenders in their gaze towards the upper body of male versus female children. Our findings provide preliminary evidence for eye-tracking as a potential method of assessing deviant sexual interest.
Recent studies of facial expressions of emotion have focused primarily on the perception of frontal face images. As we frequently encounter expressive faces from different viewing angles, a mechanism allowing invariant expression perception would be advantageous to our social interactions. Although a couple of studies have indicated comparable expression categorization accuracy across viewpoints, it is unknown how perceived expression intensity and associated gaze behaviour change across viewing angles. Differences could arise because the diagnostic cues from local facial features for decoding expressions could vary with viewpoint. Here we manipulated the orientation of faces (frontal, mid-profile, and profile view) displaying six common facial expressions of emotion, and measured participants’ expression categorization accuracy, perceived expression intensity and associated gaze patterns. In comparison with frontal faces, profile faces slightly reduced identification rates for disgust and sad expressions, but significantly decreased perceived intensity for all tested expressions. Although viewpoint had a quantitative, expression-specific influence on the proportion of fixations directed at local facial features, the qualitative gaze distribution within facial features (e.g., the eyes tended to attract the highest proportion of fixations, followed by the nose and then the mouth region) was independent of viewpoint and expression type. Our results suggest that viewpoint-invariant facial expression processing is categorical in nature, and could be linked to a viewpoint-invariant holistic gaze strategy for extracting expressive facial cues.
In this cross-sectional study, we investigated the influence of personal BMI on body size estimation in 42 women with symptoms of anorexia (referred to henceforth as anorexia spectrum disorders, ANSD) and 100 healthy controls. Low-BMI control participants over-estimated their size and high-BMI controls under-estimated it, a pattern predicted by a perceptual phenomenon called contraction bias. In addition, control participants’ sensitivity to size change declined as their BMI increased, as predicted by Weber’s law. The responses of women with ANSD were very different. Low-BMI participants with ANSD were extremely accurate at estimating body size and were very sensitive to changes in body size in this BMI range. However, as BMI rose in the ANSD group, there was a rapid increase in over-estimation concurrent with a rapid decline in sensitivity to size change. We discuss the results in the context of signal detection theory.