Recent research progress on human visual attention allocation in scene perception, and its simulation, is based mainly on studies with static images. However, natural vision requires us to extract visual information that constantly changes due to egocentric movements or the dynamics of the world. It is unclear to what extent spatio-temporal regularity, an inherent regularity in dynamic vision, affects human gaze distribution and saliency computation in visual attention models. In this free-viewing eye-tracking study we manipulated the spatio-temporal regularity of traffic videos by presenting them in normal video sequence, reversed video sequence, normal frame sequence, and randomised frame sequence. The recorded human gaze allocation was then used as the ‘ground truth’ to examine the predictive ability of a number of state-of-the-art visual attention models. The analysis revealed high inter-observer agreement across individual human observers, but all the tested attention models performed significantly worse than humans. The inferior predictability of the models was evident from indistinguishable gaze predictions irrespective of stimulus presentation sequence, and from a weak central fixation bias. Our findings suggest that a realistic visual attention model for the processing of dynamic scenes should incorporate human visual sensitivity to spatio-temporal regularity and central fixation bias.
Nature is characterised by imbalance. Innate biases, shared by humans and non-humans alike, rest on a remarkable yet mysterious biological foundation that can inspire real-world applications. Vision researchers have found evidence of a left gaze bias in both humans and non-humans, while acousticians have observed a right-ear advantage in both groups. Unlike vision and hearing researchers, who investigate the mechanisms underlying these innate biases, we are more interested in mimicking their characteristics. In this paper, we propose two simple yet effective methods to generate the left gaze bias and the right-ear advantage. We further discuss potential applications of these inherent phenomena, e.g., in real-life driving. We believe that this paper can have an inspirational impact on future cognitive transportation by implementing these human innate biases properly.
Get a quick, expert overview of the many key facets of obesity management with this concise, practical resource by Dr. Jolanta Weaver, ideal for any health care professional who cares for patients with a weight problem. This easy-to-read reference addresses a wide range of topics, including advice on how to “unpack” the behavioral causes of obesity in order to facilitate change, effective communication with patients suffering from weight problems, and future directions in obesity medicine.
Matching unfamiliar faces is known to be difficult, and this can give an opportunity to those engaged in identity fraud. Here we examine a relatively new form of fraud, the use of photo-ID containing a graphical morph between two faces. Such a document may look sufficiently like two people to serve as ID for both. We present two experiments with human viewers, and a third with a smartphone face recognition system. In Experiment 1, viewers were asked to match pairs of faces, without being warned that one of the pair could be a morph. They very commonly accepted a morphed face as a match. However, in Experiment 2, following very short training on morph detection, their acceptance rate fell considerably. Nevertheless, there remained large individual differences in people’s ability to detect a morph. In Experiment 3 we show that a smartphone makes errors at a similar rate to ‘trained’ human viewers—i.e. accepting a small number of morphs as genuine ID. We discuss these results in reference to the use of face photos for security.
First impressions of social traits, such as attractiveness, from faces are often claimed to be made automatically, given their speed and reliability. However, speed of processing is only one aspect of automaticity. Here we address a further aspect, asking whether impression formation is mandatory. Mandatory formation requires that impressions are formed about social traits even when this is task-irrelevant, and that once formed, these impressions are difficult to inhibit. In two experiments, participants learned what new people looked like for the purpose of future identification, from sets of images high or low in attractiveness. They then rated middle-attractiveness images of each person, for attractiveness. Even though instructed to rate the specific images, not the people, their ratings were biased by the attractiveness of the learned images. A third control experiment, with participants rating names, demonstrated that participants in Experiments 1 and 2 were not simply rating the people, rather than the specific images as instructed. These results show that the formation of attractiveness impressions from faces is mandatory, thus broadening the evidence for automaticity of facial impressions. The mandatory formation of impressions is likely to have an important impact in real-world situations such as online dating sites.
Research on face learning has tended to use sets of images that vary systematically on dimensions such as pose and illumination. In contrast, we have proposed that exposure to naturally varying images of a person may be a critical part of the familiarization process. Here, we present two experiments investigating face learning with “ambient images”—relatively unconstrained photos taken from internet searches. Participants learned name and face associations for unfamiliar identities presented in high or low within-person variability—that is, images of the same person returned by internet search on their name (high variability) versus different images of the same person taken from the same event (low variability). In Experiment 1 we show more accurate performance on a speeded name verification task for identities learned in high than in low variability, when the test images are completely novel photos. In Experiment 2 we show more accurate performance on a face matching task for identities previously learned in high than in low variability. The results show that exposure to a large range of within-person variability leads to enhanced learning of new identities.
In our ageing society, residential care homes provide an essential service for older adults who require daily care or who choose the companionship of other people.
Recent figures indicated that 17,678 residential care facilities provided a home for 426,000 older adults, approximately 16% of people over 65 years of age in the UK (Laing and Buisson 2014). Around 70% of people living in all care settings for older adults are thought to have dementia (Alzheimer’s Society 2014), although prevalence estimates for residential care homes are generally lower than those for nursing homes – 55.8% and 77% respectively, according to one study (Stewart et al 2014).
The dynamic flexibility of body representation has been highlighted through numerous lines of research, ranging from clinical studies reporting disorders of body ownership to experimentally induced somatic illusions that have provided evidence for the embodiment of manipulated representations and even fake limbs. While most studies have reported that enlargement of body parts alters somatic perception, and that enlarged parts can be more readily embodied, shrunken body parts have not been found to consistently alter somatic experiences, perhaps due to reduced feelings of ownership over smaller body parts. Over two experiments, we aimed to investigate the mechanisms responsible for altered somatic representations following exposure to both enlarged and shrunken body parts. Participants were given the impression that their hand and index finger were either longer or shorter than veridical length and were asked to judge veridical finger length using online and offline size estimation tasks, as well as to report the degree of ownership towards the distorted finger and hand representations. Ownership was claimed over all distorted representations of the hand and finger, with no differences across ownership ratings, while the online and offline measurements of perceived size demonstrated differing response patterns. These findings suggest that ownership of manipulated body representations is more bidirectional than previously thought. The differences in perceived body representation with respect to the method of measurement further suggest that online and offline tasks may tap into different aspects of body representation.
Researchers have suggested that dogs are able to recognise human faces, but conclusive evidence has yet to be found. Experiment 1 of this study investigated whether dogs can recognise humans using visual information from the face/head region, and whether this also occurs in conditions of suboptimal visibility of the face. Dogs were presented with their owner’s and a stranger’s heads, protruding through openings of an apparatus in opposite parts of the experimental setting. Presentations occurred in conditions of either optimal or suboptimal visibility; the latter featured non-frontal orientation, uneven illumination and invisibility of outer contours of the heads. Instances where dogs approached their owners with a higher frequency than predicted by chance were considered evidence of recognition. This occurred only in the optimal condition. With a similar paradigm, Experiment 2 investigated which of the alterations in visibility that characterised the suboptimal condition accounted for dogs’ inability to recognise owners. Dogs approached their owners more frequently than predicted by chance if outer head contours were visible, but not if heads were either frontally oriented or evenly illuminated. Moreover, male dogs were slightly better at recognition than females. These findings represent the first clear demonstration that dogs can recognise human faces and that outer face elements are crucial for such a task, complementing previous research on human face processing in dogs. Parallels with face recognition abilities observed in other animal species, as well as with human infants, point to the relevance of these results from a comparative standpoint.