What makes faces special

We outline potential cognitive and brain mechanisms underlying oculomotor capture by faces. Human faces are highly relevant to our social lives, and it is crucial to detect them efficiently in order to produce adaptive responses. At the neural level, faces benefit from cerebral networks especially tuned to their processing (Gauthier et al.).

However, it is still unclear exactly what information is extracted from the face in this brief window or how that information contributes to detection. Here, we explore whether a critical aspect of faces—their emotional expressions—can facilitate detection and capture attention.

The physical properties of our visual system limit the amount of information in a scene that can be processed at once, and so selection must take place. This is achieved through attentional mechanisms, which select stimuli that will benefit from further processing while filtering others that will be ignored.

If attentional capture by specific facial characteristics is demonstrated, it would suggest that the features and configural information that constitute these characteristics can be processed preattentively, before selection takes place.

The primary goal of the current study was to assess whether a facial attribute that makes faces particularly important—namely, emotional expression—can affect early selection processes and so capture the eyes.

Some previous studies suggest that emotional faces are prioritized over neutral ones. However, because these studies used manual response times as an indirect index of attentional capture, the attentional biases they report are difficult to interpret, and the attentional stages at which these biases arise remain unclear. Such paradigms do not allow a genuine test of whether emotional expressions drive attentional selection in a bottom-up fashion.

However, it is less clear whether emotional faces are more effective than neutral faces at driving early attentional selection when they are task-irrelevant and share no features with targets. Here, we examined eye movements executed in the presence of irrelevant emotional faces in order to uncover the mechanisms supporting a potential selection bias in their favour.

Can they capture the eyes more effectively than neutral faces under these circumstances? We modified the eye-tracking paradigm used by Devue et al. In this paradigm, participants see a circular array of coloured dots and have to make a saccade towards a colour singleton: a simple task that relies on parallel search.

Photographs of irrelevant objects, including faces, appear in a concentric circle inside the dot array; participants are instructed to ignore these (see Fig.). This paradigm remedies many of the problems inherent in previous research. Eye movements closely parallel attentional processes and so provide a more direct measure of attention than manual response times.

Moreover, the task allows us to examine the impact of faces when they are peripheral to fixation and entirely irrelevant to the task.

Figure: Example of a search display. Participants had to make a saccade towards the circle with a unique colour while ignoring the objects. One critical object (an angry face, a neutral face, or a butterfly in the current experiment) was always present among the six objects.

In their original experiment, Devue and colleagues found that the mere presence of a face changed performance. These effects were attenuated but not eliminated when faces were inverted, suggesting that both salient visual features (apparent in both upright and inverted faces) and configural information (apparent only in upright faces) play a role in oculomotor capture.

In the present study, we used the same paradigm but manipulated the expression of the irrelevant faces. If facial expressions do affect early attentional selection in a bottom-up fashion, then emotional faces in the present experiment should capture and guide the eyes more effectively than neutral faces. We also assessed the role of low-level visual features in a second experiment presenting inverted faces. How should emotional faces affect eye movements? Further limitations are imposed on peripheral faces because of decreased acuity at greater eccentricities.

Similarly, race and gender do not automatically attract attention. In sum, facial aspects such as familiarity, identity, or race may be formed by a combination of visual information that is too complex to influence early selection processes. Some evidence suggests that facial expressions may be similarly unable to capture attention.

However, in this study and some others described above, faces were schematic stimuli, which lack facial information that may normally be used for detection by the visual system. It is possible that any capture by emotional expression would be driven by visual information available in natural faces but absent from schematic faces.

It would therefore be important to determine whether these null effects extend to photographs of emotional faces. While it may appear unlikely, there remain several reasons why emotional expressions may still drive attention in a bottom-up fashion, perhaps more so than other facial characteristics. First, emotional information, including facial expressions, is largely carried by low spatial frequencies (LSF; Vuilleumier et al.).

Indeed, the processing of arousing emotional stimuli appears to rely on this coarse LSF information. Second, the facial characteristics that contribute to a given emotional expression are fairly consistent across individuals, and potentially less variable than the subtle facial deviations making up identity, and possibly even age or gender.

Third, it is thought that emotional information can be processed very fast, and even be prioritized over neutral information, through specific neuronal pathways including the amygdala, which is primarily sensitive to LSF (Alorda et al.). To test whether irrelevant angry faces capture attention, we examined the percentage of trials in which the first saccade was erroneously directed at an angry face instead of at the target, relative to neutral faces and butterflies (an animate control object).

We also examined the effect of the spatial location of angry faces, neutral faces, and butterflies by comparing performance on match versus mismatch trials on four measures of oculomotor behaviour: correct saccade latency, saccade accuracy, search time, and number of saccades required to reach the target.

Latency measures also allowed us to address a second question, which arises from previous observations that, although neutral faces capture attention more than other objects, they do not do so consistently. We thus expect mismatch trials in which faces capture attention to be characterised by shorter latencies than trials in which the target was correctly reached, because faces (and perhaps especially angry faces) should compete with the target most successfully when control is poor. Finally, correct saccades on trials where faces compete with the target (mismatch trials) should require more control, indexed by longer latencies, than on match trials.

The calculation yielded a sample size of 13 participants to achieve power of. Because the effect of facial expression (angry versus neutral faces) may be more subtle than the effect of face inversion (upright versus inverted faces), we aimed to double that number while anticipating data loss. We therefore recruited 29 participants (four men) at Victoria University of Wellington. They gave informed consent prior to their inclusion in the study and received course credits or movie vouchers as compensation for their time.

A viewing distance of 60 cm was maintained by a chin rest. The left eye was tracked with an EyeLink plus desktop mount eye-tracking system at a Hz sampling rate. Calibration was performed before the experimental trials and halfway through the task using a nine-point grid. Stimulus presentation and eye-movement recording were controlled by E-Prime 2.

Displays consisted of six coloured circles with a diameter of 1. They were all the same colour (green or orange) except for one (orange or green), which varied randomly on each trial. Six greyscale objects, each fitting within a 2. One of the six objects was always a critical object of interest: an angry face, a neutral face, or a butterfly (the animate control condition). The five remaining objects were inanimate filler objects belonging to clearly distinct categories (toys, musical instruments, vegetables, clothing, drinkware, and domestic devices; eight exemplars per category).

Participants were instructed to make an eye movement to the circle that was a unique colour and to ignore the objects. There was no mention of faces. Eight angry and eight neutral male face stimuli, photographed in a frontal position, were taken from the NimStim Face Stimulus Set (Models 20, 21, 23, 24, 25, 34, 36, and 37).

Hair beyond the oval shape of the head was removed with the image manipulation software GIMP. Brightness and contrast of the faces were adjusted with GIMP to visually match each other, the butterflies, and the remaining set of objects. Each combination was repeated 10 times per critical object, producing trials per critical object type. There were thus 1, trials in total, presented in a random order.

For each critical object type, there were 60 trials in which its position matched that of the target circle; that is, they were aligned along the same radius of the virtual circle. On the remaining trials, the positions mismatched. Each trial started with a drift correction screen triggered by a press of the space bar, followed by a jittered fixation cross with a duration between 1, and 1, ms, presented in black against a white background.
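The design described above can be sketched as a simple trial-list builder. This is a hypothetical reconstruction, not the authors' code: the three critical objects, six radii, and 10 repetitions per combination come from the text, and the totals (including the 60 match trials per critical object) follow arithmetically.

```python
import itertools
import random

# Assumed design factors, taken from the description in the text.
CRITICAL_OBJECTS = ["angry_face", "neutral_face", "butterfly"]
POSITIONS = list(range(6))   # the six radii of the circular array
REPEATS = 10                 # repetitions per combination, per the text

def build_trials(seed=0):
    """Cross critical object x target position x object position, with
    repeats, then shuffle for random presentation order."""
    trials = []
    for obj, target_pos, object_pos in itertools.product(
            CRITICAL_OBJECTS, POSITIONS, POSITIONS):
        for _ in range(REPEATS):
            trials.append({
                "critical_object": obj,
                "target_pos": target_pos,
                "object_pos": object_pos,
                # A trial "matches" when the critical object lies on the
                # same radius as the target circle.
                "match": target_pos == object_pos,
            })
    random.Random(seed).shuffle(trials)
    return trials

trials = build_trials()
# 3 objects x 6 x 6 positions x 10 repeats = 1,080 trials in total;
# for each object, 6 positions x 10 repeats = 60 match trials.
```

Under these assumptions the 60 match trials per critical object stated in the text fall out of the factorial crossing rather than being scheduled separately.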

The cross was followed by a ms blank white screen before the presentation of the target display, which lasted 1, ms. Participants heard a high-toned beep if they moved their eyes away from the central area before the presentation of the display and a low-toned beep if they had not moved their eyes ms after the display appeared.

Participants took breaks and received feedback on their mean correct response time every 54 trials. Before the experimental task, they performed 24 practice trials without critical objects. Oculomotor capture and fixation duration. First, we examined the percentage of trials in which participants looked first at the critical object instead of the target during mismatch trials, and fixation duration, that is, the time spent fixating these critical objects after they captured the eyes.
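As a minimal illustration of these two measures, the computation can be sketched over per-trial records. The field names and data structure are our own assumption, not the authors' pipeline.

```python
def capture_stats(mismatch_trials):
    """Percentage of mismatch trials whose first saccade landed on the
    critical object, plus mean fixation duration on those capture trials."""
    captures = [t for t in mismatch_trials
                if t["first_saccade_target"] == "critical_object"]
    pct = 100.0 * len(captures) / len(mismatch_trials)
    durations = [t["fixation_ms"] for t in captures]
    mean_dur = sum(durations) / len(durations) if durations else None
    return pct, mean_dur

# Four made-up mismatch trials, two of which show oculomotor capture.
demo = [
    {"first_saccade_target": "critical_object", "fixation_ms": 180},
    {"first_saccade_target": "target", "fixation_ms": 0},
    {"first_saccade_target": "critical_object", "fixation_ms": 220},
    {"first_saccade_target": "other_object", "fixation_ms": 0},
]
pct, dur = capture_stats(demo)   # 50.0 % capture, 200 ms mean fixation
```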

We expected faces to capture the eyes more often than butterflies (Devue et al.). Angry faces may or may not capture the eyes more often than neutral ones but may be fixated longer when capture does occur (Belopolsky et al.).

Oculomotor behaviour. Second, we examined the effect of the spatial location of angry and neutral faces, as compared to the butterfly control object, on oculomotor behaviour. We analysed four different eye-movement measures (i.e., correct saccade latency, saccade accuracy, search time, and number of saccades required to reach the target). Differences between critical objects in their ability to attract attention were indicated by an interaction between critical object type and matching condition. These were followed up by planned comparisons to test the effect of matching for each of the three critical objects.

If angry and neutral faces are prioritized, we expect better performance on match than on mismatch trials when the critical object is a face but not when it is a butterfly. For each of the four measures, we then directly compared the effect of angry and neutral faces on performance. Again, we report the critical interaction between facial expression (angry, neutral) and matching, which tests whether angry and neutral faces differ in their ability to attract attention.

If angry faces are more potent than neutral faces, the impact of matching should be stronger for angry faces than for neutral ones. Oculomotor control. In a third set of analyses, we examined the impact of faces on oculomotor control, as reflected by saccade latency. We calculated mean latency for each saccade outcome (correct or incorrect) in each matching condition and for each critical object type separately.

For any given match trial, there are two possible outcomes: a correct saccade to the target or an incorrect saccade elsewhere. For any given mismatch trial, there are three possible outcomes: a correct saccade to the target, a saccade captured by the critical object, or an incorrect saccade to another location. Combining matching conditions and performance thus gives five possible saccadic outcomes in total for each critical object type. Note that this analysis partly overlaps with the analyses of correct saccade latency reported above, but it targets a different question: specifically, whether saccade latency is a predictor of saccade outcome.
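The five-way outcome scheme can be made concrete with a small classifier. The outcome labels and trial fields are our own illustration, not the authors' analysis code.

```python
def classify(trial):
    """Map a trial to one of the five saccadic outcomes described in the
    text (labels and field names are assumed for illustration)."""
    if trial["match"]:
        # Match trials: the critical object shares the target's radius,
        # so only correct vs incorrect is distinguishable.
        return "match_correct" if trial["landed_on"] == "target" else "match_error"
    # Mismatch trials: three possible outcomes.
    if trial["landed_on"] == "target":
        return "mismatch_correct"
    if trial["landed_on"] == "critical_object":
        return "mismatch_capture"   # oculomotor capture by the face/butterfly
    return "mismatch_error"         # erroneous saccade to some other location

# One made-up trial per outcome, to show the mapping.
outcomes = {classify(t) for t in [
    {"match": True,  "landed_on": "target"},
    {"match": True,  "landed_on": "filler"},
    {"match": False, "landed_on": "target"},
    {"match": False, "landed_on": "critical_object"},
    {"match": False, "landed_on": "filler"},
]}
```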

Overall, we expected correct saccades to have longer latencies than incorrect saccades. We made three main predictions. First, if instances of oculomotor capture by faces are due to lapses in oculomotor control, we expected the associated latencies to be shorter than latencies of correct saccades.

Second, if faces trigger automatic shifts of covert attention in their direction, correct saccades in the presence of mismatching faces should be more difficult to program and require more control than in the presence of a mismatching butterfly: This would be reflected by longer latencies in the former case than in the latter.

Third, on match trials, faces and the target are in the same segment and do not compete for attention, so these trials should require less control than mismatch trials. We thus expected shorter latencies on match trials than on mismatch trials containing faces. Similar logic holds for the comparison of angry to neutral faces. In all analyses, degrees of freedom are adjusted (Greenhouse-Geisser) for sphericity violations where necessary.
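The Greenhouse-Geisser adjustment mentioned here rescales both degrees of freedom of the F test by an epsilon estimated from the covariance of the repeated measures. A textbook sketch of that estimate, as our own illustration rather than the authors' code:

```python
import numpy as np

def helmert(k):
    """Orthonormal contrast matrix ((k-1) x k), orthogonal to the mean."""
    C = np.zeros((k - 1, k))
    for j in range(1, k):
        C[j - 1, :j] = 1.0
        C[j - 1, j] = -j
        C[j - 1] /= np.sqrt(j * (j + 1))
    return C

def gg_epsilon(S):
    """Greenhouse-Geisser epsilon from the k x k covariance matrix of the
    repeated measures; both F-test dfs get multiplied by this value."""
    k = S.shape[0]
    St = helmert(k) @ S @ helmert(k).T        # covariance of the contrasts
    lam = np.linalg.eigvalsh(St)              # its eigenvalues
    return float(lam.sum() ** 2 / ((k - 1) * (lam ** 2).sum()))

# Under perfect sphericity epsilon is 1 (no correction needed) ...
eps_spherical = gg_epsilon(np.eye(3))
# ... and it shrinks towards 1/(k-1) as sphericity is violated.
eps_violated = gg_epsilon(np.diag([1.0, 1.0, 10.0]))
```

With three within-subject conditions, epsilon is bounded between 0.5 and 1, so the correction can at most halve the degrees of freedom.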

We performed a one-way analysis of variance (ANOVA) with critical object type (angry face, neutral face, butterfly) as a within-subjects factor on the mean percentage of oculomotor capture trials and associated fixation durations. Results are visible on the left panels of Fig.

Figure: Mean percentage of oculomotor capture (a) and mean fixation duration following oculomotor capture (b) in Experiment 1 (upright faces, left panels) and Experiment 2 (inverted faces, right panels).

Results are presented on the left panels of Fig.
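The within-subjects ANOVA itself reduces to a standard sums-of-squares partition over a subjects-by-conditions table. A compact sketch (our illustration, not the authors' analysis code):

```python
import numpy as np

def rm_anova_F(data):
    """One-way repeated-measures ANOVA F statistic for an
    n-subjects x k-conditions array (textbook partition of sums of
    squares into condition, subject, and error components)."""
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_total = ((data - grand) ** 2).sum()
    ss_err = ss_total - ss_cond - ss_subj     # residual after removing both
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    return (ss_cond / df_cond) / (ss_err / df_err), (df_cond, df_err)

# Toy 2-subject, 2-condition data (made-up numbers).
F, dfs = rm_anova_F(np.array([[1.0, 3.0],
                              [2.0, 5.0]]))   # F = 25.0 with df (1, 1)
```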

Mean correct saccade latency a , mean accuracy b , mean search time c and mean number of saccades to reach the target d , for each type of critical object type included in the display angry face, neutral face, or butterfly.

For saccade accuracy, there was a marginal predicted interaction between matching and critical object type, F 1. For search time, the interaction between matching and critical object type was significant, F 1.

In sum, this experiment replicates previous findings that irrelevant faces drive the eyes to their location Devue et al. These effects were observed across multiple eye-movement measures. Importantly, we found that both angry and neutral faces capture the eyes more often than butterflies, but angry faces do not have a greater impact than neutral faces on any measure.

Although both faces captured the eyes more often than butterflies, they did not hold them any longer. Results are shown in Fig. The analysis showed that saccade outcome was significantly associated with saccade latency, F 2. Pairwise comparisons were collapsed across critical object type, with Bonferroni corrections (adjusted p values are reported). Although there was a main effect of critical object type, F 1.
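The Bonferroni correction applied to these pairwise comparisons simply multiplies each raw p value by the number of tests, capped at 1. A one-line sketch with made-up p values (not values from the study):

```python
def bonferroni(pvals):
    """Bonferroni adjustment: multiply each p by the number of tests,
    capping the result at 1."""
    m = len(pvals)
    return [min(p * m, 1.0) for p in pvals]

# Hypothetical raw p values for three pairwise comparisons
# (illustrative numbers only): adjusted ~ [0.012, 0.06, 1.0].
adjusted = bonferroni([0.004, 0.020, 0.600])
```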

This is not in keeping with the oculomotor behaviour measures above, which showed that butterflies do not affect oculomotor behaviour whereas faces do. This discrepancy is likely due to large differences in latencies between correct and incorrect saccades, combined with a highly consistent pattern of latencies for error saccades across critical object conditions, washing out any subtle differences across critical object types.

Figure: Saccade latency as a function of the critical object type. Note that correct saccades took significantly longer to be initiated than incorrect ones, even in the presence of matching faces.

This pattern held for all three types of incorrect trials. This series of analyses shows that, unlike correct saccades, incorrect saccades occur on occasions where insufficient control is exerted to maintain a task-related goal. All the incorrect saccades, including saccades captured by critical objects, were characterized by comparably short latencies. Follow-up analyses suggest that the effect whereby latencies are shorter on correct match trials than on correct mismatch trials is driven by the presence of faces (significantly so by angry faces) near the target on match trials.

The aim of the second experiment was to evaluate the role of low-level visual features associated with angry and neutral faces in driving oculomotor behaviour.

Inversion makes the discrimination of various facial aspects difficult, including facial expression, whereas it has little effect on the processing of individual features. Inversion is thought to disrupt the holistic or configural processing of faces that conveys their meaning. Hence, if the effects of faces on oculomotor behaviour are driven by configural information, then inversion should reduce attentional capture (see Devue et al.). Further, if some low-level visual features displayed by neutral faces are more potent than those in angry faces (as suggested by the slightly more frequent capture by neutral than angry faces in Experiment 1), we should observe the same pattern of results here; that is, stronger oculomotor capture by neutral than by angry faces during mismatch trials.

In contrast, if the small difference in capture by angry and neutral faces is somehow due to their different affective meaning, inversion should decrease or even abolish the difference between angry and neutral faces.

We recruited 26 new participants from the Victoria University of Wellington community. They were between ages 18 and 30 years and reported normal or corrected-to-normal vision. They received course credits or movie or shopping vouchers for their participation. Procedure and stimuli were exactly the same as in the previous experiment except that angry and neutral faces were now inverted by flipping the images on the horizontal axis.

In addition, we formally compared capture rates by the different types of faces across experiments. We discarded the data of three participants with insufficient usable data. Results are presented in the right panels of Fig. The analyses above indicate that neutral faces capture attention more than angry faces, even when they are inverted.

Results are shown in the right panels of Fig. This indicates that inverted faces did not significantly affect oculomotor behaviour. Overall, it seems that inversion dramatically reduces but does not completely abolish attentional capture by faces.

Results are presented in the bottom panel of Fig. As in Experiment 1, saccade latencies were significantly linked to the saccade outcome, F 1. There was no significant effect of critical object type, F 1. This experiment again shows that successful saccades require more control than incorrect ones. Just like instances of capture by upright faces and incorrect saccades to other objects, instances of capture by inverted faces are the product of reflexive saccades.

The presence of an inverted face within the display (matching or mismatching) does not affect the amount of control exerted to correctly program a saccade towards the target, showing that, unlike upright faces, inverted faces do not have a facilitatory effect when in proximity to the target. Using five different eye-movement measures, we replicate our previous finding with the same paradigm that irrelevant faces capture the eyes in a bottom-up fashion (Devue et al.).

However, angry and neutral faces did not differ on any measure except one, and then in the opposite direction to predictions. Both types of faces captured the first saccade more often than a butterfly but, surprisingly, neutral faces captured the eyes slightly more often than angry faces. The second experiment with inverted faces shows a drastic attenuation of the effect of faces on all measures, confirming the important contribution of configural aspects that make upright faces meaningful (Devue et al.).

Sets of facial features that are seen more frequently are encoded more robustly, and therefore could be more diagnostic for face detection Nestor et al.

Stronger capture by neutral faces than by angry ones may also suggest avoidance. This interpretation is inconsistent, however, with all the other oculomotor measures.

Alternatively, despite our efforts to balance low-level features, some artefact might remain in the specific stimuli that we used, making neutral faces slightly more salient than angry ones, irrespective of their orientation.

Importantly, regardless of the underlying mechanism, the fact that neutral faces captured the eyes slightly more often than angry ones ensures that the absence of a difference between angry and neutral faces on other measures does not reflect low power to detect effects of facial expression. The equivalence of angry and neutral faces as distractors may seem at odds with the common claim that emotional stimuli capture attention. However, the current findings add to a growing body of evidence with faces (Fox et al.).

These studies all suggest that the processing of emotional information is not automatic but depends on the availability of attentional resources, and is partly guided by top-down components such as expectation, motivation, or goal-relevance.

In a study using spider-fearful participants, Devue et al. used a visual search task in which task-irrelevant black spiders were presented as distractors in arrays consisting of green diamonds and one green target circle.

Thus, spiders captured attention not because they were identified preattentively, but because the blocked presentation created the expectation that any black singleton in the array would be a spider. Finally, Hunt et al. compared the ability of schematic angry and happy distractor faces to attract the eyes when emotion was task-irrelevant and when it was task-relevant.

They found that angry and happy distractor faces interfered with search when targets were of the opposite valence, but that neither emotional face captured the eyes more than other distractors when emotion was an irrelevant search feature. The paradigm used in the present experiment strives to eliminate top-down and other confounds that could explain apparent bottom-up capture by emotional stimuli in many previous studies: angry and neutral faces are presented randomly; they are completely irrelevant to the task, in that their position does not predict the position of the target; they never appear in possible target locations; and the target-defining feature (i.e., colour) is not shared with the faces.

At the same time, however, the presentation and task conditions maximise the potential for angry faces to capture the eyes more than neutral ones if emotion were indeed processed preattentively: displays present one face at a time, avoiding competition between several faces (Bindemann et al.). We are therefore confident that we established optimal conditions to test whether emotion modulates attentional selection of faces, and confident in our demonstration that it does not.

We posit that plausible adaptive cognitive and neural mechanisms can account for oculomotor capture by faces as a class. Face detection, which presumably results from processing of low spatial frequencies in the superior colliculus, pulvinar, and amygdala (Johnson), could then trigger very fast reflexive orienting responses through rapid integration with regions responsible for oculomotor behaviour.

Understanding this pattern of neural firing allowed Chang to create an algorithm that could reverse engineer the firing patterns recorded while the monkey looked at a face, and thereby reconstruct the face the monkey was seeing without knowing in advance which face had been shown.

Like a police sketch artist working with a witness to combine facial features, he was able to take the features suggested by the activity of each individual neuron and combine them into a complete face. In nearly 70 percent of cases, human raters recruited from the crowdsourcing website Amazon Mechanical Turk matched the original face and the recreated face as being the same.

Bevil Conway, a neuroscientist at the National Eye Institute, said the new study impressed him. He added that such work can help us develop better facial recognition technologies, which are currently notoriously flawed.

Sometimes the result is laughable, but at other times the algorithms these programs rely on have been found to have serious racial biases. In the future, Chang sees his work as potentially being used in police investigations to profile potential criminals from witnesses who saw them. Ed Connor, a neuroscientist at Johns Hopkins University, envisions software that could be developed to adjust features based on these 50 characteristics.

Such a program, he says, could allow witnesses and police to fine-tune faces based on the characteristics humans use to distinguish them, like a system of 50 dials that witnesses could turn to morph faces into the one they remember.

One way in which our brains could process faces is to analyse them as a collection of these separate, individual features. If that were the case, though, we might expect it to be easier to pick out any discrepancies in an upside-down face. For the most part, these all look about right: the mouth looks like a mouth, the eyes look like eyes. Why is this important? Well, the illusion is a really neat little way to show just how special faces are to humans and monkeys.

In the years since the original illusion was published, a wealth of research has shown how the assumptions that the brain makes about facial configuration allow us to discern the minor differences between faces, the differences that make each of us unique, almost effortlessly.


