Two new lab papers assess how the visual properties of the face and the task being performed influence eye movements

Which features do we use to guide our precise eye movements to faces, and do different task demands recruit different features and gaze strategies?

September 15, 2021
(A) A "Region of Interest" Ideal Observer (RIO) maps the drastically different ways in which information for identity (left) and ethnic group (right) is distributed across the face stimulus. (B) A "Foveated" Ideal Observer (FIO) maps predicted performance for different fixation locations, showing how these task-specific differences in information layout interact with the decline in visual processing from the fovea to the periphery.
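The FIO idea can be illustrated with a toy computation: score every candidate fixation by how much of a task-information map survives an eccentricity-dependent loss of sensitivity. The sketch below is only a rough analogue under assumed parameters (an exponential sensitivity falloff and a hypothetical information map); the papers' models use full ideal-observer calculations on the actual face images.

```python
import numpy as np

def foveated_score_map(info_map, falloff_deg=2.0, px_per_deg=40):
    """For each candidate fixation, sum the task-information map weighted by
    a simple exponential drop in sensitivity with eccentricity.
    Illustrative only; the FIO in the papers is a full ideal-observer model."""
    h, w = info_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    scores = np.zeros_like(info_map, dtype=float)
    for fy in range(0, h, 4):              # coarse grid of candidate fixations
        for fx in range(0, w, 4):
            ecc = np.hypot(ys - fy, xs - fx) / px_per_deg  # eccentricity (deg)
            sensitivity = np.exp(-ecc / falloff_deg)       # assumed falloff
            scores[fy, fx] = (info_map * sensitivity).sum()
    return scores

# Toy example: identity information concentrated in a band around the eyes.
info = np.zeros((120, 100))
info[40:55, 25:75] = 1.0                   # hypothetical "eyes" band
best_y, best_x = np.unravel_index(np.argmax(foveated_score_map(info)), info.shape)
print(best_y, best_x)                      # predicted best fixation falls near the band
```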

Earlier research from the lab [1] has shown that for many common and important face recognition tasks, the first eye movement consistently targets the same location on the face (the preferred fixation location, or PFL). This face-specific oculomotor strategy has a deep computational connection to the adult brain's face-specific information processing: face identification performance peaks when observers look at the PFL and deteriorates rapidly as gaze moves away from it. However, little is known about whether humans also use specialized oculomotor strategies for other face categorization tasks.

In the first paper, VIU graduate student Puneeth Chakravarthula, along with co-authors Yuliy Tsank and Miguel Eckstein, characterized how the performance of Indian subjects on a North Indian vs. South Indian ethnic-group categorization task varied with gaze position on the face. They found that North and South Indian faces differ only subtly, and that the choice of fixation had little influence on task performance. Accordingly, the first fixations of Indian observers on this task were similar to those in a standard face identification task. These results suggest that, in the absence of a clear benefit to changing eye movement strategy, humans default to the first eye movement location they normally use for face identification.
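To make the "little influence of fixation" result concrete, the following sketch compares categorization accuracy across enforced fixation positions using entirely made-up counts and a generic chi-square test; it is not the paper's data or its statistical analysis.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical correct/incorrect counts for three enforced fixation positions
# (the actual design and numbers are in the paper; these are invented).
counts = {"forehead": (142, 58), "eyes": (146, 54), "mouth": (144, 56)}

for pos, (correct, wrong) in counts.items():
    n = correct + wrong
    print(f"{pos:>8}: accuracy = {correct / n:.2f} (n={n})")

# A flat accuracy profile across fixation positions (large p) is what "choice of
# fixation had little influence on task performance" would look like in the data.
table = np.array(list(counts.values()))
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square test of independence: chi2={chi2:.2f}, p={p:.3f}")
```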


Average first fixations for each face condition. The dashed white line marks the reference position of the eyes in the intact face. The white dots are individual subjects' average first fixations; the red dot is the average across all subjects.
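The averages shown in the figure reduce to a two-level mean, as in this sketch (the fixation array, coordinate convention, and numbers are hypothetical):

```python
import numpy as np

# first_fix[s, t] = (x, y) of subject s's first landing position on trial t,
# in face-centered units (hypothetical data standing in for real eye-tracking records).
rng = np.random.default_rng(1)
first_fix = rng.normal(loc=(0.0, -0.8), scale=0.6, size=(10, 150, 2))

subject_means = first_fix.mean(axis=1)   # "white dots": one mean per subject
group_mean = subject_means.mean(axis=0)  # "red dot": mean across subjects
print(subject_means.round(2))
print(group_mean.round(2))
```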

In the second paper, VIU graduate student Nicole Han, along with co-authors Puneeth Chakravarthula and Miguel Eckstein, investigated how people guide their first eye movements to faces in the periphery. A majority of people direct their first eye movement to a featureless point just below the eyes that maximizes accuracy in recognizing faces [2, 3]. However, the properties of the face that guide this crucial initial eye movement are unknown. To assess the roles that individual features and their spatial configuration play in eye movement guidance, face images were manipulated so that some features were missing or jumbled. Results from a face identification task with these manipulated faces showed that subjects used the face outline, the individual features, and their spatial configuration to guide their eye movements. The position of the eye region was a good predictor of fixation location, and eliminating the eyes or altering their position produced the largest drop in face identification performance. The presence of the eye region also most strongly reduced fixation variability, with the nose and mouth contributing to a lesser degree. Measurements of the detectability of single facial features showed that the eyes remain the most visible in the periphery, providing a strong sensory signal to guide the oculomotor system.
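The feature-detectability comparison at the end rests on a standard signal-detection measure (d'); a minimal sketch with invented yes/no detection counts, not the paper's data, might look like this:

```python
from statistics import NormalDist

def dprime(hits, misses, fas, crs):
    """Detection sensitivity d' from hit and false-alarm counts,
    with a standard log-linear correction so rates of 0 or 1 stay finite."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (fas + 0.5) / (fas + crs + 1)
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for detecting single features shown in the periphery:
# (hits, misses, false alarms, correct rejections) per feature.
counts = {"eyes": (88, 12, 20, 80), "nose": (70, 30, 25, 75), "mouth": (74, 26, 24, 76)}
for feature, c in counts.items():
    print(f"{feature:>5}: d' = {dprime(*c):.2f}")
```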