I'm a Santa Barbara native, graduating from Santa Barbara High School before earning my bachelor's (B.S. in Electrical & Computer Engineering, ECE), master's (M.S. in ECE under Umesh Mishra), and doctoral degrees (Ph.D. in Psychological & Brain Sciences), all at UCSB.
My path to studying cognitive and perceptual science was neither short nor direct. After passing my qualifying exam two years into the ECE Ph.D. program, I took a year to explore other fields, careers, and, most importantly, myself. During this sabbatical of sorts, I had the great fortune of taking a course on human perception with Jack Loomis. I immediately fell in love with the prospect of understanding how the brain gives rise to our thoughts, experiences, and actions: aspects of ourselves that often elude introspective deconstruction yet become tractable and coherent in the languages of math and science. Dr. Loomis, on the cusp of retirement, directed me to Miguel Eckstein, a young professor with a computational bent. After talking with Dr. Eckstein about my interest in human perception and my computational expertise, I began working on a project investigating the mechanisms underlying the perceptual learning of faces (see Peterson, Abbey, & Eckstein, 2009). I then officially entered the Ph.D. program in the Department of Psychology (soon to become the Department of Psychological and Brain Sciences) in the fall of 2005 and received my doctorate in 2012.
I continued with the Vision and Image Understanding (VIU) lab as a postdoc for about a year while I plotted my next career move. This patience paid off, and I (finally) left Santa Barbara for Boston and a five-year postdoc working with Nancy Kanwisher at MIT, in the Department of Brain & Cognitive Sciences and the NSF Center for Brains, Minds, & Machines. But the pull of the West Coast proved strong, and I returned to Santa Barbara and UCSB to take my current position in the VIU lab as a Project Scientist with the Institute for Collaborative Biotechnologies in June 2019.
My research has focused, and continues to focus, on questions arising from my Ph.D. work in the VIU lab. We found that each person has a particular place they consistently look when trying to identify someone (termed their preferred fixation location, PFL). Some people look up toward the eyes, while others look down toward the nose or mouth. We also found that a person's ability to recognize a face depends strongly on where on the face they are looking: each person has a particular location that, when fixated, maximizes their face recognition performance (termed the optimal fixation location, OFL). Critically, these stable individual differences in face-looking behavior (PFL) and gaze-point-specific face recognition ability (OFL) are tightly linked: each individual recognizes faces best when looking at their preferred fixation location (i.e., the PFL and OFL are matched).
This paradigm touches on several major topics in cognition, perception, and action, and continues to offer novel explanations for phenomena that have not previously been well understood. For instance, what is the encoding format of face recognition? Our results suggest that, contrary to standard models, faces are encoded with a high degree of retinotopic specificity, even at higher levels of processing. Further, the synergistic relationship between gaze behavior and the retinotopic tuning of face encoding suggests that face recognition is optimized through a dedicated, face-specialized network spanning (at the very least) the visual processing and eye movement systems. Consistent fixation of the PFL means that the input the brain receives when looking at a face, a stimulus that dominates our visual experience from birth, varies dramatically between individuals. Taken together, these findings lead us to ask how this inter-individual variation in retinotopic experience may shape how we look at the world, how our brains encode visual information, and ultimately, how we perceive and experience the world.