VIU postdoc Weimin Zhou presents neural network/reinforcement learning-based eye movement model at SPIE Medical Imaging 2022

The work uses reinforcement learning with a Q-network to approximate, and then generalize beyond, ideal visual search fixation strategies

March 16, 2022
a) Fixation distributions for three search models: maximum a posteriori (MAP; top), Q-network (Q-net; middle), and the Ideal Searcher (IS; bottom). b) Target localization performance, in terms of proportion correct responses (PC), for the MAP, Q-net, and IS models. Q-net was superior to MAP and approached the IS performance ceiling.

VIU postdoc Dr. Weimin Zhou presented a reinforcement learning (RL) method at the SPIE Medical Imaging 2022 conference that employs a Q-network to approximate the Bayesian Ideal Searcher (IS) in visual search tasks. The work showed that, for dynamic noise backgrounds in which the IS can be determined analytically, the Q-network searcher produced eye movement decisions consistent with the ideal model. Crucially, the data-driven nature of RL methods allows the proposed Q-network to generalize to cases in which the IS cannot be determined analytically. The work demonstrates the great potential RL-based methods hold for estimating optimal eye movement plans for clinically relevant visual tasks with real anatomical backgrounds.
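
As a rough illustration of the general idea, and not of Dr. Zhou's actual implementation, the sketch below shows a minimal deep Q-learning loop in which a small network scores candidate fixation locations from a noisy observation. The grid size, network architecture, signal amplitude, and single-step reward (1 when the chosen fixation lands on the target) are all illustrative assumptions.

```python
# Minimal, illustrative deep Q-learning sketch for choosing a fixation location
# in a noisy image. All sizes, the reward, and the network are assumptions made
# for illustration; they are not the presented model.
import torch
import torch.nn as nn
import numpy as np

GRID = 8                 # candidate fixation locations form an 8x8 grid (assumption)
N_ACTIONS = GRID * GRID  # one action per candidate fixation
SIGNAL = 2.0             # target amplitude added to a white-noise background (assumption)

def sample_case(rng):
    """Return a noisy observation and the index of the true target location."""
    background = rng.standard_normal(N_ACTIONS).astype(np.float32)
    target = int(rng.integers(N_ACTIONS))
    background[target] += SIGNAL
    return background, target

# Small fully connected Q-network: observation -> Q-value per candidate fixation.
q_net = nn.Sequential(
    nn.Linear(N_ACTIONS, 128), nn.ReLU(),
    nn.Linear(128, N_ACTIONS),
)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
rng = np.random.default_rng(0)

for step in range(5000):
    obs, target = sample_case(rng)
    obs_t = torch.from_numpy(obs)

    # Epsilon-greedy choice of the next fixation location.
    eps = max(0.05, 1.0 - step / 2500)
    q_values = q_net(obs_t)
    if rng.random() < eps:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(q_values.argmax())

    # Reward 1 if the fixation lands on the target (single-step episode).
    reward = 1.0 if action == target else 0.0

    # For a one-step episode the TD target reduces to the immediate reward.
    loss = (q_values[action] - reward) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, the greedy policy fixates where the Q-value is highest,
# i.e., where the noisy observation provides the strongest evidence for the target.
```

The presented work differs in that the searcher accumulates evidence over a sequence of fixations and is benchmarked against the analytically derived Ideal Searcher; the single-step setting above is only meant to show the Q-learning mechanics.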