Much of the visual system is organized according to retinotopic visual space, and activation patterns within each retinotopically defined region (e.g., V1) can be considered neural ‘priority maps’: maps of the relative importance of different elements in the visual environment. In my lab’s research, we seek to understand how visual regions index aspects of priority based on image-computable stimulus salience and an observer’s behavioral goals. To this end, we develop and apply computational neuroimaging methods to reconstruct and quantify population-level neural representations and to evaluate predictive neural encoding models. In this talk, I will describe the methods we’ve developed and show results from several key empirical tests of priority map theory, establishing how different retinotopic visual regions in human cortex differentially compute priority maps based on stimulus properties (luminance contrast, salience-defining feature) and task demands (behaviorally relevant location or feature). Additionally, drawing on data acquired in the absence of visual stimulation, I will show how Bayesian generative models reveal that activation patterns in these priority maps support performance on tasks requiring visual working memory. Overall, I hope to convince you that these results support a theoretical framework in which visual-spatial cognition operates via multiple interacting neural priority maps, with different regions preferentially indexing stimulus-driven and task-related aspects of priority.
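To make the reconstruction approach concrete, below is a minimal sketch (not the lab’s actual pipeline) of one common way to reconstruct population-level spatial representations from activation patterns: an inverted encoding model. Voxel responses are modeled as weighted sums of spatially tuned channels, channel-to-voxel weights are fit by least squares on training data, and the fitted weights are inverted to recover a channel-space ‘priority map’ from held-out patterns. All names, parameters, and the simulated data here are hypothetical, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Spatial channel basis: evenly spaced Gaussian "information channels" ---
n_channels = 8
centers = np.linspace(-20, 20, n_channels)      # channel centers, deg visual angle (assumed)
fwhm = 10.0                                     # channel width (assumed)
sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))

def channel_responses(positions):
    """Predicted response of each spatial channel to stimuli at the given positions."""
    return np.exp(-(positions[:, None] - centers[None, :]) ** 2 / (2 * sigma ** 2))

# --- Simulated training data: voxel patterns = channel responses @ weights + noise ---
n_trials, n_voxels = 200, 100
train_pos = rng.uniform(-20, 20, n_trials)
C_train = channel_responses(train_pos)              # trials x channels
W_true = rng.normal(size=(n_channels, n_voxels))    # channels x voxels (ground truth)
B_train = C_train @ W_true + 0.5 * rng.normal(size=(n_trials, n_voxels))

# --- Step 1: estimate channel-to-voxel weights by ordinary least squares ---
W_hat, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)

# --- Step 2: invert the fitted model to reconstruct channel activations on test data ---
test_pos = np.array([-10.0, 0.0, 10.0])
B_test = channel_responses(test_pos) @ W_true + 0.5 * rng.normal(size=(3, n_voxels))
C_recon = B_test @ np.linalg.pinv(W_hat)            # trials x channels reconstruction

# The peak of each reconstructed channel profile should track stimulus position,
# yielding a one-dimensional "priority map" readout over visual space.
for pos, profile in zip(test_pos, C_recon):
    print(f"stimulus at {pos:+5.1f} deg -> reconstructed peak at "
          f"{centers[np.argmax(profile)]:+5.1f} deg")
```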
*Light refreshments will be served.
