How does one know what to look at in a scene? Imagine a "Where's Waldo" game: it is challenging to find Waldo because the picture contains many 'salient' locations, each vying for one's attention. Because one can attend to only a small region of the picture at any given moment, finding Waldo requires directing attention to different locations in turn. One prominent theory of how this is accomplished holds that important locations are identified based on distinct feature dimensions (for example, motion or color), with the locations most distinct from the background being the most likely to be attended. A key component of this theory is that individual feature dimensions (again, color or motion) are computed within their own 'feature maps', which are thought to be implemented in specific brain regions. However, whether and how specific brain regions contribute to these feature maps remains unknown.

The goal of this study is to determine how brain regions that respond strongly to different feature types (color and motion), and that encode the spatial locations of visual stimuli, extract 'feature dimension maps' based on stimulus properties, including feature contrast. The investigators hypothesize that feature-selective brain regions act as neural feature dimension maps and thus encode representations of salient location(s) based on their preferred feature dimension. To test this, the investigators will scan healthy human participants with functional MRI (fMRI) in a repeated-measures design while the participants view visual stimuli made salient by different combinations of feature dimensions. The investigators will employ state-of-the-art multivariate analysis techniques that allow them to reconstruct an 'image' of the stimulus representation encoded by each brain region, and thereby dissect how neural tissue identifies salient locations.
Each participant will perform a challenging task at the center of the screen to ensure that they keep their eyes still and ignore the stimuli presented in the periphery; these peripheral stimuli are used to gauge how the visual system automatically extracts important locations without confounding factors such as eye movements. Across trials and experiments, the investigators will manipulate 1) the 'strength' of the salient locations, based on how different the salient stimulus is from the background; 2) the number of salient locations; and 3) the feature value(s) used to make each location salient. Together, these manipulations will help the investigators fully characterize these critical salience computations in the healthy human visual system.
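The registry entry does not name the multivariate analysis method, but reconstructing an 'image' of a stimulus representation from fMRI voxel responses is commonly done with an inverted encoding model: voxel responses are modeled as weighted sums of spatially tuned 'channels', the weights are estimated from training data, and the model is then inverted on held-out data to recover a channel-response profile that peaks at the represented location. The sketch below illustrates this general idea on synthetic data; every quantity (channel count, tuning shape, noise level) is an illustrative assumption, not a detail from the study protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (all values assumed): voxels respond to stimuli at
# positions in a circular 0-180 deg space, modeled as weighted sums of
# raised-cosine channel basis functions.
n_channels, n_voxels, n_trials = 8, 50, 200
centers = np.linspace(0, 180, n_channels, endpoint=False)

def channel_responses(positions):
    # Raised-cosine tuning curves; the *2 makes the space wrap at 180 deg.
    d = np.deg2rad(positions[:, None] - centers[None, :]) * 2
    return np.clip(np.cos(d), 0, None) ** 5  # shape (trials, channels)

# Simulate training data: random stimulus positions, random true weights,
# plus Gaussian noise standing in for measurement noise in the BOLD signal.
positions = rng.uniform(0, 180, n_trials)
C = channel_responses(positions)                  # (trials, channels)
W = rng.normal(size=(n_channels, n_voxels))       # true channel->voxel weights
B = C @ W + 0.1 * rng.normal(size=(n_trials, n_voxels))

# Step 1 (training): estimate weights by least squares, B ~= C @ W_hat.
W_hat, *_ = np.linalg.lstsq(C, B, rcond=None)

# Step 2 (inversion): from a held-out trial's voxel pattern, recover the
# channel-response profile by solving W_hat.T @ c ~= b.
test_pos = np.array([90.0])
b_test = channel_responses(test_pos) @ W + 0.1 * rng.normal(size=(1, n_voxels))
c_hat, *_ = np.linalg.lstsq(W_hat.T, b_test.T, rcond=None)

# The reconstructed profile should peak near the channel tuned to ~90 deg,
# i.e. the model recovers an 'image' of where the stimulus was.
peak = centers[np.argmax(c_hat.ravel())]
print(peak)
```

The same logic extends to two-dimensional spatial channels, which is what makes it possible to read out *where* a region represents salience, not just *whether* it responds.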
Blood Oxygenation Level Dependent (BOLD) fMRI signal
Timeframe: Through study completion, an average of two weeks
Gaze position
Timeframe: Through study completion, an average of two weeks
Behavioral response (button press)
Timeframe: Through study completion, an average of two weeks