How do we know what is important to look at in the environment? Sometimes we need to look at objects because they are 'salient' (for example, the bright flashing lights of a police car or the stripes of a venomous animal); at other times, we need to ignore irrelevant salient locations and focus only on locations we know to be 'relevant'. These behaviors are often explained by 'priority maps', which index the relative importance of different locations in the visual environment based on both their salience and their relevance.

In this research, we aim to understand how salience and relevance interact in determining what is important to look at. Specifically, we are evaluating the extent to which the visual system computes the salience of objects at locations known to be irrelevant. We are testing the hypothesis that the visual system always computes maps of salient locations within 'feature maps', but that activity from these maps is not read out to guide behavior at task-irrelevant locations.

Participants will view displays containing colored shapes and/or moving dots and report aspects of the visual stimulus (e.g., the orientation of a line within a particular stimulus). We will measure response times across conditions in which we manipulate the presence or absence of salient distracting stimuli and provide various kinds of cues about the potential relevance of different locations on the screen. The rationale is that by measuring changes in visual search behavior (and thereby inferring the computations performed on brain representations), we can determine how these aspects of simplified visual environments shape the brain's representation of important object locations.
This will support future studies using brain imaging techniques aimed at identifying the neural mechanisms supporting the extraction of salient and relevant locations from visual scenes, which can inform future diagnosis and treatment of disorders that impair visual search (e.g., schizophrenia, Alzheimer's disease).
Outcome measures:
- Behavioral response (button press) [Timeframe: through study completion, an average of two weeks]
- Gaze position [Timeframe: through study completion, an average of two weeks]