Adaptive Representation and Control in Vision
24th Symposium: June 18-20, 2004
Perceptual Learning—Richard Aslin, Session Chair
Yang Dan, University of California, Berkeley
Timing-dependent plasticity in visual cortex
In spike-timing-dependent plasticity (STDP), the direction and magnitude of synaptic modification depend on the relative timing of pre- and postsynaptic spiking. I will discuss our recent results on stimulus-induced functional modification in the visual cortex that is believed to be mediated by STDP of intracortical connections. First I will describe functional changes in the adult cortex induced by asynchronous visual conditioning. Then I will focus on two recently discovered V1 receptive field properties that are likely to result from STDP during development. These receptive field properties may account for two known visual illusions.
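The timing dependence described above is commonly summarized by an exponential STDP window: potentiation when the presynaptic spike precedes the postsynaptic spike, depression for the reverse order, with magnitude decaying with the timing difference. The sketch below is a generic illustration of that canonical rule; the amplitudes and time constants are illustrative placeholders, not the specific parameters discussed in the talk.

```python
import math

def stdp_weight_change(delta_t_ms, a_plus=0.01, a_minus=0.012,
                       tau_plus=20.0, tau_minus=20.0):
    """Synaptic weight change for one pre/post spike pair.

    delta_t_ms = t_post - t_pre. Positive values (pre before post)
    yield potentiation; negative values (post before pre) yield
    depression. Amplitudes and time constants are illustrative.
    """
    if delta_t_ms > 0:
        return a_plus * math.exp(-delta_t_ms / tau_plus)
    elif delta_t_ms < 0:
        return -a_minus * math.exp(delta_t_ms / tau_minus)
    return 0.0
```

Note that the magnitude of the change falls off as the spikes move apart in time, so only near-coincident pre/post pairings modify the synapse appreciably.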
Charles Gilbert, Rockefeller University
Neural mechanisms of perceptual learning
The representation of form along the visual pathway is usually thought of in feedforward terms, with primary visual cortex (V1) representing simple stimuli such as oriented line segments, and higher order areas representing complex shapes by linking the simpler components. In fact, however, cortical circuits enable neurons to integrate information over large parts of visual space, and to be selective for complex visual stimulus configurations. The relationship between these circuits, the higher order receptive field properties of neurons in V1 and the geometry of natural scenes suggests that visual experience early in life encodes information about the structure of the natural world. The plasticity of these circuits, seen in axonal sprouting and synaptogenesis accompanying functional recovery following retinal lesions, suggests a mechanism that may be general to experience dependent cortical plasticity. Visual cortical plasticity in the normal adult brain allows one to discriminate learned shapes through a process known as perceptual learning. In visual search tasks, learned shapes become recognized rapidly and in parallel with numerous distractors, a process thought to involve retinotopic maps. Trained shapes take on a "pop-out" quality, whereas untrained shapes require a more effortful, serial search. fMRI studies suggest that while before training lateral occipital cortex integrates information from V1/V2 in a process coordinated by the attentional network, after training it becomes less involved as the shapes become more fully represented in the earlier, retinotopic visual cortical areas. At the level of individual V1 neurons in animals trained on a shape discrimination task, we see a central role of top-down influences in the representation of trained information. With training, V1 neurons adopt novel functional properties related to the attributes of the trained shapes.
These properties are only present, however, when the animal performs the trained task, and neurons respond very differently to an identical visual stimulus when the animal performs a different task. Our findings suggest that the output from V1 reflects both sensory and behavioural context, which may reflect an interaction between feedback pathways to V1 and local circuits within V1. These contextual influences are subject to learning, leading to plasticity of function and circuits, even within V1, that extends throughout life.
Takeo Watanabe, Boston University
Perceptual learning without perception is not passive
The brain demonstrates an amazing ability to become increasingly sensitive to important stimuli. It is often claimed that we become more sensitive only to the critical signals in the tasks we attend to. However, our recent series of experiments has shown that perceptual learning occurs with little attention. First, mere exposure to sub-threshold and task-irrelevant motion coherence signals led to enhanced sensitivity to the motion direction. This finding indicates that attention is not necessary for perceptual learning (Watanabe, Nanez & Sasaki, 2001). Second, exposure to two types of task-irrelevant motion that are processed at different levels of visual processing improved sensitivity only at the lower level. These results suggest that task-irrelevant perceptual learning occurs at a very low level (Watanabe et al, 2002). Third, we addressed the question of whether such task-irrelevant learning occurs purely passively (caused by stimulus exposure alone). During exposure, we presented four different directions of motion an equal number of times, but the direction of interest (DOI) was paired with the task targets. If learning is purely passive, thresholds should improve equally for all the presented directions. Surprisingly, the threshold improved only for the DOI. These results show that learning of a task-irrelevant and sub-threshold feature is not purely passive, but occurs only when the feature is correlated with a task target (Seitz & Watanabe, 2003). Based on these findings, we propose a model in which diffuse reinforcement learning signals play an important role, complementary to focused attention, in perceptual learning.
Marvin Chun, Yale University
Attentional modulation of repetition attenuation effects
Described variably as the repetition suppression, attenuation, or adaptation effect, the reduction in neural activity to repeated stimuli versus novel stimuli provides a marker for stimulus-specific perceptual learning. This "adaptive filtering" mechanism allows the visual system to distinguish familiar and novel stimuli (Li, Miller, & Desimone, 1993). One fundamental question concerns whether this type of perceptual learning is obligatory and passive or whether it can be influenced by task demands. Three functional magnetic resonance imaging (fMRI) experiments will demonstrate that 1) attention and task-relevance modulate the attenuation effect to repeated images, 2) attention is important for both encoding and expression of learning, and 3) the learning is dependent on perceptual load but not working memory load. In sum, attention and task-relevance actively govern when neural representations become altered to repeated perceptual input.
Memory Signals & Representation—Charlie Duffy, Session Chair
Tatiana Pasternak, University of Rochester
Cognitive influences in cortical area MT
During a visual working memory task many neurons in area MT are active while the monkeys remember the motion of the previously viewed sample stimulus in preparation for comparing it to the upcoming test stimulus. Activity during the 1.5 sec memory period (the delay) consists of a brief activation early in the delay, followed by prolonged suppression and subsequent reactivation in anticipation of the upcoming test. Throughout most of the delay this activity reflects the direction of the remembered sample and some aspects of this activity are modulated by the time of occurrence and the spatial location of the anticipated test. Furthermore, many neurons respond in a direction selective fashion to motion stimuli presented at distant locations contralateral to their classical receptive fields. These remote responses have substantially longer latencies than the responses to the same stimuli placed in the receptive field. The presence of long latency stimulus specific responses to remote targets and modulation of delay activity by the demands of the behavioral task point to "top-down" influences from regions monitoring large portions of the visual field and informed about the cognitive demands of the task (e.g. prefrontal cortex). Our results demonstrate that this retinotopic mid-level sensory cortical area participates in all aspects of the task requiring processing and retaining of information about visual motion, even when behaviorally relevant stimuli appear at locations far removed from its classical receptive fields.
Carlos Brody, Cold Spring Harbor Laboratory
Dynamical systems in biology: a neural model of sequential discrimination
Sequential discrimination tasks are widely used in psychophysical studies. In a typical such task, a subject is presented with a first stimulus (f1), and then, after a delay of a few seconds, with a second stimulus (f2), after which the subject must make a decision based on a comparison of the two (f2 > f1?). Sequential discrimination thus requires at least three components: loading working memory with a particular value (f1), storing that value over a few seconds, and then computing with the stored value, by comparing the second stimulus (f2) to the memory of f1. We present a remarkably simple model, consistent with neurophysiological recordings in prefrontal cortex of monkeys performing a sequential discrimination task, that is able to carry out all three components of the task within a single, integrated, framework.
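The three components identified above can be caricatured in a few lines of code. This is a deliberately generic sketch of the load/store/compare structure, with a hypothetical leak parameter standing in for imperfect memory; it is not the dynamical-systems model presented in the talk.

```python
def sequential_discrimination(f1, f2, delay_steps=30, leak=0.0):
    """Decide whether f2 > f1 after a delay.

    Three components: load f1 into a memory variable, hold it across
    the delay (optionally decaying at rate `leak`, an illustrative
    parameter), then compare f2 against the stored value.
    """
    memory = f1                       # component 1: loading working memory
    for _ in range(delay_steps):      # component 2: storage over the delay
        memory += -leak * memory      # perfect hold when leak == 0
    return f2 > memory                # component 3: comparison with f2
```

With a nonzero leak, the stored value of f1 drifts toward zero during the delay, which is one simple way such a model can produce systematic comparison biases.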
Wendy Suzuki, New York University
Associative memory signals in the monkey medial temporal lobe
To examine the patterns of neural activity during associative memory formation, we trained two monkeys to perform a location-scene association task. The activity of individual hippocampal neurons was recorded as monkeys learned which one of four identical targets superimposed on a complex visual scene was associated with reward. Hippocampal neurons signaled new learning with dramatic changes in their stimulus-selective response properties. This changing neural activity was significantly correlated with the animal's behavioral learning curve. We call these cells "changing" cells. We found that about half of the hippocampal changing cells change their neural activity before the animal learns the associations, while the other half change after learning. In contrast, preliminary data in the perirhinal cortex suggest that most perirhinal cells tend to change after learning. Taken together, these findings suggest that of the two areas, the hippocampus may be more involved in the early formation of new associative memories compared to the perirhinal cortex.
Mark D'Esposito, University of California, Berkeley
Neural mechanisms of top-down control of goal directed behavior
Evidence from neuropsychological, electrophysiological and functional neuroimaging studies supports the notion that the prefrontal cortex (PFC) is necessary for temporarily maintaining relevant information in an active state, a process that is critical for the voluntary control of behavior. The extensive reciprocal connections from PFC to virtually all cortical and subcortical structures place it in a unique anatomical position to monitor and manipulate diverse cognitive processes. However, little is known about the differential contribution of PFC versus other brain regions in implementing cognitive control. In this talk, I will present evidence from several event-related fMRI and ERP studies that support a model of cognitive control in which PFC biases activity in posterior stimulus-specific association cortex in favor of behaviorally relevant information. Moreover, the temporal dynamics of the signal from the PFC vs. posterior stimulus-specific association cortex is consistent with perceptually-driven bottom-up flow of information when encoding representations that must be maintained, and internally-driven top-down flow of information when decisions and actions are made based on maintained representations.
Working Memory & Control—Daphne Bavelier, Session Chair
Jun Tanji, Tohoku University
Participation of the dorsolateral prefrontal cortex in time keeping and categorized motor-sequence planning
Although the role of the dorsolateral prefrontal cortex (dlPF) in the selection, retention, and modification of sensory or memorized information has been reported extensively, its participation in many other cognitive processes remains to be explored. From this viewpoint, I introduce results of two recent studies from our laboratory. In the first series of experiments, monkeys were trained to receive a visual signal and perform a key-release movement in accordance with three different time requirements: 1) within 1 s after the termination of a visual signal, 2) with a minimum waiting time of 4 s, and 3) with a minimum waiting time of 8 s. We found that dlPF neurons represented the time requirement in two different manners: the first group differentiated the reception of time signals, while the second group signaled the termination of the waiting time. In the second series of experiments, monkeys were trained to perform a series of four movements in eleven different temporal orders. Correct sequences were initially instructed with visual signals but had to be memorized while a particular sequence was performed four times. Thereafter, the sequential movements were performed from memory. During the period when monkeys were preparing to start a memorized four-movement sequence, we found changes in the activity of dlPF neurons. The most interesting property of this preparatory activity was its selectivity for a particular category of sequences to be performed: four repetitions of one movement, an alternating sequence of two movements, or a repetition of a pair of two movements. This finding suggests the participation of the dlPF in preparing or planning a category of motor sequence, rather than a particular sequence itself.
Earl Miller, Massachusetts Institute of Technology
The prefrontal cortex: concepts, rules and cognitive control
What controls your thoughts? How do you focus attention? How do you know how to act while dining in a restaurant? This is cognitive control, the ability to organize thought and action around goals. Results from our laboratory have shown that PFC neurons have properties commensurate with a role in "executive" brain function. They are involved in directing attention, in recalling stored memories, predicting reward value, and they integrate the diverse information needed for a given goal. Perhaps most importantly, they transmit acquired knowledge. Their activity reflects learned task contingencies, concepts and rules. In short, they seem to underlie our internal representations of the "rules of the game." This may provide the foundation for the complex behavior of primates, in whom this structure is most elaborate.
Todd Braver, Washington University
Dual mechanisms of cognitive control
Two distinct mechanisms of cognitive control - proactive and reactive - are described. Proactive control involves the sustained maintenance of task-relevant context in order to prepare attention towards upcoming events or actions. Reactive control involves the transient re-activation of this information on a just-in-time basis in order to resolve interference or conflict when it is detected. Evidence from a number of studies will be presented that suggests these two control mechanisms can be distinguished along a number of different dimensions, including neural substrates (lateral prefrontal cortex, anterior cingulate cortex), temporal dynamics, and computational properties. Both forms of control strategy may be in place during performance of many cognitive tasks, such as those involving selective attention and working memory, but subtle task and individual difference factors affect their relative weighting. I will argue that such situational and individual variability in control processes is a critical factor that must be successfully incorporated into any viable mechanistic model of cognitive control.
Yuhong Jiang, Harvard University
Role of learning in visual attention and working memory
Extensive research in psychology and neuroscience has been devoted to understanding visual attention and working memory for a randomly generated, novel scene. Yet in everyday life, the visual environment often stays constant for an extended period of time, providing ample opportunity for learning. This talk summarizes how visual attention is changed when a display is presented for a second time, and how the capacity of visual working memory is changed on repeatedly encountered displays. I show that (1) the visual system is extremely efficient at learning from repeated displays, and (2) such learning enhances attention and working memory only when the repeated display is predictive of the target information. Repeated displays do not support more efficient visual search or change detection if the target object changes from trial to trial. I will also present preliminary neuroimaging evidence for the dissociation between visual perceptual familiarity and associative learning.
Decision Making—Alex Pouget, Session Chair
Michael Shadlen, University of Washington
A neural mechanism for making decisions
Neurobiology is beginning to furnish an understanding of the brain mechanisms that give rise to such higher cognitive functions as planning, remembering, and deciding. Progress has come about mainly by measuring the electrical activity from parts of the brain that lie between the sensory and motor areas. The neurons in these brain areas operate on a time scale that is not controlled by external events: their electrical activity can outlast sensory input for many seconds, and they do not cause any overt change in behavior. Put simply, these neurons play neither a purely sensory nor a purely motor role but appear instead to control mental states. My lecture will focus on neurons in the parietal lobe that underlie a simple kind of decision-making—forming a commitment to one of two competing hypotheses about a visual scene. We have discovered that these neurons work by accumulating "evidence" from the sensory cortex as a function of time. The brain makes a decision when the accumulated evidence represented by the electrical discharge from these neurons reaches a criterion level. These neurons therefore explain both what is decided and when a decision is reached. Interestingly, the neural computations that underlie such a decision process were anticipated during WWII by Alan Turing and Abraham Wald. Turing applied this tool to break the German Navy's Enigma cipher, while Wald invented the field of sequential analysis. In addition to mathematical elegance and winning wars, our experiments suggest that this computational strategy may lie at the root of higher brain function.
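The accumulation-to-bound computation described above, closely related to Wald's sequential probability ratio test and to drift-diffusion models, can be sketched as a noisy random walk that terminates when it crosses a criterion. The drift, noise, and bound values below are illustrative, not fitted to any data from the talk.

```python
import random

def accumulate_to_bound(drift=0.1, noise=1.0, bound=10.0,
                        seed=0, max_steps=10000):
    """Accumulate noisy momentary evidence until a bound is crossed.

    Positive drift favors the "up" hypothesis. Returns the choice
    and the number of steps taken, so the same mechanism accounts
    for both what is decided and when the decision is reached.
    """
    rng = random.Random(seed)
    evidence, t = 0.0, 0
    while abs(evidence) < bound and t < max_steps:
        evidence += drift + rng.gauss(0.0, noise)  # momentary evidence sample
        t += 1
    return ("up" if evidence >= bound else "down"), t
```

Raising the bound trades speed for accuracy: decisions take longer but are less often driven by noise, which is the hallmark of sequential-sampling accounts of choice behavior.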
Hauke Heekeren, Berlin NeuroImaging Center
Perceptual decision-making in the human brain
Findings from single-cell recording studies suggest that a comparison of the outputs of different pools of selectively tuned lower-level sensory neurons may be a general mechanism by which higher-level cortical regions compute perceptual decisions. For example, when monkeys must decide whether a noisy field of dots is moving upward or downward, a decision can be formed by computing the difference in responses between lower-level neurons sensitive to upward motion and those sensitive to downward motion. I will present fMRI evidence that even for high-level object categories, the comparison of the outputs of different pools of selectively tuned neurons could be a general mechanism by which the human brain computes perceptual decisions. Furthermore I will argue that the posterior dorsolateral prefrontal cortex has general decision-making functions, independent of stimulus and response modalities.
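The pooled-comparison mechanism described above can be sketched minimally: average the responses of each selectively tuned pool and take the sign of the difference. The pool contents in the usage example are hypothetical firing rates, not recorded data.

```python
def pool_decision(up_pool_rates, down_pool_rates):
    """Decide motion direction from the difference between the mean
    firing rates of two pools of selectively tuned sensory neurons."""
    up_mean = sum(up_pool_rates) / len(up_pool_rates)
    down_mean = sum(down_pool_rates) / len(down_pool_rates)
    return "up" if up_mean - down_mean > 0 else "down"
```

For example, `pool_decision([12.0, 15.0, 11.0], [8.0, 9.0, 10.0])` yields "up". The same subtractive readout applies whether the pools are tuned to motion directions or, as argued in the talk, to high-level object categories.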
Jeffrey Schall, Vanderbilt University
On the neural basis of deciding, choosing, and acting
Recent research has provided significant new insights into how the brain arrives at decisions, makes choices, produces actions and evaluates whether goals are achieved. Neural correlates of each of these processes have been identified through noninvasive and invasive methods of monitoring and manipulation. As illustrated with results from our laboratory collected during visual search and stop signal tasks, single neurons can convey signals apparently sufficient to mediate these processes. These results will be used to highlight two overarching issues. First, I will discuss particular limits of claims of correspondence between neural events, overt behavior and hypothesized covert processes. Second, I will distinguish effortful deciding from simple choosing and suggest that much of the neurophysiological data framing this field has been obtained from monkeys while they are choosing but not deciding. These issues must be clarified to reconcile intentional reasons with neural causes.
Daeyeol Lee, University of Rochester
Prefrontal cortex and dynamic decision making
Decision making refers to an evaluative process by which one of several alternative actions is selected. When an animal interacts with the same static environment repeatedly, an optimal decision that delivers the maximum payoff can be easily discovered. However, dynamic environments require the animal to adjust its strategy continually based on the outcomes of its previous decisions. Reinforcement learning algorithms provide a framework to study the decision-making processes and often converge on optimal strategies even in dynamic environments. Nevertheless, whether and how reinforcement learning algorithms are implemented in the brain is not well understood. An interesting class of dynamic decision-making problems occurs when the environment includes multiple decision-making agents that can adjust their individual decision-making strategies based on the observation of each other's behavior. Although game theory provides normative solutions for such decision making (e.g., Nash equilibrium), people often deviate systematically from such predictions. This might be because equilibrium strategies are difficult to discover through the dynamics of learning mechanisms. We investigated the role of primate prefrontal cortex in decision making during a strictly competitive game, known as matching pennies. Monkeys performed an oculomotor free-choice task in which a saccade to one of the two alternative visual targets was rewarded only when the computer opponent selected the same target. The computer was programmed to exploit statistical biases in the animal's choice and reward sequences. As in reinforcement learning, the animal's choice during this game was biased by its choice and reward history in addition to the strategies of its opponent. Furthermore, neurons in the dorsolateral prefrontal cortex encoded the animal's past decisions and payoffs as well as their conjunctions, providing signals necessary to update the estimates of expected reward.
Thus, prefrontal cortex might play a key role in optimizing decision-making strategies.
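As a rough illustration of the reinforcement-learning framing of this task (not the authors' actual analysis or their computer opponent), the sketch below updates action values from choice and reward history with a delta rule and selects saccade targets via a softmax. Against an unexploitable random opponent, the equilibrium reward rate in matching pennies is 0.5; the learning rate and inverse temperature are illustrative.

```python
import math
import random

def softmax_choice(values, beta, rng):
    """Choose between two targets with probabilities given by a
    softmax over action values (beta = inverse temperature)."""
    p_left = 1.0 / (1.0 + math.exp(-beta * (values["left"] - values["right"])))
    return "left" if rng.random() < p_left else "right"

def play_matching_pennies(n_trials=1000, alpha=0.2, beta=3.0, seed=0):
    """Simple reinforcement learner playing matching pennies against a
    random opponent; reward is delivered when both pick the same target.
    Returns the learner's average reward rate."""
    rng = random.Random(seed)
    values = {"left": 0.0, "right": 0.0}
    total_reward = 0.0
    for _ in range(n_trials):
        choice = softmax_choice(values, beta, rng)
        opponent = rng.choice(["left", "right"])
        reward = 1.0 if choice == opponent else 0.0
        # delta rule: move the chosen action's value toward the outcome
        values[choice] += alpha * (reward - values[choice])
        total_reward += reward
    return total_reward / n_trials
```

A more faithful opponent would, as in the experiment, detect and exploit serial dependencies in the learner's choice and reward sequences, which is precisely what pressures the learner toward the stochastic equilibrium strategy.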
Cortico-cortical Interactions—Dana Ballard, Session Chair
Matt Wilson, Massachusetts Institute of Technology
Peter Dayan, University College London
Acetylcholine, norepinephrine, uncertainty and attention
The neuromodulators acetylcholine and norepinephrine have long been supposed to be critically involved in attentional processes; however, there is little consensus on their precise computational functions. We propose that these neuromodulators act as uncertainty signals: acetylcholine reports expected uncertainty, coming from known variability or indeterminacy within a behavioral context; norepinephrine signals unexpected uncertainty, as when sensory observations strongly contradict expectations. A precise interaction between these formally distinct sorts of uncertainty is suggested as influencing attentional selection, enabling near-optimal cortical inference and learning. We illustrate these ideas using a novel task that generalizes well-studied attentional paradigms known to interact with acetylcholine and norepinephrine. We make physiological, behavioral, and pharmacological predictions, based on the model. In particular, simulations of pharmacological intervention, affecting either or both neuromodulators, reveal their part-opponent, part-synergistic interaction.
Shaul Hochstein, Hebrew University
Reverse hierarchy theory of perceptual learning and conscious visual perception
While processing of visual information follows a hierarchical route from the eye, via the LGN, to the primary and then extrastriate cortical regions, recent evidence relating to perceptual learning and to the perceptual elements available to consciousness suggests that the feedback projections may be responsible for an orderly reverse hierarchy of information transfer. Initial feed-forward processing leading to increasingly complex representations is implicit and not under conscious control, but explicit perception begins at the large generalizing receptive fields of high cortical levels and later cascades in reverse direction along the same hierarchy, seeking detailed information available at lower levels. This Reverse Hierarchy Theory (RHT) explains quite naturally a variety of perceptual and learning phenomena. In particular, conscious vision begins with a rapid "vision at a glance" mode, and then progresses to a slower "vision with scrutiny." Feature search "pop-out" is attributed to higher cortical areas, where large receptive fields underlie spread attention and detection of "what without where." RHT predicts rapid categorization, and the initial explicit percept of a single, globally determined image when presented with an ambiguous figure, an ambiguous apparent motion, or a Gestalt form. Temporal and spatial acuity limitations in high-level receptive fields may underlie change- and repetition-blindness, and subsequent visual routines incorporate fine details and overcome illusory conjunctions by focusing attention on specific, active, low-level units. Perceptual learning also follows the Reverse Hierarchy route, with early learning generalizing to new spatial conditions, while later, top-down guided learning incorporates finer details, but is specific to training conditions.
Robert Desimone, National Institutes of Health
Top-down attentional control of synchronized neural activity in visual cortex
A complex visual scene will typically contain many different objects, few of which are currently relevant to behavior. Thus, attentional mechanisms are needed to select the relevant objects from the scene and to reject the irrelevant ones. Brain imaging studies in humans as well as neurophysiological studies in monkeys have identified some of the neural mechanisms of attentional selection within the ventral, "object recognition", stream of the cortex. The results support a Biased Competition model of attention, according to which multiple stimuli in the visual field activate their corresponding neural representations throughout the cortical areas of the ventral stream. These representations engage in mutually suppressive interactions, which are strongest for stimuli occupying the same receptive field. The suppressive interactions are then biased in favor of one of the competing populations by "top-down" signals specifying the properties of the relevant stimulus in a given behavioral context. This top-down bias may originate in parietal and prefrontal cortex, and it is expressed in visual cortex at least in part through an increase in high-frequency (gamma) synchronization of neurons carrying the relevant information. Recent evidence suggests that lower-frequency (beta) synchronization of neural activity may be relevant for suppressing distracters.
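One standard formalization of biased competition models the response to multiple stimuli in a receptive field as a bias-weighted average of the responses each stimulus would evoke alone (after Reynolds, Chelazzi & Desimone, 1999). The sketch below uses that published form with illustrative numbers; it does not capture the synchronization results described in the talk.

```python
def biased_competition_response(drives, biases):
    """Response to several stimuli sharing one receptive field, as a
    bias-weighted average of each stimulus's individual drive.
    Raising a stimulus's bias (e.g., via top-down attention) pulls
    the population response toward the response to that stimulus
    presented alone."""
    weighted = sum(b * d for b, d in zip(biases, drives))
    return weighted / sum(biases)
```

With drives of 40 and 10 spikes/s for a preferred and a poor stimulus, equal biases give a response of 25; doubling the bias on the preferred stimulus raises the response to 30, moving it toward the preferred-alone response, which is the signature suppressive-then-biased interaction of the model.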
Daeyeol Lee, Chair
Symposium supported by NIH Grant R13MH070450