Sabine Kastner, Princeton University
Our natural environments contain too much information for the visual system to represent. Therefore, attentional mechanisms are necessary to mediate the selection of behaviorally relevant information. Much progress has been made in understanding the modulation of neural processing in visual cortex. However, our understanding of how these modulatory signals are generated and controlled remains poor. In the first part of my talk, I will discuss recent functional magnetic resonance imaging and transcranial magnetic stimulation studies directed at topographically organized frontal and parietal cortex in humans to reveal the mechanisms underlying space-based control of selective attention. In the second part of my talk, I will discuss recent monkey physiology studies that suggest an important function of a thalamic nucleus, the pulvinar, in controlling the routing of information through visual cortex during spatial attention. Together, these studies indicate that a large-scale network of high-order cortical as well as thalamic brain regions is involved in the control of space-based selection of visual information from the environment.
David Sullivan, Schepens Eye Research Institute
Dry eye disease (DED), which occurs predominantly in women, is one of the most prevalent eye diseases in the world. This disorder afflicts tens of millions of Americans and as many as 20-50% of elderly populations in Asia. However, there is no safe and effective global treatment for both the signs and symptoms of DED. A number of new therapeutic approaches have recently been proposed to treat the mucin, aqueous and/or lipid deficiencies, as well as the ocular surface damage, associated with DED. However, because the common signs and symptoms of DED do not correlate, it has been exceedingly difficult to demonstrate the clinical efficacy of DED therapeutics. A primary reason for this difficulty has been the lack of appropriate clinical endpoints. To help solve the puzzle of DED treatment, new diagnostic approaches are needed. One such diagnostic, osmolarity, shows a linear relationship to the severity of DED and also correlates with symptoms. The use of osmolarity and/or other clinical endpoints (e.g. interferometry, visual acuity) may permit the development of safe and effective therapeutic approaches for the treatment of DED throughout the world.
James Bisley, UCLA
Visual attention is the mechanism the nervous system uses to highlight specific locations, objects or features within the visual field. This can be accomplished by making an eye movement to bring the object onto the fovea (overt attention) or by increased processing of visual information in more peripheral regions of the visual field (covert attention). We have hypothesized that neurons within the lateral intraparietal area (LIP) of posterior parietal cortex create a priority map, which is used to guide these processes. In this talk, I will use evidence from a visual foraging task and a change detection task to illustrate the role that LIP plays in targeting eye movements and how this role may intersect with LIP’s role in guiding the allocation of covert attention.
Steve Shevell, University of Chicago
A piece of charcoal on a sun-drenched beach appears black while an eggshell in a dimly lit kitchen appears white, yet the amount of light reaching the eye from the charcoal is far greater than from the eggshell. This illustrates a well-known feature of visual perception: percepts from black to white depend on relative, not absolute, amounts of light in a scene. Because our percepts depend on a relative neural representation, which by definition is a comparison between two quantities, a fundamental question is what quantities from the image are compared? One possibility is that light from each point is compared to the light in neighboring retinal areas, as in classical center-surround receptive-field organization. Alternatively, the comparison may depend on an object-oriented representation of the scene that takes account of whether the light in nearby retinal regions is from the same object or different objects. The second possibility would imply that relative neural coding follows perceptual segmentation of objects, which occurs at a cortical level. Normally these two possibilities are confounded, but here they are distinguished with a stimulus that manipulates only the perceptual separation between a center and its surround, while keeping both fields' physical light levels identical and their retinal proximity constant. Experiments show that lateral interactions depend on perceptual segmentation of center and surround, and not on their retinal proximity.
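The charcoal-versus-eggshell point can be made concrete with a back-of-the-envelope calculation. The numbers below are illustrative assumptions of my own (not values from the talk), but they show how a sunlit charcoal can send far more absolute light to the eye than a dimly lit eggshell while still being much darker than its own surround:

```python
# Illustrative numbers (assumptions, not from the talk):
sun_illum = 100_000.0      # lux, sun-drenched beach
kitchen_illum = 50.0       # lux, dimly lit kitchen
charcoal_refl, sand_refl = 0.04, 0.40     # surface reflectances
eggshell_refl, shadow_refl = 0.90, 0.20

# Absolute light reaching the eye (proportional to illuminance x reflectance):
charcoal_lum = sun_illum * charcoal_refl       # ~4000 units
eggshell_lum = kitchen_illum * eggshell_refl   # ~45 units

# Relative coding: each surface compared to its local surround.
charcoal_rel = charcoal_refl / sand_refl       # ~0.1 -> seen as black
eggshell_rel = eggshell_refl / shadow_refl     # ~4.5 -> seen as white

print(charcoal_lum > eggshell_lum)   # far more light from the charcoal...
print(charcoal_rel < eggshell_rel)   # ...yet it is darker relative to its surround
```

The question the talk addresses is precisely *which* surround enters this comparison: the retinally adjacent region, or the perceptually segmented object.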
Kevin Ivers, University of Houston
Glaucoma is a leading cause of blindness worldwide and results in the degeneration of retinal ganglion cell axons and the death of retinal ganglion cells, ultimately leading to irreversible losses in vision. The lamina cribrosa is suggested to be the location of initial insult to retinal ganglion cell axons in glaucoma. Our purpose is to examine longitudinal changes in the lamina cribrosa and optic nerve head in vivo in non-human primates with experimental glaucoma (EG) and determine whether these changes precede in vivo measurements of axonal loss. Adaptive optics scanning laser ophthalmoscope and spectral domain optical coherence tomography images of the anterior lamina cribrosa surface and optic nerve head were acquired before and every 2 weeks after unilateral induction of experimental glaucoma (elevated intraocular pressure) in 5 rhesus monkeys. Mean retinal nerve fiber layer thickness (RNFLT), anterior lamina cribrosa surface depth (ALCSD), and mean lamina cribrosa pore area and elongation were quantified at Follow-Up 1 (FU1; first change in pore geometry, ALCSD, or RNFLT from baseline), and at Follow-Up 2 (FU2; most recent time-point) throughout the progression of the disease. In early EG (i.e. at FU1), a posterior deformation of the ALCS preceded, or was concurrent with, measured axon loss in all early EG eyes. Increases in ALCSD and mean pore parameters occurred prior to a change in RNFLT in 3 monkeys, an increase in ALCSD occurred before a change in RNFLT and mean pore parameters in 1 monkey, and simultaneous changes in ALCSD, mean pore parameters, and RNFLT occurred in 1 monkey. Significant alterations in laminar beams and pores accompanied these early ALCSD changes in most EG eyes. For example, pores tended to be larger and more circular at FU1 (compared to baseline) and became slightly larger and more elongated (or elliptical) by FU2. However, the mean area for pores located closer to the optic nerve head center decreased in 3 of 5 EG eyes at FU2.
In our future work, we aim to correlate structural changes in the lamina cribrosa and ONH to functional changes in vision assessed using standard automated perimetry and electroretinography.
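The FU1 criterion used in this study (the first follow-up at which pore geometry, ALCSD, or RNFLT departs from baseline) can be sketched as a simple scan over per-visit metrics. The function, metric names, thresholds, and toy data below are hypothetical illustrations of the idea, not the study's actual analysis:

```python
def first_change_visit(baseline, followups, thresholds):
    """Return the index of the first follow-up visit (FU1) at which any
    metric deviates from baseline by more than its threshold, else None."""
    for i, visit in enumerate(followups):
        for metric, value in visit.items():
            if abs(value - baseline[metric]) > thresholds[metric]:
                return i
    return None

# Toy values (units and thresholds are assumptions for illustration):
baseline   = {"RNFLT": 100.0, "ALCSD": 150.0, "pore_area": 300.0}
thresholds = {"RNFLT": 5.0,   "ALCSD": 10.0,  "pore_area": 30.0}
followups  = [
    {"RNFLT": 99.0, "ALCSD": 152.0, "pore_area": 305.0},  # within baseline variability
    {"RNFLT": 98.0, "ALCSD": 165.0, "pore_area": 310.0},  # ALCSD exceeds threshold -> FU1
    {"RNFLT": 90.0, "ALCSD": 175.0, "pore_area": 360.0},
]
print(first_change_visit(baseline, followups, thresholds))  # -> 1
```

In practice the per-metric thresholds would be derived from baseline measurement variability rather than fixed by hand.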
Davis Glasser, U. Rochester
The perception of motion is a fundamental visual process. Observers are amazingly sensitive to motion information, and use it for a number of purposes besides motion perception per se. In this talk I will present a line of ongoing research that seeks to clarify the properties and functional organization of the systems that allow us to perceive and use motion information. First, we demonstrated perceptual adaptation to motion following just tens of milliseconds of stimulation, roughly two orders of magnitude less than has been previously reported (Glasser et al., 2011). Our behavioral findings match physiological data from cortical area MT, and suggest that this rapid adaptation could play an important role in everyday vision. Next, we show that while psychophysical spatial suppression prevents observers from discriminating large, high-contrast moving stimuli (Tadin et al., 2003), the processing of second-order motion cues is spared (Glasser & Tadin, 2011). Similarly, the oculomotor system is able to take advantage of perceptually suppressed motion signals (Glasser & Tadin, in prep). We believe these tuning differences reflect distinct functional roles for first- and second-order motion, as well as the different goals of the perceptual and oculomotor systems. In the last part of the talk I will discuss ongoing and future research that seeks to use a surprising motion illusion to clarify the computations underlying psychophysical spatial suppression.
Avi Ben Simon, Pasternak lab
Archerfish are known for their remarkable behavior of shooting water jets at prey hanging on vegetation above the water. Motivated by the fish’s capacity to knock down small prey, either static or moving, as high as two meters above water level, we studied the different factors that contribute to their excellent visual acuity. First, we measured the fish’s static visual acuity behaviorally and found it to be ~0.1 deg and to be facilitated by the area centralis, the only part of the retina with low convergence of photoreceptors onto ganglion cells. Then, by combining anatomy of the area centralis with physiology of retinal ganglion cells, we showed that ganglion cells in the area centralis stream information to the brain at a resolution comparable to that at which it is sampled by the photoreceptors. Moreover, this resolution is approximately the same as that observed behaviorally in the visual acuity test.
Moving to dynamic visual acuity, we studied how the archerfish, a vertebrate with saccadic but no smooth-pursuit eye movements, employs its area centralis for the interception of moving targets. We tracked the gaze direction of fish during interception of moving targets and found that they employ saccadic eye movements based on a prediction of where the target will be when it is hit. These saccades (with gains ranging from 1 to 2) are used to confine the target image within the area centralis long enough to prepare and perform a shot. Together, these results not only explain the archerfish’s ability to exhibit its unique visual behavior, but also make it an excellent animal model for studying how fine visual details are streamed to the brain by the retinal output.
Jesse Schallek, U. Rochester
The retina is one of the most metabolically active tissues in the human body. To fulfill this demand, a network of capillaries delivers nutrients to and removes waste products from the neurons it serves. The vascular network is dynamic and locally shunts blood to more active neural regions when needed, providing a hemodynamic signal that is the basis for popular imaging technologies such as functional magnetic resonance imaging (fMRI) and optical imaging of intrinsic signals. However, the mechanisms by which this blood flow regulation occurs are still not fully understood. The cellular circuits that mediate this control reside deep in the optically opaque brain, and the fine details of the capillary network in the retina have been obscured by insufficient spatial and temporal resolution.
In this line of work, we develop high-resolution adaptive optics technology to objectively report capillary blood flow by directly imaging the movement of single blood cells as they flow through the retinal circulation, an extension of the brain. In the first part of this talk, I will demonstrate new capabilities for direct and non-invasive assessment of retinal blood flow without the use of contrast agents. Second, I will discuss how we are applying these novel strategies to better understand how neurons communicate with the surrounding vasculature by imaging the neurovascular system in the living eye. As an initial step toward this goal, we image transgenic mice expressing fluorescent pericytes, contractile cells that surround capillaries and are hypothesized to control blood flow at the capillary level, while simultaneously imaging blood flow in the same instrument.
Len Zheleznyak, Yoon Lab
FEI monthly research seminar series (4th Fridays of every month)
Location: G-3111 (Flaum Eye Institute ground floor conference room)
noon-12:30pm: social with pizza
12:30-1:30pm: presentation and Q&A
Adam Pallus, U Rochester
Smooth pursuit eye movements allow humans and other primates to keep small moving objects centered on the fovea and are an essential part of human vision and navigation. The majority of research into pursuit attempts to isolate eye movements by restraining the head from moving. When the head is free to move, subjects choose to follow moving targets with coordinated movements of the eyes and head, highlighting the importance of coordination and head movement to this visual task. Some studies compare head-restrained and head-unrestrained pursuit, but few focus on the factors influencing head movement during pursuit. In this talk, I will present research and ongoing analysis of head movement dynamics during pursuit of large-amplitude ramp and step-ramp target presentations. Single-unit recordings from neurons in the nucleus reticularis gigantocellularis (NRG), made while subjects perform this task, aim to confirm this region's role in premotor pursuit circuitry. Finally, microstimulation of NRG during visual pursuit tests hypotheses of eye-head coordination.
Kevin Dieter, University of Rochester
Binocular rivalry (BR) is a unique perceptual phenomenon in which sensation and perception are dissociated. Despite the continuous presentation of unchanging sensory input (disparate images in each of the two eyes), an observer's perceptual experience fluctuates between the two equally plausible interpretations of the input. Perhaps surprisingly, one consistent finding is that this process is largely inaccessible to selective attentional control--in other words, observers cannot willfully cause greater predominance of one or the other image (Meng & Tong 2004), except under certain special circumstances (see Dieter & Tadin 2011). Because this paradigm so starkly illustrates the limits of voluntary attention, it represents a unique opportunity for studying the factors that unlock attentional control.
In a series of studies, we have investigated ways to overcome the limitations of attention in the control of BR. First, we demonstrate that profound voluntary control can be achieved through repeated practice of an attentionally demanding task during BR (Dieter, Melnick, & Tadin 2010, OSA). We also show that individuals with highly trained attentional systems (action video game players) show enhanced attentional control of BR (Dieter, Levi, Bavelier, & Tadin 2012, VSS). Preliminary results from a new study suggest that context can influence predominance, and that the content of contextual items may interact with the ability to attentionally control rivalry. These studies illuminate factors that unlock attentional control over perception, and allow us to study changes in the visual system in response to prolonged attentional training.
Andrew Hollingworth, University of Iowa
Visual attention and visual working memory (VWM) interact closely. However, the precise relation between these two systems is controversial. Several theorists have proposed that visual attention and VWM are equivalent. In this view, VWM is simply the application of visual attention to the internal representation of no-longer-visible stimuli. In the present talk, I will argue that in order to support everyday visual behavior, visual attention and VWM must be dissociable, as many common tasks require retaining particular objects in VWM as attention is directed to different objects and locations in the visual field. I will discuss several lines of research indicating that manipulations of visual attention have a negligible effect on probability of retention and precision for items maintained in VWM. Attention and VWM certainly interact, but they are distinct cognitive systems.
Len Zheleznyak, U. Rochester
Monovision has become a common strategy for patients over the age of 45 years to overcome presbyopia, the loss of accommodation. In traditional monovision, the dominant eye is assigned to distance vision, whereas the non-dominant eye is assigned to near vision with a refractive shift of 1-2 diopters. Recently, we have proposed a modified approach to traditional monovision, in which ocular depth of focus is extended with higher order aberrations to reduce the difference in interocular image quality and improve through-focus visual performance, stereoacuity and binocular summation. Clinically, eye dominance is assessed in monovision candidates using sighting tasks, such as the “pointing-a-finger” or “hole-in-card” tests. However, sighting dominance has been shown to have a poor correlation with patient satisfaction in monovision, and its impact on visual performance is unclear. In this presentation, we will introduce a sensory eye dominance test based on binocular rivalry, enabling the quantification of the degree of eye dominance. The degree of eye dominance was defined as the difference in interocular contrast at which the two gratings (4 cyc/deg) were each perceived for equal portions of time. Subsequently, through-focus high-contrast visual acuity (VA) was measured in modified and traditional monovision with (1) the dominant and (2) non-dominant eye assigned to distance vision. A binocular adaptive optics vision simulator was used to correct 9 cyclopleged subjects’ native aberrations and induce the aberrations for modified and traditional monovision. Sensory eye dominance ranged from weak to strong dominance in the tested subjects. The average difference in through-focus VA between the two distance-eye assignments was -0.01±0.04 logMAR for modified and 0.00±0.03 logMAR for traditional monovision.
The subject with the strongest dominance experienced a loss in intermediate (0.5-1.5D) VA of 0.16±0.09 and 0.05±0.01 logMAR with modified and traditional monovision, respectively, when the dominant eye was assigned to near rather than distance vision. Therefore, eye assignment in monovision does not affect through-focus high-contrast VA for subjects with low and moderate levels of sensory eye dominance; however, strongly dominant subjects should have their dominant eye assigned to distance vision for optimal through-focus VA. It remains to be seen whether visual tasks near contrast threshold exhibit a dependence on the degree of eye dominance.
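The balance-point idea behind the rivalry-based dominance measure (the interocular contrast difference at which each grating is perceived for equal portions of time) can be sketched by linear interpolation over predominance data. The function and the toy data below are hypothetical illustrations, not the study's actual procedure:

```python
def balance_point(diffs, left_frac):
    """Interpolate the interocular contrast difference (left minus right)
    at which the left eye's grating predominates exactly 50% of the time.
    left_frac must be sorted in increasing order."""
    pairs = list(zip(left_frac, diffs))
    for (x0, y0), (x1, y1) in zip(pairs, pairs[1:]):
        if x0 <= 0.5 <= x1:
            # linear interpolation between the two bracketing measurements
            return y0 + (0.5 - x0) / (x1 - x0) * (y1 - y0)
    raise ValueError("50% predominance not bracketed by the data")

# Hypothetical rivalry data as left-eye contrast is varied relative to the right:
diffs = [-0.4, -0.2, 0.0, 0.2, 0.4]         # interocular contrast difference
left_frac = [0.10, 0.30, 0.55, 0.75, 0.90]  # fraction of time the left eye predominates
print(balance_point(diffs, left_frac))      # a small negative balance point
```

The magnitude of the balance point then quantifies the degree of sensory eye dominance: zero means perfectly balanced eyes, and a large offset means one eye needs a substantial contrast handicap before the two percepts share time equally.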
Revathi Balasubramanian, Gan Lab
Ben Masella, University of Rochester
Recent advances in high-resolution retinal imaging have led to the discovery of retinal changes caused by visible light exposures below previously published damage thresholds. These effects were discovered by imaging the retinal pigment epithelium (RPE) using adaptive optics (AO). The changes observed were a transient decrease in RPE autofluorescence (AF reduction) and an apparent disorganization of RPE autofluorescence (RPE disruption). The origins of these changes are not fully understood and their functional consequences have not been previously investigated. Understanding these phenomena is critical for the field of ophthalmic imaging; yet, research to date has been limited to imaging techniques that measure retinal structure and provide little information about cellular function.
A technique for measuring retinal function that can be implemented in a reflectance imaging system is photopigment densitometry. Photopigment densitometry has been applied to the study of retinal function for over 60 years. With the advent of near diffraction-limited ophthalmoscopes, which can resolve individual photoreceptors, it is now possible to apply this technique to investigations requiring high spatial resolution. This thesis describes a study that combined high-resolution retinal imaging with photopigment densitometry to investigate the functional consequences of AF reduction and RPE disruption. An adaptive optics scanning laser ophthalmoscope was adapted to measure the density and regeneration rate of the photopigment rhodopsin. Rhodopsin kinetics were measured before and after a series of retinal radiant exposures that caused various degrees of RPE disruption. No measurable change in rhodopsin recovery rate was found at any exposure level and RPE disruption was found to be visible at exposure levels that did not produce a significant reduction in rhodopsin density.
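The densitometry measurement described above rests on a simple double-pass model: imaging light traverses the photopigment twice, so the log ratio of bleached to dark-adapted reflectance gives twice the single-pass optical density, and regeneration after a bleach is often approximated as first-order. The sketch below uses that textbook model with assumed illustrative numbers (e.g. a ~7 min rod time constant), not the thesis's actual measurements:

```python
import math

def pigment_density(r_bleached, r_dark):
    """Single-pass optical density of photopigment estimated from fundus
    reflectance in the fully bleached vs. dark-adapted states, using the
    simple double-pass densitometry model (an assumption here)."""
    return 0.5 * math.log10(r_bleached / r_dark)

def regenerated_fraction(t_min, tau_min=7.0):
    """Fraction of rhodopsin regenerated t_min minutes after a full bleach,
    assuming first-order kinetics; tau ~7 min is a typical rod value,
    assumed for illustration."""
    return 1.0 - math.exp(-t_min / tau_min)

print(round(pigment_density(0.02, 0.01), 3))  # ~0.151, i.e. roughly rod rhodopsin density
print(round(regenerated_fraction(7.0), 3))    # ~0.632 after one time constant
```

A slowed recovery curve or a reduced density estimate after a radiant exposure would be the functional signature this kind of measurement is designed to detect.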
Additionally, a new effect caused by near-infrared (NIR) illumination was discovered; specifically, a decrease in infrared autofluorescence (IRAF) measured after exposure to NIR illumination at levels well below current recommended limits. Because many retinal imaging systems rely on NIR illumination as a means of avoiding potentially harmful exposures at shorter wavelengths, understanding the source of this IRAF reduction is important for both scientific and clinical imaging. This thesis provides the first description of IRAF reduction, as well as an examination of its basic properties.
Anasuya Das, Huxlin Lab
Patrice Tankam, Optics, University of Rochester
This talk will focus on problems related to human vision, specifically the optical design and characterization of novel diffractive bifocal Intra-Corneal Lenses (ICLs) for the correction of presbyopia. A holy grail of this correction would be a procedure that is less invasive and reversible. Simulations of the optical properties of the ophthalmic implant, in terms of diffraction efficiency and Modulation Transfer Function (MTF), are compared to the experimental characterization. Results show that this novel concept is a promising solution for the correction of presbyopia.
Ethan Rossi, University of Rochester
Aixa Alarcon Heredia, Yoon Lab
Peter Bex, Schepens Eye Research Institute
Letter acuity, the predominant clinical assessment of vision, is relatively insensitive to many eye diseases. While the contrast sensitivity function (CSF) has demonstrated the potential to monitor the slow progress of many blinding eye diseases, current CSF tests lack the reliability or ease-of-use to capture subtle or gradual changes in vision. To improve the current state of clinical vision testing, we have developed and validated a computerized adaptive test and implemented it on a commercial tablet device (iPad) that provides an efficient, easy-to-use and mobile vision assessment.
Celeste Kidd, University of Rochester
Efficient attentional choices require accurate expectations about what is likely to happen in the future. Adults' attention is guided by their substantial experience in the world. Very young children, however, possess far less data. In this talk, I will discuss work that explores the mechanisms that guide young children's early visual attention decisions and subsequent learning. I present eye-tracking experiments in both human and non-human primates which combine behavioral methods and computational modeling in order to test competing theories of attentional choice. I present evidence that young learners rely on rational utility maximization both to build complex models of the world starting from very little knowledge and, more generally, to guide their behavior. I will also discuss recent results from related on-going projects about learning and attention in macaque learners.
Antoine Barbot, NYU
Spatial covert attention is the selective processing of visual information at a given location in space in the absence of eye movements. Covert attention can either be automatically captured by a sudden stimulus (exogenous attention), or voluntarily allocated to a given part of our visual field (endogenous attention). I will present a series of studies that extend our knowledge of how these two types of attention affect visual processing and of the extent to which they differ.
The first part reveals that both exogenous and endogenous attention affect contrast sensitivity to texture-defined information, an intermediate stage of visual processing supporting figure/ground segregation. However, exogenous and endogenous attention have differential effects on texture sensitivity, with endogenous attention being more flexible.
The second part examines how exogenous and endogenous attention affect performance in texture segmentation tasks constrained in spatial resolution. In such tasks, exogenous attention automatically increases resolution, improving performance where resolution is too low (periphery) and impairing perception where resolution is too high (central locations). Here we show that the increase in resolution at the attended area comes with a decrease in resolution at unattended areas, impairing performance where resolution is too low but actually improving performance where resolution is too high. This finding demonstrates that exogenous attention automatically trades off resolution, uncovering a counterintuitive situation in which diverting attention benefits performance. Interestingly, in the same task, endogenous attention has been shown to benefit performance at all eccentricities, suggesting a more flexible mechanism. We investigated how endogenous attention benefits performance across eccentricities using selective adaptation to spatial frequencies. We show that endogenous attention can adjust (increase or decrease) resolution depending on task demands, and does so by affecting the sensitivity of small, high-frequency spatial filters.
Finally, in the third part, I will briefly describe ongoing projects that aim at characterizing the underlying mechanisms of attention on sensitivity and spatial resolution using classification techniques. Altogether, these studies reveal important differences in the mechanisms of exogenous and endogenous attention.
Lu Yin and Jesse Schallek, Merigan Lab & Williams Lab
Gabriel Diaz, Assistant Professor, RIT
Humans' ability to produce rapid, accurate interceptive movements is especially extraordinary when one considers that there is a delay of approximately 160 milliseconds before new visual information can be incorporated into an ongoing action. How is it that, despite these delays, humans regularly manage to catch or hit quickly moving balls? One possibility is that subjects are able to predict future ball position by combining recent sensory information with a learned model of likely target dynamics. Although there is substantial evidence in support of visual prediction, little is known about the underlying mechanisms, or the ways in which prediction shapes our visual and motor behavior. To investigate, subjects were immersed in a virtual racquetball court viewed through a head-mounted display fitted with an eye-tracking device. On each trial, a virtual ball was launched towards the subject and bounced once during its approach. The subject's task was to use a real-world racquet, tracked using motion capture and visually represented in the virtual world, to hit the bounced virtual ball at a nearby target. Just before the ball's bounce on the ground, subjects were found to make predictive eye movements to a location where the ball would arrive shortly after the bounce. By using these eye movements as a behavioral measure of human predictive capabilities, and by manipulating the virtual ball's trajectory and dynamics, we can address fundamental questions regarding visual prediction of a moving object. Here, we present results from an initial investigation, which provides insight into some of the basic parameters of human predictive mechanisms, including the accuracy of visual prediction, its temporal duration, and the information upon which it is based.
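A minimal stand-in for the kind of learned dynamics model hypothesized above is ballistic extrapolation across the ~160 ms visuomotor delay. The function and numbers below are my own illustrative assumptions (no drag, no bounce within the window), not the study's model:

```python
def predict_position(pos, vel, dt=0.16, g=9.81):
    """Extrapolate a ball's 2D position dt seconds ahead under gravity,
    assuming ballistic flight with no drag and no bounce within dt."""
    x, y = pos
    vx, vy = vel
    return (x + vx * dt, y + vy * dt - 0.5 * g * dt * dt)

# A ball 2 m away at 1.5 m height, approaching at 5 m/s and falling at 1 m/s:
print(predict_position((2.0, 1.5), (-5.0, -1.0)))
```

Comparing where subjects actually direct predictive saccades against this kind of extrapolated position is one way to estimate how accurate, and how model-based, human visual prediction is.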
Rebecca Lowery, University of Rochester
Synaptic plasticity is critical for normal and efficient operation of the nervous system throughout the lifespan. While much is known about the functional and structural changes that occur at the synapse during synaptic plasticity, the mechanisms implementing these changes are poorly understood. Despite being classically characterized as immune cells, microglia have recently been shown to play a role in normal brain function by restructuring and removing synapses. Given the novelty of this role, few details are known about the interaction between microglia and synaptic elements, including the signal facilitating recruitment of a microglial process to a synapse. Fractalkine, a chemokine well studied in neuroinflammatory signaling, is particularly well positioned for this role as it signals specifically between neurons and microglia and is known to affect microglial state and chemotaxis. To determine how microglia play a role during synaptic plasticity, we studied ocular dominance plasticity in the developing visual system, a well characterized model of activity-dependent synaptic remodeling. We examined microglial morphology, density, motility, and response properties of neurons in binocular visual cortex of mice that were monocularly deprived and their non-deprived littermates. We found microglial behavioral changes consistent with the idea that microglia play a role in ocular dominance plasticity via a non-traditional activation pathway. Additionally, we observed defects in visual system development in fractalkine receptor knockout mice, suggesting that microglial effects on neural circuit development may be mediated via fractalkine signaling.
Kenny Cheong, Merigan Lab Postdoc Candidate, University of Sydney, Save Sight Institute
In the retina, visual signals are processed such that different aspects of the visual scene are broken up and transmitted along parallel visual pathways to the lateral geniculate nucleus (LGN) of the thalamus. In the LGN, the parallel visual pathways are anatomically segregated into three distinct regions, the parvocellular (P), magnocellular (M) and koniocellular (K) layers, from which the pathways get their names. The P pathway is involved with red-green colour vision and high acuity vision. The M pathway is involved with motion processing, low contrast vision and sparse vision. The K pathway is comprised of a heterogeneous collection of cells with diverse functional properties. The most studied K cell type is the blue-ON cell, which carries colour signals along the blue-yellow colour axis. The K pathway is evolutionarily older than the P and M pathways, yet it is the least understood of the three. For instance, which subpopulations of K cells project to the primary visual cortex, and are thus likely to contribute to perception? What are the functional properties of different subpopulations of K cells? And if the K pathway is evolutionarily ancient, is this reflected in the functional properties of its cells? I will present results from three studies completed as part of my PhD at the University of Sydney, Australia that address these questions. We studied the LGN in anaesthetised common marmosets (Callithrix jacchus) using extracellular recording and stimulation techniques.
Daniel Savage, Knox/Huxlin Lab
James Feeks, University of Rochester
Adaptive optics combined with a scanning laser ophthalmoscope provides the ability to image single cells in the living retina. Further, two-photon fluorescence imaging has recently been achieved as a novel technique for investigating primate and mouse retina, opening up new frontiers for imaging molecules implicated in cellular function. However, all intensity-based methods are subject to system instability, complicating the process of determining changes in longitudinal studies. By instead analyzing the fluorescence lifetime, it is possible to gather robust data that can be attributed to the fluorophores in the retina. A theoretical analysis of the challenges associated with implementing fluorescence lifetime imaging for the first time in AOSLO has been carried out. A novel two-photon adaptive optics scanning laser ophthalmoscope (2PAOSLO) has been designed and is in the process of being built for imaging mouse retina. The instrument will provide the imaging capabilities necessary to implement this novel imaging technique.
Ranjini Kottaiyan, Aquavella/Yoon Lab
Adam Pallus, University of Rochester
The brain employs a variety of eye movements to achieve the goals of the visual system. In natural settings, the brain incorporates coordinated movements of the eyes and head, presumably to better achieve these goals. In gaze pursuit, humans and other primates coordinate head movements with smooth pursuit eye movements to keep the image of a moving target object on the fovea. There is no consensus in the literature regarding how these head movements are programmed, but published models consider gaze pursuit an extension of smooth pursuit and suggest that the velocity of the target is the only input to the system. In this talk, I will present research challenging this idea and suggest an alternative mechanism for driving head movement during gaze pursuit. I will also discuss evidence that a region of the brainstem known as the nucleus reticularis gigantocellularis (NRG) is involved in driving these head movements, in particular evidence obtained by recording the activity of NRG neurons during head movements in awake, behaving monkeys and by stimulating the region to alter ongoing head trajectories.