Talk Series Abstracts 2001
Bruno Averbeck, Univ. of Minnesota, Neural activity during the drawing of geometrical shapes
Ian Cox, Bausch & Lomb Inc., Is soft lens vision soft?
CVS Colloquium: Daniel Wolpert, Forward models in sensorimotor control
Kyros Kutulakos, What do N photos tell us about 3D shape?
Jessica Bayliss, Observations from a brain-computer interface application for environmental control
Randal Nelson, Cubist recognition of 3D objects
CVS Colloquium: Ulf Eysel, Ruhr-University, Lesion-induced plasticity in the visual system
Walter Makous, Visual adaptation: fixed gain in human cones
David Sheinberg, Brown University, From perception to action: The role of temporal cortical neurons in noticing visual stimuli
CVS Research Talk: Jeff Saunders, The role of visual feedback for guiding hand movements
Paul Glimcher, New York University, Decision theoretic analyses of parietal cortex
CVS Research Talk: Jim Ison, A multidisciplinary study of the causes of presbycusis, in human listeners and in animal models of aging
CVS Colloquium Speaker: Robert Wurtz, NIH, The visual motor dialogue between cerebral cortex and the superior colliculus
CVS Research Talk: Randall Hayes, Ideal observers and visual search
P. Read Montague, Neuromodulation through fast dopamine fluctuations
CVS Research Talk: Marc Dubin, The effects of competing visual tasks on MST neuronal responses to optic flow
CVS Research Talk: Keith Schneider, Visual attention and perceptual latency
CVS Research Meeting: Jack Fraser, A discussion of intellectual property and patents
CVS Research Talk: Sue Galvin, Measuring perception without awareness with type II ROC analysis
CVS Research Talk: Jack Werner
CVS Informal Guest Talk: Taosheng Liu, The effect of motion on recognition of novel, 3-dimensional objects
CVS Research Talk: Michael Helmbrecht, A MEMS segmented mirror for adaptive optics
CVS Research Talk: Stacey Choi, The Stiles-Crawford effect of the first kind (SCE-I) in myopic eyes
CVS Research Talk: Phil Kruger, How does the eye know which way to go? Signed signals for reflex accommodation
CVS Research Talk: Phil Lempert, Quantitative assessment of eyes presumed to have amblyopia (Does amblyopia actually exist as a clinical entity?)
Colloquium Speaker: Earl Miller, MIT, Prefrontal cortex and the neural basis of cognitive control
Why the lateral geniculate nucleus of the thalamus is just like the rings of Saturn
Colloquium Speaker: Joe Malpeli, University of Illinois, Urbana-Champaign
In biology, it is usually safe to assume that form is driven by function - but not always. The lateral geniculate nucleus (LGN) relays information from the eye to the cerebral cortex. It consists of layers arranged in two distinct patterns, such that central vision is represented by 6 layers, and peripheral binocular vision by 4. In the rhesus monkey, the boundary between the two laminar patterns runs precisely through small gaps that represent the blind spots in each eye. It is reasonable to wonder what function this change in laminar pattern serves, and why it is related to the blind spot. The rings of Saturn also have a complex morphology, but trying to understand their origin by wondering what function they serve is unlikely to be productive. I will argue that the laminar morphology of the LGN is not driven by functional requirements, and does not result from specific genetic programs, but instead arises from the incidental outcomes of general rules of cell sorting that are not specific to the LGN. Computational models of morphogenesis will be presented that account for major laminar features of the LGN, and for differences in laminar structure between new- and old-world primates. They suggest that the laminar transition in the rhesus LGN is trapped by a developmental singularity (i.e., the blind-spot), analogous to the role of gravitational singularities (i.e., small moons) in determining major subdivisions of Saturn's rings. Morphological details of the transition boundary are consistent with these models.
Thalamic relay functions
Colloquium Speaker: Murray Sherman, Neurobiology, State University of New York, Stony Brook NY
The LGN is the best understood thalamic nucleus. Only about 7% of the input to geniculate relay cells derives from the retina; the rest represents local inhibitory inputs, descending inputs from visual cortex, and ascending inputs from the brainstem. This vast majority of nonretinal inputs (93%) dynamically controls the nature of the geniculate relay, often by regulating membrane properties of relay cells that determine their mode of response to retinal inputs, which dramatically affects the nature of information relayed to cortex. Our studies of the lateral geniculate nucleus of the cat suggest that this dynamic control depends on the animal's behavioral state and represents the neuronal substrate for many forms of visual attention. The LGN is a first-order relay, because it relays subcortical (i.e., retinal) information to cortex for the first time. In contrast, the other main thalamic relay of visual information, the LP-pulvinar complex, seems largely to be a higher-order relay, since much of it seems to relay information from one cortical area to another. Much more cortico-cortical processing may involve these "re-entry" routes than has hitherto been appreciated. If so, the thalamus sits at an indispensable position for cortical processing.
Flexible behavior in monkeys: space, memory and the posterior parietal cortex
Colloquium Speaker: Larry Snyder, Washington University, St. Louis MO
Human behavior is characterized by its flexibility. Our responses in any given situation are not driven only by the stimuli we receive; we also take past experience and current context into account. We have been investigating the extent to which macaque monkeys share this ability, and its neural underpinnings. In this talk I will describe findings from three projects. First, I will compare monkey and human behavior in a task-switching paradigm. Monkeys can switch tasks more easily than humans, but are also more distractible. Second, I will characterize the ability of monkeys to remember spatial information in different frames of reference. Memories are equally accurate for world-fixed locations (locations that remain fixed in the world when the animal moves) and animal-fixed locations (locations that move along with the animal). Surprisingly, however, under certain circumstances memories are less precise for world- than for animal-fixed locations. Finally, I will compare animal behavior when remembering target locations with neural activity in area LIP of the posterior parietal cortex. Internal representations of spatial information found in LIP reflect the decrease in precision seen under world-fixed conditions. However, these internal representations also show an inaccuracy that is not reflected in the behavior. This suggests that information in LIP does not encode world-fixed spatial locations by itself, but instead that this information must be supplemented by information from other brain areas.
Ophthalmology Lunch Discussions
Scanning laser ophthalmoscopy
CVS Research Talk: Luca Brigatti, Assistant Professor, Ophthalmology, URMC, Rochester NY
Scanning laser ophthalmoscopy is a clinical application of confocal scanning laser technology that allows detailed in vivo imaging of the retina and optic nerve (ophthalmoscopy). As a clinical tool, it helps physicians in the diagnosis and follow-up of glaucomatous optic neuropathy; as a research tool, it contributes to the understanding of the anatomy and morphology of the living human eye and of the pathologic changes that occur over time in the diseased eye. A description of its principles will be provided, with examples of clinical applications. A review of the clinical research in this field will also be provided.
Are corresponding points fixed?
CVS Research Talk: Jamie Hillis
Several investigators have claimed that the retinal coordinates of corresponding points shift with vergence eye movements. Two kinds of shifts have been reported. First, global shifts that increase with retinal eccentricity; such shifts would cause a flattening of the horopter at all viewing distances and would facilitate fusion of flat surfaces. Second, local shifts centered on the fovea; such shifts would cause a dimple in the horopter near fixation and would facilitate fusion of points fixated at extreme viewing distances. Nearly all of the empirical evidence supporting shifts of corresponding points comes from horopter measurements and from comparisons of subjective and objective fixation disparity. In both cases, the experimenter must infer the retinal coordinates of corresponding points from external measurements. We describe four factors that could affect this inference: 1) changes in the projection from object to image points that accompany eye rotation and accommodation, 2) fixation errors during the experimental measurements, 3) non-uniform retinal stretching, and 4) changes in the perceived direction of a monocular point when presented adjacent to a binocular point. We conducted two experiments that eliminated or compensated for these potential errors. In the first experiment, observers aligned dichoptic test lines using an apparatus and procedure that eliminated all but the third error. In the second experiment, observers judged the alignment of dichoptic afterimages, a technique that eliminates all four errors. The results from both experiments show that the retinal coordinates of corresponding points do not change with vergence eye movements. We conclude that corresponding points are in fixed retinal positions for observers with normal retinal correspondence.
How to estimate self motion, and how the blowfly does it
Colloquium Speaker: Robert deRuyter, NEC Research Institute
As a fly moves through the world, its retina receives a wildly fluctuating spatiotemporal pattern of optical stimuli. The fly's brain must extract useful features from this input. We are interested in the brain's processing strategies, and particularly in whether the fly's brain approaches optimal behavior. To make this question experimentally accessible, we study a relatively simple but well-defined example, namely the estimation of self motion from visual input. From a direct sampling of the joint statistics of self motion and visual input in natural conditions, we derive an optimal velocity estimator. By recording from fly motion-sensitive neurons, we characterize the fly's motion response and its dependence on contrast and velocity. Comparing the data, we find that the fly and the optimal estimator both have the same characteristic contrast bias in velocity estimation. Moreover, both results show aspects of two well-known motion detection models, namely the Reichardt correlator and the gradient model. It has been known for a long time that the motion response of physiological systems is contrast dependent. Our data lead us to reinterpret this contrast bias as an optimal solution to the motion estimation problem in noisy conditions. Some consequences of this interpretation will be explored.
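The Reichardt correlator mentioned in this abstract has a simple canonical form: each arm low-pass filters (delays) the signal from one photoreceptor and multiplies it with the undelayed signal from a neighboring photoreceptor, and the two mirror-symmetric arms are subtracted. The sketch below is an illustrative textbook version, not the speaker's estimator; the stimulus, filter time constant, and receptor spacing are arbitrary assumptions.

```python
import numpy as np

def lowpass(signal, tau, dt):
    """First-order low-pass filter: the 'delay' arm of the correlator."""
    out = np.zeros_like(signal, dtype=float)
    alpha = dt / (tau + dt)
    for t in range(1, len(signal)):
        out[t] = out[t - 1] + alpha * (signal[t] - out[t - 1])
    return out

def reichardt(left, right, tau=0.02, dt=0.001):
    """Opponent Reichardt correlator: delay-and-multiply in each arm,
    then subtract the mirror-symmetric arm. A positive mean output
    signals motion from the 'left' receptor toward the 'right' one."""
    return lowpass(left, tau, dt) * right - left * lowpass(right, tau, dt)

# A rightward-drifting 4 Hz sinusoidal grating sampled at two nearby
# photoreceptors; the right receptor sees a phase-lagged copy.
t = np.arange(0.0, 1.0, 0.001)
phase_lag = 0.2  # assumed spatial offset between receptors (radians)
left = np.sin(2 * np.pi * 4 * t)
right = np.sin(2 * np.pi * 4 * t - phase_lag)
response = reichardt(left, right)
# After the filter transient, the mean response is positive (rightward).
print(response[200:].mean())
```

Note that the product of two signals makes the output quadratic in stimulus contrast, which is one simple way such a model produces the contrast-dependent velocity estimates discussed in the abstract.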
CVS Research Talk: Geunyoung Yoon, University of Rochester, Rochester NY
It is well known that the human eye suffers from higher-order monochromatic aberrations as well as defocus and astigmatism. The development of technology to correct the eye's higher-order aberrations raises the issue of how much vision improvement can be obtained. An adaptive optics (AO) system that measures and corrects the eye's aberrations can provide supernormal vision, improving both contrast sensitivity and visual acuity beyond what conventional correction methods achieve. These results encourage the development of customized correction methods such as laser refractive surgery, contact lenses, and IOLs to achieve supernormal vision in everyday life. However, several factors, such as photoreceptor sampling, the biomechanical response of the cornea, and chromatic aberration, reduce the benefit of the supernormal vision that customized correction methods could provide.
Color-selective encoding of luminance information
CVS Research Talk: Joe Hardy, University of California, Berkeley
Many current models of early vision posit independent encoding of color and luminance information. In addition, it is supposed that the pathways carrying luminance information subserve functional roles (motion, depth, form) different from those subserved by the color pathway. In contrast to these views, I will present evidence from psychophysical experiments arguing that mechanisms encoding luminance variations in a color-selective fashion are important for the perception of motion and form. In the first set of experiments, we show that alternating the color of a moving luminance pattern creates a temporal-frequency-selective elevation in the threshold for perceiving the direction of motion. Next, we show that a color-selective shift in the apparent spatial frequency of luminance-varying patterns can be produced by adapting to patterns of different colors. Functional dissociations are shown between mechanisms encoding luminance in a color-selective fashion and those encoding luminance in a colorblind fashion.