PROGRAM ABSTRACTS

Predicting the effects of ocular aberrations on visual performance

Larry Thibos, Indiana University
Co-authors: Xu Cheng & Arthur Bradley, Indiana University

Foveal resolution is almost always limited by the eye's optical aberrations. Previous studies of the effect of the lower-order aberrations (defocus and astigmatism) on visual performance have shown that letter acuity is proportional to the magnitude of defocus. This is understandable because, to a first approximation, the angular diameter of the blur circle on the retina for a point source of light is the product of pupil diameter and magnitude of defocus. Thus, for constant pupil diameter, as defocus increases the blur circle grows in size and therefore the dimensions of the smallest resolvable letter must also grow proportionally (Smith et al., 1989, Optom Vis Sci, 66:430-435). However, this explanation lacks generality because it ignores the effects of higher-order aberrations on image quality, fails to embrace the true shape of the point-spread function, and provides no insight into the case of the well-focused eye. Accordingly, the purpose of our study was to characterize the visual effects of various combinations of higher-order and lower-order monochromatic aberrations on visual acuity for monochromatic stimuli. We then attempted to account for these experimental results by quantifying the optical quality of the eye with a variety of scalar metrics (Thibos et al., 2004, J. Vision, 4:329-351). We found that several image-plane metrics designed to quantify the compactness of the eye's point-spread function were good predictors of visual acuity under aberrated conditions, especially those metrics that give less emphasis to the low-intensity tails of the point-spread function (e.g. the visually-weighted Strehl ratio, R=0.8). Similarly, the pupil-plane metrics of wavefront quality that were most predictive were those that give less emphasis to wavefront errors near the pupil margin (e.g. fraction of pupil area for which wavefront quality is relatively good, R=0.76).
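The blur-circle approximation in this argument can be made concrete. As a minimal sketch (standard small-angle, first-order geometry; the function name and example values are illustrative, not the paper's computation):

```python
import math

def blur_circle_arcmin(pupil_mm: float, defocus_diopters: float) -> float:
    """First-order geometric optics: theta (rad) = pupil (m) * defocus (D).

    A diopter is an inverse meter, so the product is a dimensionless
    angle in radians; convert to minutes of arc for comparison with
    letter sizes.
    """
    theta_rad = (pupil_mm / 1000.0) * abs(defocus_diopters)
    return theta_rad * (180.0 / math.pi) * 60.0

# A 3 mm pupil with 1 D of defocus gives a blur circle of ~10.3 arcmin;
# doubling defocus at fixed pupil doubles the blur circle, which is why
# the smallest resolvable letter grows in proportion to defocus.
```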


Human optical image quality and the Spatial Standard Observer

Andrew Watson, NASA Ames Research Center
Co-author: Albert J. Ahumada, Jr., NASA Ames Research Center

The Spatial Standard Observer (SSO) was developed to predict the detectability of spatial contrast targets such as those used in the ModelFest project (Watson, 2000). The SSO is a lumped-parameter model that bases its predictions on visible contrast generalized energy. Visible contrast means that the contrast has been reduced by a contrast sensitivity function (CSF). Generalized energy means that the visible contrast is raised to a power greater than 2 before spatial and temporal integration. To adapt the SSO to predict the effects of variations in optical image quality on tasks, the optical component of the SSO CSF needs to be removed, leaving the neural CSF. Also, since target detection is not the typical criterion task for assessing optical image quality, the SSO concept needs to be extended to other tasks, such as Sloan character recognition.
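The visible-contrast generalized-energy idea can be sketched as follows. This is a schematic illustration rather than the calibrated SSO: the CSF is applied here as an arbitrary per-frequency gain, and the pooling exponent beta is a placeholder value.

```python
import numpy as np

def generalized_energy_response(contrast_img, csf_gain, beta=2.8):
    """Sketch of the pipeline described above.

    1. Visible contrast: attenuate the stimulus contrast by a CSF,
       applied here as a gain on each spatial-frequency component.
    2. Generalized energy: raise |visible contrast| to a power beta > 2
       and pool over space (Minkowski summation).
    csf_gain and beta are illustrative assumptions, not SSO parameters.
    """
    spectrum = np.fft.fft2(contrast_img)
    visible = np.fft.ifft2(spectrum * csf_gain).real
    return np.sum(np.abs(visible) ** beta) ** (1.0 / beta)
```

With a flat gain of 1 the CSF step is a no-op, so the response reduces to the Minkowski pool of the raw contrast, which makes the two stages easy to test separately.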


Visual performance and adaptation to changes in wave aberrations

Pablo Artal, University of Murcia, Spain

By using adaptive optics we showed that the neural visual system is adapted to the eye's particular aberrations (Artal, P., Chen, L., Fernández, E. J., Singer, B., Manzanera, S., & Williams, D. R. (2004). Neural compensation for the eye's optical aberrations. Journal of Vision, 4(4), 281-287). In this initial experiment, subjects were asked to view a stimulus through the adaptive optics system that recreated either their own aberrations or a rotated version of them. The stimulus seen with the subject's own aberrations was always sharper than when seen through the rotated versions. More recent experiments involved measuring visual acuity through the normal and rotated aberrations after periods of adaptation to the modified aberrations (Artal, P., Chen, L., Manzanera, S., & Williams, D. R., ARVO, 2004). These experiments suggested that the visual system can partially readapt to unfamiliar aberrations after a relatively short period of adaptation (around 15-20 minutes). In this presentation, I will review these previous results from experiments using the adaptive optics system. In addition, I will discuss the results we are now obtaining in experiments using contact lenses that modify the aberrations, permitting us to follow the adaptation effect over longer periods of time (around 10 hours). The impact of this adaptation mechanism on spatial visual performance will also be discussed.


Adaptation and subjective image quality

Michael Webster, University of Nevada, Reno

The retinal image is inherently blurred, yet the world typically appears in focus. I will review the role of neural adaptation in adjusting to image blur. Brief exposures to blurred or sharpened images can strongly bias the stimulus that appears best focused. These adjustments have a large influence on subjective image quality but comparatively weaker effects on acuity, suggesting that they function primarily to maintain perceptual constancy despite changes in the environment or the observer. The adaptation may be important for compensating for optical errors, but probably reflects more general processes that match visual sensitivity to the visual environment. For example, similar adjustments occur for temporal blur, suggesting that the adaptation calibrates perceived focus of both space and time.


New video-based assistive technologies for low vision

Graham Strong, University of Waterloo

Over the past decade, electro-optical technology has evolved as an important platform for the development of contemporary assistive devices for people with low vision. Video-based devices are now commercially available in iterations corresponding to all commonly differentiated categories of optical low vision aids --- hand-held, stand-mounted, face-worn, and head-borne magnifiers and telemicroscopes. With several notable exceptions, these developments are manifestations of "technology push" initiatives aimed at replicating the established functionality of conventional CCTV systems within smaller, more portable embodiments by utilizing recent innovations in sensor, display, and lighting technologies. Most of these compact video magnifiers incorporate a variable, but modest, range of magnifications combined with several contrast enhancement selections. Their major weaknesses include over-magnification, mediocre colour rendering, reduced display contrast, narrow viewing angles, aspect ratio distortions, non-intuitive camera/display juxtapositions, and modestly limited fields of view. Nonetheless, these devices have been well-received by many low-vision users because they offer several unique advantages over optical magnifiers for some applications. Video devices with distance-viewing capabilities are somewhat more ubiquitous. Clinical experience with these multifaceted systems provides greater insight into the full potential of video technology for low vision applications. Their theoretical potential derives from the fundamental posits of contemporary clinical analysis. "Low vision" commonly describes any untreatable condition of diminished vision that interferes with an individual's ability to perform common visual tasks such as reading. Underpinning these functional seeing disabilities is a unique assortment of measurable deficits in tested visual function.
These elemental impairments often include visual acuity reductions, contrast sensitivity deficits, glare sensitivity, visual field defects, diminished scotopic or photopic function, and photo-recovery deficiencies. Optical devices invariably produce fixed levels of magnification that provide effective solutions for people with relatively specific impairments performing fairly specific seeing tasks. This approach is somewhat one-dimensional since it invokes a visual acuity solution for all impairment presentations. Video-based technology allows developers to incorporate a much broader range of enhancement options in a single low vision device. These devices produce a video-mediated reality that can be selectively tuned to redress the multifaceted visual deficiencies of most users with respect to most seeing tasks. Our experience with video telescope users reveals the utility of several novel viewing strategies that are exclusive to video device use, including innovative orientation and localization strategies that are potentiated by their zoomable user interface. Another significant performance feature of video devices is their intrinsic capacity to create readily discernible images within a wide range of hostile viewing environments. The increased availability of new video-based low vision devices poses a significant challenge for contemporary vision rehabilitation service providers, who must now integrate these devices into well-established low vision assessment protocols. The growing range and versatility of video devices establishes a viable alternative to most optical sight enhancement systems that is best evaluated using organized assessment protocols that allow assessors and users to directly compare the two intervention options.


Vision multiplexing in the design of low vision devices

Eli Peli, Harvard Medical School, Schepens Eye Research Institute

The normal visual system provides us with a very wide field of view at apparently high resolution. The wide field of view is continuously monitored at low resolution, providing information for navigation and detection of objects of interest. These objects of interest are sampled over time using the high-resolution fovea, applying a temporal multiplexing scheme. Most disabling low-vision conditions impact only one of the components: the peripheral low-resolution wide field or the central high-resolution fovea. The loss of one of these components prevents the interplay of central and peripheral vision needed for normal function and causes disability. Traditional low vision aids improve the impacted function, but usually at the cost of a significant loss in the surviving function. For example, magnifying devices increase resolution but reduce the field of view, while minifying devices increase the field of view but reduce resolution. A general engineering approach - vision multiplexing - that takes advantage of the dynamic nature of human vision is presented. Vision multiplexing seeks to provide both the wide field of view and the high-resolution information in ways that can be accessed and interpreted by the visual system. I will describe the use of optical methods and computer technologies in the development of a number of new visual aids, all of which apply vision multiplexing to restore the interplay of central and peripheral vision using eye movements in a natural way. Recent developments in a number of multiplexing devices will be described.


Night vision devices for the visually impaired

Noa Rensing, MicroOptical Engineering Corp.
Co-authors: Evan Westrate, Eugene Giller, & Paul Zavracky, MicroOptical Corporation; Eli Peli, Alex Bowers, & Gang Luo, Schepens Eye Research Institute

MicroOptical's low vision program builds on our commercial and industrial eyeglass-based displays to address the needs of individuals with low vision. In several of these programs the goal is to integrate cameras into display configurations appropriate to the information being displayed. Patient trials are used to refine the design approach, set specifications, and guide the iteration of the product design in order to best meet the needs of the user community.

In this paper we describe the development of an integrated headset targeting the needs of patients with tunnel vision – caused, for example, by retinitis pigmentosa and glaucoma – and of people with night blindness. The goal of the program was to integrate a camera and see-through display in a comfortable headset similar in appearance to ordinary eyeglasses. In this case, the see-through nature of MicroOptical's display is a key feature, as it allows users to utilize their entire field of vision; see-around is not an option due to the users' own restricted field. The camera design is based on a CCD sensor to maximize low-light sensitivity and includes infrared LEDs for auxiliary illumination. This extends the useful range of camera operation to below 0.2 ft-cd. Image processing algorithms are integrated into the drive electronics to further enhance the utility of the device. We will discuss the engineering tradeoffs, design considerations, and preliminary results from the user trials.


Prosthetic vision for the blind: the future in sight?

Gislin Dagnelie, Johns Hopkins University

As the ultimate tool in low vision rehabilitation, visual prostheses aim at restoring vision to individuals who have lost sight due to a blinding disease or trauma (or who may never have had functional vision) by stimulating intact proximal stages of the visual pathways (inner retina, optic nerve, visual cortex). Even rudimentary functional visual prostheses are still a few years away (at least!), but from preliminary tests in blind volunteers with temporary as well as chronic implants we know the characteristics of phosphenes elicited by electrical stimulation of the retina and higher visual pathways, and we can make educated guesses about the time line and degree of success awaiting this area of research. To get from single phosphenes to spatiotemporal vision will require both technological and conceptual advances, and we may not even know some of the hurdles that will have to be cleared. This talk will provide a brief overview of current efforts and will build on results obtained thus far, known anatomical, physiological, and engineering aspects and challenges, and results obtained in prosthetic vision simulations, to anticipate developments over the next 5-10 years.


Color discrimination in carriers of color deficiency

John Mollon, Cambridge University, UK
Co-authors: S. Mary Hood, Cambridge University; Lisa Purves & Gabriele Jordan, University of Newcastle

Some 15% of women are carriers of X-linked deficiencies of color vision. Do they share a little in the disability of their sons, in the way that mothers of hemophiliacs bruise more readily than other women? We argue that carriers of protan deficiencies must be considered separately from deutan carriers. If the long- and middle-wave cones are normally present in a ratio of 2:1 and if X-chromosome inactivation occurs randomly within the retina, then protan heterozygotes are expected to exhibit an average ratio (1:2) that is no further from unity than is the normal ratio, whereas the average ratio for deutan heterozygotes should be more extreme (5:1).
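The expected ratios follow from simple averaging. A minimal sketch (the function name and the simplifying assumptions - an exact 2:1 normal ratio, and exactly half of cones expressing each X chromosome - are mine):

```python
def carrier_mean_ratio(defect: str, normal_L: float = 2.0,
                       normal_M: float = 1.0) -> float:
    """Average L:M cone ratio expected in a heterozygous carrier.

    Assumes random X-inactivation silences the mutant X in half the
    cones, and that a cone whose expressed X carries the defective gene
    produces the other pigment class instead. Illustrative arithmetic.
    """
    total = normal_L + normal_M
    fL, fM = normal_L / total, normal_M / total
    if defect == "protan":    # would-be L cones on the mutant X become M
        L, M = fL * 0.5, fM + fL * 0.5
    elif defect == "deutan":  # would-be M cones on the mutant X become L
        L, M = fL + fM * 0.5, fM * 0.5
    else:
        raise ValueError(defect)
    return L / M

# protan carrier: L:M = 1:2 (no further from unity than the normal 2:1);
# deutan carrier: L:M = 5:1 (more extreme than normal).
```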

We tested 55 obligatory carriers of color deficiency and 55 controls. Square-wave, red-green gratings of 2 cycles per degree were displayed on a calibrated monitor using a graphics card with a resolution of 15 bits per gun. Discrimination thresholds were measured by a two-alternative temporal forced-choice task: The subject was required to report in which interval a minimal red-green modulation was present. To prevent the use of luminance cues, sensation luminance for the red and green guns was equated for each subject in a preliminary test and in addition, a random luminance variation was superimposed on the stimulus.

Deutan, but not protan, carriers had significantly poorer discrimination than did male or female controls. Note that this finding is independent of the question of whether carriers of anomaly are tetrachromatic: The number of dimensions of color vision is distinct from the fineness of discrimination along any given dimension.


Perceived chromaticity shifts with retinal eccentricity

Neil Parry, Vision Science Centre, Manchester Royal Eye Hospital, UK
Co-authors: Declan McKeefry, University of Bradford; Ian Murray, UMIST

It has been shown that human colour perception changes as a function of retinal eccentricity. Some argue that, if factors such as stimulus size, macular pigment and the contribution of rods are taken into consideration, then peripheral colour vision is found to be qualitatively similar to that experienced in the fovea.

We examined this issue using an asymmetric colour matching technique. Perceived shifts in chromaticity of test stimuli presented on a CRT at different nasal eccentricities up to 24 deg were measured by comparing them with a probe at 1 deg eccentricity. Photometrically isoluminant probe and test pairs were flashed for 380 ms on a 12.5 cd/m2 white background. Twelve to 24 probe chromaticities were equally spaced around a hue circle in MBDKL colour space. They included the "red-green" and "blue-yellow" cardinal axes. Perceived hue shift was measured as an orientation change in this colour space using a same-different paradigm to generate psychometric functions. In a second experiment, hue, saturation and brightness matches were made by the method of adjustment.

Consistent with previous studies, perceived hue changed with increasing retinal eccentricity. The magnitude of these shifts was not uniform as a function of chromatic axis. Whilst some hues showed large perceptual shifts with increasing eccentricity, others exhibited negligible shifts. These invariant hues were found to be located close to previously identified unique hue loci. Perceived saturation changed with chromatic axis, but this did not correspond to the hue shifts described above. This lack of correspondence might be attributable to the differential contribution of receptoral and post-receptoral mechanisms.


A study on chromatic contrast regulation

Margalith Harrar, Museum National d'Histoire Naturelle, FR
Co-author: Françoise Viénot, Museum National d'Histoire Naturelle

Chromatic contrast occurs when a colour patch is surrounded by a chromatic field. Several studies demonstrate the importance of variation within the surrounding field, in addition to the effect of chromaticity of the field on colour appearance.

The chromatic appearance of a target is tested under an equilibrium condition resulting from the effect of two inductive stimuli: a large peripheral colour-inducing field, which produces strong colour induction, and an embedded neighbouring surround, which counterbalances the strong induction generated by the periphery. Chromatic appearance of the target (specified in terms of L-, M-, and S-cone excitation signals) is assessed with a hue cancellation judgement. Various surrounds, similar in average chromaticity but different in spatial contrast, were derived from a natural image by pixel randomising, by pixel ordering, and by averaging the chromaticity.

Results show that the surrounds vary in their efficiency at counterbalancing the strong induction generated by the peripheral colour field. The pixel-randomised image seems to be the most powerful at counterbalancing peripheral induction. Also, there is no obvious difference between pixel-reorganised scenes and pixel-averaged scenes in their ability to regulate contrast. This means that, in our experiment, high chromatic contrast amplitude is not clearly more efficient than uniform patches at controlling chromatic induction.

In conclusion, although every neighbouring scene has the same average chromatic content, the resulting colour appearance of the target seems to differ between scenes, and this may be ascribed to the spatio-chromatic organisation of the scene.


Color constancy: stimulus or task

Adam Reeves, Northeastern University, US
Co-authors: Kinjiro Amano & David Foster, University of Manchester Institute of Science & Technology

Assessed with briefly exposed Mondrians, color constancy has been reported to vary across a vast range, from almost none, using direct matching of hue and saturation (Arend & Reeves, 1986), to almost complete, when discriminating illuminant from material changes (Craven & Foster, 1992). Not just the task, but the stimuli, labs, conditions, and observers differed between these studies. To isolate any task effect, our naïve observers rated how well the central squares in pairs of successive 1 sec Mondrians matched in material or in hue and saturation. The simulated Mondrians comprised 49 (7x7) abutting surfaces, whose global illuminants were 4000K or 16000K (first Mondrian) and 6700K (second). For observers who clearly distinguished between the two rating tasks, our Brunswickian color constancy index averaged 82% for material and 29% for hue/saturation. For those who did not, the index was ~52%. Inserting fine spatial gaps (5' arc between squares in a pattern) and temporal gaps (0.2 sec between the first and second pattern) drove the index to ~38% in all cases. If color constancy requires averaging of signals over the display, such averaging is sensitive both to bottom-up signals (physical contiguity) and to top-down ones (task and ability).
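A Brunswickian constancy index is typically of the form "one minus the ratio of the observed deviation from perfect constancy to the deviation expected with no constancy at all". A rough sketch (the abstract does not specify the chromaticity space or distance measure; the function and argument names are mine):

```python
import math

def constancy_index(match, perfect_match, no_constancy_match):
    """index = 1 - b/a: b is the distance of the observer's match from
    the perfect-constancy point, a the distance of the no-constancy
    (physical) match from that point. 1 = perfect constancy, 0 = none.
    Points are coordinate tuples in some chromaticity space.
    """
    b = math.dist(match, perfect_match)
    a = math.dist(no_constancy_match, perfect_match)
    return 1.0 - b / a

# An observer whose match sits at the perfect-constancy point scores 1;
# one whose match simply tracks the raw stimulus change scores 0.
```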


Taking gaze tracking from the lab to the field (and sky)

Jeffrey Mulligan, NASA Ames Research Center

Gaze tracking measures can provide information about the spatial locus of attention of a behaving agent. Here we examine the looking behavior of helicopter pilots flying under visual meteorological conditions. The goal of the study is to correlate various types of looking behavior with pilots' accuracy in maintaining a precisely specified route, to support the formulation of new regulations and procedures. Eight pilots were instructed to fly a precision route specified by a series of waypoints. The geographic coordinates of the waypoints were entered into an onboard receiver of global positioning system (GPS) signals. Using a single 8 mm videocassette, we recorded four video streams (30 frames per second), one audio stream, and GPS data sampled at 1 Hz. The four video streams came from two cameras attached to the pilot's head and two stationary cameras mounted on the aircraft. The head-mounted cameras consisted of a camera viewing the pilot's right eye through an infrared "hot" mirror and a forward-looking scene camera located in front of the subject's forehead. From the eye images we compute estimates of head-relative gaze, while we obtain independent estimates of head pose from the head-mounted scene camera and a stationary "face" camera. This talk will present an overview of the technical challenges encountered in the processing of the images, as well as preliminary results of the study.


Portable eyetracking in natural behavior

Jeff Pelz, Rochester Institute of Technology

Instrumentation to track observers' gaze has been in use for over a century [e.g., Delabarre, 1898], resulting in a rich literature on the descriptive statistics of the dynamics and behaviors of the oculomotor system. Early methods relied on mechanical connections to the eye and/or sometimes painful methods of restraining natural movements of the head. Later systems incorporated still or motion film and analog electronics. The experiments were necessarily limited to reduced laboratory environments, with stationary observers' eye movements monitored over trial durations typically ranging from milliseconds to a few seconds. Low-level metrics such as the distributions of fixation duration, saccade size and direction, etc. were likely influenced by the rigid constraints imposed by early (and some current) instrumentation necessary to track gaze.

Investigators have developed custom instrumentation to extend oculomotor experiments into the real world, pioneered by Land and colleagues [1994, 1999, 2000]. Video cameras and electronics have been reduced in bulk and power consumption to the extent that real-time, wearable head and eyetrackers are practical, so it is now possible to monitor the eye and head movement behavior of observers in a wide range of tasks [Land & Hayhoe, 2001, Pelz & colleagues, 2000, 2001a,b, 2004a,b].

Some of the basic findings from laboratory-based experiments extend into natural behavior, but experiments have also revealed sophisticated strategies that were not seen in simplified tasks under reduced laboratory conditions.


Simulated visual field loss in mobile observers

Kathleen Turano, The Wilmer Eye Institute, Johns Hopkins

The architecture of the retina imposes certain constraints on visual function, e.g., poor acuity in peripheral retina and poor discrimination of very fast speeds in central retina. From an adaptive perspective, it is likely that the distinct retinal regions serve unique roles in action tasks. Until recently, technology has limited the ability to explore the roles of the various retinal regions on tasks performed by mobile observers. With faster computers and graphics boards, and more sensitive head and eye tracking systems, we can now isolate and visually stimulate specific retinal regions even in the presence of observer eye, head, and body movements. We have been exploring the roles of the retinal periphery and central retina in navigation. To do so, we use a wide-field head tracking system to define the observer's point of view within some 3D world, transform the view into a 2D perspective image, mask it, and output the image to a head-mounted display. To ensure that the masked area remains fixed in retinal coordinates, the position of the mask is controlled by the observer's eye position, which is determined from online analysis of eye images. One of the challenges in developing such a system for vision-action research is the necessity to achieve fast throughput (minimum processing time). I will discuss the effects of system delays on human response measures in navigation as well as present some of our recent findings on the role of the retinal periphery for navigation.
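The retina-fixed masking step can be sketched as a per-frame coordinate update. This is an illustrative fragment (a simple linear degrees-to-pixels display mapping and invented names), not the actual system code:

```python
def mask_center_px(gaze_deg, retinal_offset_deg, px_per_deg):
    """Keep a mask fixed in retinal coordinates.

    Each video frame, re-centre the mask at the current gaze position
    plus the desired offset from the fovea, so that as the eye moves
    the mask moves with it and always covers the same retinal region.
    Assumes a linear degrees-to-pixels mapping on the display.
    """
    gx, gy = gaze_deg
    ox, oy = retinal_offset_deg
    return ((gx + ox) * px_per_deg, (gy + oy) * px_per_deg)

# e.g. simulating a central field loss uses retinal_offset_deg = (0, 0):
# the mask tracks the fovea wherever the observer looks.
```

Any delay between the eye-position sample and the display update shifts the mask off its intended retinal location during saccades, which is why system throughput matters so much here.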


Analysis of visual search using gaze contingent displays and ideal observers

Wilson Geisler, University of Texas at Austin
Co-authors: Jeffrey Perry & Jiri Najemnik, University of Texas at Austin

We have developed an algorithm and software for creating displays where either spatial resolution or chromatic content can be varied arbitrarily across the visual field, contingent on the current gaze direction (see http://svi.cps.utexas.edu/ ). We have been using this software to analyze the mechanisms of visual search. Search time and eye movements were measured while subjects searched for Gabor targets in 1/f noise. We varied, parametrically, target spatial frequency, noise contrast, and rate of fall-off in display resolution from the point of fixation. This experiment provides quantitative data on how information in the periphery contributes to search performance and to the pattern of eye movements. We find that the shape of the function describing search time (and number of fixations) versus degree of foveation is dependent upon target spatial frequency, but is independent of noise contrast. Interestingly, we find that human search performance in this complex task approaches that of an ideal observer (with the same spatial resolution as the human visual system), even though humans have very limited visual memory, and presumably cannot integrate information optimally across many fixations. Analysis of the ideal observer shows that, in fact, there is little benefit either from having detailed visual memories or from integrating information perfectly across fixations. Much more important is efficient parallel processing of the information on each fixation.


Retinal image stabilization strategies for adaptive optics

Dan Ferguson, Physical Sciences, Inc.

Advances in retinal imaging with adaptive optics have yielded unprecedented image quality, and a new window to the structure and function of tissues in the living eye. When combined with low-coherence imaging techniques, the potential exists for layer-by-layer 3D mapping and functional imaging of the retina with cellular resolution. With smaller and smaller voxels, reduction of image noise and enhancement of specific spatial frequencies to boost contrast of various structures requires image averaging capability. But with higher and higher magnification, the image perturbations associated with eye motion are magnified as well. Post-processing compensation of image displacements and distortions becomes quite problematic in noisy image fields with relatively few landmarks and low contrast features. By adding high-speed, real-time retinal image stabilization to adaptive optics imaging techniques, most motion-induced problems can be addressed, and the performance of image processing algorithms can be significantly enhanced. Closed-loop precision optical tracking has been demonstrated in the eye, and new approaches to integration and control in the Tracking Adaptive Optics Scanning Laser Ophthalmoscope (TAOSLO) are described. Both high closed-loop tracking bandwidth (~1 kHz) and adaptive control of AO imaging systems are essential to achieve the target accuracy of sub-pixel motional broadening.


The origin of the chromatic response of magnocellular ganglion cells

Hao Sun, SUNY College of Optometry
Co-author: Barry B. Lee, SUNY College of Optometry

The ganglion cells of the primate magnocellular (MC) pathway form the physiological substrate for photometric tasks and show a response minimum to equiluminant stimuli. However, there are residual second-harmonic (2F) responses to such stimuli at medium and high temporal frequencies (Lee et al., 1989) and first-harmonic (1F) chromatic responses at low temporal frequencies (Smith et al., 1992). The responses of MC cells to modulation in different directions of an L, M-cone space show that the higher harmonic distortions appear as soon as the stimulus vector moves away from the luminance direction, and are marked for cone-isolating stimuli. This rules out a mechanism based on M, L-cone summation non-linearities and suggests the presence of rectified chromatic signals, which might arise from the parvocellular (PC) ganglion cells. Recent studies show that both the 1F and the 2F responses become smaller if small spots are used, suggesting involvement of the surround. However, when annulus stimuli activating the surround alone are used, both effects disappear. Area-summation experiments show that the 2F response has a diameter of ~1.5 times the center diameter, which is comparable to PC cells' receptive fields (center+surround). In other experiments, the MC cell residual response to equiluminant borders also shows a similar width to the 2F field. Lastly, it is possible to null out the 2F response on a contrast reversal test with red-green chromatic gratings. This suggests a mechanism in which linear spatial summation occurs before a rectifying non-linearity, rather than a subunit structure.


Melanopsin-expressing ganglion cells in primate retina project to the LGN and signal both color and irradiance

Dennis Dacey, University of Washington, Seattle
Co-authors: Hsi-Wen Liao & King-Wai Yau, Johns Hopkins School of Medicine; Beth B. Peterson & Farrel R. Robinson, University of Washington; Paul D. Gamlin, University of Alabama Birmingham; Vivianne C. Smith & Joel Pokorny, University of Chicago

Human vision starts with the activation of rod photoreceptors in dim light and of short (S)-, medium (M)-, and long (L)- wavelength-sensitive cone photoreceptors in daylight. Recently, unique photoreceptive ganglion cells were discovered in the retina of nocturnal rodents. These cells express the putative photopigment melanopsin and serve subconscious, 'non-image forming' functions such as circadian photoentrainment and pupil constriction. A comparable photodetection pathway in the diurnal human has been hypothesized. We show here a population of 'giant', melanopsin-expressing retinal ganglion cells in primates with unexpected anatomical and functional properties. The giant cells attain a peak density in the parafovea and are retrogradely labelled from tracer injections into the lateral geniculate nucleus (LGN). Physiologically, these cells are strongly activated by rods and all three cone types, and display a rare, S-OFF type of color-opponent receptive field. Together, the intrinsic photoresponse and the rod-cone inputs provide an irradiance signal that spans the full dynamic range of vision and that could reach neocortex via the geniculocortical pathway and contribute to conscious visual perception. Moreover, cone-opponency implies that, in the trichromatic primate, wavelength information is available to the pupillomotor and circadian systems.

Primate pupillary responses mediated by a novel photopigment

Paul Gamlin, University of Alabama at Birmingham

The human pupillary light reflex is one of the most familiar and well-studied circuits linking human vision directly to motor behavior, yet its neural control remains mysterious. The pupil reflex is driven by pathways that originate in rod and cone photoreceptors. However, given the rapid desensitization of cone pathways, the sustained constriction of the pupil under photopic conditions is paradoxical. Furthermore, at light OFF, the pupil response often exhibits a brief dilation followed by a paradoxical, sustained pupilloconstriction. Recently, a population of intrinsically photoreceptive retinal ganglion cells that can drive the pupillary light reflex was discovered in rodents. Since a comparable group of distinctive giant retinal ganglion cells is present in the macaque monkey, we sought to determine their influence on primate pupillary responses.

In two rhesus monkeys, we recorded pupillary responses evoked by 10 sec pulses of light at 10 wavelengths between 430 nm and 610 nm over a 6 log unit range. Such responses were recorded under normal conditions and during pharmacological blockade of ON and OFF retinal channels by intravitreal injections of L-AP4 and CNQX. To confirm injection efficacy, the flash electroretinogram was assessed using an Espion system (Diagnosys) equipped with a ColorBurst hand-held mini-Ganzfeld stimulator. Following intravitreal injection of L-AP4/CNQX, the b-wave of the ERG was effectively eliminated. Nevertheless, monochromatic illumination between 430 nm and 530 nm elicited both light-evoked pupillary responses and sustained pupilloconstriction following light extinction. The spectral responsivity data for both of these responses were well fit by a vitamin A1 pigment nomogram with a λmax of 482 nm. Also, under normal conditions, the spectral responsivity data for the paradoxical pupilloconstriction were well fit by the same nomogram. Overall, the characteristics of these pupillary responses match the reported spectral sensitivity and kinetics of the intrinsic light response of giant retinal ganglion cells in vitro (Dacey et al., ARVO 2003; Smith et al., ARVO 2003).
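The nomogram fit described above can be reproduced in outline with the widely used vitamin-A1 pigment template of Govardovskii et al. (2000). The sketch below evaluates only the template's alpha band (the beta band is omitted for brevity), so it is an approximation rather than the exact curve fitted to the data.

```python
import numpy as np

def a1_template(wl, lmax):
    """Alpha-band of the Govardovskii et al. (2000) vitamin-A1 pigment
    template: relative sensitivity (peak ~1) at wavelength wl (nm) for a
    pigment with peak at lmax (nm). Beta-band omitted for brevity."""
    x = lmax / np.asarray(wl, dtype=float)
    a = 0.8795 + 0.0459 * np.exp(-(lmax - 300.0) ** 2 / 11940.0)
    return 1.0 / (np.exp(69.7 * (a - x)) + np.exp(28.0 * (0.922 - x))
                  + np.exp(-14.9 * (1.104 - x)) + 0.674)

wl = np.arange(430, 611)     # wavelength range used in the experiment
s = a1_template(wl, 482.0)   # nomogram with lambda_max = 482 nm
print(wl[np.argmax(s)])      # peaks near 482 nm
```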

In summary, under most photopic conditions, light-evoked pupillary responses reflect a significant contribution from both cones and intrinsic photoreceptive mechanisms. Paradoxical, sustained pupilloconstriction following light extinction reflects a dominant contribution from the intrinsic photoreceptive system. We conclude that all aspects of paradoxical pupil behavior result from the unique response characteristics of the novel photopigment expressed by intrinsically-photoreceptive retinal ganglion cells.

Photoreceptor coupling in the primate retina

Steve Massey, University of Texas-Houston Medical School
Co-authors: J.J. O'Brien, University of Texas-Houston Medical School; X. Chen & P.R. MacLeish, Morehouse School of Medicine

Purpose: There are many examples of neuronal coupling via gap junctions in the primate retina. In the outer retina, cone to cone coupling may provide a mechanism to reduce uncorrelated noise (DeVries et al., 2002). Here we report the distribution of connexin36 (Cx36) in the outer plexiform layer of the primate retina.

Methods: Macaque retina was immunostained with antibodies against Cx36, cone arrestin (7G6) and blue cone opsin and labeling was viewed by confocal microscopy.

Results: The monoclonal antibody to cone arrestin (7G6) labels cones from their outer segments to their pedicles (Zhang et al., 2003). In addition, high-resolution confocal images in the plane of the cone pedicles show a network of fine telodendria connecting adjacent cones. These contact points between adjacent cones have previously been identified as gap junction sites by electron microscopy (Tsukamoto et al., 1992). Double labeling of this material with an antibody to connexin36 showed a light distribution of fine Cx36 plaques at the level of the telodendria. Individual Cx36 plaques occurred at contacts between neighboring cone telodendria. Connexin36 plaques also occurred at presumed postsynaptic processes beneath the cone pedicle. Blue cones were visualized within the photoreceptor matrix with an antibody against blue cone opsin. Animating through a confocal stack enabled us to identify blue cone pedicles, which are smaller and bear fewer telodendria. We are currently evaluating the Cx36 contacts of blue cone pedicles.

Conclusion: By visualizing the matrix of cone telodendria in the outer plexiform layer of the primate retina, together with the Cx36 plaques at their contact points, we conclude that Cx36 mediates cone-to-cone coupling in the primate retina.

Perceptual learning without perception is not passive and results in robust perception

Takeo Watanabe, Boston University

The brain demonstrates an amazing ability to become increasingly sensitive to important stimuli. It is often claimed that we become more sensitive only to the critical signals in the tasks we attend to. However, our recent series of experiments has shown that perceptual learning occurs with little attention. First, mere exposure to sub-threshold and task-irrelevant motion coherence signals led to enhanced sensitivity to the motion direction. This finding indicates that attention is not necessary for perceptual learning (Watanabe, Nanez & Sasaki, 2001). Second, exposure to two types of task-irrelevant motion that are processed at different levels of visual processing improved sensitivity only at the lower level. These results suggest that task-irrelevant perceptual learning occurs at a very low level (Watanabe et al., 2002). Third, we addressed the question of whether such task-irrelevant learning occurs purely passively (caused by stimulus exposure). During exposure, we presented four different directions of motion an equal number of times, but only the direction of interest (DOI) was paired with the task targets. If learning is purely passive, thresholds should improve equally for all the presented directions. Surprisingly, the threshold improved only for the DOI. These results show that learning of a task-irrelevant and sub-threshold feature is not purely passive, but occurs only when the feature is correlated with a task target (Seitz & Watanabe, 2003). Finally, we have recently found that such learning is so robust that it sometimes results in perception of the exposed direction even when nothing is presented (Seitz et al., in preparation). Based on these findings, we propose a model in which diffuse reinforcement-learning signals play an important role, complementary to focused attention, in perceptual learning.

Effects of video game playing on visual functions

Daphne Bavelier, University of Rochester
Co-author: C. Shawn Green, University of Rochester

Video game playing puts extraordinary demands on our visual and motor skills, offering a unique opportunity to investigate learning and its limits in the visuo-motor domain. In a series of experiments, significant improvements in video game players were observed on four different measures of visual attention, indicating an increase in the capacity of the visual attentional system as well as more efficient processing of the visual scene over both space and time. Non-players trained on a first-person action video game showed significant improvement from their pre-training scores, indicating that as little as ten hours of video game playing can alter these fundamental aspects of visual attention. Lately we have asked whether video game training could also alter earlier stages of visual processing. Visual resolution with and without spatial interference was measured at central and peripheral eccentricities. Video game players displayed enhanced visual resolution in all conditions, including in central vision, where acuity limits are thought to be controlled predominantly by the optics of the eye. Non-players trained on a first-person action video game showed greater improvements than the control group on this task in the context of spatial interference, also termed crowding, but not without spatial interference. This result indicates greater plasticity in the factors that limit crowding than in those that limit acuity proper, consistent with the view that these two types of visual resolution measure the limits of different structures in the visual system.

Learning and cortical hierarchies

Geoffrey Ghose, University of Minnesota

A critical issue for understanding the physiological bases of cortical learning is identifying which particular representations can be altered by training. One hypothesis is that all cortical circuits are equally mutable with appropriate training. In this scheme, if specific neurons in primary visual areas provide the most appropriate signals for a task, then these neurons are modified by training to produce perceptual improvements. Two lines of evidence have supported this reasoning. First, the mutability of receptive field properties in primary visual cortex has been demonstrated by retinal lesioning experiments. Second, psychophysical experiments have shown perceptual improvements that fail to transfer across dimensions that are represented in primary visual cortex. For example, several studies of perceptual learning have reported perceptual improvements that are confined to the retinotopic location of training. Recent evidence suggests, however, that not all representations are equally mutable. In particular, single-unit recording in monkeys trained on an orientation discrimination task suggests that the representations in areas V1 and V2 are not altered by extensive training, while representations in an intermediate cortical area, V4, are. Imaging studies of the effects of orientation adaptation in humans reveal a similar pattern: little effect in primary visual areas and a robust effect in intermediate areas. These studies suggest that the hierarchy of visual areas is defined both by an elaboration of receptive field properties and by an increase in the mutability of those properties.

The dual functions of color vision: connecting thresholds and appearance

David Brainard, University of Pennsylvania

Color vision serves two distinct functions. The first is to segment objects from each other and the background: color differences across an image provide an important cue in this regard. The second is to provide information about object identity: color is generally considered to be a perceptual correlate of an object's intrinsic surface reflectance. In parallel with these two functions, two distinct experimental paradigms have been used to probe the adaptive mechanisms of color vision. The first employs measurements of detection and discrimination thresholds. This paradigm is attractive because threshold measurements are objective and precise. Threshold measurements, however, are silent about the way things look. Thus studies concerned with color appearance generally employ subjective scaling and/or asymmetric matching methods, which directly assess appearance. Although both threshold and appearance methods may be used to build models of color vision, there has been relatively little consideration of whether a single model and set of parameters can simultaneously account for results from both paradigms. Understanding the answer to this question is critical if we are to leverage threshold measurements to build models of color appearance. In this talk I will review recent work from my lab, with attention to at least two of the following three issues: i) how compatible are the computational demands placed on the visual system for optimizing discrimination and identification across changes of viewing context? ii) what experimental and analytic logic can be used to link models of color discrimination and color appearance? and iii) can a common model account for the effect of context on both thresholds and appearance?

Color appearance models: splicing color science and practical applications

Mark Fairchild, Rochester Institute of Technology

Often in vision science, researchers strive to collect data and create models for specific physiological mechanisms or psychophysical properties of color vision. Computational approaches often explore fundamental limits to visual (human or machine) performance on certain tasks, such as color constancy. From another perspective, color scientists often have very pragmatic problems to solve, such as producing or reproducing color images or materials. Color appearance models have been developed to predict overall human perception of color under widely varying viewing conditions and are applied to practical problems such as digital image printing. Such models generally bear little resemblance to known physiology, but rather aim to predict overall psychophysical performance. They also aim to predict what is seen, not what is not seen. This presentation will briefly review perspectives on color appearance from vision science, colorimetry, and color science. Some historical context will be presented, leading up to the recent development of a new CIE color appearance model, CIECAM02. The basic structure of CIECAM02 will be discussed. Future directions in color and image appearance modeling will be described, and examples of practical applications of these models to problems in the imaging industry will be illustrated.

Non-monotonic relations between spectral coding and color perception

Kenneth Knoblauch, Inserm U371 Cerveau et Vision, FR

A central question in color vision is how neural activity due to spectral differences in the stimulus is integrated to produce the perceived color of an object. The problem is complex because, in a sufficiently complex environment, the color attributed to a surface covaries more closely with its reflectance than with the spectral characteristics of the light reaching our eye from the surface itself. However, in the case of isolated fields in a dark surround, it might be thought that a simple relation between neural coding and color could be defined. While Young's notion of fundamental hue sensations associated with the excitation of individual classes of receptors has long since been discarded, linear and some nonlinear models relating photoreceptor excitation and color predict just such an association in this simple situation. For example, in the Hurvich & Jameson (1968) opponent-colors model, L-cone signals contribute redness and yellowness, M-cone signals greenness and yellowness, and S-cone signals blueness and redness. Nevertheless, this assignment is not uncontroversial, as several studies have suggested that M-cones could signal blueness under some conditions. Knoblauch & Shevell (2001), using a hue-cancellation task in which the modulation of lights was constrained along cone-isolating directions, found that yellow-blue cancellation sometimes yielded non-monotonic relations between cone excitation and opponent-color interaction. These results are inconsistent with fundamental hue sensations associated with cone signals and suggest a resolution to some conflicting claims about which colors the cones signal. Additionally, the non-monotonic relations are consistent with the rectifying nonlinearities observed in sub-cortical chromatic coding.
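The conventional assignment in the Hurvich & Jameson model amounts to a linear valence computation. The sketch below uses arbitrary placeholder coefficients (not fitted values) merely to show the strictly monotonic cone-to-valence relation that the cancellation data contradict.

```python
# Illustrative linear opponent-valence model in the spirit of Hurvich &
# Jameson; the coefficients below are arbitrary placeholders, not fitted
# values. A strictly linear model like this predicts a monotonic relation
# between each cone signal and the opponent responses -- the relation the
# hue-cancellation data contradict.

def opponent(L, M, S, w_rg=(1.0, -2.0, 0.5), w_yb=(1.0, 1.0, -2.0)):
    """Red-green (+ = red) and yellow-blue (+ = yellow) valences."""
    rg = w_rg[0] * L + w_rg[1] * M + w_rg[2] * S
    yb = w_yb[0] * L + w_yb[1] * M + w_yb[2] * S
    return rg, yb

# An M-cone-isolating increment signals greenness and yellowness, as in
# the conventional assignment described above.
rg, yb = opponent(L=0.0, M=1.0, S=0.0)
print(rg, yb)   # rg < 0 (green), yb > 0 (yellow)
```

With these signs, an L-isolating increment yields redness and yellowness, and an S-isolating increment yields blueness and redness, matching the conventional assignment; the non-monotonic cancellation data are what break this scheme.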

Modes of appearance revisited: the non-unitary nature of colour codes

Rainer Mausfeld, University of Kiel, DE

The concept of 'modes of appearance' has an odd status in colour science. As a purely descriptive concept, which has emerged from attempts to classify well-established phenomenological observations of different 'types' of colour appearance, it is itself in need of an explanation in terms of some abstract principles of the internal coding of colour. Still, colour science has, by and large, downplayed the theoretical importance of the corresponding observations by considering them extraneous to colour proper and ascribing them to the effects of 'viewing conditions' or 'context': different 'modes of appearance' are thus regarded not as different types of colours but simply as modifications of the same colour, namely the colour directly tied to the receptor excitations elicited by the local incoming light stimulus (as produced in the 'aperture mode') and transformed and modified in subsequent stages of processing. Such an approach rests on the assumption that 'colour' constitutes a homogeneous and unitary domain with respect to the 'data format', or set of representational primitives, in which 'colour' figures, and, furthermore, that issues of processing can be investigated independently of the structure of representational primitives. However, this assumption, apart from being ill-motivated from a functionalist point of view, conflicts with well-established core observations. These range from the 'modes of appearance' themselves, through phenomena in which two different colour appearances can be simultaneously present at the same 'location' as distinctive aspects of the percept (such that changes in one of these aspects cannot be compensated by changes with respect to the other), and problems encountered with asymmetric colour matches, to appearances of 'material colours' (with which colour science has notorious difficulty dealing in a theoretically satisfactory way).
In contrast to this assumption, various lines of converging experimental evidence suggest that the internal processing of colour is based on (at least) two different types of data structures or representational primitives, which give rise to two proprietary types of colour codes, pertaining coarsely to the 'colour of an object' and the 'colour of the ambient illumination', respectively. Accordingly, the different 'modes of appearance' mirror internal colour processes of an essentially different nature (including mechanisms for dealing with vagueness, ambiguity, and continuous transitions in relevant input characteristics), and, in line with various types of observations and experimental findings, indicate that 'colour' exhibits different coding properties with respect to different types of representational primitives. In particular, experimental evidence suggests that the 'colour' parameters of the representational primitives involved are, in characteristic ways, intrinsically interwoven with other parameters of these primitives, such as 'texture', 'depth', 'motion' etc. Because of this, it is highly unlikely that an assignment of values to the respective 'colour' parameters can be made on the basis of relations on or transformations of retinal colour codes alone (as computational schemes solely based on colour codes presume). After a brief discussion of the above-mentioned issues, I will report experimental results on the segregation of the retinal colour code into a dual colour code and on colour-related input properties (e.g. scene statistics) that modulate such a segregation.

Vision Restoration Therapy (VRT) for the treatment of visual field deficits after brain damage: the role of eye movements and fixation behavior

Bernhard Sabel, Medical School Otto-von-Guericke University of Magdeburg, DE
Co-authors: Sigrid Kenkel, Erich Kasten, & Iris Mueller, Medical School Otto-von-Guericke University of Magdeburg

Adult patients with post-chiasmatic lesions are able to increase their visual field size by an average of about 5° of visual angle after Vision Restoration Therapy (VRT). To investigate visual plasticity in children, we conducted an ongoing prospective study of 18 children and adolescents, aged 6-17, with visual field defects due to stroke, tumor, or head trauma. Lesion age was greater than 1 year. The children trained for 3 months with software presenting stimuli in areas of residual vision of the visual field. Training was performed five times a week for 20-40 minutes a day in their homes. Post-training results revealed increased light detection in automated static high-resolution perimetry: the percentage of detected stimuli increased significantly from 36.44% (19.19) to 41.61% (15.20). No significant changes occurred during a non-treatment period, and results remained stable at a three-month follow-up. Fixation and false-positive responses were experimentally controlled, and changes in these indicators were not significant. In conclusion, children and adolescents with visual cortical impairments are able to enlarge their visual field, as measured in supra-threshold perimetry, after systematic light stimulation.

Lessons about visual rehabilitation from children treated for cataracts

Daphne Maurer, McMaster University, CA
Co-author: Terri L. Lewis, McMaster University

Studies of children treated for cataract indicate that there are different sensitive periods for different aspects of vision. For example, visual deprivation beginning at 6 months of age prevents the development of normal acuity or peripheral light sensitivity, but has no effect on sensitivity to the global direction of motion, which is affected adversely only by visual deprivation beginning near birth. Our results also indicate that the deleterious effects of visual deprivation are sometimes worse if there was not only deprivation, but also uneven competition between the eyes, because the deprivation was monocular and there was little patching of the non-deprived eye. However, the adverse effects of uneven competition are not seen at all points during development or for all aspects of vision. Together, the results are consistent with theories that visual input can affect later development by (a) preventing deterioration of existing neural structures; (b) reserving neural networks (via Hebbian competition) for later refinement; (c) allowing a developmental trajectory to start from an optimal state; (d) allowing recovery from earlier deprivation via the recruitment of alternative pathways; and/or (e) allowing the refinement of previously established structures. Our results indicate that there are timing constraints on each of these mechanisms that are manifested as variations in sensitive periods. The implications for visual rehabilitation will be discussed.

Critical periods in visual development: implications for amblyopia

Lynne Kiorpes, New York University

Vision is immature in newborns of virtually every species studied. Vision develops over some protracted period of time, the duration of which depends on the species. During this developmental period, the visual system is vulnerable to abnormal visual experience. Many studies over the past 25 years have sought to define the nature of this "critical period". However, the critical period is in fact a concept that subsumes many types of vulnerability, for example, susceptibility to insult as well as responsiveness to treatment (Daw, 1998). Different aspects of visual function have different critical periods. For example, spectral sensitivity has a very early, relatively short sensitive period whereas binocular vision has a very long one (Harwerth et al., 1986). Different structures in the visual pathways also have different critical periods. This is evident, for example, from studies showing that many years of monocular deprivation has little functional consequence for neurons in the lateral geniculate nucleus of monkeys (Levitt et al., 2001), yet has devastating consequences for neurons in primary visual cortex.

This talk will review some of what we know about critical periods and the development of amblyopia (reduced acuity resulting from early strabismus, blur, or deprivation) from work with animal models. In particular, I will discuss the evidence that differential vulnerability of visual function arises from the nature of the neural mechanisms underlying those visual functions. The implications for the treatment of amblyopia will be addressed.

Early visual deprivation: strategies to prevent visual deficits and factors that promote recovery when deficits occur

Donald Mitchell, Dalhousie University, CA

The role of experience in the development of the central visual pathways has been a controversial issue, explored in the past through examination of the consequences of imposed periods of continuously abnormal visual input. I will report on a different approach, in which kittens were provided daily with separate periods of normal and abnormal visual exposure. For a period of 4 weeks beginning at 4 weeks of age, the total visual experience of the kittens each day was restricted to 7 hours, split unevenly between periods of monocular (ME) and binocular (BE) exposure. For the monocular exposure the kittens wore an opaque mask covering one eye for 4 to 7 hrs, while for the remaining time the mask was removed to allow binocular exposure. Visual acuity of the non-deprived eye for square-wave gratings was measured with a jumping stand immediately before termination of the period of mixed visual exposure, while the acuity of the deprived eye was assessed the next day immediately after the kittens were taken from the darkroom in which they were housed between their daily 7-hour periods of visual exposure. The results indicated that two hours of normal binocular visual input each day allowed the development of normal visual acuity in both eyes, even when embedded in far longer periods of abnormal visual exposure that alone would lead to blindness in the deprived eye. The beneficial effects of BE were not observed in animals that wore dissociating prisms during the period when both eyes were open, indicating that the visual input to the two eyes must be both simultaneous and concordant for visual acuity in the deprived eye to develop to normal levels. The results are not what would be expected from a passive process in which neural connections are continually shaped by the integrated visual input during the sensitive period; instead they suggest a process that places greater weight on normal, or certain preferred, input.
We propose that the initial state, established by programmed gene expression, may be validated by a limited amount of the preferred input, but that changes made in response to abnormal experience may take longer because of the time necessary to translate additional proteins that mediate a different set of synaptic or heterocellular events.

A special role for binocular visual input was also observed in studies of the recovery from an early period of monocular deprivation. Although the resulting visual deficits in the deprived eye are reduced or eliminated by a period of reverse occlusion (analogous to patching therapy of human amblyopes), the improved vision in the deprived eye may not persist after the period of occlusion of the non-deprived eye is terminated. Moreover, the vision of the non-deprived eye may be permanently compromised as a result of occlusion. Experiments on animals that received periods of part-time occlusion of the non-deprived eye that allow a daily period of binocular visual exposure indicate a route by which deleterious effects of full-time occlusion can be avoided.

A neural mechanism that is plastic in adults and its implications for coding of color

Jay Neitz, Medical College of Wisconsin
Co-author: Maureen Neitz, Medical College of Wisconsin

Neitz, Carroll, Yamauchi, Neitz and Williams (2002) reported that color perception is mediated by a plastic neural mechanism that is adjustable in adults. It was demonstrated that long-term changes in chromatic experience can produce changes in color perception that last for weeks. The long-term perceptual changes resemble traditional color aftereffects; for example, a light that appears unique yellow to the unadapted eye will transiently appear greenish after minutes of adaptation to a red light, and, similarly, daily wearing of red contact lenses will produce a shift in the spectral locus of unique yellow. However, we propose that the long-term changes in color vision are fundamentally different from conventional chromatic adaptation in ways besides having a prolonged time course. Conventional chromatic adaptation represents an adjustment of the visual system that improves its ability to extract information about the spectral reflectance of objects as the chromaticity of the illumination varies. The associated aftereffects represent the consequence of an inadequate temporal response in the adaptation mechanism; the system is slow in returning to neutral after prolonged exposure to intensely colored stimuli. In contrast, we propose that the long-term changes in color vision that can be built up by daily exposure to chromatic alteration are manifestations of a plastic neural mechanism that allows information from the environment to instructively reorganize neural connections throughout life. In this case, the associated aftereffects represent functionally appropriate adjustments in the synaptic weighting of neural connections that will not return to their previous values unless instructed by experience to do so.

The time course of outer retinal adaptation

Barry Lee, SUNY College of Optometry
Co-authors: Dennis M. Dacey, University of Washington; Vivianne C. Smith & Joel Pokorny, University of Chicago

Primate horizontal cells, situated at the first synapse in the retina, offer a suitable locus for the study of outer retinal adaptation. Sensitivity regulation falls short of Weber's law at low- to mid-photopic levels (Smith et al., 2001), and outer retinal adaptation is cone-specific and spatially local (Lee et al., 1999). Here I review recent results on the time course of outer retinal adaptation, which could be instantaneous through some form of response compression, or might display a finite time course due to the operation of feedback or feedforward mechanisms.

We recorded from primate horizontal cells in an in vitro preparation. In one set of experiments, probe stimuli (sinusoids or pulses) were added to slowly modulated adapting backgrounds. Results indicated that adaptation mechanisms have a finite time course, on the order of 10 ms. In addition, characteristic distortions of responses to sinusoids of different frequencies could be accounted for with such a time course, as could distortions of responses to positive- and negative-going pulses. Nevertheless, it proved difficult to provide a comprehensive model of the adaptation process, which may not reflect the operation of a unitary mechanism.
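One simple way to give adaptation a finite time course of the kind described above is divisive feedback gain control driven by a low-pass-filtered copy of the input. The sketch below is my own illustration with an assumed 10 ms time constant, not the authors' model.

```python
import numpy as np

# Minimal sketch (my illustration, not the authors' model): divisive gain
# control whose control signal is a low-pass-filtered copy of the input,
# with a ~10 ms time constant as suggested by the probe data.

def adapt(x, dt=0.001, tau=0.010, k=2.0):
    """Divisive adaptation: y = x / (1 + k*z), with dz/dt = (x - z)/tau."""
    z = 0.0
    y = np.empty_like(x)
    for i, xi in enumerate(x):
        y[i] = xi / (1.0 + k * z)       # gain set by the adaptation state z
        z += dt * (xi - z) / tau        # Euler update of the low-pass filter
    return y

dt = 0.001
t = np.arange(0, 0.2, dt)
x = np.where(t >= 0.05, 1.0, 0.0)       # luminance step at t = 50 ms
y = adapt(x, dt=dt)

# A finite adaptation time course produces a transient overshoot at step
# onset that settles to a lower steady-state level within tens of ms.
print(y.max(), y[-1])
```

A model of this form responds asymmetrically to positive- and negative-going pulses, which is the qualitative signature discussed above.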

Multiple layers of contrast adaptation

Peter Lennie, New York University

The spatial selectivity of contrast adaptation found psychophysically has generally been taken as evidence for its cortical origin, and physiological studies have confirmed that adaptation occurs in many simple and complex cells. However, recent work in LGN shows substantial contrast adaptation in at least one of the pathways leaving the retina, and the characteristics of this adaptation are unlike those of adaptation in cortex. To complicate the picture further, adaptation in many neurons in V1 is often much less selective than earlier work would have led us to expect, and provides a less compelling account of psychophysical observations. The perceptual adaptation probably reflects changes occurring at several levels of the visual pathway, including some beyond V1.

Reasons for adaptation

Donald MacLeod, University of California, San Diego

Adaptation is often equated with sensitivity regulation (as in a multi-scale meter) and/or resetting of a baseline (AC coupling). A more sophisticated form of adaptation might deform the input-output functions of the visual system in more complex ways, so as to maximize the average precision with which environmental inputs are represented. But the simple operations of zero adjustment and sensitivity scaling are able to achieve this goal fairly well when applied both to intensity and (at later, but still intermediate, stages of processing) to contrast. The formal simplicity of these operations conceals interesting and useful complexity: both contrast gain and intensity gain are regulated through changes in temporal and spatial integration. The disruptive effects of photon noise at low luminance, and of fluctuations in nerve impulse counts at low contrast, can be reduced by spatial and temporal integration, but at high luminance or high contrast more rapid local analysis becomes possible, at the affordable price of reduced gain for luminance or contrast.
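The two simple operations named above, zero adjustment and sensitivity scaling, can be sketched as a sliding-window normalization; the window length here is an arbitrary illustrative choice, not a measured integration time.

```python
import numpy as np

# Minimal sketch of the two operations named above: zero adjustment
# (subtract the recent mean) and sensitivity scaling (divide by the recent
# spread), applied over a trailing window of arbitrary illustrative length.

def adapt_signal(x, window=100):
    """Return x after baseline subtraction and gain scaling within a
    trailing window, so the output stays within a fixed response range."""
    out = np.empty_like(x, dtype=float)
    for i in range(len(x)):
        w = x[max(0, i - window + 1): i + 1]
        spread = w.std() or 1.0        # avoid divide-by-zero on flat input
        out[i] = (x[i] - w.mean()) / spread
    return out

# A signal whose mean level and contrast both drift is mapped into a
# roughly constant operating range.
t = np.linspace(0, 10, 2000)
x = (1 + t) * np.sin(2 * np.pi * 3 * t) + 10 * t   # growing contrast + drifting mean
y = adapt_signal(x)
print(x.std(), y.std())
```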

Dark adaptation can be slow because it doesn't, in nature, need to be fast; it is less obvious why light adaptation is so fast, but such fast adaptation may help create the representation of contrast during fixational eye movements.

In addition to its benefits for discrimination and detection, spatial contrast adaptation can help compensate for prior optical or neural losses present on short or long time scales. But deprivation can alternatively lead to added central losses. Perhaps inputs that are attenuated but still provide systematic signals may be boosted, while inputs so weak as to be noise-dominated are lost.

What and where in the computation of relative disparity

Bart Farell, Syracuse University

Humans are highly sensitive to relative disparities and poor at judging absolute disparities. Yet relative disparity is thought to be computed by differencing absolute disparities. An analogy is the fine chromatic discrimination that is possible despite the color blindness of individual cone classes. Absolute disparity confounds the disparity of the stimulus with the eyes' vergence state, and differencing eliminates the ocular component.
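The differencing account can be written down directly. The sketch below uses hypothetical numbers and function names; nothing in it comes from the experiment itself.

```python
def absolute_disparity(stimulus_disparity, vergence_error):
    """Absolute disparity confounds the stimulus's disparity with the
    eyes' vergence state (all values in the same angular units)."""
    return stimulus_disparity + vergence_error

def relative_disparity(d_a, d_b):
    """Differencing two absolute disparities cancels the shared ocular
    (vergence) component."""
    return d_a - d_b

# A 5-unit vergence error shifts both absolute disparities equally,
# so the difference recovers the 2-unit stimulus disparity exactly.
patch = absolute_disparity(2.0, 5.0)
annulus = absolute_disparity(0.0, 5.0)
rel = relative_disparity(patch, annulus)  # -> 2.0
```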

If relative disparity is simply a difference between absolute disparities, stimulus parameters other than disparity shouldn't matter, provided the precision of the absolute disparity signal is unaffected. Sensitivity to relative disparity should depend on where the stimuli are, not on what they are. We test this presumption here, and find it wanting.

Stimuli were a central Gabor patch and an annular grating of equal spatial frequency, with independently variable orientations. Observers judged the central patch as 'near' or 'far' relative to the zero-disparity annulus. Presentations, lasting 150 ms, were preceded by nonius lines.

When both gratings were vertical, stereo discrimination thresholds were very low. Rotating the annulus 45° elevated thresholds by about 4×. Yet when the central grating was also rotated to be parallel to the annulus, thresholds fell to the low value found when both were vertical. Thus sensitivity to relative disparity depends on relative orientation, independent of absolute disparity. Orientation differences as small as 10-15° were found to raise threshold. Hence, relative disparity is computed within orientation channels and sensitivity to it depends on what the stimuli are, as defined by orientation, not just on where they're located in space.

Independent perceptual learning in monocular and binocular motion systems

Zhong-Lin Lu, University of Southern California
Co-authors: Wilson Chu, Barbara Anne Dosher, & Sophia Lee, University of Southern California

Investigations of eye-specificity of perceptual learning in judging motion direction of moving luminance-defined objects have found a substantial degree of transfer from trained to untrained eyes, implying a largely binocular site of learning. These results appear to be incompatible with the proposal that motion direction of moving luminance-defined objects is extracted by highly sensitive monocular systems and a less sensitive binocular motion system. We conjecture that the seemingly contradictory results may be a consequence of different eye-specificities of different learning mechanisms: a monocular learning mechanism in clear displays and a binocular learning mechanism in noisy displays. Eye-transfer tests, external noise manipulations and observer models were used to systematically characterize learning mechanisms and to investigate the eye specificity of each mechanism. Perceptual learning in one eye was measured over ten practice sessions. Transfer to and subsequent learning in the untrained eye were assessed in five transfer sessions. Learning in the trained eye improved performance (reduced contrast thresholds) in that eye with virtually equal magnitude across a wide range of external noise levels. The improvements were accounted for by a mixture mechanism of stimulus enhancement and template retuning. The degree of transfer of perceptual learning from the trained eye to the untrained eye depended on the amount of external noise added to the signal stimuli. In high external noise conditions, learning transferred completely to the untrained eye. In low external noise conditions, a large degree (about 46%) of eye-specific learning was found. During the transfer sessions, performance in the untrained eye further improved via stimulus enhancement, with a smaller though significant contribution of template retuning.
The results suggest that perceptual learning of motion direction judgments involves two independent mechanisms at different processing stages of the visual system.

The origins of stimulus dependent intrinsic optical signals of the retina

Daniel Tso, SUNY Health Science Center
Co-authors: M. Zarella, J. Schallek, Y. Kwon, R. Kardon, P. Soliz, SUNY Health Science Center

In order to determine the spatio-temporal properties and sources of functionally correlated intrinsic optical signals in the intact retina, we recorded intrinsic optical signals of the retina in vivo in the near-infrared (NIR, 700-900nm) while presenting a visible (550nm) stimulus. The stimulus set included counterflickering checkerboards and static patterns of various sizes, extents and contrasts. ERG/PERG recordings were also performed to provide an independent measure of retinal activity and optimal spatial frequency. Stimuli were presented in various portions of the retina, and the observed signals exhibited a non-uniform distribution across the retina.

In addition to the negative reflectance signal that mirrors cortical signals, there was often a strong positive (increase in reflectance) optical signal that was spatially offset from the region of the negative reflectance signal. The presence of this positive signal strongly depended on the retinal location of the stimulus, such that the underlying source appears to have a superior-nasal bias. This positive signal was also strongly dependent on the focal plane of the imaging. This asymmetric spatial signal pattern, then, may reflect the architecture of the underlying circulation or deeper retinal structures. Spatially asymmetric patterns in one eye were, not surprisingly, found to be distributed in mirrored fashion in the other eye. Static stimulus patterns as well as counterflickering stimuli elicited strong responses. Action spectra of these signals were measured and related to expected hemoglobin sources.

The development of visual attention skills in school-age children: effects of age and video gaming

Matthew Dye, University of Rochester
Co-authors: Nina Fernandez & Daphne Bavelier, University of Rochester; Kaya deBarbaro, University of Toronto

The literature on the development of visual attention in school-aged children focuses on a small subset of tasks. We set out to explore the development of a wide range of visual attention skills in a sample of over 100 children aged 7 to 17 years. These children participated in four tests designed to measure different aspects of their visual attention: (1) the Attentional Network Test (ANT), (2) Useful Field of View (UFOV), (3) Attentional Blink (AB) and (4) Ball Tracking. The data suggest a steady increase in visual attention skills (both spatial and temporal) from the age of 7 to 10 years, and relatively small increases from ages 10 to 18 years. ANT data demonstrated a marked reduction in reaction times and errors with increasing age, with the ability to make use of spatial cues developing around the age of 14 years. The UFOV task revealed an improvement in selective visual attention skills around 10 years of age. Recovery from the attentional blink was more rapid as children approached 10 years of age, and the number of balls that could be tracked simultaneously increased steadily across the whole age range. Analysis of action videogame playing suggested increased vigilance, decreased RTs to visual stimuli, and an ability to track a larger number of balls in game players aged 11-18 years. Together, the data provide evidence for substantial development of visual attention skills from 7-10 years of age, with further improvement obtained from playing video games that require specific attention skills.

Adaptation's effects on macaque MT neurons

Tony Movshon, New York University
Co-author: Adam Kohn, New York University

The responsivity of primary sensory cortical neurons is reduced following prolonged adaptation to an effective stimulus, but such adaptation has been little studied in higher sensory areas. Many psychophysical studies have described strong perceptual effects of adaptation to visual motion, so we studied the effect of prolonged visual stimulation on the responses of neurons in the macaque's area MT, a cortical area whose importance to visual motion perception is well established. We adapted MT neurons with sinusoidal gratings drifting in the preferred or null direction. Preferred adaptation strongly reduced the responsiveness of MT cells, primarily by changing their contrast gain, and this effect was specific to the adapted location within the receptive field. Null adaptation did not affect responses to preferred motion, but reduced the ability of null gratings to inhibit the response to a simultaneously presented preferred stimulus. These adaptation effects may be due to adaptation-induced changes in contrast gain earlier in the visual pathway, and may not reflect processing changes within MT itself.
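A contrast-gain change of the kind described above is commonly modeled as a rightward shift of the semisaturation contrast in a Naka-Rushton contrast-response function. The sketch below is a generic illustration of that idea, not the authors' fitting procedure; all parameter values are assumptions.

```python
def naka_rushton(c, r_max=50.0, c50=0.2, n=2.0):
    """Contrast-response function R(c) = r_max * c^n / (c^n + c50^n),
    with c the stimulus contrast (0-1) and c50 the semisaturation contrast."""
    return r_max * c**n / (c**n + c50**n)

# Adaptation as reduced contrast gain: c50 shifts rightward, so responses
# to low contrasts fall sharply while the saturated response changes little.
pre = naka_rushton(0.1)             # before adaptation
post = naka_rushton(0.1, c50=0.4)   # after preferred-direction adaptation
```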

We next examined the effect of adaptation on stimulus selectivity. In primary visual cortex, responsivity is usually reduced most when the adapting and test stimuli are well matched. Theories about the functional benefits of adaptation have relied on this specificity, but the resultant changes in neuronal tuning are of the wrong kind to account for well-documented perceptual aftereffects. We studied the effect of adaptation on the direction tuning of MT neurons. In striking contrast to V1, responsivity in MT is best maintained in the adapted direction, but strongly reduced for nearby directions. Consequently, adaptation in the preferred direction reduces direction tuning bandwidth, while adaptation at near-preferred directions causes tuning to shift toward the adapted direction. This novel effect of adaptation is consistent with perceptual aftereffects and suggests that different cortical regions adjust to constant sensory input in distinct ways.

Orientation-specific adaptation in human visual cortex

Geoffrey Boynton, The Salk Institute

Adaptation of the fMRI signal has been increasingly used as a tool for measuring the response properties of underlying neuronal subpopulations. However, fMRI-adaptation studies with object stimuli tend not to show stimulus-specific adaptation in early visual areas, such as the primary visual cortex (V1). We specifically searched for adaptation effects in early visual areas by measuring the fMRI response to pairs of sinusoidal gratings that had either the same or orthogonal orientation. We found no orientation-specific adaptation effects in V1, but we did find increasing effects along the hierarchy of visual areas (V2, V3, and V4v). From these results, one might conclude that there are no orientation-specific neurons in human V1. However, it is more likely that neurons in V1 are simply less susceptible to adaptation. Thus, our results support a general principle of the visual system in which lower visual areas maintain a veridical representation of the world, while higher visual areas represent only what is behaviorally relevant.

Illusory motion from nonlinear response to contrast: how much does the visual system know about its own neuronal adaptation?

Benjamin Backus, University of Pennsylvania
Co-author: Ipek Oruç, University of Pennsylvania

In many visual mechanisms that adapt over time, the adaptation (change in neuronal firing rates) is not strictly internal to the mechanism, but is reflected in the neural code that the mechanism uses to transmit information elsewhere. In such cases, recipients of the information must know how the code has changed over time, if they are to read it accurately. Failure to do this may account for the striking percept of illusory motion in certain repeated asymmetric patterns (or RAPs) such as Kitaoka's (2003) image, Rotating Snakes, and Fraser & Wilcox's (1979) escalator illusion. We conducted experiments in which observers matched real motion to the illusory motion in static RAPs. An early, fast component of the illusory motion can be explained by the global-motion system's failure to compensate for known nonlinearities in early responses to contrast. A second, slow component may result from failure to compensate for slower neuronal adaptations to luminance and contrast. Movies in which real luminance changed over time, to mimic the hypothetical adaptations, evoked similar percepts of motion. Thus, whereas the pattern vision system evidently compensates for changes over time in the neural code for contrast, the global motion system does not. It is plausible that the global motion system normally need not compensate, if the statistics of natural patterns are such that the spurious local motion signals generated by adaptation to natural static images balance across space.

Adaptation as a tool for probing the neural correlates of conscious visual awareness

Randolph Blake, Vanderbilt University

Surveying the literature, one finds a number of innovative psychophysical strategies that have been employed to learn whether visual adaptation can occur in the absence of visual awareness. These strategies exploit phenomena such as binocular rivalry and crowding to dissociate phenomenal perception from physical stimulation. This talk will review select studies that have employed several of these strategies, and in so doing will evaluate the assumptions underlying the strategies and the conclusions that can be drawn from the results.
