SESSION DETAILS

Neural Coding in the Retina

The issue of neural coding is central to our understanding of computation in the nervous system. The retina is a particularly interesting structure in this respect because its anatomy makes it possible to record from virtually all the ganglion cells in a patch of the retina. This has allowed neuroscientists to study the population code in a way that has not been possible in more central neural structures. This session will review some of the recent progress that has been achieved thanks to a unique combination of experimental and computational approaches.

Moderator: Horace Barlow, Cambridge University

9:30 AM: Jonathan Demb, University of Michigan

Cellular mechanisms for visual adaptation

Visual neurons adjust their sensitivity to the immediate environment through the process of adaptation. Adaptation increases sensitivity when signals are weak, to improve the signal-to-noise ratio, and decreases sensitivity when signals are strong, to avoid response saturation. In the retina, cells adapt to the mean intensity but also to the contrast, or the range of light intensities relative to the mean. Contrast adaptation is not present at the output of photoreceptors and thus must arise within retinal circuitry. I will describe experiments in mammalian retinal ganglion cells, measured in vitro, that aim to elucidate the cellular mechanisms for contrast adaptation. A linear systems approach is used to quantify a gain change at high contrast; the analysis is robust in the presence of known nonlinearities in the subthreshold or spiking responses. When contrast triples, the gain of the spiking response decreases by about a factor of two. Intracellular measurements show that the gain change is partially an intrinsic property of the cell, related to spike generation, and partially a synaptic mechanism that arises within the presynaptic bipolar cells. Different ganglion cell types express differing degrees of adaptation, and this variation must relate to each cell type's unique pattern of intrinsic properties and synaptic connections.
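
As a rough sketch of the kind of linear-systems gain measurement described above (illustrative only, not the authors' analysis; the function and variable names are invented), one can estimate the linear filter at each contrast by reverse correlation and take the least-squares scale factor relating the two filters as the gain change:

    import numpy as np

    def reverse_correlate(stimulus, spikes, n_lags=30):
        # Ordinary least-squares filter estimate for a white-noise stimulus:
        # X'y / (N * var), with the stimulus assumed to be in contrast units.
        sta = np.zeros(n_lags)
        for t in np.flatnonzero(spikes):
            if t >= n_lags:
                sta += spikes[t] * stimulus[t - n_lags:t][::-1]
        return sta / (len(spikes) * np.var(stimulus))

    def relative_gain(stim_lo, spikes_lo, stim_hi, spikes_hi, n_lags=30):
        # Least-squares scale factor a minimizing ||f_hi - a * f_lo||^2;
        # a value below 1 indicates reduced gain at high contrast.
        f_lo = reverse_correlate(stim_lo, spikes_lo, n_lags)
        f_hi = reverse_correlate(stim_hi, spikes_hi, n_lags)
        return float(np.dot(f_hi, f_lo) / np.dot(f_lo, f_lo))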

9:50 AM: Eero Simoncelli, New York University

Toward characterization of the complete visual signal in a patch of retina

I'll describe our recent efforts to characterize the spiking responses of a complete network of ON and OFF parasol ganglion cells in a small patch of primate retina, using a generalized linear model that consists of: (1) a linear receptive field that operates on the stimulus; (2) a feedback filter that captures the effects of the neuron's own spike history; (3) a set of cross-coupling filters that capture the effects of spiking in neighboring cells; and (4) an output nonlinearity that converts the total input to an instantaneous probability of spiking. The model is closely related to the more biophysically realistic integrate-and-fire model, and can exhibit a wide array of biologically relevant dynamical behaviors, such as refractoriness, spike rate adaptation, and bursting. We have applied the model to simultaneously-recorded responses of groups of macaque ON and OFF parasol retinal ganglion cells, stimulated with a 120-Hz spatiotemporal binary white noise stimulus. We find that the model accurately describes the stimulus-driven response (PSTH), and reproduces both the autocorrelations and pairwise cross-correlations of multi-cell responses. Finally, we show that the model can be used to map functional connectivity, providing a complete description of the identity, direction and form of functionally significant connections between cells.
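
A minimal single-cell sketch of this class of model (stimulus filter, spike-history filter, exponential output nonlinearity, conditionally Poisson spiking); the cross-coupling filters are omitted and every parameter value below is illustrative rather than taken from the fitted model described in the talk:

    import numpy as np

    rng = np.random.default_rng(0)
    dt = 1.0 / 120.0                          # bin width matched to a 120-Hz stimulus
    T = 2000
    stim = rng.choice([-1.0, 1.0], size=T)    # binary white noise (1-D for simplicity)

    k = np.exp(-np.arange(15) / 3.0)          # illustrative stimulus filter
    k /= np.linalg.norm(k)
    h = -5.0 * np.exp(-np.arange(10) / 2.0)   # spike-history filter (refractoriness)
    b = 3.4                                   # baseline log-rate (~30 spikes/s)

    spikes = np.zeros(T)
    for t in range(T):
        recent_stim = stim[max(0, t - len(k) + 1):t + 1][::-1]
        recent_spk = spikes[max(0, t - len(h)):t][::-1]
        drive = b + np.dot(k[:len(recent_stim)], recent_stim) \
                  + np.dot(h[:len(recent_spk)], recent_spk)
        rate = np.exp(drive)                  # output nonlinearity -> instantaneous rate
        spikes[t] = rng.poisson(rate * dt)    # conditionally Poisson spike count per bin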

10:10 AM: Sheila Nirenberg, Cornell Medical School

Ruling out and ruling in neural codes

The subject of neural coding has generated much heated debate. A key issue is whether the nervous system uses coarse or fine coding strategies. Each has different advantages and disadvantages and, therefore, different implications for how the brain computes. For example, the advantage of coarse coding is that it's robust to fluctuations in spike arrival times. Downstream neurons don't have to keep track of the details of the spike trains. The disadvantage, though, is that individual cells can't carry much information, so downstream neurons have to pool signals across cells and/or across time to obtain enough information to represent the sensory world and guide behavior. In contrast, the advantage of fine coding is that individual cells can carry a great deal of information; however, downstream neurons have to resolve spike train structure. Here we address the question of what the neural code can and can't be, using the retinal output cells as the model system. We recorded from essentially all the retinal output cells an animal uses to solve a task, evaluated the cells' spike trains for as long as the animal evaluates them, and used optimal, i.e., Bayesian, decoding. This approach makes it possible to obtain an upper bound on the performance of codes and thus eliminate those that are not viable. Our results show that coarse coding strategies are insufficient; finer, more information-rich codes are necessary.
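
A toy illustration of the "optimal, i.e., Bayesian, decoding" step (a sketch under an assumed independent-Poisson encoding model, not the authors' decoder; the rate_table below is a hypothetical table of expected responses per candidate stimulus). Decoding the full binned response versus a coarsened version of it (e.g., total spike counts per cell) bounds what fine and coarse codes, respectively, could support:

    import numpy as np

    def log_poisson(counts, rates):
        # Log-likelihood of observed counts under independent Poisson bins
        # (the count-factorial term is dropped; it does not depend on the stimulus).
        rates = np.maximum(rates, 1e-12)
        return np.sum(counts * np.log(rates) - rates)

    def bayesian_decode(counts, rate_table, prior=None):
        # counts:     (n_cells, n_bins) observed response
        # rate_table: (n_stimuli, n_cells, n_bins) expected rates for each stimulus
        n_stim = rate_table.shape[0]
        prior = np.full(n_stim, 1.0 / n_stim) if prior is None else prior
        log_post = np.array([log_poisson(counts, rate_table[s])
                             for s in range(n_stim)]) + np.log(prior)
        return int(np.argmax(log_post)), log_post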

10:30 AM: Richard H. Masland, Harvard Medical School

The number of visual channels in mammalian retinas

Molecular biological tools and cheap computing power now permit the assembly of large databases of neuronal images. This has been done for the main classes of retinal cells. The cells can be classified by traditional or more formal means, which agree that mammalian retinas contain ~ 12 structurally distinct types of ganglion cell. The structures reflect the underlying physical reality of the cells' differing sets of inputs from other retinal neurons, which act to create ~ 12 parallel encodings of the visual scene. The central targets and central uses of these signals are only partially understood.

10:50 AM: Discussion


Contributed Talk Session: Color

2:00 PM: Andrew Stockman, University College London

Transient tritanopia of a second kind redux: delayed loss of S-cone sensitivity after long-wavelength field onset is consistent with the sluggish generation of an active photoproduct within the L- and M-cones

Mollon, Stockman & Polden (1987) reported an anomaly in the time-course of the light adaptation of the S-cone mechanism following the onset of an intense yellow bleaching light. Instead of recovering monotonically, S-cone increment threshold rises for several seconds before falling to its light-adapted steady-state value. As far as we are aware, no mechanism has yet been proposed that provides a compelling explanation of this phenomenon.

As Mollon et al. showed, the anomaly must be mediated postreceptorally. We believe that the delayed suppression of S-cone sensitivity is not a property of the S-cone chromatic system per se, but instead reflects changes in the outputs of the L- and M-cones caused by the sluggish generation of an intermediate, active bleaching photoproduct within the L- and M-cone photoreceptors (some photoproducts are known to act much like real lights). We find that the time course of the rise and fall in S-cone threshold is consistent with the lifetime of an active photoproduct limited by two approximately first-order reactions with time constants of c. 5 and 20 s, which are likely to correspond to the rates of production and decay of the photoproduct. The S-cone thresholds are largely immune to the direct effects of photopigment depletion and other mechanisms of photoreceptor adaptation that affect the L- and M-cone thresholds (which recover monotonically). Consequently, this curious anomaly may provide a unique method of probing in vivo a part of the retinoid cycle that regenerates bleached photopigment in isolation from the usual photoreceptor adaptation mechanisms.
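
The proposed kinetics can be written compactly (our formalization of the two first-order reactions; the symbols are not taken from the paper): if the active photoproduct P is generated from the bleached pigment B with time constant \tau_1 and is removed with time constant \tau_2, then

    dP/dt = (1/\tau_1) B_0 e^{-t/\tau_1} - (1/\tau_2) P,   giving   P(t) \propto e^{-t/\tau_2} - e^{-t/\tau_1}.

With \tau_1 \approx 5 s and \tau_2 \approx 20 s, P(t) peaks at t* = \tau_1 \tau_2 \ln(\tau_2/\tau_1)/(\tau_2 - \tau_1) \approx 9 s, the right order of magnitude for a threshold that rises for several seconds before declining.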

2:15 PM: Dingcai Cao, University of Chicago

S-cone discrimination with spatial and temporal chromatic contrast

Purpose: S-cone discrimination depends on the chromaticity difference between the test field and the surrounding area. The current study investigated the effect on S-cone discrimination of the spatial and temporal chromatic contrast between a surround and the test field.

Methods: The experimental paradigm isolated spatial, temporal, and spatial-and-temporal chromatic contrast effects on discrimination (Zele, Smith & Pokorny, 2006). S-cone discrimination thresholds were assessed by a four-alternative spatial forced choice procedure (Smith, Pokorny & Sun, 2000). Stimuli were either metameric to the equal energy spectrum, or varied in S-cone activation along a line of constant L/M-cone activation. A model based on primate Koniocellular pathway physiology described the data (Pokorny & Smith, 2004).

Results: Spatial and temporal contrast produced equivalent reductions in chromatic discriminability as the chromatic difference between the test and surround increased. S-cone discrimination in the absence of chromatic contrast was similar to that with spatial, temporal and spatial-and-temporal contrast.

Conclusions: S-cone discrimination was determined by either the spatial or temporal contrast component of the signal. In contrast to L/M-cone discrimination, which was best in the absence of chromatic contrast (Zele, Smith & Pokorny, 2006), S-cone discrimination did not vary with spatial or temporal contrast manipulation. This may reflect differences between the receptive field organizations of the Koniocellular and Parvocellular pathways.

2:30 PM: Scott Gabree, Northeastern University

Pedestal masking of S-cone increments and decrements: less contrast gain control in the S-OFF pathways

Substantial evidence has accumulated to indicate that the mechanisms detecting S-cone increments (S+) and decrements (S-) differ in more than simply the polarities of their S, M and L cone inputs. This evidence includes S+ and S- differences with respect to: (1) changes in Ricco's area with eccentricity (Vassilev, et al., 2003); (2) different threshold elevations with opposite sawtooth flicker polarities (Shinomori, et al., 1999); (3) different field sensitivities in transient tritanopia (McLellan & Eskew, 2000); (4) different patterns of chromatic noise masking (Giulianini & Eskew, 2007; Wang & Eskew, 2007), and (5) different patterns of masking by pedestals (Vingrys & Mahon, 1998). We used unipolar, annular test and pedestal stimuli, presented in sawtooth pulses, designed to favor detection by S-ON or S-OFF mechanisms. Like Vingrys and Mahon (1998), we measured test threshold versus pedestal contrast (TvC) functions for S+ and S- tests, but: (a) we used all four combinations of test and pedestal polarities, and (b) we also used long wave pedestals, consisting of equal increments (or decrements) of L and M cones (LM+ or LM-). When S-cone pedestals were used, classic "dipper" and "bumper" functions were found for near-threshold pedestals. Both S and LM pedestals produced masking at high pedestal contrasts; however, "bluish" (S+ or LM-) pedestals masked more than "yellowish" (S- or LM+) ones. The results indicate that these detection mechanisms are asymmetric, and suggest that contrast gain control may be greater in the S-ON than in the S-OFF pathway (Solomon & Lennie, 2005).

2:45 PM: David Richters, Northeastern University

Mechanisms underlying long-term chromatic adaptation

Long-term chromatic adaptation has been demonstrated in adults (Kohler, 1951; Neitz, 2002; Delahunt et al., 2004). Monochromatic unique yellow settings, for example, change as a result of long-term exposure to a chromatically altered environment (Neitz, 2002). The mechanism for the long-term perceptual change, however, is not well understood. In the present investigation, we tested the proposal that the cone weights of the red-green mechanisms can be altered. If so, the slopes of the red and green mechanism threshold contours, which are determined by the cone weights, should change also. We examined threshold-level detection contours before and after adaptation, as well as monochromatic unique yellow settings. Adaptation involved wearing red-tinted contact lenses for at least five hours each day for at least two weeks. The psychophysical measurements were done in the morning before inserting the lenses to ensure we were studying long-term changes. Unique yellow settings, which served to gauge the degree of adaptation, were measured almost daily with a monochromator. Detection contours in the (ΔL/L, ΔM/M) plane were measured with a CRT on a white background, before adaptation (baseline) and at the apex of the adaptation stage. In three of four cases studied so far, some change in unique yellow was found. In one case, there was no change in unique yellow or change in the detection contour slopes. In another case, a weak shift in unique yellow was recorded but again there was no change in detection contours. Current data cannot support the claim that a change in red or green cone weights accounts for the shift in unique yellow.
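
To make explicit why the contour slopes index the cone weights (a standard linear-mechanism relation, not a formula quoted from the abstract): a linear red-green mechanism with cone-contrast weights (w_L, w_M) reaches threshold when

    | w_L (\Delta L / L) + w_M (\Delta M / M) | = 1,

so its threshold contour in the (\Delta L/L, \Delta M/M) plane is a pair of parallel lines with slope -w_L / w_M, and a long-term change in the relative cone weights should rotate those lines.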

3:00 PM: Richa Verma, University of Melbourne

Contribution of human cone photoreceptors to the photopic 30-Hz flicker electroretinogram

For the photopic flicker electroretinogram (ERG) in monkey, pharmacological blocking of post-receptoral responses eliminates a large component of the flicker response. We have used a non-invasive approach to determine the cone contribution to the photopic ERG in humans using the paired-flash technique.

ERG responses were recorded from three human subjects with normal vision. Stimuli were presented in a Ganzfeld bowl with a steady rod-saturating background. The test stimulus was 30 Hz square-wave flicker. At fixed intervals during a cycle (every 3 ms), an intense probe flash was presented to rapidly saturate the cones. We assumed that at 4 ms after the probe the dark current in the cones had been shut down while the post-receptoral response was unchanged. We take this probe response to be proportional to the magnitude of the cone current flowing at that time, and we derive the fractional probe response by normalizing to the probe-alone response.

We determined the Fourier coefficients for the fundamental. The cone response amplitude formed a similar proportion of the overall flicker response amplitude for two subjects (8 & 6%), but was considerably higher for a third (56%). The cone response phase was similar for the same two subjects (78 & 70°), and was different for the third (275°), while the overall flicker phase was similar for all three subjects (306, 300, and 317°).

These results indicate that the photoreceptor response is a small component of the photopic flicker ERG, with considerable individual variability in the relative contribution of the cone response.
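
For illustration, the fundamental amplitude and phase reported above could be computed from responses sampled across the 30-Hz cycle roughly as follows (a sketch only; the variable names and the assumption of a uniform 3-ms sampling grid are ours):

    import numpy as np

    def fundamental(samples, dt=0.003, f0=30.0):
        # Amplitude and phase (degrees) of the Fourier fundamental at f0
        # for responses sampled every dt seconds through the flicker cycle.
        t = np.arange(len(samples)) * dt
        c = 2.0 / len(samples) * np.sum(samples * np.exp(-2j * np.pi * f0 * t))
        return np.abs(c), np.degrees(np.angle(c))

    # e.g.: amp, phase = fundamental(fractional_probe_responses)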

3:15 PM: Alex Wade, Smith-Kettlewell Eye Research Institute

Chromatic independence of surround suppression mechanisms is evidence for an early cortical site of contrast normalization

'Surround suppression' is a contrast normalization computation in which the apparent contrast in one location is reduced by contrast in another, spatially separated region. The two regions must have similar orientation and spatial frequency. Because sub-cortical neurons in the visual pathway have weak orientation tuning, surround suppression is thought to be a cortical computation, but its precise neuronal substrate is still unknown.

In the human visual system, chromatic information is carried in three largely independent pathways until the second or third synapse in V1. If surround suppression mechanisms act after this mixing, we expect contrast in one chromatic pathway to normalize contrast in another. If the site of surround suppression is early in V1, we expect to find relative independence between chromatic surround suppression mechanisms.

We measured the suppressive effects of annular surround regions on central 'probe' regions in a 2AFC contrast detection experiment. The surrounds and probes were defined by either S-cone or pure luminance contrast and could be co-linear or orthogonal to each other. We also measured the detection thresholds of the probes in isolation yielding a total of 12 different conditions. The spatial positions of the probes were cued at all times to reduce the effect of positional uncertainty.

We found powerful, orientation-tuned surround suppression effects when probes and surrounds had the same chromatic contrast. But we found little effect of luminance surrounds on S-cone probes or vice versa. We conclude that surround suppression occurs at the first or second synapse in V1, before multiple chromatic channels are generated.


Multi-sensory Processing and Cross-modal Plasticity

This session will explore the interaction between vision and other sensory modalities, both in intact organisms and following damage to the visual system. As specialized and anatomically distinct as the different sensory modalities may seem, they exhibit remarkable cross-modal plasticity following damage. Ongoing research is demonstrating increasing levels of multi-sensory integration in different areas of the normal brain. Speakers in this session will detail some of the most recent, exciting findings in the field. They will discuss both the extent and limitations of cross-modal plasticity after damage, and the degree to which this can be explained by what we know of multi-sensory processing in the brain.

Moderator: Daphne Bavelier, University of Rochester

4:00 PM: Maurice Ptito, Université de Montréal

Cross-modal plasticity: lessons from the visual system

Sensory deprivation from birth leads to a number of manifestations ranging from anatomical reorganisation to behavioural and cognitive changes. We will illustrate these changes in humans who have been deprived of vision since birth (congenital blindness, CB). We took advantage of a sensory substitution device termed the Tongue Display Unit (TDU), which uses the tongue as a medium to transmit "visual" information to the brain. Using MRI whole-brain voxel-based morphometry, we show that the visual pathways of the blind are atrophied. Their visual cortex, however, can be recruited in tasks involving the somatosensory system. For example, CB subjects can discriminate the orientation of a stimulus and detect motion and motion direction. In doing so, they recruit cortical areas involved in visual perception such as the visual cortex and area MT. Finally, TMS of the visual cortex induces paresthesiae on the tongue that are somatotopically organized. Our data argue for crossmodal plasticity in early blindness that is training-dependent.

4:20 PM: Amir Amedi, Harvard Center for Non-Invasive Magnetic Brain Stimulation

Towards closing the gap between visual neuroprostheses and sight restoration: insights from studying vision, cross-modal plasticity and sensory substitution

Restoration of sight poses clinical and scientific challenges. Despite intensive efforts, restoration of functional vision using neuroprostheses has not been achieved. A major reason for this failure might be that the brain in the blind undergoes profound plasticity, and we do not know enough about vision and about how to communicate with this altered cortex to generate meaningful visual perception. New findings regarding the nature of the representation of visual functions (such as perception versus imagery of objects) or tactile functions in sighted subjects and in subjects blindfolded for a week will be presented. These studies show that interactions between sensory modalities are critical to our understanding of sensory representations in the brain. Next, I will review recent studies showing robust plasticity in the visual cortex of the blind, using fMRI and TMS as a virtual lesion tool to assess the functional relevance of these plastic changes in the tactile and language domains. Finally, sensory substitution devices (SSDs) will be discussed. In SSDs, visual information captured by an artificial receptor is delivered to the brain as non-visual sensory information via a human-machine interface. Specifically, blind and sighted subjects using an auditory-to-visual SSD called "The vOICe" show both successful behavioral performance and specialized recruitment of visual structures (such as the 'visual' object-related area LOC). This suggests that "The vOICe" can be used for daily activities such as object recognition and might have the potential to play a role in 'guiding' the visual cortex to interpret visual information arriving from a retinal prosthesis in a blind subject.

4:40 PM: Alexandre Pouget, University of Rochester

Neural basis of Bayes-optimal multisensory integration: theory and experiments

Humans can combine multisensory inputs near optimally. This is quite remarkable considering that sensory inputs often come in very different formats, such as the voice of a speaker and the image of the speaker's lip movements. Moreover, sensory modalities are not equally reliable, and their degree of reliability can change from one instant to the next. I will present a neural model based on the notions of basis functions and probabilistic population codes that can solve both of these problems optimally. The model makes two major predictions: (1) multisensory neurons should be mostly additive, and (2) their receptive fields are not necessarily in spatial correspondence. Both of these predictions are at odds with the current dogma in the multisensory integration literature, which claims that many neurons are superadditive and that the receptive fields of most multisensory neurons are in spatial alignment. I will show that, in fact, experimental data support our predictions.
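
The additivity prediction can be stated compactly for the family of codes the model assumes (our paraphrase of the probabilistic-population-code argument, in notation of our own choosing): if each unisensory response r_i follows an exponential-family likelihood

    P(r_i | s) = \phi_i(r_i) \exp( h(s)^T r_i ),   i = 1, 2,

then the posterior given both cues is proportional to the product of the single-cue posteriors, and a downstream population that simply sums its inputs, r_3 = r_1 + r_2, carries exactly that combined posterior through the term h(s)^T r_3. Under this scheme, optimal cue combination calls for additive rather than superadditive multisensory responses.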

5:00 PM: Liz Romanski, University of Rochester

Integration of auditory and visual communication information in the primate prefrontal cortex

Social communication relies on the rapid and accurate integration of numerous auditory and visual signals. One cortical region that figures prominently in communication is the frontal lobe. In addition to neuropsychological and neuroimaging data confirming the essential role of the human inferior frontal lobe in language, studies have shown that single cells in the prefrontal cortex (PFC) of non-human primates respond to faces, while more recent studies have focused on vocalization-responsive neurons in adjacent parts of the primate PFC. To understand how audio-visual information is integrated in the frontal lobes during social communication, we recorded from single cells in the ventrolateral PFC (VLPFC) while we presented vocalizations and corresponding facial gestures to awake, behaving monkeys. The stimuli consisted of short video clips of conspecific macaques vocalizing, which were deconstructed into audio and visual components and presented separately and simultaneously. The single units we encountered responded robustly to faces, vocalizations and combined face-vocalization stimuli. The multisensory neurons represented approximately one-third of the task-responsive population and exhibited significantly enhanced or suppressed responses to bimodal stimuli. In some cells, combining the face stimuli with non-matching (incongruent) vocalizations resulted in significant changes in neuronal firing compared to the normal congruent stimuli. Moreover, altering the temporal onset of the auditory-visual stimuli also resulted in a significant change in firing when the auditory stimulus preceded the visual motion stimulus. Our results suggest that ventrolateral PFC neurons integrate communication-relevant auditory and visual information. Analysis of multisensory processing in the VLPFC of non-human primates may help us to understand social communication in the human brain, which critically depends on the integration of multiple types of sensory information.

5:20 PM: Discussion


Color Naming and Cone Mechanisms

The rules that describe how we name and group colors have been a classical area of interest in color science. More recent studies have attempted to find new approaches for relating color names and categories to the activity in the cone mechanisms themselves. The goal of this symposium is to bring together representatives of both these approaches to see if these two directions can find common ground.

Moderator: Mike Webster, University of Nevada-Reno

8:30 AM: Clyde Hardin, Syracuse University

Color categories: nature and nurture

In the thirty-seven years since its publication, the principal theses of Berlin and Kay's Basic Color Terms, although widely criticized and significantly amended, have been largely substantiated. Explanations for the cross-language regularities thus established have in many instances been more elusive. Are we to understand them in terms of commonalities in the biology of human color perception, or in the statistics of environmental reflectances, or similarities due to cultural diffusion? That the best examples of five of the color categories in an eleven-term language should fall close to the Hering elemental colors is certainly suggestive of biological origins, but one wonders why some of the Hering colors seem not to be salient in languages with fewer color terms. And on any of the putative accounts of color category formation, why is it that there are many languages in the World Color Survey that have an undifferentiated green-with-blue ("Grue") category, but very few with an undifferentiated red-with-yellow category? Furthermore, in languages with the largest number of basic color terms, why are there basic terms for some binaries such as purple and orange, while terms referring to binaries such as chartreuse and turquoise have never achieved basic status? And why is it that a category such as Green always seems to span a large region of color space, while one such as Red occupies a much smaller one?

8:50 AM: Heidi Hofer, University of Houston

The contributions of cones to color vision

Despite over two hundred years of perceptual and physiological work in the study of color vision, the precise manner in which signals from L, M, and S cones are combined to produce the hues we perceive is still mysterious. While it is well established that early in the visual system color information is processed in two chromatically opponent channels, the L-M and S-cone opponent channels, it has also been established that these channels do not satisfactorily explain color perception. For example, although the L-M and S-cone channels are often associated with red-green and blue-yellow psychophysical color opponency, these so-called unique hues are not the colors that are perceived under isolated activity of the cone opponent channels.

Our approach exploits a phenomenon described by Holmgren, that small flashes of light appear to fluctuate in color, to probe the contributions of individual cones to hue and provide insight into the microstructure of the chromatic mechanisms. Detection and perception of near-threshold, monochromatic flashes of light, made smaller than individual cones with adaptive optics, were studied in subjects with known retinal mosaics and a wide range of L:M ratios. Our findings suggest that M cones contribute to blueness and that cones within the same class contribute differently to color perception depending on local retinal topography, with cones within clumps of like-type cones apparently failing to participate in chromatic pathways. We then discuss recent efforts to directly link elicited sensations with the individual cones responsible for detection using an ultra-high resolution retinal stimulus tracking method.

9:10 AM: Sophie Wuerger, The University of Liverpool

The cone inputs to colour appearance mechanisms

What computation does the human brain perform when we experience 'red', 'green', 'yellow', or 'blue'? How does the human visual system combine the retinal cone signals (L, M, S cones) to yield these colour sensations? To address this question we ask observers to select the colour which appears 'neither red nor green' (yielding unique yellow and unique blue) or 'neither yellow nor blue' (yielding unique red and unique green). Since we obtain these unique hue settings at different luminance and saturation levels, we have numerous points in 3-dimensional (LMS) cone space that correspond to a particular hue. This allows us to determine quantitatively how the cone signals are related to a particular perceived hue. Our results show that the human visual system employs at least three chromatic mechanisms to yield the four unique hues and, to a first approximation, these mechanisms combine the (L,M,S) cone signals linearly. Furthermore, the variability between observers is relatively small when expressed in terms of perceptual errors. Our results add further weight to the idea that the colour vision system in adult humans is able to recalibrate itself based on prior visual experience and that our colour appearance mechanisms may be a consequence of environmental constraints.
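
In this linear approximation, each unique-hue criterion corresponds to the null plane of one chromatic mechanism in cone space (our paraphrase, not the fitted equations from the study): a light with cone coordinates (L, M, S) is selected as 'neither red nor green' when

    w_L L + w_M M + w_S S = 0

for some fixed weights, and analogously, with different weights, for 'neither yellow nor blue'. If the unique red and unique green settings do not lie in a single such plane, more than two linear mechanisms are required, which is one way of reading the 'at least three' conclusion above.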

9:30 AM: Rolf Kuehni, North Carolina State University

What the World Color Survey tells about hue based color categories

The World Color Survey of 110 unwritten languages provides much information about color terms and their perceptual meaning in population groups that are not highly technically advanced. A new analysis of the data, based on certain rules, discloses 46 kinds of term concepts (identical perceptual meaning) with major usage, representing 47% of all terms used in the languages. The most important are Hering's perceptual primitives, in the sequence red, white, black, green, yellow, and blue. Thirteen terms with major use can only be described with English double or triple terms, and 22 terms defy easy definition. Some of these have a usage level comparable to or higher than those of Berlin & Kay's basic colors orange and gray. The number of major color term concepts used in a language begins at one, rises to 19 in 7-major-term languages, and declines to 10 in 9-term languages. It is evident that while the Hering perceptual primitives play the most important role, presumably as a result of the operation of the color vision system, cultural effects have a major role during the development of a settled color terminology and, if the WCS languages are representative examples, there is no simple single path for this development.

9:50 AM: Discussion


Contributed Talk Session: Vision

11:00 AM: Jessica Wolfing Morgan, University of Rochester

High-resolution autofluorescence imaging of individual retinal pigment epithelial cells in vivo

Retinal pigment epithelial (RPE) cells are responsible for helping regenerate retinal in the visual cycle, providing metabolic support to the photoreceptors, and phagocytosing the photoreceptor outer segments. However, despite their important role in the retina, limited reports of RPE cell morphology exist. RPE cells are not usually resolved in vivo in conventional scanning laser ophthalmoscopes (SLO), even those that are equipped with adaptive optics (AO), presumably because they are obscured by the waveguiding photoreceptors. Recently, in vivo techniques have been developed to image the RPE layer by using lipofuscin autofluorescence. Here, we combine high-resolution AO scanning laser ophthalmoscopy with simultaneous autofluorescence and reflectance imaging to resolve the human and primate RPE mosaic in vivo. Mosaics of RPE cells were imaged at different retinal eccentricities and analyzed to determine cell spacing and density. In the primate, at 10 deg from the fovea, the RPE cell spacing was 17.6 µm and the density was 2,900 cells/mm2, while at the fovea the cell spacing was 11.6 µm and the RPE cell density was 6,300 cells/mm2. At 15 deg from the fovea in the human eye the RPE cell spacing was 15.5 µm. This noninvasive, in vivo technique allows RPE morphology to be studied in normal and diseased retina and could be used to assess treatment efficacy in human patients or animal models of disease.

11:15 AM: Kaccie Y. Li, University of California, Berkeley

Analysis of cone packing arrangement in adaptive optics images

An automated algorithm for identifying cone photoreceptors was implemented in Matlab for aiding the analysis of adaptive optics (AO) retinal images. The algorithm exploits the same optical qualities of cones that make AO retinal imaging possible. Its performance was tested on six cropped images that have already been analyzed by the authors. Out of a total of 2,153 manually labeled cones, the algorithm correctly identified 94.1 percent of them (92.7 to 96.2 percent across the six images) with false positives ranging from 1.2 to 9.1 percent. We analyzed four large AO montages from one monkey and three human retinas acquired from 0.10 to 1.86, 0.60 to 2.60, 1.10 to 4.27, and 0.35 to 1.85 degrees eccentricity respectively. Cone densities of these montages ranged from 25,300 to 72,200 cones/mm2. Voronoi analysis was used to analyze cone packing structure. Approximately 50 percent of the cones are hexagonally packed for all images. Percentages ranged from 35 to as high as 70 percent locally at certain eccentricities. The consistency of our measurements demonstrates the reliability and practicality of an automated cone identification routine. This algorithm in Matlab function format can be downloaded from http://vision.berkeley.edu/roordalab/Kaccie/KaccieResearch.htm.
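
The Voronoi packing measure can be sketched in a few lines (a Python analogue of the kind of analysis described; the released routine is in Matlab and its internals are not reproduced here):

    import numpy as np
    from scipy.spatial import Voronoi

    def packing_stats(cone_xy, mosaic_area_mm2):
        # cone_xy:         (N, 2) array of cone centre coordinates
        # mosaic_area_mm2: analyzed retinal area in mm^2
        vor = Voronoi(cone_xy)
        n_sides = []
        for region_idx in vor.point_region:
            region = vor.regions[region_idx]
            if len(region) == 0 or -1 in region:
                continue                      # skip unbounded cells at the mosaic edge
            n_sides.append(len(region))
        n_sides = np.asarray(n_sides)
        frac_hex = float(np.mean(n_sides == 6))   # fraction with six Voronoi neighbours
        density = len(cone_xy) / mosaic_area_mm2  # cones per mm^2
        return frac_hex, density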

11:30 AM: Anthony M. Norcia, Smith-Kettlewell Eye Research Institute

Regions of human visual cortex sensitive to small vernier offsets as determined by EEG source-imaging

Humans can determine the relative position of visual features with a precision ('vernier acuity') better than the sampling resolution of the cone mosaic. Here we use high-density EEG recordings combined with distributed source imaging methods to identify two regions of visual cortex that underlie the detection of near-threshold vernier offsets. Eight observers were presented with a set of 1 arc min vernier offsets that were periodically introduced and withdrawn from a 2 cpd square wave luminance grating at 3.75 Hz (alignment/misalignment). The offset regions of the grating alternated with static regions that served as a position reference. Psychophysical thresholds for this target are in the range of 15-30 arc sec (Norcia et al., 1999). This condition was contrasted with a control condition in which the 1 arc min vernier offsets displaced symmetrically with respect to the reference (misalignment/misalignment). EEG responses to each condition were recorded over 128 channels and were averaged at each electrode across observers. These "sensor-space" averages were aligned to an individual observer's head model generated from a structural MRI. Current density on the cortical surface was determined by a minimum norm inverse. Subtraction of the misalignment/misalignment condition from the alignment/misalignment condition isolates activity that is specific to the relative position of the offsets and the reference. This activity is maximal at the occipital pole (areas V1,V2,V3), but there is a secondary maximum in the middle occipital gyrus, bilaterally. This is the first evidence for a high-resolution position mechanism outside of the classical retinotopic areas.
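
For reference, a minimum-norm inverse of the kind referred to above takes the standard regularized form (a generic statement, not this study's specific implementation or choice of regularization): given sensor data m and a lead-field matrix L mapping cortical current density J to the electrodes,

    \hat{J} = L^T (L L^T + \lambda I)^{-1} m,

the smallest-norm current distribution consistent, up to the regularization parameter \lambda, with the measured scalp potentials.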

11:45 AM: Arthur Shapiro, Bucknell University

Are simultaneous contrast phenomena really illusions? Or does the visual system represent only the relevant spatial frequencies?

Simultaneous contrast phenomena typically contain two patches of light that have identical luminance levels but have different perceived brightness levels (e.g., a patch with a black surround appears brighter than a patch with a white surround). Explanations of such phenomena generally assume that brightness relationships in the visual scene are determined by some form of early neural interaction (Hering-type explanation), inferences about illumination (Helmholtz-type explanation), or comparisons among multiple luminance frameworks (Gestalt-type explanation). We examine whether simultaneous contrast phenomena arise from a general process in which the visual system eliminates spatial frequencies lower than the fundamental frequency of the area of interest (an efficiency-type explanation). We examine a wide variety of contrast and assimilation illusions (standard SC, articulated surrounds, blurred-surround, Adelson's snake and checker-shadow illusions, Purves and Lotto illusions, Logvinenko diamonds, Anderson and Winawer illusions, White's effect, Bressan's Dungeon illusion, and a new class of illusions which we refer to as gradient-gradient illusions). When low spatial frequencies are removed from these displays (i.e., when the displays are passed through a high-pass filter with a low-spatial-frequency cut-off), the physical values of the resulting images follow the same pattern as that of the perceived brightness. The results suggest that the test patches are actually physically different over the range of relevant spatial frequencies. We relate this approach to other spatial frequency models of brightness illusions (Blakeslee and McCourt, 1999; Dakin and Bex, 2003) and to a new model by Ioannides, Johnston, & Griffin (2006).
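
One simple way to implement the high-pass test described above (a sketch, not the authors' exact filter; tying the Gaussian cut-off to the patch size is our choice): subtract a blurred copy of the display to remove frequencies below the patch's fundamental, then compare the filtered values inside the two physically identical test patches.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def highpass_patch_means(image, patch_a, patch_b, patch_size_px):
        # image:         2-D luminance array
        # patch_a/b:     boolean masks over `image` marking the two test patches
        # patch_size_px: test-patch width in pixels; sets the low-frequency cut-off
        low = gaussian_filter(image.astype(float), sigma=patch_size_px / 2.0)
        high = image - low                     # keep only the higher spatial frequencies
        return float(high[patch_a].mean()), float(high[patch_b].mean())

    # If the account above holds, these two means differ in the same direction as the
    # perceived brightness difference, even though the raw patch luminances are equal.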

12:00 PM: D.J. McKeefry, Bradford University

Speed selectivity in visual short term memory for motion

In this study we employed a 'memory masking' paradigm (Magnussen & Greenlee, 1992) to determine which stimulus attributes are important in the storage of information about moving grating stimuli in visual short term memory (VSTM). Delayed speed discrimination thresholds were measured in the presence of masking stimuli which varied in terms of their spatial frequency or temporal frequency. Our results demonstrate that it is genuinely the speed of the stimulus, as opposed to temporal or spatial frequency content, that is crucial in the retention of information about motion in visual short term memory. The property of speed selectivity exhibited by VSTM mirrors that reported for neurons in area V5/MT, a brain area crucial for the processing of visual motion in the primate brain (Perrone & Thiele, 2001). This link between area V5/MT and VSTM for motion is consistent with current views which suggest that there is a close association between the neural mechanisms involved in the analysis of sensory information and those involved in its retention in short term memory (Pasternak & Greenlee, 2005).

12:15 PM: Andrew Herbert, Rochester Institute of Technology

Searching for symmetry: eye movements during a difficult symmetry detection task

Bilateral symmetry does not pop out of a noisy background as an indicator of object location and orientation should. We recorded eye movements to examine observers' behavior when searching for symmetry. If symmetry is detected automatically and in parallel across a display, then eye movements early in the search should be directed towards the target region. In Experiment 1, symmetric regions appeared at one of seven locations along the horizontal meridian. In Experiment 2, the symmetric regions were presented at locations throughout the display. In both experiments, 100 by 100 pixel symmetric dot patterns embedded in 800 by 600 pixel random backgrounds were presented while eye movements were monitored using an ASL Series 504 remote eye tracker. Symmetry was detected within 5 fixations on 20% of the trials, and after more than 15 fixations on over half of the trials. Response times and first target fixation latencies were longer when symmetry was farther from the starting fixation point in both experiments. Targets were fixated an average of 5 s before a response in Experiment 1, which increased to 10 s in Experiment 2. No speed advantage was observed for detecting symmetry when present along the horizontal or vertical meridian compared to other locations. On some trials it was clear that fixations gradually approached the symmetric target, but this was inconsistent, and the results were not suggestive of preconscious symmetry detection. Accidental symmetries in the random background may have produced the slow search for the target symmetric regions.


Cortical and Sub-cortical Circuitry

Recent advances in imaging and analysis techniques are enabling researchers to study the detailed microcircuitry in LGN and striate cortex in combination with large-scale neuronal population responses. This session will detail advances in the study of these areas and the connections between them.

Moderator: Tony Movshon, New York University

2:00 PM: Matteo Carandini, Smith-Kettlewell Eye Research Institute

Standing waves and traveling waves in visual cortex

The visual cortex is widely held to encode the attributes of visual stimuli through the activity of neuronal populations. How does this activity evolve over time? We stained the primary visual cortex of anesthetized cats with the voltage-sensitive dye RH-1692, and imaged the dye fluorescence (Grinvald & Hildesheim, 2004). We stimulated with sequences of gratings flashed for 30-40 ms at random orientations and spatial phases (Ringach et al., 1997). We expressed population responses to this stimulus as a function of preferred orientation. We could predict these responses to a considerable degree on the basis of a simple linear filter that describes the average response to a given orientation as a function of time. This filter was separable, i.e., it was the product of a function of preferred orientation and a function of time. Separability indicates that activity in the orientation domain is a standing wave (it neither broadens nor sharpens over time). We then studied how activity evolves in the spatial domain. We stimulated with small patches of standing gratings whose contrast reversed at ~5 Hz. These stimuli elicited strong responses at twice that frequency (~10 Hz), consistent with an origin in complex cells. The activity extended over a few mm of cortex, with response phase showing a clear dependence on distance from the stimulated site. This behavior is evidence for a traveling wave. We conclude that visual stimuli elicit waves of activation in visual cortex: standing waves in the orientation domain, and traveling waves in the spatial domain.
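
Separability in the sense used above can be checked with a rank-1 (SVD) test (a generic sketch, not the authors' analysis code): arrange the population response as a matrix indexed by preferred orientation and time and ask how much variance a single outer product explains.

    import numpy as np

    def separability_index(R):
        # R: (n_orientations, n_timepoints) response matrix.
        # Returns the fraction of variance captured by the best rank-1
        # (separable) approximation R ~ outer(orientation_profile, time_course).
        s = np.linalg.svd(R, compute_uv=False)
        return float(s[0] ** 2 / np.sum(s ** 2))

    # A value near 1 indicates a standing wave in the orientation domain:
    # a fixed orientation profile that only scales over time.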

2:20 PM: W. Martin Usrey, University of California-Davis

Dynamic properties of thalamic neurons for vision

The lateral geniculate nucleus (LGN) is the primary source of visual input to the cerebral cortex. Neurons in the LGN receive visual information directly from retinal ganglion cells—the output cells of the retina—and, in turn, give rise to axons that terminate in primary visual cortex. In addition to their retinal afferents, LGN neurons receive a massive feedback projection from the cortex. Our laboratory has been investigating the stability and dynamics of visual processing in the LGN with an emphasis placed on determining the contributions made by the feedforward and feedback pathways. Results show that the LGN is much more than a simple relay of visual information to cortex, but rather a site for dynamic processing of retinal signals.

2:40 PM: David Ferster, Northwestern University

How threshold shapes cortical selectivity

Since Hartline described lateral inhibition in the 1930s, it has been assumed to shape receptive field selectivity in many sensory domains. In visual cortex, lateral inhibition—in the form of cross-orientation inhibition—has been invoked to explain the sharpness of orientation tuning, cross-orientation suppression, contrast invariance of orientation tuning, and other nonlinear properties of cortical neurons. Intracellular recordings from cortical neurons suggest, however, that the behavior of cortical simple cells can be accounted for by a simple feed-forward model, one that lacks lateral inhibition, but that incorporates the nonlinearities of the visual pathway, such as contrast saturation, rectification, neuronal threshold, and trial-to-trial variability of responses.

3:00 PM: Murray Sherman, University of Chicago

The role of thalamus in cortical function: not just a simple relay

The LGN and pulvinar (a massive but generally mysterious and ignored thalamic relay) are examples of two different types of relay: the LGN is a first order relay, transmitting information from a subcortical source (retina), while the pulvinar is mostly a higher order relay, transmitting information from layer 5 of one cortical area to another area. First and higher order thalamic relays can also be recognized for the somatosensory and auditory thalamic systems, and this division of thalamic relays can also be extended beyond sensory systems. Most of the thalamus consists of higher order relays. Higher order relays seem especially important to general corticocortical communication, and this view challenges and extends the conventional view that such communication is based mainly on direct corticocortical connections. In this sense, any new information reaching a cortical area, whether from a subcortical source or another cortical area, benefits from a thalamic relay. Thus the thalamus is not just a simple relay responsible for getting peripheral information to cortex. Instead, it provides behaviorally relevant, dynamic control over the nature of the information relayed and also plays a key role in basic corticocortical communication.

3:20 PM: Discussion


Tribute to Robert Boynton

Moderator: David Williams, University of Rochester

Confirmed Speakers: Mitsuo Ikeda, Rhea Eskew, Geoff Boynton, Joel Pokorny


Vision and Reading

This symposium presents recent advances in understanding the role of visual pathways in reading. For example, over the last few years there has been increasing evidence that reading efficiency depends upon dynamic visual processes, and that plasticity within both early and later stages of the visual system can affect reading outcomes.

Moderator: Maureen Powers, Gemstone Foundation

8:30 AM: Mark S. Seidenberg, University of Wisconsin-Madison

Visual and phonological processing deficits in dyslexia: evidence and possible linkage

If you talk to vision researchers, they think it's known that dyslexia results from a visual processing impairment (specific to the magnocellular pathway). If you talk to hearing researchers, they think it's known that dyslexia results from an auditory processing impairment. If you talk to reading researchers, they think it's very hard to reliably detect either visual or auditory processing deficits in dyslexia, and so they focus on its major behavioral characteristic, impaired "phonological awareness," which interferes with learning spelling-sound correspondences. So, despite the very real advances in understanding reading and its brain bases that have occurred over the past 25 years or so, there is still considerable uncertainty about the cause(s) of dyslexia. I will review some recent research by Lu, Manis, Sperling and myself which suggests that subtle visual processing impairments related to noise exclusion are common in dyslexia (though not specific to the magnocellular channel), and then suggest (speculatively, but consistent with computational models of reading) how they might be causally related to phonological deficits and other aspects of impaired reading.

8:50 AM: Joel Talcott, Aston University

Sensory processing skills and deficits as potential indicators of reading disability

Although dyslexia is generally attributed to phonological deficits, the constellation of difficulties associated with dyslexia is much more extensive. The challenge for the interdisciplinary community involved in dyslexia research is to define specific behavioral indicators ("endotypes") that would aid the early detection of children at risk of reading failure. I will review evidence related to three inter-related questions about the importance of sensory processing impairments to reading disability:

  • How strong is the evidence for an association between visual and auditory processing deficits and dyslexia?
  • Are these deficits specific to dyslexia?
  • What are the potential mechanisms by which sensory processing deficits might affect reading development and dysfunction?

Results from empirical work and meta-analyses support the heterogeneous nature of the underlying sensory deficits associated with reading disability and therefore question the utility of unitary deficit hypotheses for explicating the mechanisms underlying reading disability.

9:10 AM: Bob Dougherty, Stanford University

The development of visual pathways for reading

Proficient reading is an essential skill typically acquired at a young age. It represents an important assay of healthy sensory, motor and cognitive development. I will describe cross-sectional and longitudinal measurements, in 7-12 year-old children, of functional and anatomical development in the visual pathways essential for reading. We have found several functional and anatomical measures that are correlated with the development of reading and phonological awareness skills, including: (1) fMRI responsivity to word visibility during an incidental reading task in ventral occipito-temporal cortex; (2) fMRI contrast responsivity to drifting gratings in human MT+; and (3) diffusion tensor imaging measurements of fractional anisotropy and mean diffusivity in several regions within the white matter, including the splenium of the corpus callosum. These functional and anatomical results implicate a network of visual regions important for skilled reading and are clinically relevant to understanding healthy reading development and identifying reading disabilities.

9:30 AM: Eric Borsting, Southern California College of Optometry

The role of the eye care professional in helping individuals with reading problems

Eye care professionals frequently encounter patients who complain of perceptual distortions, somatic symptoms, and inefficient reading when engaged in school work or other near point tasks. Several theories have been put forth to explain these common symptoms, which occur in both children and adults who have reading problems or disabilities. The various theoretical explanations include oculomotor deficits, noise within the visual system, and deficient magnocellular pathways. I will present recent data on the association between visual symptoms and various visual factors in a group of college students and school-aged children without reading disability, which point to oculomotor factors as a primary cause of symptoms when reading and studying. These findings will be discussed in the context of current eye care treatments for visual factors that adversely affect reading.

9:50 AM: Discussion


Evolution of Opsins and Color Vision

There is tremendous variety in color vision among different species. Moreover, some animals exhibit variety in color vision within a species; certain primates, including humans, are examples. Comparative, ecological and molecular genetic studies have provided a wealth of information about the evolutionary relationships between different varieties of color vision both within and between species. This symposium explores how insights into photopigment evolution can help answer questions about the biological mechanisms that underlie color vision and why differences occur within and between species.

Moderator: Nathaniel Dominy, University of California-Santa Cruz

11:00 AM: Shozo Yokoyama, Emory University

General evolution of the opsins in vertebrates

Red-green color vision is strongly suspected to enhance the survival of its possessors. Despite being red-green color blind, however, many species have successfully competed in nature, which brings into question the evolutionary advantage of achieving red-green color vision. Here, we propose a new method of identifying positive selection at individual amino acid sites with the premise that if positive Darwinian selection has driven the evolution of the protein under consideration, then it should be found mostly at the branches in the phylogenetic tree where its function had changed. The statistical and molecular methods have been applied to 29 visual pigments with the wavelengths of maximal absorption at ~510-540 nm (green- or middle wavelength-sensitive; MWS pigments) and ~560 nm (red- or long wavelength-sensitive; LWS pigments), which are sampled from a diverse range of vertebrate species. The results show that the MWS pigments are positively selected through amino acid replacements S180A, Y277F, and T285A and that the LWS pigments have been subjected to strong evolutionary conservation. The fact that these positively selected M/LWS pigments are found not only in animals with red-green color vision but also in those with red-green color blindness strongly suggests that both red-green color vision and color blindness have undergone adaptive evolution independently in different species.

11:20 AM: David Hunt, University College London

Molecular evolution of colour vision in primates

Trichromacy in catarrhine primates has evolved from the ancestral dichromatic state of other mammals through a duplication of the longwave-sensitive (LWS) opsin gene on the X chromosome. The duplicate genes encode L and M pigment variants with λmax values at 565 and 535 nm, respectively; the amino acid differences responsible for this spectral shift are at sites 180, 277 and 285. Upstream of this gene array is a locus control region (LCR), which appears to determine, via interactions with the minimal promoter region, which gene (L or M) is selected for expression. Gene conversion is a feature of the array, with the L and M genes showing a high level of sequence identity.

In contrast, trichromacy in platyrrhine primates is based on a single polymorphic LWS gene on the X chromosome that specifies pigments with λmax values from 565 to 530 nm. This spectral variation again arises from substitutions at sites 180, 277 and 285. Multiple alleles at this locus are found in many species, with trichromacy limited to females that inherit two different alleles. The exception is the howler monkey that follows the catarrhine model with duplicate LWS genes and full trichromacy. The duplication in the howler monkey is however clearly different from that in catarrhines as it includes the LCR in the duplicated segment. Amongst the prosimians, a polymorphic LWS gene has been found in some members of the lemur family.

The primate SWS1 pigment peaks in the violet region of the spectrum. This shift from ancestral UV-sensitivity is largely due to substitution at sites 86 and 93, with Pro93 present in all primates and Leu86 in tarsiers, platyrrhines and catarrhines. Site 86 is however much more variable amongst the lemurs with either Ser, Cys, or Val present, indicating a separate origin for violet-sensitivity in this group. Surprisingly, the Aye-Aye has Phe86, the residue present in UV pigments, indicating that UV-sensitivity may be present in this unique species.

11:40 AM: Maureen Neitz, Medical College of Wisconsin

Evolution of opsins and inter-individual variability in humans

In comparison to most other proteins and to other opsins, the human long-wavelength (L) and middle-wavelength (M) opsins exhibit an extraordinary degree of amino acid sequence variability. For example, sequence variability in human rhodopsin is quite low, and the variability that has been observed has, with rare exception, been associated with disease, including progressive retinal degeneration and stationary night blindness. Likewise, variation in the human short-wavelength (S) sensitive opsin sequence is quite rare, and all variation observed has been associated with tritan color vision deficiency. Mutations in rhodopsin and the S cone opsin arise from extremely rare random mutational events. In contrast, the human M and L opsin genes are prone to unequal homologous recombination, as evidenced by the high incidence of color vision defects in modern populations and the high frequency of female carriers. Over the course of human evolution, sequential rounds of unequal homologous recombination between normal and color-defective opsin gene arrays have produced the variability in the present-day L and M opsins. Evidence has accumulated indicating that the mechanism that produces the hypervariability in L and M opsins is uniquely human and that it has dire consequences for the evolution of human vision.

12:00 PM: Daniel Osorio, University of Sussex

Ecology of primate color vision

There are three types of account of whether and how primate photoreceptor spectral sensitivities and color vision might have evolved: (I) the existing arrangement is sub-optimal, and in particular the spectral separation of the L and M cones reflects their recent evolutionary divergence; (II) the spectral sensitivities are optimized for color vision; and (III) there is a trade-off between the requirements of chromatic and spatial sampling. The discussion is enlivened by consideration of the diversity amongst New World primates, where gene frequencies give insight into selection amongst different phenotypes (i.e. varieties of color vision). Different authors find evidence for all the main proposals summarized above, and I will review two particular approaches. First, I will consider a model developed by M. Vorobyev, which shows that the human retina is optimal for coding the reflectance spectra of colorful objects, such as fruit and flowers. I will then turn to evidence from New World monkeys. This suggests that the 535 nm (M) type of pigment gives relatively poor (dichromatic) vision, but that dichromats may be better than trichromats at finding camouflaged insects. Present knowledge does not give a clear account of the selective forces that influence the evolution of primate cone pigments and color vision. We have, however, learnt that, at least for spectral sampling, understanding the evolution and design of the eye requires consideration not only of the natural signals that it encodes, but also of how this information is used in making behavioral decisions.

12:20 PM: Discussion


Retinal Structure & Function Revealed with High-resolution Imaging

Advances in imaging technology are providing new insights into the structure of the retina and how it correlates with visual function. This workshop will focus on recent developments in the study of early visual processing revealed by in vivo high-resolution retinal imaging.

Moderator: Wolfgang Drexler, Cardiff University

2:30 PM: Stacey Choi, University of California-Davis

High-resolution imaging of retinal diseases and their retinal function

Patients with a variety of retinal diseases were evaluated with high-resolution imaging instruments constructed at UC Davis and with tests of retinal function. Functional tests included the multifocal electroretinogram (mfERG), color vision, visual fields, and contrast sensitivity, using stimuli presented at locations corresponding to the imaged retina. The imaging instruments included: (1) adaptive optics (AO) flood-illumination imaging, which provides approximately 2 µm lateral resolution; (2) Fourier-domain optical coherence tomography (Fd-OCT), providing high-speed (9-36 frames/sec) B-scans with either ~3, 4.5 or 6 µm axial resolution and 15 µm lateral resolution; and (3) AO Fd-OCT, which has both high lateral (3.5 µm) and high axial (6 µm) resolution. Both AO instruments revealed changes in the cone photoreceptors, which correlated with localized functional changes. Due to the high speed of Fd-OCT, it is possible to acquire volumetric data of retinal structures. The 3D reconstruction of the retina revealed structures that have not been visualized with commercial instruments.
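As a rough illustration of why the quoted B-scan rates make volumetric imaging practical, the sketch below estimates acquisition time from the 9-36 frames/sec figure and stacks B-scans into a 3D array. The volume dimensions and placeholder data are assumptions, not the UC Davis instrument's actual settings or processing.

    import numpy as np

    # Assumed volume geometry (illustrative only): 100 B-scans of 500 A-scans x 1024 depth pixels
    n_bscans, a_scans_per_b, depth_pixels = 100, 500, 1024

    # Acquisition time at the B-scan rates quoted in the abstract
    for frame_rate in (9, 36):
        print(f"{frame_rate} B-scans/s -> volume acquired in ~{n_bscans / frame_rate:.1f} s")

    # A volume is simply a stack of sequential B-scans (placeholder data here)
    rng = np.random.default_rng(0)
    volume = np.zeros((n_bscans, depth_pixels, a_scans_per_b), dtype=np.float32)
    for k in range(n_bscans):
        bscan = rng.random((depth_pixels, a_scans_per_b), dtype=np.float32)
        volume[k] = bscan

    print("volume shape (B-scan, depth, lateral):", volume.shape)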

2:50 PM: Yoshikazu Imanishi, Case Western Reserve University

Noninvasive two-photon imaging reveals retinyl ester storage structures in the eye

Metabolic transformations of retinoids within retinal photoreceptor and retinal pigment epithelial (RPE) cells are responsible for the production of the visual chromophore, 11-cis-retinal, which is essential for vision. Within RPE cells, all-trans-retinol (vitamin A), the product of photoisomerized and reduced 11-cis-retinal, is esterified with fatty acids in a reaction catalyzed by lecithin:retinol acyltransferase (LRAT). While a number of retinoid metabolites are generated during regeneration of 11-cis-retinal, only retinol and retinyl esters show a weak intrinsic fluorescence (excitation λex ~320 nm). Two-photon excitation laser scanning microscopy was applied as a noninvasive imaging technique to investigate the light-dependent distribution of fluorescent all-trans-retinol and all-trans-retinyl esters in the dissected mouse eye. Using the intrinsic fluorescence of all-trans-retinyl esters, noninvasive two-photon microscopy in combination with HPLC analysis revealed previously uncharacterized structures, termed retinyl ester storage particles, or retinosomes. These structures form autonomous all-trans-retinyl ester-rich intracellular compartments distinct from other cellular organelles and colocalize with adipose differentiation-related protein (ADRP). As demonstrated by in vivo experiments using wild-type mice, retinosomes participate in 11-cis-retinal formation. Retinosomes accumulate in Rpe65-/- mice, which are incapable of carrying out the enzymatic isomerization, and, correspondingly, are absent from the eyes of Lrat-/- mice, which are deficient in retinyl ester synthesis. These results indicate that retinosomes located close to the RPE lateral plasma membrane are essential components in 11-cis-retinal production.

3:10 PM: Scott Stevenson, University of Houston

Eye movement recording and retinal image stabilization with high magnification retinal imaging

Advances in ophthalmic imaging have produced very high magnification images of the retina in living human eyes, with wide-ranging applications. Scanning systems such as scanning laser ophthalmoscopes (SLOs) and optical coherence tomographers (OCTs) acquire images over time and therefore suffer distortions from eye rotation during the scan. As magnification improves and ever smaller structures are imaged, the natural motions of the fixating eye become an increasingly significant problem. Correcting these distortions provides better signal-to-noise ratios and more accurate image metrics, but it also yields a record of the eye movements that caused them, and such records are among the most sensitive eye movement traces obtained by any method. In this talk I will provide an analysis of the impact that normal and abnormal fixational eye movements have on retinal imaging, and an overview of the various approaches taken to correct for them. As an example of how retinal imaging systems can be used as high-sensitivity eye trackers, I will also describe recent oculomotor and psychophysical experiments conducted with the UC Berkeley Adaptive Optics SLO that examine the relationship between retinal locus and visual direction.
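One widely used way to recover both the distortion correction and the eye movement trace from scanned images is strip-wise registration against a reference frame. The sketch below, using plain FFT cross-correlation and arbitrary strip and frame sizes, is a simplified illustration of that idea rather than the Berkeley AOSLO's actual tracking pipeline.

    import numpy as np

    def strip_offsets(frame, reference, strip_height=16):
        """Return the (dy, dx) translation of each horizontal strip of `frame`
        relative to `reference`, estimated by FFT cross-correlation. Because each
        strip is acquired at a slightly different time, the per-strip offsets
        double as a sampled eye movement trace."""
        h, w = frame.shape
        ref_fft_conj = np.conj(np.fft.fft2(reference))
        offsets = []
        for top in range(0, h - strip_height + 1, strip_height):
            strip = np.zeros_like(frame)
            strip[top:top + strip_height] = frame[top:top + strip_height]
            # Peak of the circular cross-correlation marks the strip's displacement
            xcorr = np.fft.ifft2(np.fft.fft2(strip) * ref_fft_conj).real
            dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
            # Map wrapped indices to signed offsets
            offsets.append((int((dy + h // 2) % h - h // 2),
                            int((dx + w // 2) % w - w // 2)))
        return offsets  # one (dy, dx) sample per strip acquisition time

    # Toy usage: shift a synthetic reference and recover the offset strip by strip
    rng = np.random.default_rng(0)
    reference = rng.random((256, 256))
    frame = np.roll(reference, shift=(3, -5), axis=(0, 1))  # simulated eye translation
    print(strip_offsets(frame, reference)[:3])               # approximately (3, -5) per strip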

3:30 PM: Kostadinka Bizheva, University of Waterloo

Depth-resolved optical probing of retinal physiology with functional ultrahigh resolution optical coherence tomography

Non-contact, depth-resolved optical probing of retinal responses to visual stimulation with <10 µm spatial resolution, achieved using functional ultrahigh-resolution optical coherence tomography (fUHROCT), is demonstrated in isolated rabbit retinas. The method takes advantage of the fact that physiological changes in dark-adapted retinas caused by light stimulation can result in local variations of tissue reflectivity. fUHROCT scans were acquired from isolated retinas synchronously with electrical recordings before, during, and after light stimulation. Pronounced stimulus-related changes in the retinal reflectivity profile were observed in the inner/outer segments of the photoreceptor layer and in the plexiform layers. Control experiments (e.g., dark adaptation vs. light stimulation) and pharmacological inhibition of photoreceptor function and of synaptic transmission to the inner retina confirmed that the observed optical changes originate from the altered physiological state of the retina evoked by the light stimulus. These experiments demonstrate that fUHROCT allows simultaneous, noninvasive probing of both retinal morphology and function, which could significantly improve early diagnosis of various ophthalmic pathologies and lead to a better understanding of their pathogenesis. In the past year, advances in UHROCT technology have allowed in vivo fUHROCT measurements in animal models and human patients, bringing the technique closer to clinical trials and commercial development.
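A simple way to quantify such stimulus-related reflectivity changes is to compare averaged depth profiles before and after stimulus onset and compute a fractional change (dR/R) at each depth. The sketch below uses synthetic data and an arbitrary averaging scheme; it is an illustration of the idea, not the fUHROCT group's actual processing.

    import numpy as np

    def fractional_change(ascans, stim_onset_index):
        """ascans: (n_repeats, n_depth) reflectivity profiles ordered in time."""
        baseline = ascans[:stim_onset_index].mean(axis=0)   # pre-stimulus average profile
        response = ascans[stim_onset_index:].mean(axis=0)   # post-stimulus average profile
        return (response - baseline) / baseline             # dR/R at each depth pixel

    # Toy usage with synthetic data: a small reflectivity increase in one depth band
    rng = np.random.default_rng(0)
    n_repeats, n_depth, onset = 40, 512, 20
    profiles = 1.0 + 0.05 * rng.standard_normal((n_repeats, n_depth))
    profiles[onset:, 300:340] *= 1.03                        # simulated stimulus-evoked change
    dR_over_R = fractional_change(profiles, onset)
    print("mean fractional change in the affected band:", float(dR_over_R[300:340].mean()))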

3:50 PM: Discussion


Monday, October 9, 2006: OSA Annual Meeting (FiO)

Advances in understanding accommodation and presbyopia correction

The eye's diminished capacity to accommodate in middle age is known as presbyopia. The growing proportion of older people in the population is driving the recent development of methods to correct presbyopia. These methods include new surgical techniques and new designs of intra-ocular lenses (IOLs), such as 'accommodating' and multifocal IOLs. How well these methods work to restore near vision is currently a matter of debate. This theme will focus on recent advances in the correction of presbyopia and on useful imaging techniques for the eye's anterior segment.

Moderator: Ian Cox, Bausch & Lomb

  • Adrian Glasser, University of Houston

    Dynamics of accommodation and the mechanism of presbyopia in the primate eye

  • Jane Koretz, Rensselaer Polytechnic Institute

    Understanding human accommodation and presbyopia by in vivo imaging of the anterior segment

  • Alan Lang, ReVision Optics

    Advances in the design of intra-ocular lenses for presbyopia correction


Advances in instrumentation for high resolution retinal imaging

A recent trend among vision scientists has been imaging the retina at high resolution, not only for diagnosing and detecting retinal degenerative diseases but also for understanding basic biological processes in the retina. This theme will focus on the optical instrumentation being developed to help vision scientists realize these goals. Ultrahigh-resolution optical coherence tomography, adaptive optics ophthalmoscopy, and adaptive optics scanning laser ophthalmoscopy are some of the approaches to be discussed in detail.

Moderator: David Williams, University of Rochester

  • Yuhua Zhang, University of California-Berkeley

    MEMS-based adaptive-optics scanning laser ophthalmoscope

  • Don Miller, Indiana University

    Adaptive optics high resolution retinal imaging

  • Wolfgang Drexler, Medical University Vienna

    Ultrahigh resolution, functional optical coherence tomography
