OSA Fall Vision Meeting Schedule
Tentative schedule, subject to change
All talks & discussion sessions are at the Large Conference Room, Hilton Garden Inn University & Medical Center
Breakfasts and Lunches will not be provided. A list of restaurants within walking distance can be found at http://www.collegetownrochester.com/
Thursday, October 20
7:00 - 9:00 pm—Registration & Welcome Reception, FEI Atrium
Friday, October 21
8:00 am—Registration & Poster Setup
8:45 am—Welcome & Introduction, David Williams
9:30 - 10:00 am—Break
The developing visual system
Chair: William Bobier, University of Waterloo
Refractive errors tend to be the norm in young infants but generally resolve during early development through emmetropization. Once thought to be largely a passive process, animal studies have provided definitive proof of a defocus-driven active process of local ocular growth regulation, which is sensitive to the sign of defocus and relies on input from not only the central retina but the peripheral retina as well. In myopia, this process appears to derail, and eyes continue to elongate beyond the length required to achieve emmetropia. This presentation will provide an update, based on results from relevant animal and human studies, on current thinking around ocular growth regulation and the origin of the current epidemic of myopia, which is predicted to affect 50% of the world's population by 2050.
We have tracked the structural and functional development of the normal retina and used these normative data to facilitate interpretation of the retinal dysfunction and dysmorphia associated with retinopathy of prematurity (ROP). ROP, a potentially blinding disease, is active when the neurosensory retina is quite immature. Interpretation of non-invasive psychophysical and electrophysiological data in infants and children and in rat models of ROP is backed up by our biophysical, molecular biological, and histological data in the rats. A key result is that the onset of active ROP coincides with development of the rod photoreceptor outer segments and the phototransduction cascade. Dysfunction of the rods persists for years after the active disease has healed in early infancy. Taken together, our results lead us to propose light as a non-invasive, non-pharmacological intervention designed to minimize and ultimately prevent ROP and its consequences.
Our research takes advantage of a natural experiment: children with a period of visual deprivation caused by dense, central cataracts. Longitudinal studies of their visual recovery reveal that even short periods of deprivation near birth cause lasting deficits not only in acuity but also in higher-level perceptual skills such as motion perception. The deficits are often sleeper effects that appear much later in development. They imply that the infant's early visual experience tunes the visual neural system to allow later specialization. When the experience is missing, deficits emerge later. Nevertheless, some potential for recovery remains even in adulthood--as revealed by video game therapy.
Statistical learning is the process of identifying patterns of probabilistic co-occurrence among stimulus features, essential to our ability to perceive the world as predictable and stable. Research on auditory statistical learning has revealed that infants use statistical properties of linguistic input to discover structure--including sound patterns, words, and the beginnings of grammar--that may facilitate language acquisition. Research on visual statistical learning has revealed abilities to discriminate, learn, and generalize probabilities in visual patterns, but the mechanisms (including developmental mechanisms) underlying infant performance remain unclear. This talk will present new work that examines competing models of statistical learning and how learning might be constrained by limits in infants’ attention, perception, and memory. Broader implications for theories of cognitive development will be discussed.
11:40 am - 12:00 pm Discussion
12:00 - 1:30 pm—Lunch
Contributed Session: Retinal Imaging
Chair: Jessica Morgan, University of Pennsylvania
Adaptive optics (AO) imaging of the human cone mosaic has shown that a visible stimulus can induce intrinsic changes in cone infrared reflectivity. We studied this intrinsic response as a function of stimulus irradiance and duration. Five subjects were imaged using a previously described AO scanning light ophthalmoscope. Following 2 minutes of dark adaptation, image sequences were obtained 0.65° from the fovea using a 790 nm imaging light. Four seconds after the start of each sequence, a 680 nm stimulus of variable irradiance (0.037 to 18.9 μW/deg^2) and duration (0.2 to 2 secs) was delivered to one half of the imaging field. Reflectance signals were extracted from each cone and standardized to their pre-stimulus values. The difference in the standard deviation of the reflectance of stimulated and control (unstimulated) cones was determined at each time point. The peak was extracted from the resultant time-varying response. Peak response increased with both stimulus irradiance and duration, qualitatively consistent with a dependence of the response on photoreceptor function.
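As a rough illustration of the analysis described above (the array layout, variable names, and baseline-standardization details are our own assumptions, not the authors' published code), the stimulated-versus-control response metric could be sketched as:

```python
import numpy as np

def cone_response_metric(reflectance, stimulated_mask, pre_stim_frames):
    """Standardize each cone's reflectance trace to its pre-stimulus
    baseline, then compare the spread of stimulated vs. control cones.

    reflectance: (n_cones, n_frames) array of per-cone reflectance.
    stimulated_mask: boolean (n_cones,) marking cones in the stimulated half.
    pre_stim_frames: number of frames recorded before stimulus onset.
    """
    # Standardize each cone to its own pre-stimulus mean and std.
    baseline = reflectance[:, :pre_stim_frames]
    mu = baseline.mean(axis=1, keepdims=True)
    sd = baseline.std(axis=1, keepdims=True)
    z = (reflectance - mu) / sd

    # Spread (std across cones) at each time point, for each group.
    stim_sd = z[stimulated_mask].std(axis=0)
    ctrl_sd = z[~stimulated_mask].std(axis=0)

    # Time-varying response is the difference in spread; its maximum over
    # the post-stimulus interval is the peak response.
    response = stim_sd - ctrl_sd
    peak = response[pre_stim_frames:].max()
    return response, peak
```

A larger peak indicates a greater stimulus-evoked change in cone reflectivity relative to the unstimulated half of the field.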
Photoreceptors are endowed with directional properties that are expressed by their angular sensitivity to incident light as well as by directionality in retinal images, commonly referred to as the Stiles-Crawford effect (SCE) of the first kind and the optical Stiles-Crawford effect, respectively. Individual photoreceptor tilt can be analyzed by moving the incident light across the pupil when capturing retinal images. The aim of this study is to examine the viability of a quadrant pupil detection scheme in which light enters near the SCE peak and backscattered light is captured through four equal-sized sectors in the pupil, from which individual photoreceptor tilt can be derived.
The method employs a pyramidal prism to capture four high-resolution retinal images simultaneously. A numerical analysis using Matlab is performed to quantify the angular tuning in scattered light from each cone in the captured retinal images. The experimental results for parafoveal retinal imaging in healthy subjects are compared with modeled cone mosaic images. Retinal photoreceptors are modeled by single-layer scattering from mitochondria in the ellipsoid. Retinal images are calculated through four equal-sized sectors in the pupil plane in a 4f system. The modeled photoreceptors are placed in the retinal plane with different angular tilts.
Initial results from our sectored quadrant pupil imaging system allow not only determination of total intensity images, but also direct determination of photoreceptor inclinations from the backscattered light intensity. We find that the method is highly suited to determining photoreceptor inclinations without requiring displacement of the incident light in the pupil plane, which simplifies the determination of photoreceptor tilt. The experimental results compared well with the theoretical expectations, confirming the potential of the technique. It offers the possibility of analyzing cone photoreceptor tilt in 2-D and is expected to prove valuable when analyzing retinal disease.
Adaptive optics scanning laser ophthalmoscopy (AOSLO) produces high-resolution images of the retinal mosaic. These images contain distortions due to eye motion, which can be used for high-speed, high-resolution eye tracking referenced to the photoreceptor mosaic. Patch-based registration methods employ either a) cross-correlation or b) a map-seeking circuit. Both techniques compare images to a reference frame, which may itself contain motion and so provides only a relative measure. We have developed a method for generating motion-free reference frames and use it to compare techniques a) and b) to a modified method using feature (cone) tracking. We present a comparison of these algorithms under variations in image quality, retinal structure and types of eye movement. We discuss the implications for psychophysical experiments, which require accurate measurement of eye movements and cone density/arrangement, as well as for clinical experiments, in which sub-optimal image quality can affect such measures.
To improve the image acquisition efficiency of high resolution adaptive optics (AO) confocal ophthalmoscopy and counter the image distortion induced by continuous and rapid eye movement, we have developed a high speed AO parallel confocal ophthalmoscope (AOPCO). This instrument employs a digital micromirror device to modulate the imaging light into a line of point sources that illuminate the retina simultaneously. By using a high speed line camera to acquire the image and AO to compensate ocular wave aberration, the AOPCO can image the living human eye with cellular-level resolution at a frame rate of 200 Hz with a digitization of 768×512 pixels/frame over a field of view of 1.93 x 1.28 degrees. We demonstrate the ability to reduce or even eliminate the spatial distortion of retinal images caused by involuntary fixational eye movements, and the potential for investigating fast-moving features, such as erythrocytes traveling through the retinal vasculature, in the living human eye.
Purpose: A critical part in developing optogenetic and electrical vision restoration therapies is the ability to assess the restored function over long periods of time. Here we describe the application of an in vivo method to record and track the response of retinal neurons in a study of optogenetic vision restoration in a mouse model of photoreceptor degeneration (rd10).
Methods: We expressed the calcium indicator GCaMP6s and/or a red-shifted channelrhodopsin (ChrimsonR-tdTomato) in inner retina neurons using adeno-associated viral vectors. In vivo functional imaging of GCaMP6s was performed using a custom built adaptive optics scanning light ophthalmoscope to measure cell responses to light stimulation. Uniform field 0.2 Hz square wave stimuli were presented in Maxwellian view using two LEDs: 365 nm to drive S-opsin, and 620 nm to drive ChrimsonR. Responses were also recorded in wild-type (C57BL/6J) mice transfected with only GCaMP6s.
Results: Robust co-expression of GCaMP6s and ChrimsonR-tdTomato was observed in many neurons and persisted in mice as old as 210 days, 145 days after injection. In rd10 mice with ChrimsonR, cells showed robust responses to 620 nm stimulation at all ages tested; 70, 84 and 112 days old. In rd10 mice without ChrimsonR, no cells showed light evoked activity to 620 nm or 365 nm stimulation. In wild-type mice without ChrimsonR, many cells showed light evoked activity to 365 nm but none responded to 620 nm stimulation.
Conclusion: Expression and function of the channelrhodopsin ChrimsonR persists for extended periods of time, demonstrating excellent potential as a viable therapy for vision restoration in humans. In vivo imaging of calcium responses can be used to monitor responses of retinal neurons over time and may be a useful method for evaluating vision restoration methods.
While the advent of adaptive optics ophthalmoscopy has provided microscopic resolution in the living eye, the majority of neurons in the mammalian retina have evaded detection. This is because most retinal cells are highly translucent, allowing photons to travel through the neural retina with minimal scatter and absorption before reaching the photoreceptors. While retinal translucency is beneficial for vision, it poses a challenge for imaging because cells provide weak optical contrast.
Recent advances in non-confocal ophthalmoscopy now reveal cells that were once hidden by their translucency. By comparing directional cell scatter, we provide detailed images of: ganglion cells, horizontal cells, multi-laminar photoreceptor somata, red blood cells, platelets, white blood cells and putative sub-cellular organelles in the living eye without contrast agents.
We optimized the non-confocal contrast by integrating "split-detection" capabilities (Scoles et al. 2014) into an adaptive optics scanning light ophthalmoscope (AOSLO) custom built for the mouse eye (Guevara-Torres et al. 2015). The split-detection approach compares directional light scatter in the retina. The imaged point spread function of the AOSLO was bisected by a knife-edge prism and relayed into two phase-locked photomultiplier tubes. The left and right halves of the imaged retinal point spread function were digitally subtracted to remove the light common to the two channels. C57BL/6J mice were anesthetized and imaged with and without contrast agents applied to validate cell type.
Imaged ganglion cells were confirmed by imaging tagged Thy-1 fluorescent cells. Leukocytes were confirmed with fluorescent labeling of nuclei with acridine orange. Red blood cells, platelets, photoreceptor somata and horizontal cells matched the known morphology, density and/or topography in the mouse retina. This new catalog of cells has greatly expanded the number of cell types that may be studied in the living mammalian retina in health and disease.
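The channel subtraction at the heart of split-detection can be illustrated with a minimal sketch. The abstract describes digital subtraction of the two channels; the sum normalization below is a common formulation of split-detection contrast, and the function name and epsilon guard are our own assumptions:

```python
import numpy as np

def split_detection(left, right, eps=1e-6):
    """Non-confocal split-detection contrast from two half-aperture images.

    left, right: co-registered images from the two photomultiplier
    channels on either side of the knife-edge prism.
    Subtracting the channels removes light common to both; dividing by
    their sum yields a contrast image insensitive to overall intensity.
    """
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    # eps avoids division by zero in dark regions.
    return (left - right) / (left + right + eps)
```

Because the common (mean) signal cancels, translucent structures that scatter light asymmetrically appear with enhanced contrast in the difference image.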
3:00 - 3:30 pm—Break
Visual human factors in driving
Chair: Jeff Mulligan, NASA
While it is universally acknowledged that both bottom-up and top-down factors contribute to the allocation of gaze, we currently have limited understanding of how top-down factors determine gaze choices in the context of ongoing natural behavior such as driving or locomotion. Modeling top-down gaze control has been very difficult because it depends on characterizing the underlying task structure. One purely top-down model by Sprague et al (2007) suggests that natural behaviors can be understood in terms of simple component behaviors, or modules, that are executed according to their expected reward value, with gaze targets chosen in order to reduce uncertainty about the particular world state needed to execute those behaviors. I will discuss the plausibility of the central claims of this approach in the context of driving and locomotion tasks. The modular approach of independent component behaviors is consistent with many aspects of performance, and can generate sequences of fixations similar to those observed in human driving and walking behavior. Thus the model forms a useful, although incomplete, starting point for understanding top-down factors in active behavior.
Limitations in drivers’ visual abilities are a leading contributor to nighttime crashes into pedestrians. Too often, pedestrians at night are poorly illuminated, low contrast, and unexpected. This creates a critical visual challenge even for visually healthy and attentive drivers. To maximize their safety at night, pedestrians must be conspicuous, not merely potentially detectable as an ambiguous object. A conspicuous pedestrian will attract a driver’s attention and be readily perceivable as a pedestrian. This talk will summarize research on the conspicuity of pedestrians at night. A key discovery from basic visual science – our perceptual sensitivity to biological motion – has been harnessed to make pedestrians more conspicuous. Most road users, drivers and pedestrians alike, are not aware of the natural limitations of night vision. Consequently, at night drivers “overdrive” their headlights, and pedestrians overestimate their own conspicuity and avoid using conspicuity-enhancing markings. Collectively, these findings demonstrate the need for interventions to educate road users about night vision.
Failure to detect an oncoming vehicle, or misjudgement of the speed of that vehicle, is among the most common errors listed in road accident statistics. These errors are elevated for drivers in the 70+ age group, are more prevalent in accidents involving children, and are also disproportionately high for collisions involving motorcycles. This talk will outline some principles of optical looming that result in counter-intuitive effects; it will also briefly present empirical findings from simulator studies that support this model as an alternative to simple "distraction" explanations of "looked but failed to see" errors. The findings lead to proposals regarding inner-city speed regulation, motorcycle headlight design, driver education and possibly the design of screening tests for older drivers.
Visual field requirements for driving vary widely and are largely without scientific basis. While some states specify a minimum horizontal binocular field extent (e.g., 120° in MA), others have no field requirements (e.g., NH), and some have monocular field requirements. Other countries, such as the UK and Australia, have requirements addressing both the integrity of the central visual field and the peripheral extent. This talk will review the results of studies which have started to address some of the existing knowledge gaps about the impact of field loss on driving performance. An essential consideration is whether a person with field loss is able to detect hazards in a timely manner, in particular, those hazards which first appear in non-seeing areas of the field. This is difficult to evaluate in an open-road driving situation where there is no control over if, where, or when hazards might appear. However, a series of driving studies in the safe, controlled, repeatable environment of a driving simulator has provided important insights into the effects of field loss on hazard detection. (1) Compensatory eye/head scanning is often inadequate resulting in missed detections or delayed responses, even when participants are highly primed to the hazard appearance. (2) The extent of compensatory eye/head scanning varies widely despite similar amounts of remaining visual field, suggesting that we need to evaluate more than just horizontal field extent. (3) The presence of central field loss should be taken into account when developing vision requirements for driving because central scotomas delay responses to hazards.
5:10 - 5:30 pm Discussion
5:30 - 6:00 pm Poster Blitz
Chair: Jeff Mulligan, NASA
6:00 - 8:00 pm Cocktail Reception and Posters
Saturday, October 22
8:30 - 9:00 am—Registration
Fixational eye movements
Chair: Jude Mitchell, University of Rochester
Microsaccades cause rapid retinal-image shifts that go perceptually unnoticed. The mechanisms for perceptual microsaccadic suppression are not well known; moreover, those for large saccades have been highly controversial, in part due to sparse understanding of neural substrates. Here we uncovered an unexpectedly specific neural locus for microsaccadic suppression in the superior colliculus (SC). We first developed a sensitive behavioral measure of perceptual suppression in two monkeys, demonstrating selectivity of microsaccadic suppression to low spatial frequencies. We then investigated visual responses in either purely visual SC neurons or anatomically-deeper visual-motor neurons, which are also involved in saccade generation commands. Surprisingly, visual-motor neurons showed the strongest visual suppression, and the suppression was dependent on spatial frequency like in perception. Most importantly, visual-motor neuron suppression selectivity was highly predictive of behavioral suppression effects in each individual animal, with our recorded population explaining up to ~74% of behavioral variance even on completely different experimental sessions. In contrast, purely visual neurons only had mild and unselective suppression. Because microsaccades are mechanistically similar to large saccades, our results illuminate the broader topic of saccadic suppression in general. Specifically, our results run directly contrary to a hypothesized SC mechanism for saccadic suppression, in which a motor command in the visual-motor and motor neurons is relayed to the more superficial purely visual neurons to suppress them, and to then potentially be fed back to cortex. Instead, our results indicate that an extra-retinal modulatory signal mediating perceptual suppression is already established in visual-motor neurons.
Human eyes never stop moving, despite our subjective experience to the contrary. Even when we attempt to anchor our eyes to an object or feature of interest, we still produce so called ‘fixational’ eye movements, namely microsaccades, drift and tremor. In recent years, microsaccade research has become a mainstay of oculomotor and visual neuroscience, with important implications for basic research, the understanding of various clinical conditions, and for the replication of studies conducted in circumstances in which microsaccades occur. In this talk I will discuss some of the consequences of making (and not making) microsaccades in a variety of visual tasks and environments, as well as some of the pathologies that affect microsaccades, with the aim of developing an integrative framework of microsaccadic function.
The rate of microsaccades, small eye movements that occur spontaneously during visual fixation, rapidly drops following the onset of transient stimuli. Here, we demonstrate that this involuntary reflex is yoked to fluctuations in explicit visual perception. Human observers reported the presence or absence of a brief visual stimulus while we recorded their microsaccades. Microsaccades were reflexively inhibited if and only if the observer reported seeing the stimulus, even when none was present. Moreover, perceptual sensitivity and the oculomotor reflex were both susceptible to orientation-specific adaptation, a signature of cortical processing. The oculomotor reflex therefore tracked perceptual visibility even when the stimulus was physically unchanged. By applying a novel Bayesian classification technique to patterns of microsaccades on individual trials, we were able to decode the state of perception more accurately than the state of the stimulus. We conclude that unified detection mechanisms link perception and oculomotor control: an elemental 'subjective' internal event—becoming aware of a change in the environment—is immediately and inadvertently revealed by motor inhibition. By demonstrating an objective measure of conscious detection that does not require explicit reports, this finding opens doors to clinical applications and further investigations of perceptual awareness.
Microsaccades are miniature replicas of the saccadic gaze shifts normally used to look at different objects in the scene. Unlike their larger counterparts, microsaccades maintain the stimulus within the foveola, the retinal region with highest visual acuity, raising the question of why observers make them. One of the long-standing proposals on the visual functions of microsaccades is that they enable exploration of small regions in the scene in the same way saccades are normally used to scan larger regions. Recent results based on improved methods for localizing the center of gaze have provided strong support to this proposal. They have shown that when humans are not requested to maintain prolonged fixation on a point (a common, yet unnatural, condition in vision research experiments), they use microsaccades to precisely center gaze on nearby details of interest. These small gaze shifts are critical in high-acuity tasks and occur because of an eccentricity-dependent decline in visual functions even within the foveola itself. Here we will review this body of results and quantify the different costs of making and not making microsaccades.
10:40 - 11:00 am Discussion
11:00 am - 12:00 pm—Break & Posters
12:00 - 1:30 pm—Lunch
Contributed Session: Clinical
Chair: Dick Aslin, University of Rochester
The common marmoset has attracted increasing interest as a model for visual neuroscience. A measurement of fundamental importance to ensure the validity of visual studies is spatial acuity. The marmoset has excellent acuity that has been reported at the fovea to be nearly half that of the human (Ordy and Samorajski, 1968), a value that is consistent with them having similar photoreceptor densities combined with their smaller eye size (Troilo et al, 1993). Of interest, the marmoset exhibits a higher proportion of cones than rods in peripheral vision than human or macaque, which in principle could endow them with better peripheral acuity depending on how those signals are pooled in subsequent processing. Here we introduce a simple behavioral paradigm to measure acuity and then test how acuity in the marmoset scales with eccentricity. We trained subjects to fixate a central point and detect a peripheral Gabor by making a saccade to its location. First, we found that accurate assessment of acuity required correction for myopia in all subjects. This is an important point because marmosets raised in laboratory conditions often have mild to severe myopia (Graham and Judge, 1999), a finding that we confirm, and that would limit their utility for studies of vision if uncorrected. With corrected vision, we found that their acuity scales with eccentricity similar to that of humans and macaques, having roughly half the value of the human and with no clear departure for higher acuity in the periphery.
Introduction: Corrected myopes exhibit larger visual acuity loss at low luminance (1, 2) and less precise fixational eye movements (3) than emmetropes. We examined the relationship between fixation stability and acuity at two light levels.
Methods: Subjects (31 young adults; dominant eye refraction +2.25 to -10.88 D) were corrected with contact lenses if needed. Mesopic (0.276 cd/m2) and photopic (325 cd/m2) acuity was measured while eye movements were recorded (EyeLink 1000). Fixations were analyzed for letter sizes corresponding to mesopic and photopic acuity limits. Fixation stability is reported as the bivariate contour ellipse area (BCEA), defined as the area of fixation for 68% of the time (4), during acuity reading.
Results: Mesopic acuity was worse for higher myopia (p=0.03) and BCEA were larger for myopes at both light levels (mesopic, p<0.01; photopic, p<0.01). Subjects with smaller photopic BCEA had higher mesopic acuity (p=0.01) and smaller loss of acuity in dim light (p=0.02) relative to their photopic acuity.
Conclusions: Fixation instability in myopia may be associated with difficulty in visual tasks under challenging conditions, such as low luminance.
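The 68% bivariate contour ellipse area used in the study above is conventionally computed from the standard deviations of horizontal and vertical gaze position and their correlation. A minimal sketch (the function name and inputs are our own; the formula is the standard one, BCEA = 2kπ·σx·σy·√(1−ρ²) with P = 1 − e^(−k)):

```python
import numpy as np

def bcea(x, y, p=0.68):
    """Bivariate contour ellipse area enclosing proportion p of fixations.

    x, y: horizontal and vertical gaze positions (e.g., in degrees)
    sampled during fixation. Returns the area in squared units of x, y.
    """
    # For a bivariate Gaussian, the ellipse enclosing proportion p
    # satisfies p = 1 - exp(-k); for p = 0.68, k is approximately 1.14.
    k = -np.log(1.0 - p)
    sx = np.std(x, ddof=1)
    sy = np.std(y, ddof=1)
    rho = np.corrcoef(x, y)[0, 1]  # correlation between x and y position
    return 2.0 * k * np.pi * sx * sy * np.sqrt(1.0 - rho**2)
```

A smaller BCEA indicates more stable fixation; doubling the spread of gaze positions in both dimensions quadruples the area.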
Amblyopia is associated with reduced fixation stability of the amblyopic eye and poorer visual acuity (VA), which is correlated with poorer fixation stability. We investigated whether impaired VA can cause reduced fixation stability. Fixational eye movements were measured in 5 controls and 8 patients with amblyopia (2 strabismics & 6 anisometropes) while fixating a suprathreshold cross (1.2° visual angle at 40 cm, 10 x 15 sec blocks per condition). Monocular VA of controls was varied from 20/20 to 20/100 using plus lenses. The amblyopia group completed three monocular conditions: a) amblyopic eye fixating, b) fellow eye fixating and c) fellow eye fixating with VA matched to the amblyopic eye using plus lenses. Fixation stability (quantified using bivariate contour ellipse area) was unaffected by reduced VA in the control group. Amblyopic eyes had significantly poorer fixation stability than fellow eyes, as noted in previous studies. However, this difference persisted even when VA was matched between the two eyes. Therefore, impaired fixation stability in amblyopic eyes is not simply a result of poor visual acuity.
Stroke damage to V1 in adult humans causes cortical blindness (CB). Visual training in chronic (>6 months) stroke patients decreases the deficit but recovered vision is subnormal. In motor stroke, earlier rehabilitation leads to greater recovery. Here, we asked if visual training initiated soon after stroke leads to better improvement in CB. Subacute (<3 months) stroke patients were trained to discriminate global direction of random dot stimuli. Initially, blind field performance was at chance. After daily home training for 3 months, CBs attained intact-level motion integration thresholds at all trained blind field locations. Unlike chronic CBs, subacute CBs exhibited transfer of recovery to untrained locations up to 10 deg deeper into the blind field. CBs who began training <2 months post-stroke also demonstrated improved performance on an untrained fine direction discrimination task. Thus, training initiated in subacute CB generates faster, more generalized visual improvements than in chronic CB.
Visual perceptual training recovers coarse global motion discrimination in the blind field of cortically blind (CB) subjects, though the brain areas and mechanisms supporting this recovery remain unclear. In 9 stroke patients with >3 months of visual training, we measured global-motion-evoked fMRI BOLD signals at: 1) trained blind field, 2) untrained blind field, and 3) intact field locations. During MRI, subjects were adapted to global motion and tested in the adapted direction and 180° opposite. Stimuli presented in the intact field elicited strong adaptation in low-level visual areas and hMT+. Stimuli presented at untrained blind field locations elicited no significant signal in any visual area. Stimuli presented at trained blind field locations generated responses in lower-level visual cortex and hMT+, with clear but weak adaptation in hMT+. Our data show that training restores direction selectivity in CB subjects, suggesting plasticity of the neural circuits that generate adaptation in V1-damaged systems.
Individual variation in cortical morphology is one hurdle for creating functional cortical maps across different populations. This is particularly challenging in populations with gross structural abnormalities as seen in cases of early hydrocephaly. In a testament to brain plasticity, many hydrocephalic patients with timely shunt placements develop normal or near normal function despite large loss and displacement of cortical tissue. We report here functional mapping in visual cortex in a case of hydrocephaly. This case offers particular insight because the gross abnormalities and tissue loss in the early visual areas is largely limited to one hemisphere, allowing a within subject comparison of structure and function. Preliminary results indicate strongly anomalous retinotopic mapping in the affected hemisphere yet little evidence of loss of visual function, suggesting neural compensation for abnormal cortical morphology. More extensive testing of localized visual function is ongoing.
3:00 - 3:30 pm—Break
3:30 - 5:00 pm—Tillyer Award & Lecture, Dennis M. Levi, UC Berkeley
6:00 - 9:00 pm Banquet, Ballroom 384 at the City Grill
Sunday, October 23
8:30 - 9:00 am—Registration
Contributed Session: Vision
Chair: Geoff Aguirre, University of Pennsylvania
When an image is stabilized on the retina, it fades from perception due to local adaptation of neurons with receptive fields that are fixed in retinal space. We were interested in mapping the perceptive fields (PF) - the psychophysical equivalent of receptive fields - which delimit the spatial area where fading of flickered stimuli occurs. To do so, we used a tracking scanning laser ophthalmoscope to deliver stabilized stimuli to the retina. Subjects adapted to a small spot stimulus with a Gaussian intensity profile (σ = 0.56 arcminutes) that flickered at 5 Hz for 2 secs at 10 deg eccentricity. The stimulus faded after approximately 1 sec. The last onset of the stimulus was a probe flash that was spatially offset by 0-6 arcminutes in one of four cardinal directions. Subjects indicated whether the probe was visible using a 0-3 confidence rating scale. The PFs were approximately 10 arcminutes in diameter, asymmetric relative to the adapting point and vertically anisotropic. Outside of the PFs detectability increased rapidly, indicating that there are sharp borders between the PFs of stabilized flickering stimuli.
Retinal lateral inhibition has long been known as the 'contrast encoder', and the relationship among retinal processing, lateral inhibition, and contrast perception has been discussed in the conventional difference-of-Gaussians (DoG) scheme. However, many studies have argued that the classic DoG receptive field (RF) fails to explain contrast perception phenomena involving large-scale surface interaction (~1 deg). On the other hand, we recently showed that a neglected retinal feature, the wide RFs of retinal interneurons, explains how the visual system performs long-range interaction (Kim & Bertalmio, 2015). The long-range effect was previously explained by the multiscale DoG model (Blakeslee & McCourt, 1997), which proposes processing by channels of varying spatial-frequency bandwidth. Here we show that the retinal hierarchical center-surround processing architecture, combined with interneurons of narrow and wide RFs, accomplishes multi-channel-like results. We propose an update to the conventional view of lateral inhibition.
Melanopsin is expressed in a subset of retinal ganglion cells and mediates non-image-forming responses to light. It is unknown whether melanopsin contributes to conscious vision. We obtained fMRI data from four subjects who viewed spectral pulses that selectively targeted either melanopsin or the cones (3 sec, cosine windowed, 5 contrasts between 25% and 400%; separate backgrounds used for melanopsin- and cone-directed stimuli to maximize available contrast; melanopsin background: 110 cd/m2, 3.39 log10 scotopic Trolands). The melanopsin- and cone-directed stimuli evoke distinct pupil light responses. Melanopsin-directed stimulation produces fMRI responses in the visual cortex larger than can easily be explained by imperfections in cone silencing. Both melanopsin- and cone-directed stimuli produce monotonic contrast-response functions. We find evidence that melanopsin-directed stimulation produces a more temporally extended fMRI response, consistent with the known properties of melanopsin-containing RGCs.
We consider a theoretically interesting problem: how are stimuli compared when they fall at different, widely separated positions in the visual field? We have previously shown that thresholds for discriminating spatial frequency are similar whether targets are juxtaposed or are separated by up to 10 deg (eccentricity being held constant) - and even if the targets fall in opposite hemifields.
In the present experiments, we measured thresholds for comparing speed in two patches of moving dots. The patches lay on an imaginary circle of 5 deg radius centered on fixation, and presentations were too brief to allow eye movements. The separation between the centers of the patches varied from 2 to 10 deg.
At slow speeds, observers had lowest thresholds when the moving arrays were juxtaposed; but at medium speeds (6-8 deg/s) there was rather little variation in threshold with separation. It is implausible that the comparison depends on dedicated 'comparator neurons', i.e. higher-order cells that would draw inputs from pairs of lower-order cells that signal speed in local retinal regions; and we postulate instead a 'cerebral bus' that carries abstract representations of separated stimuli.
Motion discrimination is a classic model system for probing the computations and circuits underlying perceptual decisions. Despite a long history of studying the sensitivity of single neurons, little is known about how direction can be read out from the activity of neural populations. We recorded from ensembles of MT neurons while monkeys performed a motion-discrimination task. We compared the performance of a simple, neurally plausible decoder to psychophysical performance and to the sensitivity of single neurons. We found that the population was more accurate than the best single neurons and performed at least as well as the monkey at our task. We also found that the joint response patterns of neurons were not needed to compute the optimal weight pattern. MT populations were most sensitive to the stimulus immediately following motion onset, which corresponded to the psychophysical weights of the monkeys. These results provide empirical groundwork for extending single-neuron studies of perception to the population level.
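One simple, neurally plausible read-out of direction from a population is the population vector: each neuron's preferred direction, weighted by its response. The sketch below uses synthetic cosine-tuned neurons with hypothetical gains and noise; it illustrates the general idea only and is not the decoder used in the study above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of direction-tuned neurons with evenly
# spaced preferred directions and cosine tuning curves.
n_neurons = 50
preferred = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)

def population_response(direction, gain=10.0, noise=1.0):
    """Noisy firing rates of the population to one motion direction."""
    rates = gain * (1 + np.cos(direction - preferred))
    return rates + noise * rng.standard_normal(n_neurons)

def decode(rates):
    """Population-vector read-out: weight each neuron's preferred
    direction by its response and take the angle of the resultant."""
    x = np.sum(rates * np.cos(preferred))
    y = np.sum(rates * np.sin(preferred))
    return np.arctan2(y, x) % (2 * np.pi)

true_dir = np.pi / 3
estimate = decode(population_response(true_dir))
```

Comparing such a decoder's accuracy against single-neuron sensitivity and behavioral thresholds is the kind of analysis the abstract describes at the level of recorded MT ensembles.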
Microsaccades are tiny eye movements that individuals perform unconsciously during fixation. Although the nature and purpose of microsaccades are still actively debated, recent evidence has shown an association between these micro eye movements and higher-order cognitive processes. Here, in two experiments, we focused specifically on working memory and asked whether differential memory load is reflected in a modulation of microsaccade dynamics. In Experiment 1, participants memorized a numerical sequence composed of either two (low-load condition) or five digits (high-load condition), appearing at fixation. The results showed a reduction in microsaccadic rate in the high-load compared to the low-load condition. In Experiment 2, the numerical sequence was composed of five digits, all colored red or all green. Participants memorized either the five digits (high load) or the color (low load) of the numerical sequence; hence, the visual stimuli were exactly the same in both conditions. Consistent with Experiment 1, microsaccadic rate was lower in the high-load than in the low-load condition. Overall, these findings reveal a link between working memory and microsaccades.
10:30 - 11:00 am—Break
John Krauskopf Session I
Chair: J. D. Mollon, University of Cambridge
11:00 - 11:10 am—Tribute & Introduction
To study the circuits underlying color vision near the fovea, we developed a system with adaptive optics and high-speed eye tracking that enables tracking, targeting, and stimulation of cones in living eyes. In two subjects with classified mosaics, cones stimulated with 543 nm light against a white background yield both expected and unexpected percepts. Stimulated M cones yield either green or achromatic percepts, and the individual responses are pure - largely falling into color-reporting or achromatic-reporting classes. L cones respond similarly, except that their color-reporting fraction yields red percepts. Similar results are found against a blue background, but the M cones convey distinctly blue percepts rather than green. What is unexpected is that the color- and achromatic-reporting cones are distributed in a way that is inconsistent with simple chromatically opponent, center-surround midget ganglion cells. To understand how lateral interactions influence these percepts, we are measuring cone sensitivity thresholds against different adapting background conditions. Collectively, these studies shed light on how foveal cone signals are transformed by retinal circuitry.
Colour vision starts in the retina, where light is absorbed in three different cone classes sensitive to long-, medium-, and short-wavelength light. The cone signals then feed into three different post-receptoral channels: a luminance channel and two cone-opponent chromatic channels (Derrington, Krauskopf & Lennie, 1984). These two cone-opponent chromatic channels do not correspond to perceptually salient colour mechanisms (unipolar red, green, yellow, blue), suggesting that the two sub-cortical chromatic channels are recombined in visual cortex into orderly hue maps, consistent with our fMRI experiments.
Our behavioural results show that hue mechanisms are almost invariant with age and ambient illumination. Variability in unique hue settings across observers is only twice as large as within-observer variability.
All these results taken together suggest that the human visual system is able to compensate for retinal (peripheral) signal changes by adjusting the relative cone weightings of the cortical colour mechanisms. The mechanism underlying this hue compensation is still poorly understood, but it is likely that it utilises invariant sources in our visual environment.
Color constancy denotes the ability of humans and animals to assign a particular color percept to an object. The light reaching the eye confounds the illumination and the spectral reflectance of the object, making the recovery of constant object color an ill-posed problem. How good the visual system is at solving this task is still a matter of debate. Depending on the laboratory task and the specific cues available to observers, color constancy has been found to reach levels between 20% and 80%. If color constancy were indeed that poor, we would experience frequent color changes of objects in everyday life. This does not seem to be the case. Instead, we take it for granted that objects "have" a color, and we use color terms to describe objects, e.g. a green scarf.
We propose that the reason for this paradoxical situation is the use of laboratory tasks that systematically underestimate the degree of color constancy. We show that constancy is near perfect when real objects are used in a natural task under natural illumination conditions. Our laboratory task was chosen to replicate the role of color constancy in everyday life. Participants were asked to bring a personal object that had, for them, a well-defined color that they were confident they could identify. Without the object being present, participants selected the Munsell chip that best represented the color of "their" object. They performed the task first in a room under neutral daylight illumination and then in four other rooms that had non-daylight illuminations provided by windows covered with colored filters. In all situations they mostly selected the same colored chip as their match to the absent object, even though the light reaching the eye in each case differed considerably. Our results demonstrate that color constancy under real-world conditions is exceptionally good, and that previously measured insufficiencies can mainly be attributed to reduced laboratory settings and tasks.
12:25 - 12:30 pm—Wrap-up
12:30 - 2:00 pm—Lunch & Poster Removal
2:00 - 2:30 pm—Business meeting & YIA presentation
John Krauskopf Session II
Chair: Angela Brown, Ohio State University
2:30 - 2:40 pm—Tribute & Introduction
Our understanding of retinal physiology has come mainly from microelectrode recordings of retinal cells in a dish. However, even the most densely packed microelectrode arrays have limited spatial resolution and require the tissue to be maintained in an artificial environment. High-resolution in vivo optical recording can overcome both these limitations. By imaging many cells simultaneously, sparse cell types that are rarely encountered with a microelectrode can be studied. Moreover, an optical approach deployed in the living animal can track retinal changes over extended time periods, an advantage for studying the progression and treatment of retinal disease. We have developed a fluorescence adaptive optics scanning light ophthalmoscope to record visually driven neuronal activity in mouse and monkey retinal neurons that express a genetically encoded calcium indicator. In the primate, GCaMP6 expression in the foveal ring enables us to record from ganglion cells driven by the most central foveal cones, a region previously challenging to characterize using conventional electrophysiological methods. We are now able to present fine-grained stimuli with a precision approaching the width of a foveal cone. This has allowed us to map the on-center and off-center receptive fields of up to 150 ganglion cells simultaneously, as well as to repeat imaging sessions in the same animal for as long as 12 months. Optical recording of single cells in intact animals has the potential to provide new information about the function of specialized retinal circuits and to accelerate the development of methods to restore vision in retinal degeneration.
We distinguish shapes by contours, and surfaces by patterns and textures of multiple orientations and scales, despite retinal response compression leading to cross-orientation suppression. In the striate cortex of primates and carnivores, orientation preference is arranged as iso-orientation domains that radiate circularly from pinwheel centers. Using tangential penetrations with multi-electrode arrays, we found that orientation tuning is narrower, and contrast saturation and cross-orientation suppression stronger, within iso-orientation domains than at pinwheel centers. These differences develop due to excitation (not normalization) from neighboring oriented neurons. As a result of these local intra-cortical computations, the narrower tuning, greater cross-orientation suppression, and higher contrast gain of iso-orientation cells lead to extraction of object contours from images, whereas the broader tuning, greater linearity, and weaker suppression of pinwheel cells generate selectivity for surface patterns and textures.
Researchers studying visual perception have developed numerous experimental methods for probing the perceptual system. The range of techniques available to study performance near visual threshold is impressive and rapidly growing, and we have a good understanding of what physical differences in visual stimuli are perceptually discriminable. A key remaining challenge for visual science is to develop models and psychophysical methods that allow us to evaluate how the visual system estimates visual appearance.
In this presentation, I'll describe methods for modeling judgments of visual appearance that go beyond simple rating scales, and show how to evaluate the resulting models experimentally. In particular, I will present joint work with Sophie Wuerger and John Krauskopf developing and using a method to measure angles in color space from judgments of color proximity.
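Once chromatic mechanisms are represented as directions in a two-dimensional color plane, the angle between any two directions follows from elementary vector geometry. A minimal sketch (the vectors and the plane are illustrative; this is not the specific estimation method from the work described above, which infers angles from proximity judgments):

```python
import numpy as np

def color_angle(v1, v2):
    """Angle in degrees between two chromatic directions represented
    as 2D vectors in a cone-opponent plane (axes illustrative)."""
    v1 = np.asarray(v1, dtype=float)
    v2 = np.asarray(v2, dtype=float)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip guards against floating-point values just outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# The two cardinal cone-opponent axes are orthogonal by construction:
angle = color_angle([1.0, 0.0], [0.0, 1.0])
```

A proximity-based method would instead fit such angles so that modeled distances between color pairs best predict observers' similarity judgments.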
3:55 - 4:00 pm—Wrap-up