Talk Series Abstracts 2003
Do cortical neurons resonate to compute?
CVS Research Talk: Daeyeol Lee, CVS Faculty
Neural activity recorded in behaving animals is non-stationary, making it difficult to determine the factors influencing its dynamic patterns. In the present study, coherent oscillations in the activity of neurons in the caudal supplementary motor area (SMA-proper) were analyzed using the wavelet cross-spectrum. The animals were required to produce a series of visually guided hand movements according to changes in target locations. Many neuron pairs displayed coherent oscillations, most frequently in the beta and gamma frequency bands, and the patterns of coherent oscillation were often similar across different movements. Coherent oscillations in the gamma frequency range became mostly synchronized during the hold period before target onset, and their duration was prolonged when the onset of the next target was delayed. These results suggest that coherent oscillations in the gamma frequency range might be related to anticipation of behaviorally relevant events, consistent with the proposal that synchronous oscillatory activity contributes to contextual control of cortical information flow. Compared to the gamma frequency oscillations during the preparatory periods, oscillations in the gamma frequency range after movement onset and the beta frequency oscillations displayed more variable phase differences, suggesting that coherent oscillations in various frequency bands subserve different functions. Analysis of mutual information showed that synchronous oscillatory activity did not provide additional information regarding target location or its timing beyond that available in the rate-modulated spike trains. Thus, synchronous oscillatory activity in the SMA did not provide an independent means of encoding task-specific information.
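The wavelet cross-spectrum approach used above can be illustrated with a minimal sketch. The Morlet wavelet, the synthetic signals sharing a 40 Hz (gamma-band) component, and all numerical values here are illustrative assumptions, not the study's actual data or parameters:

```python
import numpy as np

def morlet(f, fs, n_cycles=7):
    """Complex Morlet wavelet centered on frequency f (Hz) at sampling rate fs."""
    sigma_t = n_cycles / (2 * np.pi * f)
    t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
    return np.exp(2j * np.pi * f * t) * np.exp(-t ** 2 / (2 * sigma_t ** 2))

def wavelet_coherence(x, y, f, fs):
    """Magnitude-squared coherence at frequency f, from the time-averaged
    wavelet cross-spectrum of x and y."""
    w = morlet(f, fs)
    wx = np.convolve(x, w, mode='same')
    wy = np.convolve(y, w, mode='same')
    cross = np.mean(wx * np.conj(wy))
    return np.abs(cross) ** 2 / (np.mean(np.abs(wx) ** 2) * np.mean(np.abs(wy) ** 2))

# Two noisy signals sharing a common 40 Hz (gamma-band) oscillation.
fs = 1000.0
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
common = np.sin(2 * np.pi * 40 * t)
x = common + rng.normal(0, 1, t.size)
y = common + rng.normal(0, 1, t.size)

coh_gamma = wavelet_coherence(x, y, 40, fs)   # high: shared oscillation
coh_ctrl = wavelet_coherence(x, y, 17, fs)    # low: only independent noise
```

Coherence is near 1 at the shared frequency and near 0 at a control frequency, which is the logic behind identifying beta- and gamma-band coherent pairs.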
Development of a commercial Hartmann-Shack ophthalmic instrument and single-exposure spectral imaging systems based on commercial Bayer-patterned CCDs
CVS Research Talk: Kevin Kearney and David Prelewitz, Pixel Physics
Pixel Physics is a Rochester-based company specializing in the design and development of image-based instrumentation systems for biomedical and remote sensing applications.
DEVELOPMENT OF A COMMERCIAL HARTMANN-SHACK OPHTHALMIC INSTRUMENT:
SINGLE-EXPOSURE SPECTRAL IMAGING SYSTEMS BASED ON COMMERCIAL BAYER-PATTERNED CCDs: Standard silicon-based CCD and CMOS imaging arrays have spectral response over the 400-1000 nm range. Color digital imaging cameras work by fabricating a patterned array of dye filters on top of the silicon pixel structure: the dye layers typically have either a Red, Green, or Blue (RGB) transmission, and are arranged in a regular pattern on the imaging array (the most common arrangement is the so-called Bayer pattern). While the imaging array has near-infrared (NIR) response, this is typically blocked by an auxiliary filter. Pixel Physics has developed technology - both independently and with the RIT Center for Imaging Science - to manipulate the spectral response of these pixels to form different bandpass regions. For instance, it is possible to use the out-of-band (NIR) response of the blue channel to form a NIR channel, while keeping the Green and Red channels. Such a color-infrared camera may be of use in imaging retinal features - for instance, those associated with diabetic retinopathy. With the addition of a beamsplitter, it is possible to obtain six spectral bands. This technique has been used to obtain spectral data of the human iris. The talk will give an overview of the technology for the purpose of stimulating research in the area of ophthalmic imaging.
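The channel rearrangement described above can be sketched in a few lines. The RGGB layout and the reinterpretation of the blue sites as an NIR plane are illustrative assumptions about one possible configuration, not the actual Pixel Physics design:

```python
import numpy as np

def split_bayer_rggb(mosaic):
    """Split an RGGB Bayer mosaic into per-channel subimages.
    In the hypothetical color-infrared configuration, the blue sites
    respond mainly to NIR (out-of-band response with the blocking
    filter removed), so the 'B' plane is reinterpreted as NIR."""
    r = mosaic[0::2, 0::2]
    g1 = mosaic[0::2, 1::2]
    g2 = mosaic[1::2, 0::2]
    nir = mosaic[1::2, 1::2]      # blue sites -> NIR channel
    g = (g1 + g2) / 2.0           # average the two green sites
    return r, g, nir

# Tiny synthetic 4x4 mosaic (values are arbitrary sensor counts).
mosaic = np.array([[10, 20, 11, 21],
                   [22, 30, 23, 31],
                   [12, 24, 13, 25],
                   [26, 32, 27, 33]], dtype=float)
r, g, nir = split_bayer_rggb(mosaic)
```

The three planes (R, G, NIR) would then be composited into a color-infrared image, e.g. for highlighting retinal features.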
Dynamics of image and space representation in visual cortex
CVS Boynton Colloquium Series: Bruno Olshausen, Redwood Neuroscience Institute and Center for Neuroscience at UC Davis
I will present computational modeling studies and experimental data supporting the hypothesis that the cerebral cortex represents sensory data in terms of a probabilistic model of the environment. The computational modeling studies show that both the receptive field properties and the spiking nature of V1 neurons can be accounted for in terms of a probabilistic model of natural images that attempts to explain the incoming image stream in terms of a collection of sparse events. The data (fMRI and EEG) suggest that neural representations in V1 are being actively "explained" by higher-level areas, in a manner consistent with predictive coding, and that the timecourse of object recognition allows sufficient time for cortico-cortical feedback loops to play a significant role in the formation of high-level percepts.
Translation perception and its relationship to the reflex eye movements
CVS Research Talk: Nick Au Yong, Graduate student working with Gary Paige and Scott Seidman, UR Medical Center, Rochester NY
Otolith-mediated vestibular responses such as the linear vestibulo-ocular reflex (LVOR) are highly frequency-dependent. The LVOR becomes increasingly robust as motion frequency increases (i.e. high-pass dynamics, >0.5Hz). In this study, we examined the relationship between the dynamics of interaural translation perception and corresponding reflex eye movements. In order to assess the contribution of otolith information in translation perception, we utilized frequencies that span the physiological range of the system (i.e. 0.1-2.0 Hz). Previous perception studies have been mostly limited to modest frequencies (<0.5Hz), due in part to the difficulty of assessing perception during high frequency motion. We overcame this issue by using a novel manual image stabilization task (parametric adjustment task) in conjunction with a more traditional method (joystick task). The translational LVOR was also assessed in a subset of subjects in order to compare the dynamics of perception and reflex responses. Finally, I will compare animal behavior when remembering target locations with neural activity in area LIP of the posterior parietal cortex. Internal representations of spatial information found in LIP reflect the decrease in precision seen under world-fixed conditions. However, these internal representations also show an inaccuracy in the representation which is not reflected in the behavior. This suggests that information in LIP does not encode world-fixed spatial locations by itself, but instead that this information must be supplemented by information from other brain areas.
Monkey games: randomness in a two-player zero-sum game
CVS Research Talk: Michelle Conroy, Graduate student working with Daeyeol Lee
Interpreting the output of cortical cells
CVS Research Talk: Dana Ballard, CVS Faculty
Recent experimental measurements have suggested an increasing importance of synchronous spikes in cortical computation. These observations are difficult to reconcile with decades of single-cell recordings that have revealed the correlation of increased firing rate with behavioral measures, suggestive of a rate-code. One possibility is that the cortex has adopted a signaling strategy that makes extensive use of synchrony for fast communication, but does so in a way that is consistent with the rate-code indications. We suggest how this could be done. Specifically, we show how synchronous spike codes on both feedforward and feedback connections between the LGN and cortex can be used to form oriented receptive fields given natural images as input. The novel feature of our spike model is that it combines synchronous updating of inputs with a probabilistic signaling strategy. We show that these features allow the reproduction of synchronicity measured in the Lateral Geniculate Nucleus as well as classical rate-code features.
Deciding where and when to look: probing the neural mechanisms that govern eye movement choices
CVS Research Talk: Rich Krauzlis, Assistant Professor of Systems Neurobiology, Salk Institute for Biological Studies, La Jolla CA
Primates continuously resample their visual environment through a combination of slow pursuit and quick saccadic eye movements. Such frequent movements require timely choices by both pursuit and saccades about which stimulus to target next. These choices likely involve identifying the most active neurons among those representing the competing alternatives, but it is unknown how these choices are coordinated between the two eye movement systems. We have examined how the superior colliculus, a structure in the midbrain known to be important for orienting to stimuli in the environment, is involved in this process of target selection. We compared the neuronal activity at sites representing the possible choices and found that the difference in activity between the neuronal populations could predict when and where subjects looked. Our results suggest that choosing the goal for pursuit and saccades involves applying different decision criteria in parallel to a common pool of information. This simple but robust sensory-motor strategy can coordinate the outputs from different motor systems with higher-order sensory processes (such as attention), while at the same time retaining independent control over the output motor commands.
Imaging the infant retina: what can adaptive optics tell us?
CVS Research Talk: Andrea Lathrop, Graduate student with Richard Aslin
This talk will include a basic review of the literature on the development of the foveal cone mosaic and other factors impacting infant vision. This will be followed by an outline of work-in-progress and the planned use of the Rochester Adaptive Optics System to take images of the photoreceptor layer in the living infant eye.
Staring development in the face—the role of visual input in the development of expert face processing
CVS Research Talk: Daphne Maurer, Professor of Psychology, McMaster University
Infants are drawn toward visual patterns from birth, but over the first 6 months, their acuity improves dramatically and many face processing skills emerge in rudimentary form. Despite this rapid early development, adult acuity is not achieved until about age 7 and some face-processing skills continue to develop through adolescence. We are evaluating the role of visual experience in driving these postnatal changes by studying children who missed all patterned visual input from birth until bilateral congenital cataracts were removed surgically during infancy and the eyes given compensatory contact lenses. I will discuss (1) the effect of such deprivation on the development of acuity during infancy and (2) the long-term effects on the later development of expertise in face processing.
Variability in human brain plasticity
CVS Research Talk: Helen Neville, Professor of Psychology, University of Oregon, Eugene OR
I will review ERP and fMRI results of visual, auditory and language processing in normal adults and in those who have had different sensory and/or language experience. Taken together these studies suggest that within vision and audition and language different neural systems display considerable variability in the degree to which they are modified by early experience. Within vision, early auditory deprivation has most marked effects on the organization of systems important in processing motion information. Within the visual and auditory systems sensory deprivation has more effects on the representation of the peripheral than of the central fields. In addition, different subsystems within language display varying degrees of modifiability by experience. The acquisition of lexical semantics appears relatively robust and invariant even in individuals with markedly different timing and modality of language input. By contrast, systems active during grammatical and phonological processing display marked effects of alterations in the timing and nature of early language input. These results converge with other lines of evidence that suggest it is important to distinguish these different aspects of language, and they raise hypotheses about the initial development of these different language systems. Parallel studies of normal infants, children, and adults and studies of those with abnormal development provide further evidence for the roles of genetic factors and experience in human neurobehavioral development. The results of these several different types of experiments provide evidence that some systems within the human brain retain the ability to change, adapt, and learn throughout life, while other aspects of human neural and behavioral development display multiple, specific and different critical periods.
Seeking guidance: how do we find what we are looking for in visual search?
CVS Research Talk: Jeremy Wolfe, Professor of Ophthalmology, Visual Attention Laboratory, Harvard Med & Brigham and Women's Hospital
From finding your pen on your desk to finding a weapon in the x-ray image of a backpack, visual search tasks are ubiquitous. We need to search because object recognition seems to be possible for only one (or perhaps a few) objects at a time. To recognize an object as a desired target, we must select it with the aid of visual attention. Performance on a wide range of search tasks can be described by our "Guided Search" model. The model holds that a limited set of "preattentive" processes can use basic feature information about color, size, etc. to guide attention toward likely targets. In this talk, I will focus on the nature of preattentive guidance of attention. I will convince Dana Ballard that "preattentive" is not a bad word. Time permitting, I will also discuss the consequences of attention for "post-attentive vision".
A role in time for object discovery
CVS Research Talk: Brandon Sanders, University of Rochester, Rochester NY
For over twenty years the field of Computer Vision has failed to duplicate the brain's seemingly effortless separation of the world into objects (Object Discovery). During these several decades, a persistent spatial bias in Computer Vision has created an artificial subservience of temporal information to the interests of spatial inference. In this talk I discuss our recent work that examines time in its own light. I describe how, for certain time scales and natural behaviors, spatial information may be ignored and temporal information exploited to perform robust object discovery. The key insight is the assumption of a quasi-static world in which every object of interest is stationary during some interval of observation. The Quasi-static Object Discovery (QsOD) framework I present (1) ignores distracting motion, (2) correctly deals with complicated occlusions, and (3) naturally groups object observations across multiple completely uncalibrated cameras.
Mechanism underlying visual classification
CVS Research Talk: Orna Rosenthal, Postdoctoral Fellow, Carnegie Mellon University
We are constantly bombarded by large amounts of changing visual information, yet we (and other animals) are able to extract repeating patterns embedded in this information and to relate to them as familiar and meaningful. Classification of the stream of information is a primary function in this process. Visual classification is the way we relate to different items of visual information as if they were the same while relating differently to other collections of information items. In order to create new classes from the presented individual exemplars, representation of the classes must be formed and maintained as long as exemplars need to be classified. In my talk I will present several studies aimed at exploring factors and mechanisms that underlie visual class formation. Specifically, I will focus on effects of stimulus statistics as well as feedback on class formation. Additionally, I will describe recent data about visual classification learning in a patient with a bilateral extrastriate lesion.
Optimal feedback control as a theory of motor coordination
CVS Boynton Colloquium Series: Emanuel (Emo) Todorov, Assistant Professor of Cognitive Sciences at the University of California at San Diego
This will be based on our Nature Neuroscience paper, and will include a number of additional unpublished results.
Implications of the trichromatic mosaic for color vision
CVS Thesis Defense: Heidi Hofer, Graduate student with David Williams
The organization of the human trichromatic mosaic and its role in color vision is not well understood. The advent of adaptive optics for the human eye has made it possible to characterize the arrangement of the three classes of cone photoreceptors, which are responsible for color vision, in living human retinas. It is also now possible to deliver stimuli to the retinas of those same subjects that are smaller than the receptors responsible for their detection. This technology is exploited to uncover the influence of individual cones, and their distribution within the retina, on color perception by studying the appearance of tiny, brief, near-threshold stimuli in subjects with known mosaics. The locations of L, M, and S cones within patches of retina at 1 degree eccentricity were determined in 8 color normal subjects. The L:M ratio varied from 1:2.7 to 16.5:1, a much larger range than previously suggested. Despite the large variability, all subjects perceived hues in the same way. L and M cones were generally found to be randomly arranged. S cone arrangement was either random or slightly regular. Detection and appearance of punctate stimuli were studied in 5 of these subjects at the same retinal location. Color appearance varied systematically with L:M ratio. Contrary to initial expectations, subjects reported a large variety of percepts, even for extremely low probabilities of detection. For test wavelengths to which S cones are insensitive, subjects reported up to 27% of colored flashes as blue. These blue responses varied with wavelength in a manner consistent with the sensitivity of M cones. White responses increased linearly with the asymmetry in L:M ratio, behavior generally inconsistent with the hypothesis that both L and M cones must be stimulated for white sensations to occur. These results confirm previous suggestions that M cones contribute to sensations of blueness.
The variety of color percepts and the statistics of the blue and white responses are inconsistent with the assumption that color appearance depends solely on the photopigment in the receptors mediating detection. Instead, the results imply that individual cones of each class can generate different percepts.
Design of optimal controllers for biomechanical systems
CVS Research Talk: Emanuel (Emo) Todorov, Assistant Professor of Cognitive Sciences at the University of California at San Diego
This will be a methods talk. It will start with a mini-review of optimal control theory in the context of biological control, and then compare available and new algorithms for controller design for complex nonlinear systems. I will send you some technical papers that you can distribute, but the most important one is still being written (last minute NIPS paper:)
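As a flavor of the controller-design machinery such a methods talk reviews, here is a minimal finite-horizon LQR sketch via backward Riccati recursion. The toy linear point-mass plant, cost weights, and horizon are invented for illustration; real biomechanical plants are nonlinear, which is exactly why more general algorithms are compared:

```python
import numpy as np

def lqr_finite_horizon(A, B, Q, R, Qf, T):
    """Finite-horizon discrete-time LQR: backward Riccati recursion.
    Returns time-indexed feedback gains K[t], with u_t = -K[t] @ x_t."""
    P = Qf
    gains = []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]   # reverse so K[0] applies at the first time step

# Toy 1-D point mass: state = (position, velocity), control = force.
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = 0.1 * np.eye(2)
R = 0.01 * np.eye(1)
K = lqr_finite_horizon(A, B, Q, R, Qf=10.0 * np.eye(2), T=500)

# Closed-loop simulation from x0 = (1, 0): the state approaches the origin.
x = np.array([[1.0], [0.0]])
for t in range(500):
    x = A @ x - B @ (K[t] @ x)
```

The gains are time-varying near the end of the horizon, a signature of finite-horizon optimal feedback that distinguishes it from a fixed servo controller.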
Mutation analysis and phenotypes of patients with Leber congenital amaurosis
CVS Research Talk: Irene Maumenee, Johns Hopkins Center for Hereditary Eye Diseases, Johns Hopkins University, Baltimore MD
Leber congenital amaurosis (LCA) is characterized by profound visual impairment from birth or shortly thereafter. Fundus examination is often initially normal, but chorioretinal atrophy, narrowing of retinal vessels, intraretinal pigment migration, and white or yellow fundus flecks develop over time. Rarely, a punched-out macular lesion may be seen at birth; it may be stable or may worsen progressively. The same lesion may also develop in the teen years.
Six genes and their mutations have been identified in patients with LCA. Genotype-phenotype correlations are being undertaken and naming of subtypes of LCA based on gene mutations appears an attainable goal.
Additional LCA genes remain unidentified, and strategies for their identification will be outlined.
Genetic basis of total colorblindness
CVS Research Talk: Olof Sundin, Johns Hopkins Center for Hereditary Eye Diseases, Johns Hopkins University, Baltimore MD
Complete achromatopsia is a rare, autosomal recessive genetic disorder characterized by the total inability to detect colors. Most cases of this disease are caused by mutations in two genes, CNGA3 and CNGB3, that encode subunits of a channel necessary to generate light-induced electrical responses in the red, green and blue-sensitive cone photoreceptors. Those affected with the disorder are completely dependent on the normal function of their rod photoreceptors, and are effectively blind under daylight conditions.
CVS Equipment Demonstration: Steven Elliott, demonstration of Cambridge Research Systems Ltd. equipment
Computational studies on rapidly-adapting mechanoreceptive fibers
CVS Research Talk: Burak Guclu, Syracuse University
A firing-rate-based population response model for monkey rapidly-adapting (RA) mechanoreceptive fibers was used to find the probability of stimulus detection in the population. It is shown that several RA fibers are required for the detection of the tactile stimulus. However, this rate-based model does not include rate variance and cannot directly predict psychophysical thresholds. Therefore, a time-dependent model was established. The responses of cat RA fibers were modeled using a Markov process and a Laplace distribution of action-potential phases with respect to the stimulus. The population model based on the Markov process was used to obtain the 40-Hz thresholds of the Non-Pacinian I (NP I) psychophysical channel as a function of the stimulus-contactor location. Different receptive-field distributions and rate-based decision criteria were also studied. Predictions of the population model are close to the human psychophysical results. The experiments show that the NP I thresholds remain approximately constant across the terminal phalanx of the finger, suggesting that the receptive-field distribution may be uniform, in contrast to findings in the literature. This contradicts the increase in innervation density toward the distal end of the phalanx, which would be expected to produce lower thresholds. It was also found that the mechanical impedance of the skin varies across the fingertip. Therefore, it is concluded that the mechanical attenuation function used in the model should be modified in such a way that the effects of the innervation density and skin mechanics cancel out.
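The population-detection logic (detection by at least one fiber, assuming independence across fibers) can be sketched as follows. The saturating rate-to-probability mapping is a toy assumption, not the Markov/phase-based model described in the talk:

```python
import numpy as np

def population_detection_prob(firing_rates, scale=1.0):
    """Probability that at least one RA fiber detects the stimulus,
    assuming independent fibers. The per-fiber mapping from firing
    rate to detection probability (a saturating exponential) is a
    toy assumption for illustration only."""
    rates = np.asarray(firing_rates, dtype=float)
    p = 1.0 - np.exp(-rates / scale)    # per-fiber detection probability
    return 1.0 - np.prod(1.0 - p)       # detection by at least one fiber

# A stimulus driving several fibers is detected more reliably than
# the same stimulus driving a single fiber.
single = population_detection_prob([1.0])
several = population_detection_prob([1.0, 1.0, 1.0])
```

Under this formulation, recruiting more responsive fibers always raises the population detection probability, which is why several RA fibers are needed to account for psychophysical detection.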
Investigation of shape processing using psychophysics and fMRI
CVS Research Talk: Rebecca Achtman
In the early stages of visual processing (primary visual cortex), shapes are sampled by discrete, localized, visual filters. The integration of the outputs of these local filters allows us to detect global shape information. Although this integration process is critical for visual processing beyond the primary visual cortex, it remains poorly understood. This thesis investigates what limits the performance of the mechanisms used to detect global structure. In particular, we asked four questions: (1) What information is important for detecting global form? (2) How well can we detect shape defined by changes in contrast? (3) Do the spatial properties of detectors that process global shape change across the visual field? (4) What cortical areas are involved in global shape processing? We used psychophysical methods and functional magnetic resonance imaging (fMRI) to study the integration of local filters for global shape processing in normal adult observers. All our stimuli were spatially bandpass and contained global circular structure. Overall, our findings suggest that the visual system combines the outputs of local detectors both across the visual field and over different stimulus attributes (e.g. contrast, spatial frequency, spatial position, polarity, contrast-defined information). Our excellent sensitivity to these globally structured patterns suggests the involvement of higher-order mechanisms optimized for global processing. However, these higher-order mechanisms are not localized in an individual retinotopic area nor is there a systematic hierarchical increase in activity throughout the ventral processing pathway in response to globally structured stimuli. In conclusion, significant processing of shapes occurs at both the local and the global level.
Eye-head coordination during reading horizontally and vertically arranged Korean texts
CVS Research Talk: Hyojung Seo, Department of Psychology at the Seoul National University
We quantitatively compared parameters of eye-head coordination during reading of horizontally and vertically arranged Korean texts. Reading was faster for horizontally than for vertically arranged texts by 24%, primarily due to larger gaze amplitudes in horizontal reading, and thus smaller numbers of saccades and fixations. The higher velocity of gaze saccades for a given amplitude in horizontal than in vertical reading also contributed to the difference in reading speed. The horizontal bias in reading is at least partly due to the oculomotor system, because the higher velocity for a given amplitude of horizontal saccades was also observed in a control experiment devoid of lexical load, in which a sequentially stepping laser target was tracked. Analysis of the instantaneous phase of eye and head movements, with a metric derived from the Hilbert transform, suggests that eye-head coupling is stronger in the vertical than in the horizontal direction in both the reading and laser-tracking tasks. These results indicate that the coupling strength between eye and head movements varies depending on the direction of gaze shift.
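A Hilbert-transform phase metric of the kind mentioned above can be sketched as a phase-locking value (PLV) between two traces. The sinusoidal "eye" and "head" signals and all parameters below are synthetic illustrations, not the study's recordings:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (same construction as scipy.signal.hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[1:n // 2] = 2.0
        h[n // 2] = 1.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def phase_locking_value(x, y):
    """PLV: magnitude of the mean phase-difference vector (1 = rigid coupling)."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

# Synthetic 3 Hz "eye" and "head" traces, 2 s sampled at 1 kHz.
t = np.arange(0, 2, 0.001)
eye = np.sin(2 * np.pi * 3 * t)
head_coupled = np.sin(2 * np.pi * 3 * t - 0.5)   # fixed lag: tight coupling
head_uncoupled = np.sin(2 * np.pi * 3.7 * t)     # drifting phase relation

plv_tight = phase_locking_value(eye, head_coupled)
plv_loose = phase_locking_value(eye, head_uncoupled)
```

A PLV near 1 indicates a rigid eye-head phase relationship; lower values indicate looser coupling, which is the sense in which vertical gaze shifts showed stronger coupling than horizontal ones.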
The organization of binocular vision
CVS Research Talk: Casper Erkelens, Helmholtz Institute, Utrecht University, The Netherlands
The leading theory of binocular vision is that it is based on a cyclopean organization. The term cyclopean refers to the unified senses of depth, direction and form obtained from the integration of information of the two eyes. The work of Hering, Hubel & Wiesel, Julesz and others has importantly contributed to this theory. Work of my group on vergence eye movements, binocular visual direction, stereopsis and binocular colour perception indicates a different organization. Our results support the view that binocular vision is based on two monocular streams of visual information, both affected by the outputs of several binocular processes.
Dissecting the microenvironment of the glaucomatous retina
CVS Research Talk: Rebecca Sappington, Graduate student with David Calkins,
High-pressure glaucoma (HPG) is a disease that causes blindness in which increased intraocular pressure induces sharp cupping of the optic nerve head (ONH) and displacement of the lamina cribrosa, a porous meshwork of extracellular matrix in the ONH through which retinal ganglion cell (RGC) axons pass. Glia have quickly emerged as fundamental components of the microenvironment of the glaucomatous retina and as mediators of nearly all aspects of glaucomatous pathology, including extracellular matrix pathology, RGC death, and vascular pathology. Glial cells in the ONH of patients with HPG produce tumor necrosis factor alpha (TNF-α), a proinflammatory cytokine, and nitric oxide, a free radical gas involved in inflammation. Production of TNF-α by retinal glia in response to elevated hydrostatic pressure induces RGC death in vitro. To establish and isolate the contributions of individual glial cell types to RGC death in experimental HPG, I have developed an in vitro model of HPG in which individual cell types are isolated using magnetic separation and maintained in culture under conditions of controlled hydrostatic pressure. In concurrence with previous work, we demonstrate an increase in RGC death in response to glial signals produced under elevated hydrostatic pressure. We determined that both retinal astrocytes and microglia produce TNF-α in response to elevated hydrostatic pressure. In addition, signals produced by both astrocytes and microglia in response to elevated hydrostatic pressure induce RGC death. However, astrocytes induce higher levels of RGC death than microglia. TNF-α production by retinal astrocytes and microglia in response to treatment with endotoxin is partially regulated by the transcription factor nuclear factor kappa B (NF-κB). In both retinal astrocytes and microglia, NF-κB is translocated to the nucleus in response to elevated hydrostatic pressure. Furthermore, this response is diminished by treatment with a proteasome inhibitor.
In light of our findings, I will discuss initiation of NF-κB translocation and consequent gene transcription as a mechanism for production of TNF-α by retinal astrocytes and microglia in an in vitro model of HPG.
Research on the anterior segment using optical coherence tomography
CVS Research Talk: Jay Wang, Research Assistant Professor, Department of Ophthalmology, University of Rochester
Optical coherence tomography (OCT) is a non-invasive imaging technique that can recreate a high-resolution cross-sectional image of the cornea from many sagittal scans of backscattered light using low-coherence interferometry. OCT offers the possibility of showing objective structural changes and provides exact and rapid in vivo biometric analysis of structures and dimensions in the eye without direct contact or immersion techniques. In this talk, I will review research applications of OCT in the anterior segment of the eye, including the tear film, topographical corneal thickness, and light scatter.
Customized ablation in refractive surgery
CVS Research Talk: Captain Steve Schallhorn, US Military
Cortical magnification factors in V1 correlate with visual acuity in human subjects
CVS Boynton Colloquium Series: Geoff Boynton, Salk Institute
We compared visual acuity thresholds to areal cortical magnification factors (ACMF) in primary visual cortex in 10 human observers. Two acuity measurements were acquired: (1) Vernier acuity measurements were made using standard psychophysical techniques on a CRT, and (2) grating acuity thresholds were made using a laser interferometer to bypass the optics of the eye. The ACMF for V1 in both hemispheres was derived for each observer by fitting complex-logarithmic transformations to flattened representations of fMRI activity maps. Vernier and grating acuity thresholds relate to ACMF by a power function with exponent -1/2, which means a fixed distance in V1 (~0.12 mm for Vernier acuity and ~0.18 mm for grating resolution) represents the spatial offset of the Vernier stimulus at threshold, regardless of the eccentricity of the stimulus. Also, we found that across subjects, lower acuity thresholds are associated with larger amounts of cortex in V1 representing the stimulus.
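The -1/2 power law is equivalent to saying that threshold × √ACMF is a fixed cortical distance, since √ACMF converts degrees of visual angle to millimeters of cortex. A small worked sketch, using made-up ACMF values and the ~0.12 mm Vernier constant quoted in the abstract:

```python
import numpy as np

def cortical_distance_mm(threshold_deg, acmf_mm2_per_deg2):
    """Cortical distance spanned by a threshold offset: sqrt(ACMF) has
    units mm/deg, so threshold (deg) * sqrt(ACMF) gives mm of V1."""
    return threshold_deg * np.sqrt(acmf_mm2_per_deg2)

# Illustrative (made-up) ACMF values from near the fovea toward the periphery.
acmf = np.array([60.0, 15.0, 4.0])    # mm^2 / deg^2

# If thresholds follow the -1/2 power law with k ~ 0.12 mm (the Vernier
# constant from the abstract), the cortical distance is constant:
k = 0.12
thresholds = k * acmf ** -0.5         # deg; rise as magnification falls
d = cortical_distance_mm(thresholds, acmf)
```

The constant cortical distance across eccentricities is the content of the power-law claim: acuity is limited by a fixed extent of V1, not by a fixed visual angle.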
Measurement of color matching functions using a digital micro-mirror device
CVS Research Talk: Yasuki Yamauchi, Research & Development Center, Document Products & Supply Company, Fuji Xerox, Co., Ltd., Kanagawa JP
As high-quality color imaging devices become widespread, matching color appearance across different devices is increasingly important. However, the tristimulus values of stimuli on different devices are not always the same when they are visually matched. One hypothesis to explain this contradiction is that the color matching functions (CMFs) used to calculate tristimulus values, defined by the CIE in 1931, do not apply to real observers, since each observer has different CMFs. To test this hypothesis, we built a compact apparatus using a digital micro-mirror device (DMD) to measure the CMFs of individual observers.
The experiment was conducted with a Maxwellian-view optical system. The system can present a test stimulus whose spectral power distribution can be set arbitrarily by adjusting the power of each monochromatic light between 400 and 700 nm in 10 nm steps. This is realized by selectively switching the mirrors of the DMD, onto which the light, spectrally decomposed by a diffraction grating, is focused. We used 32 independent compound lights as test stimuli. The observer adjusted the color of the test stimulus to match that of the reference white. A two-degree bipartite field was used to present the test and reference stimuli.
Three observers each completed five to ten sessions. From these results, we estimated the individual CMFs with a linear algebraic algorithm, assuming the additivity of CMFs. The CMFs obtained differed among observers, and also differed from those of the CIE standard observer. The mean CMFs of the three observers, however, were similar to those of the CIE standard observer. Our results indicate that the proposed optical system can measure CMFs as precisely as traditional colorimeters, and that there are significant differences in CMFs among observers.
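The linear-algebraic estimation step can be sketched as follows. By additivity, each match gives one linear equation: the spectral power of the matched compound light, weighted by the observer's CMF, equals the corresponding tristimulus coordinate of the reference. This is a minimal simulation with synthetic matches and an invented Gaussian "CMF" channel, not the study's data or algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n_matches, n_wavelengths = 32, 31      # 32 compound lights; 400-700 nm in 10 nm steps
wavelengths = np.linspace(400, 700, n_wavelengths)
true_cmf = np.exp(-0.5 * ((wavelengths - 550.0) / 40.0) ** 2)  # toy CMF channel

# Each row of S is the spectral power of one matched compound light;
# each match contributes one linear equation  S @ cmf = t.
S = rng.uniform(size=(n_matches, n_wavelengths))
t = S @ true_cmf                       # tristimulus coordinate per match

# Least-squares estimate of the individual CMF from the match equations
cmf_hat, *_ = np.linalg.lstsq(S, t, rcond=None)
print(np.allclose(cmf_hat, true_cmf))  # True
```

With more matches (32) than sampled wavelengths (31), the noise-free system is overdetermined and the least-squares solution recovers the simulated CMF exactly; real matches would be noisy, so repeated sessions improve the estimate.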
What is V1 really doing in natural visual situations?
CVS Boynton Colloquium Series: Michael Paradiso, Professor of Neuroscience, Brown University
Natural vision takes place within the context of rich, varied stimuli and frequent exploratory eye movements. This situation is quite different from typical visual neurophysiology experiments, which employ simple stimuli flashed onto a neuron's receptive field while an animal fixates. We have studied the effects of contextual changes and saccades on the responses of macaque V1 neurons and found that both factors significantly affect the response patterns. These findings have important implications for the temporal processing of visual information and the computations underlying perception. Parallel human psychophysics experiments show that there are also implications for human perception: the macaque physiological data correctly predict aspects of human performance in detection and discrimination experiments.
Dual-task interference and cognitive architecture
CVS Boynton Colloquium Series: Hal Pashler, Professor of Psychology, University of California, San Diego
Most people claim to be able to perform easy cognitive tasks simultaneously. However, objective measurement often reveals interference when even seemingly trivial tasks are paired, so long as distinct decisions or planning of distinct responses is required. Experiments manipulating the duration of different stages in dual-task experiments indicate stubbornly serial processing in central stages, including but not limited to response selection. This queuing limitation can be dissociated from sensory or perceptual attention and does not seem well characterized by concepts like "central executive", nor does it seem to be a shareable resource. Recent controversy over the role of strategies in producing central queuing will be discussed, along with recent findings on dual memory retrievals and evidence suggesting that anatomical overlap is not a necessary condition for obligatory queuing.
Visual determinants of eye movements revealed by reverse correlation
CVS Research Talk: Ben Tatler, University of Sussex
To determine the visual characteristics that specify where we move our eyes, we recorded saccades while observers viewed natural scenes. We then used the reverse correlation technique to determine the optimal (least-squares) linear filter that operates on (potentially non-linearly transformed) images to generate the observed saccades. This method has previously been used to identify the visual, auditory, and tactile characteristics that determine spike generation in neurons; here we attempt to predict the locations to which saccades are directed. First, we constructed saliency maps that made specific features of the images explicit: raw luminosity, difference of luminance from the mean, local contrast, chromaticity, and edge information based on the outputs of odd-symmetric Gabors. Each transformation was computed at two spatial scales. We then measured saccade-triggered averages (STAs) for each of these saliency maps. Unsurprisingly, no structure was found in the raw-luminosity STA. For the other transformations, localized receptive fields were found at the high frequencies. STAs give some insight into the operation of the system, but a better characterization is in terms of an optimal linear filter. The optimal filter is equal to the STA only if the inputs are uncorrelated, which was not the case here. Calculating the optimal filter with correlated inputs requires inverting the input covariance matrix, which can be numerically unstable, so we used ridge regression with cross-validation to identify these filters stably. Most interestingly, edge information was weighted positively at high frequencies but negatively at low frequencies. We interpret this as eye movements being determined partly by surface-property boundaries while remaining invariant to gross illumination changes.
Equipment Demonstration: Sol Simpson, Eye-tracker demo
Multimodal cue interactions during observer self-movement
CVS Research Talk: Bill Page, Research Assistant Professor, Cognitive Behavioral Neurology
Movement through the environment provides a number of sensory cues about the observer's heading direction. Optic flow, the patterned visual motion seen during self-movement, is a rich signal about heading. Vestibular afferents also provide translational acceleration signals that encode heading direction. Eye movements complicate heading estimation because they distort the retinal image of optic flow and alter the directional alignment of the visual and vestibular signals.
The dorsal part of area MST (MSTd) in monkey parietal cortex integrates visual, vestibular, and eye movement signals relevant to observer self-movement. We have recorded MSTd neuronal responses during naturalistic self-movement. Our findings suggest that the population response in MST encodes heading direction relying on vestibular input in darkness and visual input in light. Pursuit signals interact with these responses to maintain a veridical population heading estimate.
We have now begun studies of self-movement responses when the monkey is actively engaged in a steering task. Our preliminary results indicate that there are robust effects of the sensory stimuli, the behavioral task, and the perceptual strategy that the monkey uses in that task.
Expression of genes encoding glutamate receptors and transporters in morphologically identified cells of the primate retina
CVS Thesis Defense: Michael Hanna, Graduate student with David Calkins
Photoreceptors in the retina respond to changes in light by modulating the tonic release of glutamate from their axon terminals. Visual information is encoded in the binding of released glutamate to receptors localized to diverse types of bipolar cells that parcel visual information into parallel pathways. Understanding this physiological diversity requires knowledge of the glutamate receptors expressed by bipolar cells. Since glutamate release at the photoreceptor synapse is graded, with the highest concentrations approaching normally toxic levels, bipolar cells might also express glutamate transporters to sharpen their physiological response. I examined the expression of genes encoding glutamate receptors and transporters for the two primary bipolar cell pathways in the retina of the primate Macaque fascicularis: the rod bipolar cell (RBC), which collects rod signals and mediates vision at night, and the diffuse bipolar cell #3 (DB3), which collects from cones and contributes to motion and color vision. RBCs depolarize to light and are known to express the metabotropic glutamate receptor mGluR6, while DB3 cells hyperpolarize to light and are likely to express one or more ionotropic glutamate receptors (GluR). I explored the genetic diversity between RBCs and DB3 cells by creating cDNA libraries from individual cells and probing these libraries using gene-specific PCR, subtractive library hybridization, and random cloning. These libraries were compared to expression in whole retina. I demonstrated for the first time in primate retina the expression of genes encoding the AMPA subunits GluR1-4, the kainate subunits GluR5-7 and KA1-2, and the metabotropic subunits mGluR1-8, including a newly described splice variant of mGluR8. I also demonstrated broad expression of the transporters EAAT1-5, including the first evidence of EAAT4 in retina. The RBC expresses the ionotropic subunits GluR1-2 and GluR4, as well as the metabotropic subunits mGluR3 and mGluR6.
These cells also express the transporters EAAT2 and EAAT4. In contrast, the DB3 cell expresses only the ionotropic subunits GluR1-4 and GluR6, as well as the EAAT2 transporter. The results indicate that both the rod and cone pathways require not only multiple glutamate receptor subunits but also multiple glutamate transporters to shape their responses to light-induced changes in glutamate concentration.
Is optic flow used to guide walking?
CVS Research Talk: Jeff Saunders, Postdoctoral Fellow with David Knill
Optic flow can provide a strong cue to self-motion, and it has been assumed to play an important role in guiding locomotion. However, there has recently been debate about whether perceived heading from optic flow contributes to the control of walking. I will review previous results that have led to the questioning of the role of optic flow, and present data from two new experiments that I recently conducted in the VR lab. Subjects walked to visible targets in a dynamic virtual environment. Visual feedback was presented with a head-mounted display, and the simulated views were updated in real time based on data from an optical head-tracking system. In this setup, visual and nonvisual information about self-motion could be dissociated by changing the mapping from physical to visual space. Results suggest that subjects guide walking by directing their physical heading toward the visual target, rather than by aligning their visual heading with the target. The apparent lack of contribution from optic flow contrasts with findings for related tasks, such as posture control and simulated driving. I will discuss possible explanations for this difference, and suggest other roles for optic flow information in guiding movement.
Spatial and temporal integration for cognitive mapping in cortical area MST
CVS Thesis Defense: Michael Froehler, Graduate student with Charles Duffy