Feature Integration in the Visual System

Neurons in the LGN and primary visual cortex have access only to a tiny region of the visual field. To make sense of visual scenes, the brain therefore has to integrate this distributed information into coherent percepts, allowing us to interact successfully with a complex environment.


a) Neural correlates of task-dependent feature integration in areas V1/V2 and V4

In collaboration with Sunita Mandon and Andreas Kreiter (Brain Research Institute, University of Bremen), we are conducting electrophysiological studies on contour integration in macaque monkeys. Our goal is to understand how feature integration is performed in the visual system, investigating the different roles of visual areas V1, V2, and V4 in the integration process. We focus on two important aspects of these processes: first, we use dynamic stimuli to better observe the temporal dynamics of feature integration, allowing us to disentangle feedforward, feedback, and recurrent processes. Second, we employ cues with different levels of complexity (such as single features and more complex shapes) to understand how task requirements interact with ongoing integration processes (parallel functional configuration).


b) Integration of multiple features

Local patches in a visual scene contain information about many different image features, such as orientation, spatial frequency, color, and motion. Using modeling and mathematical analysis in combination with psychophysical experiments, we seek to identify cortical structures and neuronal dynamics that integrate information from different features into neural representations. For this purpose, we employ hierarchical models with population dynamics (simplified Wilson-Cowan equations) to quantitatively reproduce experimental data from behavioral and physiological studies. On the integration of spatial frequency and orientation (contour integration), we collaborate closely with Malte Persike and Günter Meinhardt (Abteilung Methodenlehre, Psychological Department, University of Mainz).

Figure: Model for evaluating interactions linking orientation and spatial frequency information in the visual system. Each oriented patch in a visual stimulus (bottom) activates a cortical hypercolumn containing neural populations with different preferred features. Interactions are mediated by recurrent couplings, which are adapted such that the model output explains psychophysical stimulus detection thresholds (from: Ernst, U. A., Schiffer, A., Persike, M., & Meinhardt, G. Contextual interactions in grating plaid configurations are explained by natural image statistics and neural modeling. Front. Syst. Neurosci. 10:78, 2016).
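To illustrate the kind of population dynamics underlying such models, the following is a minimal sketch of a single excitatory/inhibitory population pair in a simplified Wilson-Cowan form, integrated with the Euler method. All parameter values (coupling weights, time constants, sigmoid gain and threshold) are illustrative assumptions, not the parameters used in the published model.

```python
import numpy as np

def f(x, gain=1.0, theta=4.0):
    """Sigmoidal population activation function (illustrative gain/threshold)."""
    return 1.0 / (1.0 + np.exp(-gain * (x - theta)))

def simulate(ext_input=3.0, w_ee=16.0, w_ei=12.0, w_ie=15.0, w_ii=3.0,
             tau_e=1.0, tau_i=2.0, dt=0.01, steps=5000):
    """Euler integration of one excitatory (E) / inhibitory (I) population pair:
       tau_e dE/dt = -E + f(w_ee*E - w_ei*I + ext_input)
       tau_i dI/dt = -I + f(w_ie*E - w_ii*I)
    Returns an array of (E, I) firing rates over time, each bounded in [0, 1]."""
    E, I = 0.0, 0.0
    trace = np.empty((steps, 2))
    for t in range(steps):
        dE = (-E + f(w_ee * E - w_ei * I + ext_input)) / tau_e
        dI = (-I + f(w_ie * E - w_ii * I)) / tau_i
        E += dt * dE
        I += dt * dI
        trace[t] = (E, I)
    return trace
```

In the full hierarchical model, many such population pairs (one per preferred feature in a hypercolumn) are coupled recurrently, and the coupling weights are fitted so that model output matches psychophysical detection thresholds.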


c) Feature integration in dynamic scenes

In natural environments, visual scenes are generically dynamic; however, most experimental work on feature integration uses static stimuli. We are bridging this gap by studying contour integration in dynamic scenes under extended viewing conditions. Surprisingly, it turns out that contours are extremely difficult to perceive in these situations, challenging the notion that contour integration is a fast, stimulus-driven, feed-forward process. Our goal is to understand the origins of the differences between feature integration in static and dynamic contexts, and to disentangle the roles of neural dynamics (adaptation and noise), task configuration (cueing, selective attention), and cognitive factors (fatigue, expectations).

Have a look at typical stimuli presented in the experiment by downloading the following movies: red arrows were added to indicate where a contour forms at the end of the trial; these arrows were not present in the actual experiment. The contour always appears at the end of a trial, but it is extremely hard to perceive in an extended presentation, while it 'pops out' easily in a short presentation (if you have difficulty perceiving the contour, try the extended presentation with markers or the short presentation with markers).