How fruit flies achieve accurate visual behavior despite changing light conditions

Cell phone cameras adjust the brightness of a scene using a single value, which explains why the sky is overexposed in the top image and the ground is underexposed in the lower image. The eyes of flies and humans, on the other hand, can generate stable responses to contrast even when there are changes to background luminance. Credit: Marion Silies

When light conditions rapidly change, our eyes have to respond to this change in fractions of a second to maintain stable visual processing. This is necessary when, for example, we drive through a forest and move through alternating stretches of shadows and clear sunlight.

"In situations like these, it is not enough for the photoreceptors to adapt, but an additional corrective mechanism is required," said Professor Marion Silies of Johannes Gutenberg University Mainz (JGU). Earlier work undertaken by her research group had already demonstrated that such a corrective "gain control" mechanism exists in the fruit fly Drosophila melanogaster, where it acts directly downstream of the photoreceptors.

Silies's team has now managed to identify the algorithms, mechanisms, and neuronal networks that enable the fly to sustain stable visual processing when light levels change rapidly. The corresponding article appears in Nature Communications.

Rapid changes in luminance challenge stable visual processing

Our vision must function accurately in many different situations—when we move in our surroundings as well as when our eyes follow an object that moves from light into shade. This applies to us humans and to many thousands of animal species that rely heavily on vision to navigate.

Rapid changes in luminance are also a problem for machines, for example in the information processing of camera-based navigation systems. Many self-driving cars therefore rely on additional radar- or lidar-based sensors to properly compute the contrast of an object against its background.

"Animals are capable of doing this without such technology. Therefore, we decided to see what we could learn from animals about how visual information is stably processed under constantly changing lighting conditions," explained Silies, discussing the research question.

Combination of theoretical and experimental approaches

The compound eye of Drosophila melanogaster consists of 800 individual units, or ommatidia. The contrast between an object and its background is computed postsynaptically to the photoreceptors. However, if luminance conditions suddenly change, as when an object moves into the shadow of a tree, these contrast responses will differ even though the physical contrast is unchanged. Without gain control, this discrepancy would propagate through all subsequent stages of visual processing, making the same object appear different under different light conditions.
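The problem can be illustrated with a small numerical sketch (purely illustrative; the functions and numbers below are assumptions, not the study's model). Weber contrast, the luminance difference between object and background divided by the background luminance, stays the same when a scene falls into shade, but a response proportional to the raw luminance difference does not:

```python
# Illustrative sketch (not the paper's circuit model): why contrast
# responses need luminance gain control.

def weber_contrast(obj, background):
    """Physical contrast of an object against its background."""
    return (obj - background) / background

def raw_response(obj, background, gain=1.0):
    """A naive cell response proportional to the luminance difference,
    with a single fixed gain (i.e., no luminance gain control)."""
    return gain * (obj - background)

# The same 20% contrast in bright sunlight and in shade (10x dimmer):
sun = weber_contrast(120.0, 100.0)    # 0.2
shade = weber_contrast(12.0, 10.0)    # 0.2

# Without gain control, the responses differ tenfold:
print(raw_response(120.0, 100.0))     # 20.0
print(raw_response(12.0, 10.0))       # 2.0

# Dividing by the background luminance (gain control) restores equality:
print(raw_response(120.0, 100.0) / 100.0)   # 0.2
print(raw_response(12.0, 10.0) / 10.0)      # 0.2
```

In this toy picture, gain control simply means scaling the response by the prevailing background luminance so that equal physical contrasts produce equal responses.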

In the new study, lead author Dr. Burak Gür and colleagues used two-photon microscopy to pinpoint where in the visual circuitry stable contrast responses first arise. This led to the identification of neuronal cell types positioned two synapses downstream of the photoreceptors.

These cell types respond only very locally to visual information. For the background luminance to be correctly factored into the contrast computation, however, these signals must be pooled over a narrow region of visual space, as a computational model implemented by co-author Dr. Luisa Ramirez revealed.

"We started with a theoretical approach that predicted an optimal radius in images of natural environments to capture the background luminance across a particular region in visual space while, in parallel, we were searching for a cell type that had the functional properties to achieve this," said Silies, head of the Neural Circuits lab at the JGU Institute of Developmental Biology and Neurobiology (IDN).

Information on luminance is spatially pooled

The Mainz-based team of neuroscientists has identified a cell type that meets all of these criteria. These cells, designated Dm12, pool luminance signals over a specific radius, which in turn corrects the contrast response between an object and its background under rapidly changing light conditions.
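One way to picture what such a pooling cell contributes is a toy model in which background luminance is averaged over a disc of fixed radius and used to normalize the local response. This is a hypothetical sketch; the function names and the radius are illustrative, and it is not the circuit model from the paper:

```python
import numpy as np

def pooled_luminance(image, x, y, radius):
    """Mean luminance within a disc of the given radius around (x, y)."""
    ys, xs = np.ogrid[:image.shape[0], :image.shape[1]]
    mask = (xs - x) ** 2 + (ys - y) ** 2 <= radius ** 2
    return image[mask].mean()

def corrected_contrast(image, x, y, radius):
    """Local luminance difference, normalized by the pooled background."""
    background = pooled_luminance(image, x, y, radius)
    return (image[y, x] - background) / background

# The same object seen in sunlight and in shade (10x dimmer) yields
# the same corrected contrast, because the normalization cancels the scaling.
scene = np.full((21, 21), 100.0)
scene[10, 10] = 120.0                            # a brighter object at the center
sunlit = corrected_contrast(scene, 10, 10, radius=5)
shaded = corrected_contrast(scene * 0.1, 10, 10, radius=5)
print(np.isclose(sunlit, shaded))                # True
```

Because the response is a ratio of local signal to pooled background, a global change in illumination leaves it unchanged, which is the stabilizing effect attributed to the pooling circuit.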

"We have discovered the algorithms, circuits, and molecular mechanisms that stabilize vision even when rapid luminance changes occur," summarized Silies, who has been investigating the visual system of the fruit fly over the past 15 years. She predicts that luminance gain control in mammals, including humans, is implemented in a similar manner, particularly as the necessary neuronal substrate is available.

More information: Burak Gür et al, Neural pathways and computations that achieve stable contrast processing tuned to natural scenes, Nature Communications (2024). DOI: 10.1038/s41467-024-52724-5

Journal information: Nature Communications

Provided by Universitaet Mainz