By building a neural-network computer model that can be fooled by optical illusions the way humans are, researchers have advanced understanding of the human visual system, and their work may help improve artificial vision.

Optical illusions can be fun to experience and debate, but understanding how human brains perceive these phenomena remains an active area of scientific research. For one class of optical illusions, called contextual phenomena, perception of a stimulus is known to depend on its surroundings.

For example, the perceived color of a central circle depends on the color of the ring surrounding it. Sometimes the outer color makes the inner color appear more similar to it, as when a green ring makes a blue circle appear turquoise; other times the outer color makes the inner color appear less similar, as when a pink ring makes a grey circle appear greenish.

A team of Brown University computer vision experts went back to square one to understand the neural mechanisms of these contextual phenomena. Their study was published on Sept. 20 in Psychological Review.

For the study, the team led by Thomas Serre, who is affiliated with Brown’s Carney Institute for Brain Science, started with a computational model constrained by anatomical and neurophysiological data on the visual cortex.

The model aimed to capture how neighboring cortical neurons send messages to each other and adjust one another’s responses when presented with complex stimuli such as contextual optical illusions.

One innovation the team included in their model was a specific pattern of hypothesized feedback connections between neurons, said Serre. These feedback connections are able to increase or decrease — excite or inhibit — the response of a central neuron, depending on the visual context.
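The idea of feedback connections that can either excite or inhibit a central neuron depending on context can be illustrated with a toy sketch. This is not the authors' actual model, and all numbers are invented for illustration; signed weights stand in for excitatory (+) and inhibitory (-) feedback:

```python
def center_response(center_drive, surround_drives, feedback_weights):
    """Feedforward drive to a center unit, modulated by signed
    feedback from its surround; output is a rectified firing rate."""
    modulation = sum(w * s for w, s in zip(feedback_weights, surround_drives))
    return max(0.0, center_drive + modulation)

# Same central stimulus, two different contexts (hypothetical values):
high = center_response(1.0, [0.8, 0.8], [+0.5, +0.5])  # excitatory surround: 1.8
low = center_response(1.0, [0.8, 0.8], [-0.5, -0.5])   # inhibitory surround: ~0.2
```

The point of the sketch is simply that the same center input can yield very different responses once the surround's feedback is taken into account.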

Once the model was constructed, the team presented it with a variety of context-dependent illusions. The researchers “tuned” the strength of the excitatory and inhibitory feedback connections so that model neurons responded in a way consistent with neurophysiology data from the primate visual cortex.
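A very rough sketch of what such tuning might look like, with invented numbers and a far simpler fitting procedure than the study's: solve for the feedback weights that best reproduce a set of recorded responses.

```python
import numpy as np

# Hypothetical data: a center unit's feedforward drive, the surround
# drives for three different stimuli, and stand-in recorded responses.
center_drive = 1.0
surround_drives = np.array([[0.2, 0.9],
                            [0.8, 0.1],
                            [0.5, 0.5]])
recorded = np.array([1.3, 0.7, 1.0])

# Model: response = center_drive + surround_drives @ weights.
# Least squares finds the feedback weights that best match the data.
weights, *_ = np.linalg.lstsq(surround_drives, recorded - center_drive,
                              rcond=None)
# One weight comes out negative (inhibitory), one positive (excitatory):
# roughly [-0.43, +0.43] for these made-up numbers.
```

Under this toy fit, the data themselves dictate whether each feedback connection ends up excitatory or inhibitory.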

They then tested the model on a variety of contextual illusions and found that it perceived the illusions the way humans do.

To test whether they had made the model needlessly complex, the researchers “lesioned” it, selectively removing some of the connections. With those connections missing, the model’s responses matched the human perception data less accurately.
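The logic of a lesion test can be sketched in the same toy setting as before (invented numbers, not the study's data or method): fit the model with all of its feedback connections, then remove one, refit, and compare how well each version matches the perception data.

```python
import numpy as np

# Hypothetical surround drives for three stimuli and stand-in
# "perception" data the model should reproduce.
surrounds = np.array([[0.2, 0.9],
                      [0.8, 0.1],
                      [0.5, 0.5]])
targets = np.array([1.3, 0.7, 1.0])
center = 1.0

def best_fit_error(design):
    """Best-achievable mean squared error for a linear model
    response = center + design @ weights."""
    w, *_ = np.linalg.lstsq(design, targets - center, rcond=None)
    residual = design @ w - (targets - center)
    return float(np.mean(residual ** 2))

full_error = best_fit_error(surrounds)             # both connections intact
lesioned_error = best_fit_error(surrounds[:, :1])  # one connection removed
# lesioned_error > full_error: even after refitting, the lesioned
# model matches the data less accurately, so the removed connection
# was doing real work.
```

This mirrors the reasoning in the study: if a connection can be removed without hurting the fit, it was superfluous; here, it cannot.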