Neural straightening of natural videos in macaque primary visual cortex

O J Hénaff, Y Bai, J Charlton, I Nauhaus, E P Simoncelli and R L T Goris

Published in Computational and Systems Neuroscience (CoSyNe), Feb 2019.

Many behaviors rely on predictions derived from recent sensory input. Making visual predictions is challenging because naturally occurring light patterns evolve in a complex, nonlinear manner. We hypothesize that the visual system transforms its inputs so as to straighten their temporal trajectories, thereby enabling prediction through linear extrapolation. Previously, we provided psychophysical support for this theory (CoSyNe 2018). Here, we test the temporal straightening hypothesis in primate visual cortex.

We first compared the straightness of natural videos with the straightness of the neural population activity they elicit. We presented random sequences of static frames taken from 10 short videos and used multi-electrode arrays to record V1 population activity in anesthetized macaque monkeys. We obtained temporal trajectories of population activity by arranging neural responses in the videos' natural order. Estimating the average curvature of a temporal trajectory is straightforward in the (deterministic) image domain, but challenging in the (noisy) neural domain. We therefore developed a data-efficient and unbiased estimator of neural curvature. We found that V1 systematically straightens natural videos. Straightening occurred over multiple timescales (30-100 ms) and appeared to be specific to natural videos: synthetic videos that fade from the first to the last frame of each natural video are straight in the image domain, yet elicited neural trajectories that were much more curved than expected by chance.

We then asked which computations give rise to neural straightening. We simulated responses of a hierarchical model that captures the primary nonlinear transformations performed by the early visual system (retina-LGN-V1). The model's V1 response trajectories mimicked our observations. In contrast, deep neural networks trained for object recognition do not straighten natural videos. Temporal straightening may thus be an objective that specifically shapes the computations of the primate visual system.
  • Related Publications: Bai18a, Henaff18a, Henaff-phd, Henaff17a