Impression learning: Online predictive coding with synaptic plasticity

C Bredenberg, E P Simoncelli and C Savin

Published in Computational and Systems Neuroscience (CoSyNe), Feb 2021.

Early sensory areas in the brain are faced with a task analogous to the scientific process itself: given raw data, they must extract meaningful information about its underlying structure. This process is particularly difficult because the true underlying structure of the data is never revealed, so representation learning must be largely unsupervised. Framing this process in the language of Bayesian probability is tempting but difficult to connect to biology, because we still lack a satisfactory account of how the machinery of Bayesian inference and learning is implemented in neural circuits. Here, we provide a theoretical account of how learning to infer latent structure can be implemented in neural networks using local synaptic plasticity. To do this, we derive a learning algorithm in which synaptic plasticity is driven by a local error signal, computed by comparing stimulus-driven responses to internal model predictions (the network's "impression" of the data). We associate these two components with the basal and apical dendritic compartments of pyramidal neurons. Our solution builds on the Wake/Sleep algorithm (Dayan et al., 1995) by allowing learning to occur online and to capture temporal dependencies in continuous input streams. Compared to a traditional three-factor plasticity rule (Williams, 1992), it is substantially more stable and data-efficient, which allows it to be used for learning the statistics of high-dimensional inputs. It is also flexible: it applies to both rate-based and spiking neural activity, as well as to different network architectures. More generally, our model provides a potential theoretical bridge from mechanistic accounts of synaptic plasticity to algorithmic descriptions of unsupervised probabilistic learning and inference.
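To make the flavor of such a scheme concrete, here is a minimal sketch, not the algorithm from the paper, of an online, randomly interleaved Wake/Sleep-style update in a linear network, written in NumPy. All variable names, dimensions, learning rates, and the toy stimulus distribution are hypothetical illustrations; the point is that each weight update uses only presynaptic activity and a local error comparing bottom-up drive to a top-down prediction, in the spirit of the rule described above.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical toy dimensions: 20-dimensional stimuli, 5 latent variables
    n_x, n_z = 20, 5
    W = 0.1 * rng.standard_normal((n_z, n_x))   # recognition weights (basal, bottom-up)
    G = 0.1 * rng.standard_normal((n_x, n_z))   # generative weights (apical, top-down)
    lr = 1e-3

    # Fixed mixing matrix defining a toy latent-variable stimulus distribution
    A = rng.standard_normal((n_x, n_z))

    for t in range(20000):
        x = A @ rng.standard_normal(n_z)        # one stimulus from the input stream
        if rng.random() < 0.5:
            # Stimulus-driven ("wake"-like) step: bottom-up input sets the latent
            # response, and the top-down prediction is trained to match the data.
            z = W @ x
            G += lr * np.outer(x - G @ z, z)    # local: presynaptic z, postsynaptic error
        else:
            # Model-driven ("sleep"-like) step: the internal model generates an
            # "impression", and the recognition weights learn to invert it.
            z = rng.standard_normal(n_z)
            x_gen = G @ z + 0.1 * rng.standard_normal(n_x)
            W += lr * np.outer(z - W @ x_gen, x_gen)

In this sketch the stimulus-driven and model-driven comparisons are separated into alternating time steps, whereas in the model described above the corresponding signals are carried by the basal and apical compartments of the same neurons and compared locally.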