Published in Computational and Systems Neuroscience (CoSyNe), Mar 2023.
The view of the world through the lens of our senses is noisy and incomplete. Confronted with this uncertainty, the brain relies on knowledge about the natural world and the current task context to perceive and to act. The framework of Bayesian inference successfully formalizes this knowledge in terms of probabilistic priors, and perception and action as probabilistic inference. However, our understanding of how such priors are obtained, represented, and used by the brain remains limited. Here we build on the recent success of ``diffusion models'' in machine learning as a means of learning and using priors over images. We construct a recurrent circuit model that can implicitly represent sensory priors and combine them with other sources of information to encode task-specific posteriors. Our solution relies on dendritic nonlinearities optimized for denoising, together with stochastic somatic activity modulated by a global oscillation. Integrating these elements into a densely connected recurrent network results in circuit dynamics that sample from the prior at a rate set by the period of the global oscillation. When combined with additional inputs reflecting sensory or top-down contextual information, the dynamics generate samples from the corresponding posterior. As a simple illustration of the process, we design a low-dimensional dataset mimicking properties of natural images and demonstrate sampling from the associated prior and context-conditional posterior. These simulations show that the network can perform inference with nontrivial multimodal distributions and that it can reorganize its dynamics to incorporate the influence of top-down contextual signals. Overall, our model provides a new framework for investigating circuit-level representations of priors and their use for guiding behavior across tasks.
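To make the sampling scheme concrete, the sketch below illustrates the kind of dynamics the abstract describes, under several simplifying assumptions: the learned dendritic denoising nonlinearity is replaced by the exact score of a toy two-dimensional Gaussian-mixture prior, the global oscillation is reduced to a single annealing cycle of the somatic noise amplitude, and the contextual input is a Gaussian likelihood term. All names (`prior_score`, `likelihood_score`, `sample`) and numerical choices are illustrative and not taken from the paper; this is a schematic annealed-Langevin sketch, not the authors' circuit implementation.

```python
# Schematic sketch (not the authors' implementation) of oscillation-gated sampling:
# a state is driven by a "dendritic" denoising term plus "somatic" noise whose
# amplitude follows one annealing cycle, relaxing from noise onto a sample from a
# multimodal prior; an optional external input tilts the dynamics toward a posterior.
import numpy as np

rng = np.random.default_rng(0)

# Toy multimodal "prior": mixture of three isotropic Gaussians in 2-D.
means = np.array([[-2.0, 0.0], [2.0, 0.0], [0.0, 2.5]])
sigma2 = 0.3  # common component variance (illustrative choice)

def prior_score(x, noise_var):
    """Score of the prior convolved with Gaussian noise of variance noise_var.
    Stands in for the learned dendritic denoising nonlinearity."""
    var = sigma2 + noise_var
    diffs = means[None, :, :] - x[:, None, :]              # (batch, K, 2)
    logw = -0.5 * np.sum(diffs**2, axis=-1) / var          # component responsibilities
    w = np.exp(logw - logw.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return np.einsum('bk,bkd->bd', w, diffs) / var         # gradient of log density

def likelihood_score(x, obs, obs_var):
    """Gradient of a Gaussian likelihood p(obs | x); stands in for an additional
    sensory or top-down contextual input that turns the prior into a posterior."""
    return (obs - x) / obs_var

def sample(n_samples=500, n_steps=200, obs=None, obs_var=0.5):
    x = 3.0 * rng.normal(size=(n_samples, 2))              # start from broad noise
    for t in range(n_steps):
        phase = t / n_steps                                 # one oscillation cycle
        noise_var = 4.0 * (1.0 - phase)**2 + 1e-3           # anneals high -> low
        drift = prior_score(x, noise_var)
        if obs is not None:
            drift = drift + likelihood_score(x, obs, obs_var)
        step = 0.1 * (sigma2 + noise_var)                   # noise-level-dependent step
        x = x + step * drift + np.sqrt(2.0 * step) * rng.normal(size=x.shape)
    return x

prior_samples = sample()                                    # samples from the prior
posterior_samples = sample(obs=np.array([1.0, 1.0]))        # context-conditioned samples
```

In this sketch, calling `sample()` with no observation lets the annealed noise cycle relax the state onto the multimodal toy prior, while passing `obs` adds the likelihood gradient to the drift so the same dynamics concentrate on the context-consistent modes, mirroring the prior-versus-posterior distinction described in the abstract.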