Neural implementation of Bayesian inference using efficient population codes
D Ganguli and E P Simoncelli
Published in Computational and Systems Neuroscience (CoSyNe), (II-9), Feb 2012.
This paper has been superseded by:
Given this encoder, we derive a novel decoder that approximates the Bayes least-squares estimate (BLSE). Like the population vector, it computes a weighted average of the preferred stimuli. However, the firing rates are not used directly as weights; they are first convolved with a linear filter and then exponentiated. The decoder is neurally plausible: it requires knowledge only of the preferred stimuli and a fixed filter, not the prior or the tuning curves (Jazayeri & Movshon 2009). Simulations demonstrate that it outperforms the standard population vector and converges to the true BLSE as N increases. In a low signal-to-noise regime, the decoder also outperforms a BLSE operating on a resource-matched homogeneous population. We conclude that, in a regime where resources are limited, neural representations optimized for transmitting information enable neurally plausible decoding that can exploit implicit prior information to perform Bayesian inference.
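
The following is a minimal sketch of the decoding computation described above, not the paper's implementation: spike counts, ordered by preferred stimulus, are convolved with a fixed linear filter and exponentiated, and the resulting values are used as weights in an average of the preferred stimuli. The function name, the placeholder filter, and the simulated spike counts are illustrative assumptions; the actual filter is derived in the paper and not reproduced here.

    import numpy as np

    def decode(spike_counts, preferred_stimuli, filt):
        # spike_counts      : length-N array of observed spike counts,
        #                     ordered by the neurons' preferred stimuli
        # preferred_stimuli : length-N array of preferred stimulus values
        # filt              : fixed 1-D linear filter (placeholder here)
        #
        # Convolve the population response with the fixed filter, then
        # exponentiate to obtain the decoding weights.
        weights = np.exp(np.convolve(spike_counts, filt, mode="same"))
        # Weighted average of preferred stimuli (cf. the population vector,
        # which would use the raw spike counts as weights).
        return np.sum(weights * preferred_stimuli) / np.sum(weights)

    # Example usage with made-up numbers (purely illustrative):
    rng = np.random.default_rng(0)
    N = 64
    preferred = np.linspace(-np.pi, np.pi, N)        # preferred stimuli
    rates = rng.poisson(5.0, size=N).astype(float)   # simulated spike counts
    filt = np.ones(5) / 5.0                          # placeholder filter
    print(decode(rates, preferred, filt))

Note that setting the weights to the raw spike counts recovers the standard population vector; the filter-and-exponentiate step is what allows the heterogeneous population's implicit prior to influence the estimate.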