Comparison of power and tractability of neural encoding models that incorporate spike-history dependence

L Paninski, J W Pillow and E P Simoncelli

Published in Computational and Systems Neuroscience (CoSyNe), (208), Mar 2005.

A neural encoding model provides a mathematical description of the input-output relationships between high-dimensional sensory inputs and spike train outputs. We consider two fundamental criteria for evaluating such models: the power of the model to accurately capture spiking behavior under diverse stimulus conditions, and the efficiency and reliability of methods for fitting the model to neural data. Both of these are essential if the model is to be used to answer questions about neural encoding of sensory information.

Based on these criteria, we compare three recent models from the literature. Each includes a linear filter that determines the influence of the stimulus (i.e., a receptive field), followed by a nonlinear, probabilistic spike-generation stage that determines the influence of spike-train history. This history dependence allows the models to exhibit many of the non-Poisson spiking behaviors observed in real neurons, such as refractoriness, adaptation, and facilitation. All three models allow one to compute the probability of spiking conditional on the recent stimulus and spike history. Finally, all three models can be tractably fit to extracellular data (i.e., the stimulus and a list of recorded spike times) by ascending the likelihood function.
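As a concrete illustration of this shared structure, the minimal sketch below (not taken from any of the cited papers; the filters k and h, the nonlinearity f, and the bin width are placeholder assumptions) computes a per-bin conditional spike probability and the spike-train log-likelihood that the fitting procedures ascend.

```python
# Minimal sketch of the shared model structure: linear stimulus and spike-history
# filtering followed by a nonlinear stage giving the conditional probability of
# spiking in each bin. Filters, nonlinearity, and bin width are illustrative
# assumptions, not the parameterization used in any of the cited papers.
import numpy as np

def conditional_spike_prob(stimulus, spikes, k, h, f, dt=0.001):
    """Return P(spike in bin t | recent stimulus and spike history) for every bin."""
    stim_drive = np.convolve(stimulus, k)[:len(stimulus)]   # stimulus filter (receptive field)
    hist_drive = np.convolve(spikes, h)[:len(spikes)]       # spike-history filter
    hist_drive = np.concatenate(([0.0], hist_drive[:-1]))   # only past spikes influence bin t
    rate = f(stim_drive + hist_drive)                       # conditional intensity (spikes/s)
    return 1.0 - np.exp(-rate * dt)                         # prob. of at least one spike per bin

def spike_train_log_likelihood(spikes, p):
    """Bernoulli log-likelihood of the observed spike train under per-bin probabilities p."""
    eps = 1e-12
    return np.sum(spikes * np.log(p + eps) + (1 - spikes) * np.log(1 - p + eps))
```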

The first model is a generalized integrate-and-fire (IF) model (e.g., Keat et al., Neuron, 2001; Jolivet et al., J. Neurophys., 2004). It consists of a leaky IF compartment driven by a stimulus-dependent current, a spike-history-dependent current, and a Gaussian noise current. The first two currents result from linear filtering of the stimulus and the spike-train history, respectively, and the membrane conductance can also depend linearly on the input. We have shown recently that the parameters of this model -- the stimulus and spike-history filters and the reversal, threshold, and conductance parameters -- can be fit robustly and efficiently using straightforward ascent procedures to compute the maximum-likelihood solution, because the log-likelihood is guaranteed to be concave (Pillow et al., NIPS, 2003; Paninski et al., Neural Comp., 2004). Although the likelihood computation is computationally intensive, the model has a clear and well-motivated biological interpretation and has been found to account for a variety of neural response behaviors (Paninski et al., Neurocomputing, 2004; Pillow et al., submitted).
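For concreteness, a minimal discrete-time simulation of this kind of model might look like the following sketch; the Euler integration, fixed threshold and reset, and all parameter values are simplifying assumptions for illustration, not the parameterization used in the cited papers.

```python
# Minimal sketch of a generalized IF neuron: leaky integration of a stimulus-filtered
# current, an after-spike (history) current, and Gaussian noise, with a spike at
# threshold crossing followed by a reset. All values are placeholders.
import numpy as np

def simulate_gif(stimulus, k, h_current, dt=0.001, g=50.0, v_leak=0.0,
                 v_thresh=1.0, v_reset=0.0, noise_sd=0.1, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    i_stim = np.convolve(stimulus, k)[:len(stimulus)]    # stimulus-dependent current
    i_hist = np.zeros(len(stimulus))                     # spike-history-dependent current
    v, spike_times = v_reset, []
    for t in range(len(stimulus)):
        v += (-g * (v - v_leak) + i_stim[t] + i_hist[t]) * dt \
             + noise_sd * np.sqrt(dt) * rng.standard_normal()
        if v >= v_thresh:                                # threshold crossing -> spike and reset
            spike_times.append(t * dt)
            v = v_reset
            n = min(len(h_current), len(stimulus) - t - 1)
            i_hist[t + 1:t + 1 + n] += h_current[:n]     # inject after-spike current
    return np.array(spike_times)
```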

The second model is a generalized linear model (GLM) (see also, e.g., Truccolo et al., J. Neurophys., 2004), which resembles the IF model except that spiking is determined by an instantaneous nonlinear function of the membrane voltage. This can be considered an "escape-rate" approximation to integrate-and-fire: instead of spiking at a fixed threshold, the neuron spikes with a probability that is a sharply accelerating function of the membrane potential. Like the IF model, this model has a concave log-likelihood function, guaranteeing the tractability of maximum-likelihood estimation. Moreover, the likelihood function for this model is much simpler to compute (Poisson likelihoods, instead of crossing-time probabilities of Gaussian processes), so fitting is much faster and easier to implement than for the IF model (though see Paninski et al., this meeting, for recent improvements in computing the IF likelihood). One drawback is that no similar concavity guarantee exists for the conductance parameters of this model, so it may be less flexible for modeling the responses of neurons in which post-spike conductance changes play an important role.
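A sketch of the corresponding fitting computation is given below, assuming an exponential nonlinearity, Poisson-binned spike counts, and scipy's general-purpose optimizer (all illustrative choices rather than the exact procedure used in the papers): because the negative log-likelihood is convex in the filter weights, any gradient-based ascent reaches the global maximum-likelihood solution.

```python
# Minimal sketch of maximum-likelihood GLM fitting with an exponential nonlinearity
# and binned spike counts (illustrative assumptions).
import numpy as np
from scipy.optimize import minimize

def design_matrix(stimulus, spikes, n_stim_lags, n_hist_lags):
    """Columns: constant, lagged stimulus, and lagged spike history (past bins only)."""
    T = len(stimulus)
    cols = [np.ones(T)]
    cols += [np.roll(stimulus, lag) for lag in range(n_stim_lags)]
    cols += [np.roll(spikes, lag + 1) for lag in range(n_hist_lags)]
    X = np.column_stack(cols)
    X[:max(n_stim_lags, n_hist_lags + 1)] = 0.0          # zero out wrap-around rows from np.roll
    return X

def fit_glm(X, y, dt=0.001):
    """Maximize the Poisson log-likelihood; concavity means any ascent finds the global optimum."""
    def negloglik(w):
        rate = np.exp(X @ w)                             # conditional intensity (spikes/s)
        return np.sum(rate * dt) - y @ (X @ w)           # Poisson NLL, up to terms constant in w
    def grad(w):
        return X.T @ (np.exp(X @ w) * dt) - X.T @ y
    w0 = np.zeros(X.shape[1])
    return minimize(negloglik, w0, jac=grad, method="L-BFGS-B").x
```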

Finally, we examine a model popularized by Berry and Meister (J. Neurosci., 1998; see also Miller and Mark, JASA, 1992). Unlike the IF and GLM models, in which spike history interacts additively with the stimulus input, this model decomposes the probability of spiking into the product of a "free firing rate," which depends only on the stimulus, and a "recovery function," which depends only on the time since the last spike. Berry and Meister proposed some simple techniques for estimating the two model components. We have recently developed a maximum-likelihood method that is more efficient and provides accurate estimates under more general conditions. Specifically, optimization of the post-spike "recovery" function (to maximize the likelihood of the data) may be performed uniquely and analytically, leading to a highly efficient estimator (Paninski, Network: Comp. Neur. Sys., 2004). This model is therefore the most efficient of the three for estimating the recovery function; however, while the log-likelihood is concave as a function of the post-spike and stimulus-dependent terms individually, we have no guarantee of joint concavity, so in principle local maxima might exist when both terms are optimized simultaneously. More importantly, the dependence on spike-train history is captured entirely by the time since the last spike, which limits the model's ability to reproduce more complicated spiking dynamics (e.g., slow adaptation) that can be captured by either of the other two models.
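A minimal sketch of this multiplicative structure is shown below; the example recovery function is an assumption for illustration, and the analytic recovery-function optimization derived in the Network paper is not reproduced here.

```python
# Minimal sketch of the multiplicative model: the conditional intensity is the product
# of a stimulus-driven "free firing rate" and a "recovery function" of the time since
# the last spike. The example recovery function is an illustrative placeholder.
import numpy as np

def conditional_intensity(free_rate, spikes, recovery, dt=0.001):
    """lambda(t) = free_firing_rate(t) * recovery(time since last spike)."""
    lam = np.array(free_rate, dtype=float)   # stimulus-dependent "free firing rate" per bin
    t_last = -1e9                            # effectively "no previous spike"
    for t in range(len(lam)):
        lam[t] *= recovery(t * dt - t_last)  # recovery should approach 1 for long intervals
        if spikes[t]:
            t_last = t * dt
    return lam

def recovery_example(tau, t_abs=0.002, t_rel=0.01):
    """Absolute refractoriness for 2 ms, then exponential relaxation back to 1 (hypothetical)."""
    return 0.0 if tau < t_abs else 1.0 - np.exp(-(tau - t_abs) / t_rel)
```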

We provide a detailed comparison of all three models, with a careful examination of how well they capture the input-output characteristics and spike-train-history effects in the responses of real neurons. We examine both the accuracy with which they predict responses to novel stimuli and their residual error in predicting the distribution of interspike intervals, which can be quantified using a version of the time-rescaling theorem applied to each model.
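The interspike-interval check can be summarized by the following sketch of the time-rescaling computation (the binned intensity and bin width are illustrative assumptions): under a correct model, integrating the fitted conditional intensity between successive spikes yields i.i.d. unit-mean exponential intervals, whose deviation from that distribution can be measured with a Kolmogorov-Smirnov statistic.

```python
# Minimal sketch of the time-rescaling goodness-of-fit check for a binned intensity.
import numpy as np
from scipy.stats import kstest

def rescaled_intervals(intensity, spike_bins, dt=0.001):
    """z_i = integral of the fitted intensity between spikes i-1 and i (Exp(1) if the model is right)."""
    Lambda = np.concatenate(([0.0], np.cumsum(intensity) * dt))  # cumulative intensity
    return np.diff(Lambda[np.asarray(spike_bins) + 1])

def rescaling_ks_distance(z):
    """KS distance between the rescaled intervals and the unit-mean exponential distribution."""
    return kstest(z, "expon").statistic
```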

