Learning to be Bayesian without Supervision
Presented at:
Neural Information Processing Systems (NIPS*06),
Vancouver, BC, 4-7 Dec 2006.
Published as:
Advances in Neural Information Processing Systems
eds. B. Schölkopf, J. Platt and T. Hofmann, vol. 19, May 2007.
© MIT Press, Cambridge, MA.
Related publications:
• Non-parametric image denoising using implicit-prior methods, Tech. Report TR2007-900.
• Optimal denoising in redundant bases, ICIP-07.
Abstract:
Bayesian estimators are defined in terms of the posterior
distribution. Typically, this is written as the product of the
likelihood function and a prior probability density, both of which are
assumed to be known. But in many situations, the prior density is not
known, and is difficult to learn from data since one does not have
access to uncorrupted samples of the variable being estimated. We
show that for a wide variety of observation models, the Bayes least
squares (BLS) estimator may be formulated without explicit reference
to the prior. Specifically, we derive a direct expression for the
estimator, and a related expression for the mean squared estimation
error, both in terms of the density of the observed measurements.
Each of these prior-free formulations allows us to approximate the
estimator given a sufficient amount of observed data. We use the
first form to develop practical nonparametric approximations of BLS
estimators for several different observation processes, and the second
form to develop a parametric family of estimators for use in the
additive Gaussian noise case. We examine the empirical performance of
these estimators as a function of the amount of observed data.
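The direct prior-free form of the BLS estimator can be illustrated for the additive Gaussian noise case, where it reduces to a classical identity (Miyasawa, 1961): the optimal estimate is the noisy observation plus the noise variance times the gradient of the log measurement density, x̂(y) = y + σ² ∇ log p(y). The sketch below is a minimal illustration, not the paper's implementation: it assumes a hypothetical bimodal signal prior (used only to synthesize data), and approximates the measurement density with a Gaussian kernel density estimate built from the noisy samples alone.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5  # known noise standard deviation (observation model)
n = 2000

# Hypothetical signal prior -- NOT available to the estimator below;
# used only to generate data and to score the result.
x = rng.choice([-2.0, 2.0], size=n) + 0.2 * rng.standard_normal(n)
y = x + sigma * rng.standard_normal(n)  # observed noisy measurements

def bls_from_measurements(t, samples, sigma, h=0.1):
    """Prior-free BLS estimate via Miyasawa's identity,
        x_hat(y) = y + sigma^2 * d/dy log p(y),
    with p(y) replaced by a Gaussian kernel density estimate
    (bandwidth h) built from the observed samples."""
    d = samples[None, :] - t[:, None]        # pairwise differences (m, n)
    logk = -0.5 * (d / h) ** 2               # log Gaussian kernels
    w = np.exp(logk - logk.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)        # normalized kernel weights
    score = (w * d).sum(axis=1) / h**2       # d/dt log p_hat(t)
    return t + sigma**2 * score

x_hat = bls_from_measurements(y, y, sigma)
mse_noisy = np.mean((y - x) ** 2)            # roughly sigma^2 = 0.25
mse_bls = np.mean((x_hat - x) ** 2)          # substantially lower
print(mse_noisy, mse_bls)
```

Note that nothing in `bls_from_measurements` references the signal prior: the estimator is computed entirely from the noisy samples and the known noise level, which is the point of the prior-free formulation.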
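The second prior-free form, an expression for the mean squared error in terms of the measurement density, can be used to select an estimator from a parametric family without seeing clean data. For additive Gaussian noise this is Stein's unbiased risk estimate (SURE). The sketch below is an illustration under stated assumptions, not the paper's parametric family: it uses a simple linear-shrinkage family x̂_a(y) = a·y as a stand-in, chooses a by minimizing SURE over a grid, and checks the result against the true MSE (computable here only because the data are synthetic).

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.5
n = 20000

# Hypothetical bimodal prior, used only to synthesize data.
x = rng.choice([-2.0, 2.0], size=n) + 0.2 * rng.standard_normal(n)
y = x + sigma * rng.standard_normal(n)

def sure(a, y, sigma):
    """Stein's unbiased risk estimate for f(y) = a*y:
       E[(a*y - x)^2] = E[(a*y - y)^2] + 2*sigma^2*a - sigma^2.
    Computed from noisy measurements alone."""
    return np.mean((a * y - y) ** 2) + 2 * sigma**2 * a - sigma**2

grid = np.linspace(0.0, 1.0, 101)
risks = np.array([sure(a, y, sigma) for a in grid])
a_star = grid[np.argmin(risks)]              # SURE-selected shrinkage

mse_true = np.mean((a_star * y - x) ** 2)    # true MSE (synthetic data only)
mse_noisy = np.mean((y - x) ** 2)
print(a_star, risks.min(), mse_true, mse_noisy)
```

Because the risk estimate is unbiased, its minimizer over the family tracks the minimizer of the true MSE as the number of observed samples grows, which is the sense in which the second formulation supports learning estimators from data.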