Learning least squares estimators without assumed priors or supervision

M Raphan and E P Simoncelli

Computer Science Technical Report TR2009-923, Courant Institute of Mathematical Sciences, New York University, Aug 2009.

This paper has been superseded by:
Least squares estimation without priors or supervision
M Raphan and E P Simoncelli.
Neural Computation, vol. 23(2), pp. 374–420, Feb 2011.


Download:

  • Reprint (pdf)
  • Official (pdf)
  • Courant TechReport Repository

Abstract:

The two standard methods of obtaining a least-squares optimal estimator are (1) Bayesian estimation, in which one assumes a prior distribution on the true values and combines this with a model of the measurement process to obtain an optimal estimator, and (2) supervised regression, in which one optimizes a parametric estimator over a training set containing pairs of corrupted measurements and their associated true values. But many real-world systems do not have access to either supervised training examples or a prior model. Here, we study the problem of obtaining an optimal estimator given a measurement process with known statistics and a set of corrupted measurements of random values drawn from an unknown prior. We develop a general form of nonparametric empirical Bayesian estimator that is written as a direct function of the measurement density, with no explicit reference to the prior. We study the observation conditions under which such "prior-free" estimators may be obtained, and we derive specific forms for a variety of different corruption processes. Each of these prior-free estimators may also be used to express the mean squared estimation error as an expectation over the measurement density, thus generalizing Stein's unbiased risk estimator (SURE), which provides such an expression for the additive Gaussian noise case. Minimizing this expression over measurement samples provides an "unsupervised regression" method of learning an optimal estimator from noisy measurements in the absence of clean training data. We show that combining a prior-free estimator with its corresponding unsupervised regression form produces a generalization of the "score matching" procedure for parametric density estimation, and we develop an incremental form of learning for estimators that are written as a linear combination of nonlinear kernel functions. Finally, we show through numerical simulations that the convergence of these estimators can be comparable to that of their supervised or Bayesian counterparts.
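The sketch below is not the paper's code; it illustrates, for the simplest case the abstract covers (additive Gaussian noise), the two ideas named there. First, the classical prior-free least-squares estimator for this case is Miyasawa's formula, f(y) = y + sigma^2 d/dy log p(y), written entirely in terms of the measurement density p(y); here p is estimated from the noisy samples with a Gaussian kernel density estimate (a stand-in for the paper's kernel-based estimators). Second, SURE expresses the mean squared error of any such estimator as an expectation over the measurements alone. The Gaussian prior, sample size, and bandwidth rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma_x, sigma = 2000, 2.0, 1.0

x = rng.normal(0.0, sigma_x, N)       # true values, drawn from a Gaussian prior
y = x + rng.normal(0.0, sigma, N)     # corrupted measurements (all we observe)

def kde_score(t, samples, h):
    """Score d/dt log p_hat(t) of a Gaussian kernel density estimate."""
    d = samples[None, :] - t[:, None]             # d[i, j] = samples_j - t_i
    w = np.exp(-0.5 * (d / h) ** 2)
    return (w * d).sum(axis=1) / (h ** 2 * w.sum(axis=1))

h = 1.06 * y.std() * N ** (-1 / 5)    # Silverman's rule-of-thumb bandwidth
g = sigma ** 2 * kde_score(y, y, h)   # g(y) = sigma^2 * d/dy log p_hat(y)
f = y + g                             # prior-free least-squares estimate

# Sanity check: for a Gaussian prior the Bayes estimator is linear shrinkage,
# so the prior-free estimate can be compared against the known optimum.
f_bayes = y * sigma_x ** 2 / (sigma_x ** 2 + sigma ** 2)

print("MSE, identity   :", np.mean((y - x) ** 2))
print("MSE, prior-free :", np.mean((f - x) ** 2))
print("MSE, Bayes      :", np.mean((f_bayes - x) ** 2))

# SURE: for f(y) = y + g(y), MSE = E[sigma^2 + g(y)^2 + 2 sigma^2 g'(y)],
# an expectation over the measurements only (no clean x needed);
# g' is approximated here by a finite difference.
eps = 1e-3
g_prime = (sigma ** 2 * kde_score(y + eps, y, h) - g) / eps
sure = np.mean(sigma ** 2 + g ** 2 + 2 * sigma ** 2 * g_prime)
print("SURE estimate   :", sure)
```

In this Gaussian-prior case the true score of the measurement density is linear, d/dy log p(y) = -y / (sigma_x^2 + sigma^2), so Miyasawa's formula reduces exactly to the shrinkage estimator, and the kernel-based estimate approaches it as N grows. The printed SURE value should track the measured MSE of f even though it never touches x, which is what makes minimizing it over an estimator family an "unsupervised regression" criterion.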

  • NIPS*06 paper: Raphan06b
  • TR on learning a general form of nonparametric Bayes estimator: Raphan07b
  • Raphan PhD thesis: Raphan-phd
  • Listing of all publications