Learned normalized energy models for linear inverse problems
Nicolas Zilberstein, Santiago Segarra, Eero P. Simoncelli and Florentin Guth.
Published in Int'l Conf Machine Learning, Jul 2026.
Generative diffusion models can provide powerful prior probability models for inverse problems in imaging, but existing implementations suffer from two key limitations: (i) the prior density is represented implicitly, and (ii) they rely on likelihood approximations that introduce sampling biases. We address these challenges by introducing a new energy-based model trained for denoising with a covariance-based regularization term that enforces consistency across different measurement conditions. The trained model can compute normalized posterior densities for diverse linear inverse problems, without additional retraining or fine-tuning. In addition to preserving the sampling capabilities of diffusion models, this enables previously unavailable capabilities: energy-guided adaptive sampling that adjusts schedules on the fly, unbiased Metropolis-Hastings correction steps, and blind estimation of the degradation operator via Bayes' rule. We validate the method on multiple datasets (ImageNet, CelebA, AFHQ) and tasks (inpainting, deblurring), demonstrating performance competitive with or superior to established baselines. Code is available at \url{https://github.com/nzilberstein/Anisotropic-energy-Model}.
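To illustrate why an explicit, normalized energy matters, the following is a minimal sketch (not the paper's implementation) of unbiased posterior sampling for a linear inverse problem y = Ax + noise. A toy quadratic prior energy stands in for the learned energy network; because the posterior energy E(x) = E_prior(x) + ||y - Ax||^2 / (2σ²) can be evaluated pointwise, each Langevin proposal can be accepted or rejected with a Metropolis-Hastings correction, removing discretization bias. All names (E_prior, mala_step, the problem sizes) are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem: underdetermined measurement y = A x + noise.
d, m = 8, 4
A = rng.standard_normal((m, d)) / np.sqrt(d)
x_true = rng.standard_normal(d)
sigma = 0.1
y = A @ x_true + sigma * rng.standard_normal(m)

def E_prior(x):
    # Placeholder for the learned normalized energy model.
    return 0.5 * np.dot(x, x)

def E_post(x):
    # Posterior energy = prior energy + negative log-likelihood (up to a constant).
    return E_prior(x) + np.sum((y - A @ x) ** 2) / (2 * sigma**2)

def grad_E_post(x):
    return x - A.T @ (y - A @ x) / sigma**2

def mala_step(x, eps):
    # Langevin proposal followed by a Metropolis-Hastings accept/reject step,
    # which requires the pointwise-evaluable energy E_post.
    prop = x - eps * grad_E_post(x) + np.sqrt(2 * eps) * rng.standard_normal(d)

    def log_q(a, b):  # log density of proposing a from b
        diff = a - (b - eps * grad_E_post(b))
        return -np.sum(diff**2) / (4 * eps)

    log_alpha = (E_post(x) - E_post(prop)) + log_q(x, prop) - log_q(prop, x)
    return prop if np.log(rng.uniform()) < log_alpha else x

x = np.zeros(d)
for _ in range(2000):
    x = mala_step(x, 1e-3)
print(E_post(x))
```

With a diffusion model that only exposes a score (gradient), the acceptance ratio above cannot be computed, so sampling errors from discretization and likelihood approximations go uncorrected; the explicit energy is what makes the MH step available.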