High-quality samples generated with score-based reverse diffusion algorithms provide evidence that deep neural networks (DNNs) trained for denoising can learn high-dimensional densities, despite the curse of dimensionality. However, recent reports of memorization of the training set raise the question of whether these networks are learning the ``true'' continuous density of the data. Here, we show that two denoising DNNs trained on non-overlapping subsets of a dataset learn nearly the same score function, and thus the same density, with a surprisingly small number of training images. This strong generalization demonstrates an alignment of powerful inductive biases in the DNN architecture and/or training algorithm with properties of the data distribution. Our method is general and can be applied to assess generalization vs. memorization in any generative model.
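
The comparison described above can be illustrated with a minimal sketch, assuming a PyTorch setup: two small denoisers are trained on non-overlapping halves of a dataset, and their score estimates, recovered from the denoiser residuals via the Miyasawa/Tweedie relation score(y) = (E[x|y] - y) / sigma^2, are compared on held-out noisy images. The architecture, hyperparameters, and synthetic stand-in data below are illustrative assumptions, not the paper's configuration.

    # Sketch only: tiny denoisers on disjoint data halves; compare learned scores.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    def make_denoiser():
        # Small CNN denoiser; the networks studied in practice are far larger.
        return nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def train(denoiser, loader, sigma=0.1, epochs=5):
        opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
        for _ in range(epochs):
            for (x,) in loader:
                noisy = x + sigma * torch.randn_like(x)
                loss = ((denoiser(noisy) - x) ** 2).mean()  # MSE denoising objective
                opt.zero_grad()
                loss.backward()
                opt.step()
        return denoiser

    # Synthetic stand-in data; replace with real (non-overlapping) image subsets.
    data = torch.rand(2000, 1, 16, 16)
    half_a, half_b = data[:1000], data[1000:]
    test = torch.rand(64, 1, 16, 16)

    net_a = train(make_denoiser(),
                  DataLoader(TensorDataset(half_a), batch_size=64, shuffle=True))
    net_b = train(make_denoiser(),
                  DataLoader(TensorDataset(half_b), batch_size=64, shuffle=True))

    # Miyasawa/Tweedie: score(y) ~ (denoised(y) - y) / sigma^2, so comparing
    # denoiser residuals on the same noisy inputs compares the learned scores.
    sigma = 0.1
    noisy_test = test + sigma * torch.randn_like(test)
    with torch.no_grad():
        score_a = (net_a(noisy_test) - noisy_test) / sigma**2
        score_b = (net_b(noisy_test) - noisy_test) / sigma**2
    cos = nn.functional.cosine_similarity(score_a.flatten(1), score_b.flatten(1), dim=1)
    print(f"mean cosine similarity between score estimates: {cos.mean():.3f}")

A mean cosine similarity near 1 on held-out inputs would indicate that the two networks have converged to nearly the same score function despite seeing disjoint training sets, i.e., generalization rather than memorization.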