In the past decade, convolutional neural networks (CNNs) have achieved state-of-the-art results in denoising. The goal of this work is to advance our understanding of these models and to leverage that understanding to improve upon the current state of the art.

We begin by showing that CNNs systematically overfit the noise levels in their training set, and we propose bias-free CNNs, a new architecture that generalizes robustly to noise levels outside the training range. Bias-free networks are also locally linear, which enables direct analysis with linear-algebraic tools. We show that the denoising map can be visualized locally as a filter that adapts to both signal structure and noise level.

Denoising CNNs, including bias-free CNNs, are typically trained using pairs of noisy and clean data. However, in many domains, such as microscopy, clean data is generally not available. We develop a network architecture that performs unsupervised denoising of video data, i.e., it is trained using only noisy videos.

Building on this unsupervised denoising methodology, we propose a new adaptive denoising paradigm, GainTuning, in which CNN models pre-trained on large datasets are adaptively and selectively adjusted for individual test images. GainTuning improves state-of-the-art CNNs on standard image-denoising benchmarks, particularly for test images that differ systematically from the training data, either in noise distribution or in image type.

Finally, we explore the application of deep-learning-based denoising to scientific discovery through a case study in electron microscopy. To ensure that the denoised output is accurate, we develop a likelihood map that quantifies the agreement between the real noisy data and the denoised output, thereby flagging denoising artifacts. In addition, we show that popular denoising metrics fail to capture scientifically relevant details, and we propose new metrics to fill this gap.
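To make the local-linearity property concrete, the following is a minimal PyTorch sketch, not the architecture used in this work: the `BiasFreeCNN` class, its depth, and its channel widths are hypothetical choices for illustration. It builds a ReLU network with every additive constant removed (all convolutions use `bias=False`), and numerically checks that such a network satisfies f(αy) = αf(y) for α > 0, which is what allows the denoising map around any input to be analyzed as a linear filter.

```python
import torch
import torch.nn as nn

# Hypothetical minimal bias-free denoiser: all additive constants are removed
# (bias=False everywhere, no shift terms), so the ReLU network is piecewise
# linear through the origin and positively homogeneous: f(alpha*y) = alpha*f(y).
class BiasFreeCNN(nn.Module):
    def __init__(self, channels=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1, bias=False), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                       nn.ReLU()]
        layers += [nn.Conv2d(channels, 1, 3, padding=1, bias=False)]
        self.net = nn.Sequential(*layers)

    def forward(self, y):
        return self.net(y)

model = BiasFreeCNN().eval()
y = torch.randn(1, 1, 32, 32)   # a random "noisy image"
alpha = 2.5                     # positive rescaling of the input

with torch.no_grad():
    scaled_first = model(alpha * y)   # scale input, then denoise
    scaled_last = alpha * model(y)    # denoise, then scale output

# Agreement (up to floating-point error) confirms scaling invariance.
print(torch.allclose(scaled_first, scaled_last, atol=1e-4))
```

Because the output is exactly a (locally constant) linear map applied to the input, the rows of its Jacobian can be inspected directly; this is the sense in which the denoising map can be visualized as a signal- and noise-adaptive filter.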