In 1834, Weber observed that the just-detectable perturbation of a stimulus (the perceptual threshold) is, for many sensory attributes, proportional to stimulus intensity. Fechner later proposed that this lawful behavior arises from a logarithmic internal representation of stimulus intensity, which, when differentiated, yields Weber's observation: a fixed internal difference corresponds to a stimulus change proportional to intensity. In apparent contradiction to Fechner's logarithmic proposal, Stevens showed that observers' ratings of stimulus intensity are typically power functions of intensity, with exponents that vary substantially across stimulus attributes. These two conflicting descriptions of the relationship between stimulus intensity and its internal representation are each supported by different types of experimental evidence, and the discrepancy remains unresolved. We show that this quandary can be resolved within a framework that incorporates the joint effects of the mean and variance of an abstract internal stimulus representation. Specifically, we propose that magnitude ratings (as used by Stevens) reflect the mean of the internal representation of stimulus intensity, whereas discrimination thresholds (as well as suprathreshold distances) depend on both the mean and the variability, as captured by Fisher information. Under this interpretation, Weber's law is fully consistent with Stevens' power law (for all exponents) when the variance of the internal representation grows as the square of the mean, a property that has been observed or proposed for a variety of neural representations.
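
To make the reconciliation concrete, the following is a minimal sketch, with illustrative assumptions not specified above: a Gaussian internal representation with mean $\mu(s) = s^{\alpha}$ (Stevens' power law, exponent $\alpha > 0$) and standard deviation proportional to the mean, $\sigma(s) = c\,\mu(s)$ for a constant $c$, so that the variance grows as the square of the mean.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Sketch under assumed forms: Gaussian internal representation with
% mean mu(s) = s^alpha and standard deviation sigma(s) = c * mu(s).
For a Gaussian representation with stimulus-dependent mean and variance,
the Fisher information about $s$ is
\begin{equation}
  J(s) \;=\; \frac{\mu'(s)^{2}}{\sigma(s)^{2}}
        \;+\; \frac{2\,\sigma'(s)^{2}}{\sigma(s)^{2}}
      \;=\; \frac{\alpha^{2} s^{2\alpha-2}}{c^{2} s^{2\alpha}}
        \;+\; \frac{2\,c^{2}\alpha^{2} s^{2\alpha-2}}{c^{2} s^{2\alpha}}
      \;=\; \Bigl(\frac{1}{c^{2}} + 2\Bigr)\frac{\alpha^{2}}{s^{2}} .
\end{equation}
The discrimination threshold then scales as
$\Delta s \propto 1/\sqrt{J(s)} \propto s$,
i.e., a constant Weber fraction $\Delta s / s$.
\end{document}

Because both Fisher-information terms scale as $1/s^{2}$ under these assumptions, the Weber fraction is constant for every exponent $\alpha$, illustrating the claimed consistency of Weber's law with Stevens' power law.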