Humans and animals can quickly adapt to new task demands while retaining previously developed capabilities, but the neural mechanisms underlying such flexibility remain unclear. While neural computations are governed by synaptic interactions, adjusting the strength of all synapses to support this behavioral adaptation would likely be slow and not easily reversible. Here we use intrinsic, structured gain fluctuations to rapidly and transiently fine-tune hierarchical neural networks for a particular task. This mechanism takes inspiration from well-documented low-dimensional covariability in visual areas of the brain, which has been attributed to shared gain modulation of task-informative neurons (Rabinowitz et al., 2015; Bondy et al., 2018; Haimerl et al., 2021). We construct a multi-layer neural network whose primary encoding stage is modulated by a stochastic gain signal with learned, task-specific targeting. These fluctuations act as a label of informative neurons and accompany the stimulus signal as it traverses the hierarchy. Upon reaching the decision layer, this label facilitates task-specific decoding without relying on changes in network weights. Trained stochastic gain modulation allows the circuit to adapt to novel tasks, achieving good performance with minimal task experience. It is not only faster than relearning all network weights but also instantly reversible: disabling the modulation restores the network's original computation. This mechanism also achieves better performance than the deterministic gain increases traditionally used to model attentional mechanisms. Overall, these results provide a novel explanation of how the brain can flexibly, robustly, and reversibly adapt to changes in task structure.
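
To make the mechanism concrete, the following is a minimal PyTorch sketch of a gain-modulated encoding stage, written under our own assumptions: the class name `GainModulatedEncoder`, the targeting vector `c`, and the multiplicative parameterization `r * (1 + g * c)` are illustrative choices, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class GainModulatedEncoder(nn.Module):
    """Encoding layer whose responses are multiplied by a shared stochastic
    gain signal. A learned, task-specific vector `c` (an assumed
    parameterization) sets how strongly each neuron is targeted."""

    def __init__(self, n_in: int, n_hidden: int):
        super().__init__()
        self.encode = nn.Linear(n_in, n_hidden)
        # Task-specific targeting: in this sketch, the only parameter
        # adapted per task. Encoding weights stay fixed.
        self.c = nn.Parameter(torch.zeros(n_hidden))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        r = torch.relu(self.encode(x))
        # One shared gain sample per trial, broadcast across all neurons.
        g = torch.randn(x.shape[0], 1)
        # Multiplicative modulation: neurons with larger |c| fluctuate
        # more, tagging them as task-informative for downstream decoding.
        return r * (1.0 + g * self.c)

# Hypothetical usage: responses of targeted neurons covary with g.
enc = GainModulatedEncoder(n_in=10, n_hidden=64)
r_mod = enc(torch.randn(8, 10))  # (8, 64) modulated responses
```

In this parameterization a single gain sample `g` is shared across neurons on each trial, and the learned vector `c` controls which neurons it targets, so setting `c` (or the gain variance) to zero instantly restores the unmodulated computation, consistent with the reversibility described above.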