Human vision is far from uniform across the visual field. At fixation, we have a region of high acuity known as the fovea, and acuity decreases with distance from the fovea. Peripheral vision is not, however, simply a blurrier version of foveal vision, and finding a precise description of how exactly the two differ has been challenging. This thesis presents two investigations into how the processing of visual information changes with location in the visual field, both focused on the early visual system, as well as a description of a software package developed to support studies like the second. In the first study, we use functional magnetic resonance imaging (fMRI) to measure how spatial frequency tuning changes with orientation and visual field location in human primary visual cortex (V1). V1 is among the best-characterized regions of the primate brain, and nearly every neuron in V1 is selective for spatial frequency and orientation. We also know that V1 neurons' preferred spatial frequencies decrease with eccentricity, paralleling the decrease in peak spatial frequency sensitivity found in perception. However, precise descriptions of this relationship have been elusive, due to the difficulty of characterizing tuning properties across the whole visual field. By exploiting fMRI's ability to measure responses across the entire cortex at once, with a set of stimuli designed to map spatial frequency preferences efficiently and a novel analysis method that fits the responses of all voxels simultaneously, we present a compact description of this property, providing an important building block for future work.
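To give a sense of the kind of compact description such an approach can yield, the tuning of the entire map can be summarized by a handful of parameters. The functional form below is an illustrative sketch, not the fitted model from the study: it assumes the preferred period (the reciprocal of the preferred spatial frequency) grows affinely with eccentricity and is modulated by local stimulus orientation,

\[
p(e, \theta) = (a e + b)\bigl(1 + A \cos 2(\theta - \phi)\bigr), \qquad f_{\mathrm{peak}}(e, \theta) = \frac{1}{p(e, \theta)},
\]

where \(e\) is eccentricity, \(\theta\) is local orientation, and the parameters \(a\), \(b\), \(A\), and \(\phi\) are fit to the responses of all voxels simultaneously.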
In the second study, we build perceptual pooling models of the entire visual field from simple filter models inspired by retinal ganglion cells and V1 neurons. We then synthesize a large number of images to investigate how the sensitivities and invariances of these models align with those of the human visual system. This allows us to investigate the extent to which the change in perception across the visual field can be accounted for by well-understood models of low-level visual processing, rather than requiring higher-level cognitive phenomena or models with millions of parameters. Finally, we describe an open-source software package developed by members of the Simoncelli lab that provides four image synthesis methods in a shared, general framework. These methods were all developed in the lab over the past several decades and have been described in the literature, but their widespread use has been limited by the difficulty of applying them to new models. By leveraging the automatic differentiation built into a popular deep learning library, our package allows these synthesis methods to be used with arbitrary models, providing an important resource for the vision science community. Altogether, this thesis presents a step forward in understanding how visual processing differs across the visual field and, by sharing the code, data, and computational environment of these projects, provides resources for future scientists to build on.
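To illustrate how automatic differentiation makes such synthesis methods model-agnostic, the following is a minimal sketch of gradient-based image synthesis, assuming PyTorch as the deep learning library; the toy pooling model and all names are hypothetical stand-ins, not the package's actual API:

    import torch
    import torch.nn.functional as F

    # Hypothetical stand-in for any differentiable, image-computable model;
    # here it simply pools local luminance over 16x16 regions.
    def model(image):
        return F.avg_pool2d(image, kernel_size=16)

    target = torch.rand(1, 1, 256, 256)                  # reference image
    synth = torch.rand_like(target, requires_grad=True)  # image to optimize

    opt = torch.optim.Adam([synth], lr=0.01)
    for _ in range(1000):
        opt.zero_grad()
        # Drive the model's response to the synthesized image toward its
        # response to the reference; autodiff supplies the pixel gradients.
        loss = F.mse_loss(model(synth), model(target))
        loss.backward()
        opt.step()

Because the gradients are computed automatically, swapping in a different model requires no new derivation: any model that maps an image to a differentiable response can be plugged into the same optimization loop.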