Detecting Moving Objects During Self-motion

Hope Lutwak.

PhD thesis, Jan 2025.

Download:
  • Reprint (pdf)

  • As we move through the world, the pattern of light projected on our eyes is complex and dynamic. Even in a world that is completely stationary, our self-motion produces velocities on the retina. Added to this, independently moving objects also create evolving patterns of light on our eyes. Although both induce retinal velocities, we are somehow able to accurately distinguish stable parts of the environment from independently moving objects. One might hypothesize that this is achieved by detecting discontinuities in the spatial pattern of velocities; however, that computation is also sensitive to the velocity discontinuities at the boundaries of stationary objects. We instead propose that humans make use of the specific constraints that self-motion imposes on retinal velocities. When an eye translates and rotates within a rigid 3D world, the velocity at each location on the retina is constrained to a line segment in the 2D space of retinal velocities (Longuet-Higgins & Prazdny, 1980); we call this the depth constraint segment. The slope and intercept of the segment are determined by the eye's translation and rotation, and the position along it is determined by the depth of the scene (a code sketch of this geometry follows the abstract). Since all velocities arising from a rigid world must lie on the segment, velocities off it must correspond to moving objects. We hypothesize that humans exploit these constraints, partially inferring self-motion from the global pattern of retinal velocities and using deviations of local velocities from the resulting constraint segments to detect moving objects.

    We first tested whether the depth constraint affects 2D velocity discrimination, using a simplified stimulus composed of a collection of plaids that drifted as they would for a moving observer. Under these conditions, we failed to find convincing evidence that the constraint had an effect on 2D velocity discrimination. We then tested the hypothesis with more naturalistic stimuli, viewed in a head-mounted virtual reality display that simulated forward translation through different virtual environments. This time, consistent with the hypothesis, we found that detection performance depended on the deviation of the object velocity from the constraint segment, not on the difference between the retinal velocity of the object and that of its surround (the second sketch below illustrates this dissociation). Finally, we examined the effect of self-motion on detecting a specific kind of motion artifact (jitter) that occurs in augmented reality displays. We found that the ability to perceive this artifact depended on self-motion and the eye movements it evoked.
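
    The constraint-segment geometry is simple enough to state in a few lines of code. The following is a minimal sketch (Python/NumPy; the function names, focal-length convention, and nearest-depth parameter are our own illustration, not code from the thesis) of the Longuet-Higgins and Prazdny motion-field equations, the resulting constraint segment, and the hypothesized deviation cue:

        import numpy as np

        def retinal_velocity(x, y, T, omega, Z, f=1.0):
            """Motion-field velocity at image point (x, y) for an eye with
            translation T = (Tx, Ty, Tz) and rotation omega = (wx, wy, wz),
            viewing a surface at depth Z through a pinhole of focal length f
            (Longuet-Higgins & Prazdny, 1980)."""
            Tx, Ty, Tz = T
            wx, wy, wz = omega
            # Translational component: scales with inverse depth 1/Z.
            u_t = (-f * Tx + x * Tz) / Z
            v_t = (-f * Ty + y * Tz) / Z
            # Rotational component: independent of depth.
            u_r = (x * y / f) * wx - (f + x * x / f) * wy + y * wz
            v_r = (f + y * y / f) * wx - (x * y / f) * wy - x * wz
            return np.array([u_t + u_r, v_t + v_r])

        def constraint_segment(x, y, T, omega, Z_near, f=1.0):
            """Endpoints of the constraint segment at (x, y): the velocity at
            the nearest plausible depth Z_near and at infinite depth (pure
            rotation). Every rigid-scene velocity at this retinal location
            lies between these two points."""
            return (retinal_velocity(x, y, T, omega, Z_near, f),
                    retinal_velocity(x, y, T, omega, np.inf, f))

        def deviation_from_segment(v, p0, p1):
            """Distance from an observed 2D retinal velocity v to the segment
            p0-p1: the hypothesized cue for flagging a moving object."""
            d = p1 - p0
            dd = np.dot(d, d)
            if dd == 0.0:  # degenerate segment (e.g., no translation)
                return float(np.linalg.norm(v - p0))
            t = np.clip(np.dot(v - p0, d) / dd, 0.0, 1.0)
            return float(np.linalg.norm(v - (p0 + t * d)))

    For pure forward translation with no rotation, each segment points radially away from the focus of expansion, and varying Z sweeps the velocity along it; an observed velocity off the segment cannot be produced by any depth in a rigid scene.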
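
    To make the dissociation between the two cues concrete, the continuation below (reusing the functions from the sketch above; all numbers are arbitrary illustrations, not experimental values) constructs an object whose retinal velocity differs markedly from that of its surround yet lies close to the constraint segment, so the two candidate cues make opposite predictions:

        T, omega = (0.0, 0.0, 1.0), (0.0, 0.0, 0.0)  # forward translation, no rotation
        x, y = 0.1, 0.0                              # retinal location of the object

        surround_v = retinal_velocity(x, y, T, omega, Z=4.0)  # rigid background
        p0, p1 = constraint_segment(x, y, T, omega, Z_near=0.5)
        obj_v = np.array([0.08, 0.02])               # observed object velocity

        relative_cue = np.linalg.norm(obj_v - surround_v)       # ~0.059: large
        constraint_cue = deviation_from_segment(obj_v, p0, p1)  # 0.02: small
        # The object moves very differently from its surround, yet is nearly
        # consistent with a rigid scene at another depth; a relative-motion
        # detector would flag it, while the constraint-deviation cue mostly
        # would not.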

