Projects

Multiple viewpoints and layout geometry

Four experiments investigated the roles of layout geometry and viewing perspectives in the selection of intrinsic frames of reference in spatial memory. Participants learned the locations of objects in a room from two or three viewing perspectives, one of which was aligned with the layout's axis of bilateral symmetry. Judgments of relative direction based on spatial memory were faster for imagined headings parallel to the symmetry axis than for headings parallel to the other viewing perspectives. This advantage disappeared when the symmetry axis was removed from the layout, and judgments were not equally fast for the two oblique experienced headings. These results indicate that layout geometry affects the selection of intrinsic frames of reference and that two oblique intrinsic directions cannot be selected simultaneously.

Spatial context and scene recognition

Two experiments investigated participants' spatial memory and spatial updating after they briefly viewed a scene. Participants in a dark room saw an array of five phosphorescent objects on a table and, after a short delay, indicated whether a probed object had been moved. Participants made their judgment from the original viewing perspective or from a new viewing perspective. In one condition, the objects other than the probed one were moved (different spatial context); in the other condition, the other objects remained in place (same spatial context). Performance was better in the same-context condition than in the different-context condition, and better when the testing perspective matched the viewing perspective than when it did not. These findings indicate that interobject spatial relations were mentally represented in terms of an intrinsic frame of reference and that the orientation of this intrinsic frame of reference did not change during locomotion.

Disorientation and allocentric spatial representation

Four experiments investigated the nature of spatial representations used in navigation by testing whether the coherence of spatial memories was disrupted by disorientation. Participants learned the layout of several objects and then pointed to the objects while blindfolded in three conditions: before turning (baseline), after turning to a new heading (updating), and after disorientation (disorientation). The internal consistency of pointing was relatively high and equivalent across all three conditions when the layout had salient intrinsic axes, a relatively large number of objects (nine) was used, and participants learned the locations of the objects from a viewing perspective on the periphery of the layout. In contrast, the internal consistency of pointing was disrupted by disorientation when participants learned the locations of four objects while standing amidst them and the layout had no salient intrinsic axes. Many participants also retrieved spatial relations from the original learning heading after disorientation, and participants were able to point to objects quite accurately from new headings after disorientation. These results suggest that human navigation in familiar environments depends on allocentric representations. Egocentric representations may be used for obstacle avoidance, and their role in navigation may be greater when allocentric representations are not of high fidelity.

Object recognition and locomotion

Two experiments investigated whether spatial updating during locomotion would eliminate viewpoint costs in visual object processing. Participants performed a sequential matching task for object identity or object handedness, using novel 3D objects displayed in a head-mounted display. To change the observed viewpoint of the object, the orientation of the object in 3D space and the spatial position of the observer were manipulated independently. Participants were more accurate when the test view was the same as the learned view than when the views differed, regardless of whether the viewpoint change of the object was 50° or 90°. With 50° rotations, participants were more accurate when the test view matched the view expected from their own locomotion than when it did not, but performance did not differ between expected and unexpected views when the viewpoint change was 90°. These results indicate that spatial updating during locomotion occurs within a limited range of viewpoints, and that spatial updating does not eliminate viewpoint costs in visual object processing.