Decoding the contents of perceived and imagined navigation through virtual reality environments
Reggente, N., Essoe, J.K., Jevtic, I., Rissman, J.
This poster was presented under the Higher Cognitive Functions:
Space, Time, and Number Coding category at the
Organization for Human Brain Mapping Annual Conference
in Honolulu, Hawaii on Monday June 15, 2015
See the full poster here: Reggente_OHBM_Final
Introduction: Navigating one's environment is a multi-faceted effort requiring a faithful representation of the visuospatial layout of a space and of one's position and orientation within it. Mental imagery is known to play a key role in successful navigation; whether planning out one's own route or providing directions to others, we must mentally simulate a trajectory through space and conjure up representations of pertinent contextual details. In this fMRI study, we used a VR-based experimental paradigm to examine whether and how the neural representations of one's environment and navigational trajectory change as a function of whether the navigation is perceived or imagined.
Methods: Three unique virtual environments (VEs) were created so as to maximize the distinctiveness of each VE, while matching them for size and spatial distribution of major landmarks.
Day 1 (In Lab): Subjects were familiarized with the VEs by way of token-collection tasks and guided navigation exercises that ensured even exploration of the VEs across subjects.
Day 2 (In Lab): Subjects completed additional navigational exercises, followed by a test of their allocentric memory of the VEs.
Day 2 (In Scanner): fMRI data were collected as subjects viewed a series of first-person video clips taken from each of the VEs that they had explored on Day 1. Each 30 s video started from a specific landmark and provided a tour of the perimeter of the VE before ultimately returning to that same landmark. For each starting landmark, two videos were presented to the subject: one where the route followed a clockwise trajectory and the other, counter-clockwise. After viewing all possible combinations of direction (clockwise vs. counter-clockwise), starting landmark, and VE, subjects were trained to perform a new task involving mental imagery-based navigation of the routes. On each trial of this task, which subjects performed with their eyes closed, they were cued as to which VE they should imagine themselves in, which landmark to start at, and which landmark should be the first that they pass as they mentally circumnavigate the perimeter of the VE, back to the starting landmark.
We used a support vector machine classifier within a leave-one-run-out cross-validation scheme to decode video-viewing trials according to which of the three worlds the video took place in. Likewise, we decoded the heading direction (clockwise vs. counter-clockwise) of each video-viewing trial, irrespective of world.
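The decoding scheme above can be sketched as follows. This is a minimal illustration using scikit-learn, with synthetic data standing in for trial-wise BOLD patterns; the variable names (`X`, `y`, `runs`) and the choice of toolbox are assumptions, as the poster does not specify the implementation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, n_runs = 60, 200, 6
X = rng.standard_normal((n_trials, n_voxels))            # trial-wise BOLD patterns (synthetic)
y = np.tile([0, 1, 2], n_trials // 3)                    # world label for each trial (3 VEs)
runs = np.repeat(np.arange(n_runs), n_trials // n_runs)  # scanner run each trial belongs to

# Leave-one-run-out: each cross-validation fold holds out all trials from one run,
# so training and test patterns never come from the same scanner run.
logo = LeaveOneGroupOut()
scores = cross_val_score(SVC(kernel="linear"), X, y, groups=runs, cv=logo)
print(scores.mean())  # for random data, accuracy hovers near chance (1/3)
```

Decoding direction rather than world is the same pipeline with a two-class label vector (chance = 1/2).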
We then labeled the mental imagery task data in the same fashion. Lastly, we trained the classifier on all available perception task data and tested it on the mental imagery data. Each of these analysis strategies was implemented within a searchlight-based information-mapping framework.
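The cross-decoding searchlight can be sketched like this: for each voxel, a classifier is trained on perception-task patterns from the surrounding sphere and tested on imagery-task patterns from the same sphere, yielding an accuracy map. All data here are synthetic and the 5×5×5 voxel grid, sphere radius, and trial counts are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_voxels = 125
# 3-D coordinates of each voxel on a toy 5x5x5 grid
coords = np.stack(np.meshgrid(*[np.arange(5)] * 3, indexing="ij"), -1).reshape(-1, 3)

X_percept = rng.standard_normal((60, n_voxels))  # video-viewing trial patterns (synthetic)
y_percept = np.tile([0, 1, 2], 20)               # world labels
X_imagery = rng.standard_normal((24, n_voxels))  # mental-imagery trial patterns (synthetic)
y_imagery = np.tile([0, 1, 2], 8)

radius = 2.0
accuracy_map = np.empty(n_voxels)
for center in range(n_voxels):
    # searchlight sphere: all voxels within `radius` of the center voxel
    sphere = np.linalg.norm(coords - coords[center], axis=1) <= radius
    # cross-decoding: train on perception data, test on imagery data
    clf = SVC(kernel="linear").fit(X_percept[:, sphere], y_percept)
    accuracy_map[center] = clf.score(X_imagery[:, sphere], y_imagery)
print(accuracy_map.mean())
```

In practice the accuracy map would be projected back into the brain volume and tested for significance against chance; tools such as nilearn provide optimized searchlight implementations.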
Results: Using BOLD activity patterns acquired during video viewing, we were able to predict which world the subject was viewing with classification accuracies upwards of 70% when our searchlight sphere was centered on voxels in visual cortex. During mental imagery, the set of significantly informative regions expanded to include BA10, with comparable accuracies. We were also able to predict which direction subjects were viewing/imagining with significant accuracies, relying on information from occipito-temporal and medial frontal regions, respectively. When the classifier was trained on video-viewing data and tested on mental imagery data, we were most successful at decoding which world the subject was imagining themselves in when the searchlight sphere was centered on the caudate or the LOC; decoding direction revealed significant effects in the cerebellum and temporal lobes.
Conclusion: By analyzing fMRI activity patterns measured during individual trials, we can predict the navigationally pertinent contents of an individual's visual and mental workspace with a reasonably high degree of accuracy.
Acknowledgments: The authors would like to thank the Defense Advanced Research Projects Agency (DARPA) for its support of this project under grant No. D13AP00057.