BodyLoci

To memorize information efficiently, human beings have created a large variety of mnemonic methods. One of the most powerful, called the memory palace or method of loci, requires the mnemonist to picture a mental space, move through it, and visualize the elements to recall at specific locations. With advances in mixed reality, it is now possible to leverage the whole body for interaction. To provide an interaction technique that facilitates the learning process, we propose to combine these two approaches. The result is BodyLoci, an interaction technique for mixed reality that uses the whole body as an interactive surface and supports spatial, visual, and semantic memory.

2018

Mixed Reality, Memorization, Interaction Design, User Experience

See also: bARefoot, PvD, Markpad, Side-Crossing Menus

The method of loci
The method of loci is a powerful mnemonic technique used by many mnemonists to memorize large quantities of information (e.g., the decimals of Pi). It combines several types of memory: spatial, visual, and semantic. The idea is relatively simple: mnemonists mentally visualize a familiar place in which they can virtually walk. Within this mental space, they make up striking images and create weird stories with the items they need to remember. For instance, if the mnemonist must remember the ingredients of a recipe, they could visualize these ingredients in their living room at disproportionate scales (e.g., an egg the size of a person sitting on the sofa). The combination of these striking elements located at specific positions in space creates a very strong link in memory.

The BodyLoci technique
BodyLoci is a gestural interaction technique that enables rapid command selection in mixed reality. It builds on the method of loci to facilitate command learning. Instead of a mental space in which users navigate, we leverage the body as a spatial structure familiar to users to support spatial mappings of commands. Concretely, menus and commands are placed at specific locations on the user's body. To select a command, the user touches the corresponding part of the body. The two pictures on the right depict a user selecting a menu and then a command within this menu. To enable such interactions with off-the-shelf devices, we use a Kinect V2 to track the user's body and a Vive controller attached to the wrist to perform the selections (see pictures below).

/images/projects/bodyloci/menus.jpg /images/projects/bodyloci/items.jpg
/images/projects/bodyloci/kinect.jpg /images/projects/bodyloci/wrist.jpg /images/projects/bodyloci/mixed_reality.jpg
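The core selection logic described above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the locus positions, command names, and tolerance radius are placeholders, and a real system would read joint positions from the Kinect skeleton each frame.

```python
import math

# Body loci as 3D positions in the tracker's coordinate frame
# (placeholder values; a real system would update these from the
# Kinect V2 skeleton every frame).
BODY_LOCI = {
    "left_shoulder": (-0.20, 1.40, 0.0),
    "left_elbow":    (-0.30, 1.10, 0.0),
    "left_wrist":    (-0.35, 0.85, 0.0),
    "torso":         ( 0.00, 1.20, 0.0),
}

# Commands assigned to loci (one menu level, for simplicity;
# the command names are illustrative).
COMMANDS = {
    "left_shoulder": "copy",
    "left_elbow":    "paste",
    "left_wrist":    "undo",
    "torso":         "save",
}

def select_command(touch_point, max_dist=0.12):
    """Resolve a touch (e.g., the tracked position of the Vive
    controller on the wrist) to the command whose body locus is
    nearest, or None if no locus lies within max_dist metres."""
    best_locus, best_dist = None, float("inf")
    for locus, pos in BODY_LOCI.items():
        dist = math.dist(touch_point, pos)
        if dist < best_dist:
            best_locus, best_dist = locus, dist
    if best_dist <= max_dist:
        return COMMANDS[best_locus]
    return None
```

For example, a touch tracked near the left elbow, `select_command((-0.31, 1.12, 0.01))`, resolves to `"paste"`, while a touch far from every locus returns `None`.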

Learning commands using BodyLoci
To assess how efficiently users learn commands with BodyLoci, we ran two experiments with 24 participants each.
The first focused on comparing BodyLoci to the Marking menus technique, a conventional baseline in HCI that leverages directional gestures to enable gestural shortcuts (see picture on the right). Across two sessions over two days, we observed how the participants learned gestures with both techniques. We did not observe significant differences between the techniques in recall rates, meaning the participants seemed to learn with comparable efficiency with both; they could recall approximately 12 commands out of 16 on the last day.
In the second experiment, we investigated the learning strategies participants used with BodyLoci. We explicitly asked the participants to "create stories" when learning the commands on their body, and added background images to the graphical interfaces used in conjunction with BodyLoci. Comparing the results with the first experiment, we observed a clear impact of the stories on recall rates (participants could recall 14 commands out of 16), but the images did not seem to help.

/images/projects/bodyloci/mixed_reality-2.jpg

What did we learn?
The BodyLoci technique does not support better command learning than a baseline technique when no specific instructions are given. This indicates that concretizing the method of loci through interaction does not, by itself, make the method easier for non-expert users to exploit. However, when we instructed users to create stories for learning commands, their recall rates clearly improved. This highlights a major challenge of graphical interfaces: they mostly rely on implicit learning and do not teach users to adopt efficient learning strategies. Our study shows that simple instructions can greatly change the memorization process, at the cost of more mental effort on the user's side.

Impact of Semantic Aids on Command Memorization for On-Body Interaction and Directional Gestures. Bruno Fruchard, Eric Lecolinet, Olivier Chapuis. AVI '18: Proceedings of the 2018 International Working Conference on Advanced Visual Interfaces, pp. 1-9.

Publication