Model-based Evaluation of Recall-based Interaction Techniques

Empirically evaluating interaction techniques that rely on user memory, such as hotkeys, here coined Recall-based interaction techniques (RBITs), is challenging for two reasons: (1) the lack of guidance to design the associated study protocols, and (2) the difficulty of comparing evaluations performed with different protocols. To address these challenges, we propose a model-based evaluation of RBITs. This approach relies on a computational model of human memory to (1) predict the informativeness of a particular protocol through the variance of the estimated parameters (Fisher Information), and (2) compare RBITs' recall performance based on the inferred parameters rather than behavioral statistics, which has the advantage of being independent of the study protocol. We also release a Python library implementing our approach to help researchers produce more robust and meaningful comparisons of RBITs.

2024

Model, Empirical Study

See also: PvD, Squish this Image, Characterization Strategies

Comparing Recall-Based Interaction Techniques is challenging

Recall-based interaction techniques (RBITs) are interaction techniques that rely on user memory, such as keyboard and gesture shortcuts (e.g., Marking menus, Octopocus, MarkPad). RBITs are generally faster than interaction techniques that rely on recognition, such as menus or toolbars. Moreover, they do not use screen space, letting users focus on their primary task. However, learning RBITs takes time and effort, which limits both their acceptability by users and their overall efficiency. Comparing RBITs on the basis of how well users memorize them is therefore important.

Experimental protocols have an impact on the estimated benefits of RBITs

Such comparisons are usually conducted through a recall-based protocol, which may combine several training and test phases. The techniques' performance is often calculated as the recall percentage during test phases, which measures the ability of participants to recall the command-action mapping when no assistance is provided. However, comparing RBITs with these recall-based protocols is more difficult than it appears: the training schedule has an impact on the measured recall, and designing such schedules is difficult and often done through trial and error.
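
To make this measure concrete, here is a minimal sketch of computing a recall percentage from test-phase trials. The data layout and column names are hypothetical and not taken from the paper or its protocols:

```python
import pandas as pd

# Hypothetical test-phase log: one row per trial, "correct" is 1 when the
# participant recalled the command-action mapping without assistance.
trials = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2],
    "technique":   ["A", "B", "A", "B", "A", "B"],
    "correct":     [1, 0, 1, 1, 1, 0],
})

# Recall percentage per technique: mean of the binary outcomes, in percent.
recall_pct = trials.groupby("technique")["correct"].mean() * 100
print(recall_pct)
```

The difficulty is that this single number depends on the training schedule that preceded the test phase, which is what motivates the model-based approach below.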

We propose to model RBITs and compare them through model parameters

Our primary contribution is to suggest model-based evaluation as a method to evaluate RBITs that does not rely on the actual schedule. A model-based evaluation assumes that the observed recall data can be described by a memory model and its “true” parameters, which serve as the basis to summarize and compare RBIT performance. The key advantage of performing the comparison in the parameter space is that these parameters are independent of the schedules. Our second contribution is a method to determine the most efficient schedule when designing an RBIT experiment, i.e., the one that minimizes the uncertainty about the “true” parameter values of the RBITs. Our third contribution is pyrbit, a Python library that implements all methods presented in this work.
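
As an illustration of the model-based idea (not pyrbit's actual API), the sketch below assumes a simple exponential-forgetting memory model, fits its single parameter to simulated recall data by maximum likelihood, and uses the curvature of the log-likelihood at the estimate (the observed Fisher information) to quantify how much the schedule constrains that parameter. The model, function names, and data are all illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Assumed toy model: the probability of recalling an item after a delay t
# (seconds since the last rehearsal) is p(t) = exp(-t / s), with memory
# strength s > 0. Real memory models (including the one used in the paper)
# are richer; this only sketches the inference machinery.

def recall_prob(delay, strength):
    return np.exp(-delay / strength)

def neg_log_likelihood(log_strength, delays, outcomes):
    p = np.clip(recall_prob(delays, np.exp(log_strength)), 1e-9, 1 - 1e-9)
    return -np.sum(outcomes * np.log(p) + (1 - outcomes) * np.log(1 - p))

def fit(delays, outcomes):
    """Maximum-likelihood estimate of the log memory strength."""
    return minimize(neg_log_likelihood, x0=np.array([np.log(60.0)]),
                    args=(delays, outcomes), method="BFGS")

# Simulated recall data for two techniques tested with the same schedule.
rng = np.random.default_rng(0)
delays = np.tile([30.0, 60.0, 120.0, 300.0, 600.0], 20)
true_strength = {"A": 400.0, "B": 150.0}

for tech, s in true_strength.items():
    outcomes = rng.binomial(1, recall_prob(delays, s))
    res = fit(delays, outcomes)
    # BFGS returns an inverse-Hessian approximation of the objective
    # (the negative log-likelihood); it approximates the variance of the
    # estimate, i.e. the inverse of the observed Fisher information.
    var_log_strength = float(np.atleast_2d(res.hess_inv)[0, 0])
    print(f"{tech}: strength ≈ {np.exp(res.x[0]):.0f} s, "
          f"var(log strength) ≈ {var_log_strength:.4f}")
```

Comparing the two techniques then amounts to comparing their estimated parameters, and comparing candidate schedules amounts to comparing the variances they yield for those estimates: the schedule with the smallest variance (largest Fisher information) is the most informative.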

Model-based Evaluation of Recall-based Interaction Techniques. Julien Gori, Bruno Fruchard, Gilles Bailly. CHI '24: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems.

Publication