Project: in-situ

Section: New Results


Evaluation and Optimization of Pointing and Interaction Techniques

Participants: Yves Guiard [correspondent], Michel Beaudouin-Lafon, Wendy Mackay, Renaud Blanch, Caroline Appert.

Graphical user interfaces (GUIs) are based on a small set of interaction techniques, which rely heavily on two elementary actions: pointing at a target on the screen, e.g. an icon or a button, and navigating to a part of the information space that is not currently visible, e.g. by scrolling or zooming.

We are working on improving pointing and navigation performance in GUIs. Currently, the performance of pointing on a screen is similar to that of pointing in the physical world, yet since the task is computer-mediated it should be possible to exploit the available computing power to gain a significant advantage when pointing in an information world. The main theoretical tool for studying pointing performance is Fitts' law [44], which models movement time as an affine function of the index of difficulty (ID), defined as the logarithm of the ratio of target distance to target width. In other words, pointing performance depends only on the size of the target relative to its distance.
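
In its common Shannon formulation (one standard variant; other forms differ only in the argument of the logarithm), Fitts' law reads

$$ MT = a + b \cdot ID, \qquad ID = \log_2\!\left(\frac{D}{W} + 1\right) $$

where MT is the movement time, D the distance to the target, W its width, and a and b empirically fitted constants.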

We have explored a technique called target expansion [29] [33], which grows the size of the target when the cursor comes near it. We showed that the index of difficulty of the task is that of the expanded target, even when expansion occurs only after the cursor has traveled 90% of the distance to the target. However, taking advantage of this property proves difficult because it requires the system to correctly anticipate which target to expand: expanding the wrong target impairs performance and defeats our primary goal.
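
As an illustration, here is a minimal sketch of the expansion trigger in Python (the point and target representations and the numeric values are hypothetical, and a real implementation would also need the target prediction discussed above):

import math

EXPANSION = 2.0   # growth factor for the expanded target (illustrative value)
TRIGGER = 0.9     # expand once 90% of the distance has been covered

def display_width(start, cursor, target_center, width):
    """Width at which to draw the target: expanded once the cursor has
    covered TRIGGER of the start-to-target distance. All points are
    (x, y) tuples."""
    total = math.dist(start, target_center)
    # straight-line approximation of the distance traveled so far
    covered = math.dist(start, cursor)
    if total > 0 and covered / total >= TRIGGER:
        return width * EXPANSION
    return width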

We have developed two techniques that look more promising. The first one is semantic pointing [31] [12]. Semantic pointing uses two independent sizes for each potential target presented to the user: one size in motor space, adapted to its importance for the manipulation, and one size in visual space, adapted to the amount of information it conveys. This decoupling between visual and motor size is achieved by changing the control-to-display ratio according to the cursor's distance to nearby targets.

A target with a small visual size and a large motor size displays little information but is easy to select, which makes it appropriate for, e.g., buttons or links in a Web page. A target with a large visual size and a small motor size displays more information but is hard to select, which is appropriate for non-interactive informative labels, for example. A controlled experiment confirmed that the index of difficulty that best predicts pointing time is the one based on motor size rather than visual size. A prototype application shows how this technique applies to standard GUI widgets, e.g. scrollbars, dialog boxes, menus and Web pages.
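
The following one-dimensional sketch conveys the underlying idea (the target representation and gain values are hypothetical; the actual implementation described in [31] is more elaborate):

def gain(cursor, targets, base_gain=1.0):
    """Control-to-display gain for semantic pointing, 1-D sketch.
    `targets` is a list of (center, visual_width, motor_width) tuples.
    Mouse motion is slowed over important targets, which enlarges them
    in motor space without changing their visual size."""
    for center, visual_w, motor_w in targets:
        if abs(cursor - center) < motor_w / 2:
            return base_gain * visual_w / motor_w
    return base_gain

def move(cursor, mouse_dx, targets):
    """Apply one mouse displacement to the cursor position."""
    return cursor + mouse_dx * gain(cursor, targets)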

The second technique is called vector pointing [34]. It can be seen as an extreme version of semantic pointing in which the cursor jumps over ``empty'' space and moves from target to target with very little mouse motion. We have shown in a controlled experiment that Fitts' law no longer applies: with vector pointing, the time to point at a single target is constant, independent of its distance. In a real application, the performance of vector pointing depends on the accuracy of the prediction of the target the user is aiming at and on the influence of distractors. Work is ongoing to develop an application for testing vector pointing in a realistic setting.
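
A one-dimensional sketch of the jumping behavior (the representation and the threshold value are hypothetical):

def vector_point(index, mouse_dx, targets, threshold=5):
    """Vector pointing, 1-D sketch: the cursor sits on a target and a
    mouse displacement beyond `threshold` pixels jumps it to the next
    target in that direction, skipping the empty space in between.
    `targets` is a list of target positions sorted left to right."""
    if mouse_dx > threshold and index < len(targets) - 1:
        return index + 1
    if mouse_dx < -threshold and index > 0:
        return index - 1
    return index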

We have also pursued our work on multiscale navigation, i.e. navigation of an information world that can be zoomed in or out at any scale (also called a Zoomable User Interface or ZUI). Following up on our work showing that Fitts' law still applies to tasks with an extremely high index of difficulty (30 or more, i.e. a ratio of 2^30 between target distance and target width), we have developed a theoretical model to explain this result [33]. The model predicts an effect of view size on multiscale navigation performance, which we successfully tested in a controlled experiment. We will continue this work in the context of the Micromegas project (see section 7.4), where we will develop ZUIs and study novel navigation techniques.
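
To make the scale concrete, with the classical index of difficulty this corresponds to

$$ ID = \log_2\frac{D}{W} = 30 \quad\Rightarrow\quad \frac{D}{W} = 2^{30} \approx 10^9, $$

e.g. a one-pixel-wide target about a billion pixels away, which can only be reached by zooming.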

The complexity of an interaction technique for a given task, i.e. a given interaction sequence, measures the cost of the actions relative to the size of the task when using this technique. Our work on pointing and navigation studies the limit performance of human subjects in such tasks, but this limit performance is often difficult to observe when the technique is used in the context of a real application. In order to better understand how interaction techniques behave in context, we are developing a model and an associated tool to describe interaction techniques and predict their comparative performance for multiple tasks representative of different interaction contexts. The model, called CIS (Complexity of Interaction Sequences) [30] [35], introduces the notion of complexity for interaction techniques.

We have successfully tested the model by evaluating three interaction techniques (fixed palettes, bimanual palettes and toolglasses) and shown that the most efficient technique depends on the interaction context, confirming the results of one of our earlier studies [47]. We intend to develop CIS in several directions, such as improving the precision of its predictions and automating the identification of the best and worst contexts for a given interaction technique.
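
As a rough illustration of the notion of complexity (the CIS model itself is described in [30] [35]; all action names and costs below are hypothetical):

# Hypothetical per-action time costs (in seconds) for each technique;
# CIS derives such costs from a description of the technique itself.
COSTS = {
    "fixed palette":    {"select_tool": 1.2, "apply": 0.4},
    "bimanual palette": {"select_tool": 0.8, "apply": 0.4},
    "toolglass":        {"select_tool": 0.5, "apply": 0.6},
}

def complexity(technique, sequence):
    """Predicted cost of an interaction sequence (a list of action
    names) when performed with the given technique."""
    return sum(COSTS[technique][action] for action in sequence)

# A task mixing tool changes and applications: which technique wins
# depends on the mix of actions, i.e. on the interaction context.
task = ["select_tool", "apply", "apply", "select_tool", "apply"]
for tech in COSTS:
    print(tech, complexity(tech, task))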

