Multimodal Flexibility Index

Multimodal flexibility refers to the ability to allocate sensory modalities to the environment while using an interactive device.

Overview

  • Want to develop a system that is maximally usable when the user is mobile, but cannot conduct full-fledged field experiments?
  • Want to understand how demanding a prototype is in mobile situations, but don't know how to approach the problem?

Benefits of our laboratory-based method:

  1. Gives a precise, measurable meaning to users' multimodal flexibility
  2. Captures a wide range of outcomes in a single study
  3. Is reasonably cost efficient—in our study, running one subject took about an hour and the blockings could be administered with inexpensive materials
  4. Gives practitioners quick feedback without requiring theory-based prediction of the outcomes of complex situations (even we could not predict the outcomes of our own study).

The paper

Oulasvirta, A., & Bergstrom-Lehtovirta, J. (2010). A simple index for multimodal flexibility. In Proceedings of CHI 2010. New York: ACM Press.

Download the PDF here



Background

In the development of “multimodal interfaces,” researchers have traditionally devoted their effort to the question of how to orchestrate sensorimotor capacities optimally for interaction with an interface. Multimodality viewed in this context could be termed “intra-interface multimodality.” In this paper, we turn the question upside down: which modalities remain available for tasks other than the one the user is currently engaged in? This question, of “extra-interface multimodality,” is a timely one, particularly in the area of mobile HCI. For example, if, while you are writing a text message, something happens that distracts you or demands the reallocation of a sensory modality—e.g., someone asks for directions, a cyclist suddenly approaches, or it is so cold that your fingers start freezing—will you still be able to finish the message without a significant cost to performance? We believe that flexibility of allocation is important whenever there are 1) secondary tasks, distractions, or changes in multitasking strategy; 2) environmental factors such as noise, light, smell, or vibration; or 3) physiological changes that lower transduction capacity (for example, due to brightness or low temperature).

Figure: Example of an MFI study using ear protection to block audition and a cardboard shield to block vision

Procedure and data collection

The method is based on measuring performance under blocking conditions (see the figure above). All combinations of blockings are tested; with n modalities this yields 2^n conditions, which stays practical for up to about four modalities in a single experiment. The magnitude of the change in a user's performance caused by a blocking is a quantitative indicator of the task's “dependency” on the blocked modality. Intuitively, the flexibility index denotes the user's ability to maintain high performance despite modality withdrawals.
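
As an illustration, the set of blocking conditions is simply the powerset of the blocked modalities. Below is a minimal Python sketch (the modality names are placeholders, not part of the method):

    from itertools import combinations

    modalities = ["vision", "audition", "tactition"]

    # Every subset of the modalities is one blocking condition; the
    # empty subset is the undistracted baseline. With n modalities this
    # gives 2**n conditions, which stays practical up to about n = 4.
    conditions = [
        frozenset(blocked)
        for r in range(len(modalities) + 1)
        for blocked in combinations(modalities, r)
    ]
    print(len(conditions))  # 8 conditions for 3 modalities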

Overview of the recipe:

  1. Decide on the modalities that will be blocked. Identification of candidates could be based on user observations or analytical work.
  2. Implement blockings. In the study we report on, we sought to employ inexpensive means for blockings, but there are more options suggested by related literature. Our preliminary ideas are listed in Table 1 of the paper.
  3. Develop a dependent variable for performance of the main task that is reliable and sensitive (a text-entry example is sketched after this list).
  4. Ensure that conditions are comparable, in particular that the interface solutions and available modalities are held constant across blocking conditions.
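
To make step 3 concrete: in a text-entry task such as the one in our example study, entry speed in words per minute is a commonly used, reliable, and sensitive dependent variable. A minimal sketch, using the standard five-characters-per-word convention from the text-entry literature (the function name is illustrative):

    def words_per_minute(transcribed: str, seconds: float) -> float:
        # Text-entry convention: one "word" = 5 characters. The first
        # character is excluded because timing starts when it is entered.
        return ((len(transcribed) - 1) / seconds) * 60.0 / 5.0

    print(words_per_minute("the quick brown fox", 30.0))  # 7.2 WPM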

The rest of the steps follow standard experimental procedures, with the following precautions:

  1. Employ a within-subjects experiment design, counterbalancing across subjects the order in which 1) the interfaces, 2) the blocking conditions, and 3) the tasks (if more than one) appear (a Latin-square sketch is given after this list).
  2. Decide on the level of statistical power desired and calculate the required sample size.
  3. Design pre-trial instructions and practice so as to ensure that performance under blocking conditions does not overly reflect the novelty of the situation.
  4. After running a pilot, execute the experiment.
  5. After preprocessing the data to address outliers and missing data, normalize the scores and calculate the MFI and the derived indices (D-values and, if desired, bimodality indices).
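
For the counterbalancing in step 1, a balanced Latin square is a common choice: each condition appears in each serial position equally often, and immediate carry-over effects are balanced. A minimal sketch, assuming an even number of conditions (for an odd count, each subject would also run the reversed order):

    def balanced_latin_square(n: int) -> list[list[int]]:
        # The first row follows the pattern 0, 1, n-1, 2, n-2, ...;
        # each subsequent row shifts it by 1 (mod n). Row i is the
        # condition order for subject i (mod n).
        first = [0] + [(k + 1) // 2 if k % 2 else n - k // 2
                       for k in range(1, n)]
        return [[(c + i) % n for c in first] for i in range(n)]

    # Example: presentation orders for the 8 blocking conditions of a
    # three-modality experiment
    for subject, order in enumerate(balanced_latin_square(8)):
        print(subject, order)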

Calculation of the MFI and D-values

The formula for calculating MFI is simple: it is the average of the changes from baseline caused by the blockings, the baseline being the undistracted condition. Further formulas are given in the paper, e.g., for dependence values (D-values) and bimodality indices; these characterize how performance depends on the allocation of one or more modalities. Detailed information is given in the paper and in the Excel sheet.
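
To make the verbal definition concrete, below is a minimal Python sketch of one plausible reading of it: scores are first normalized to the undistracted baseline, MFI is the mean normalized score over the blocking conditions, and the D-value of a modality is the mean drop from baseline in the conditions where that modality is blocked. The numbers are made up for illustration; the paper and the Excel sheet remain authoritative for the exact formulas.

    # Mean performance per condition, keyed by the set of blocked
    # modalities. Illustrative numbers, not data from the paper.
    scores = {
        frozenset():                           30.0,  # undistracted baseline
        frozenset({"vision"}):                  9.0,
        frozenset({"audition"}):               28.0,
        frozenset({"tactition"}):              24.0,
        frozenset({"vision", "audition"}):      8.0,
        frozenset({"vision", "tactition"}):     6.0,
        frozenset({"audition", "tactition"}):  22.0,
        frozenset({"vision", "audition", "tactition"}): 5.0,
    }

    baseline = scores[frozenset()]
    normalized = {c: s / baseline for c, s in scores.items()}

    # MFI: mean normalized performance over the blocking conditions
    # (equivalently, 1 minus the average drop caused by the blockings).
    blockings = [c for c in normalized if c]
    mfi = sum(normalized[c] for c in blockings) / len(blockings)

    # D-value: mean drop from baseline when a given modality is blocked.
    def d_value(modality):
        hit = [normalized[c] for c in blockings if modality in c]
        return 1.0 - sum(hit) / len(hit)

    print(f"MFI = {mfi:.2f}")
    for m in ("vision", "audition", "tactition"):
        print(f"D({m}) = {d_value(m):.2f}")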

Download an Excel sheet containing:

  • Templates for two simple experiments to calculate MFI and D-values
  • Instructions on how to normalize scores for the indices
  • Data example from the paper

Please note that the Excel sheet does not yet include calculations of the bimodality indices (see the paper). We're working on it!

Example Study: Comparison of three mobile input devices

We compared three common input interfaces for mobile devices: Physical-Qwerty, Touchpad-Qwerty, and an ITU-12 keypad. The task was to enter text messages (based on the Soukoreff-MacKenzie corpus) as quickly and accurately as possible. We studied the roles of audition (ear protection), tactition (a plastic layer preventing users from feeling the edges of buttons), and vision (a large cardboard shield under the chin that selectively occludes the phone).

The results show an interesting interaction effect: The interface that was best in absolute performance (Physical-Qwerty) was the worst in terms of multimodal flexibility. In other words, performance was compromised proportionately more when all modalities could not be allocated to it.

Figure: MFI results

The Dependence-values show why this was so:

Figure: Dependence-values

The D-values show that users' performance with the Touchpad-Qwerty and Physical-Qwerty decreased by about 70% when vision could not be allocated to the interaction. Interestingly, the ITU-12 keypad suffered much less from the blocking of vision, but it was relatively more dependent on tactition (the ability to feel the borders of the buttons and the physical device).

The conclusion is that two interfaces that nominally involve the same sensory modalities may be very different in how well they allow modalities to be employed simultaneously for something else.

See a YouTube video explaining the main results.


Links and contact

Ubiquitous Interaction (UIx) research group at HIIT

Email Antti Oulasvirta: aoulasvirta at acm dot org, personal homepage

Acknowledgements

This work was funded by the Tekes project Theseus and by the Emil Aaltonen Foundation.



Edited by Antti Oulasvirta and Joanna Bergstrom-Lehtovirta


Last updated on 14 Apr 2010 by Antti Oulasvirta - Page created on 23 Dec 2009 by Antti Oulasvirta