Wasinger, R. and Wahlster, W., "Multi-modal human-environment interaction", in True Visions: The Emergence of Ambient Intelligence, E. H. L. Aarts and J. L. Encarnacao (eds), Springer, Berlin, Germany, pp. 291–306, 2006. ISBN 3-540-28972-0. [Research Book Chapter]
Copyright 2006 Springer Berlin Heidelberg
AmI environments require robust and intuitive interfaces for accessing their embodied functionality. This chapter describes a new paradigm for tangible multi-modal interfaces, in which humans can manipulate and converse with physical objects in their surrounding environment via coordinated speech, handwriting, and gesture. We describe the symmetric nature of human–environment communication, and extend the scenario by providing our objects with human-like characteristics. This is followed by the results of a usability field study on user acceptance of anthropomorphized objects, conducted within a shopping context.
The talking toothbrush holder is an example of a consumer product with an embedded voice chip. If activated by a motion sensor in the bathroom, it says "Hey, don't forget to brush your teeth!". Talking calculators, watches, alarm clocks, thermometers, bathroom scales, greeting cards, and the pen that says "You are fired" when one presses its button are all products that are often mass-marketed as gimmicks and gadgets (Talkingpresents, online; Jeremijenko 2001). However, such voice labels can also offer useful help to people who are visually impaired, since they can be used to identify different objects of similar shape or to supply critical information that helps orient users within their surroundings (Talkingproducts, online). All these voice-enabled objects of daily life are based on very simple sensor–actuator loops, in which a recognized event triggers speech replay or simple speech synthesis.
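The sensor–actuator loop underlying these first-generation voice labels can be sketched in a few lines. This is a minimal illustration, not code from the chapter; the function names and event strings are hypothetical:

```python
# Minimal sketch of a sensor-actuator loop: a single recognized event
# (e.g. motion detected) directly triggers a fixed speech replay.
# There is no dialog state and no context -- one trigger, one response.

def make_voice_label(trigger_event, phrase):
    """Return a handler that replays `phrase` whenever `trigger_event` fires."""
    def handle(event):
        if event == trigger_event:
            return phrase  # stands in for speech replay / simple synthesis
        return None        # every other sensed event is ignored
    return handle

# Hypothetical example: the talking toothbrush holder.
toothbrush_holder = make_voice_label(
    "motion_detected", "Hey, don't forget to brush your teeth!"
)
```

The point of the sketch is the rigidity of the loop: the mapping from sensed event to spoken output is fixed at construction time, which is exactly what the multi-modal dialog paradigm described next moves beyond.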
This chapter presents a new interaction paradigm for Ambient Intelligence, in which humans can conduct multi-modal dialogs with objects in a networked shopping environment. In contrast to the first generation of voice-enabled artifacts described above, the communicating objects in our framework provide a combined conversational and tangible user interface that exploits situational context, such as whether a product is in or out of a shelf, to compute the meaning of user input.
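To make the contrast with the fixed sensor–actuator loop concrete, the following sketch shows how the same utterance can be resolved differently depending on situational context. This is an illustrative assumption about how such context could be used, not the chapter's implementation; the utterance, product names, and response strings are invented:

```python
# Illustrative sketch (not the chapter's implementation): the same user
# utterance is interpreted differently depending on situational context,
# here whether the referenced product is in or out of its shelf.

def interpret(utterance, product, in_shelf):
    """Resolve an utterance about `product` using its physical state."""
    if utterance == "tell me more":
        if in_shelf:
            # Product untouched in the shelf: assume casual browsing,
            # so respond with a short overview.
            return f"Overview of {product}"
        # Product has been picked up: assume focused interest,
        # so respond with detailed information.
        return f"Detailed specifications of {product}"
    return "Sorry, I did not understand that."
```

Unlike the voice-label loop, the input-to-output mapping here is not fixed: the tangible act of taking a product out of the shelf changes what the conversational channel means.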
|Item Type:|Research Book Chapter|
|Research Division:|Information and Computing Sciences|
|Research Group:|Information Systems|
|Research Field:|Computer-Human Interaction|
|Objective Division:|Information and Communication Services|
|Objective Group:|Computer Software and Services|
|Objective Field:|Application Software Packages (excl. Computer Games)|
|Author:|Wasinger, R (Dr Rainer Wasinger)|
|Deposited By:|Information and Communication Technology|