eCite Digital Repository
Multi-modal human–environment interaction
Citation
Wasinger, R and Wahlster, W, Multi-modal human-environment interaction, True Visions: The Emergence of Ambient Intelligence, Springer, E H L Aarts and J L Encarnacao (eds), Berlin, Germany, pp. 291-306. ISBN 3-540-28972-0 (2006) [Research Book Chapter]
Copyright Statement
Copyright 2006 Springer Berlin Heidelberg
DOI: 10.1007/978-3-540-28974-6_15
Abstract
AmI environments require robust and intuitive interfaces for accessing their embodied functionality. This chapter describes a new paradigm for tangible multi-modal interfaces, in which humans can manipulate, and converse with, physical objects in their surrounding environment via coordinated speech, handwriting, and gesture. We describe the symmetric nature of human–environment communication, and extend the scenario by providing our objects with human-like characteristics. This is followed by the results of a usability field study on user acceptance of anthropomorphized objects, conducted within a shopping context.
The talking toothbrush holder is an example of a consumer product with an embedded voice chip. When activated by a motion sensor in the bathroom, it says "Hey, don't forget to brush your teeth!". Talking calculators, watches, alarm clocks, thermometers, bathroom scales, greeting cards, and the pen that says "You are fired" when its button is pressed are all products that are often mass-marketed as gimmicks and gadgets (Talkingpresents, online; Jeremijenko 2001). However, such voice labels can also offer useful help to people who are visually impaired, since they can be used to identify different objects of similar shape or to supply critical information that helps orient users to their surroundings (Talkingproducts, online). All these voice-enabled objects of daily life are based on very simple sensor–actuator loops, in which a recognized event triggers speech replay or simple speech synthesis, as the sketch below illustrates.
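To make the contrast with the chapter's paradigm concrete, here is a minimal Python sketch of such a sensor–actuator loop. It is purely illustrative and not taken from the chapter; the sensor and voice-chip functions are hypothetical stand-ins that simulate the hardware.

```python
# A minimal sketch (assumed names, not from the chapter) of the simple
# sensor-actuator loop behind first-generation talking products:
# a recognized event directly triggers replay of a fixed voice label.

import random
import time

VOICE_LABEL = "Hey, don't forget to brush your teeth!"

def motion_detected() -> bool:
    # Stand-in for a real PIR motion sensor; simulated here with a coin flip.
    return random.random() < 0.2

def play_voice_label(text: str) -> None:
    # Stand-in for the embedded voice chip's canned speech replay.
    print(f"[voice chip] {text}")

def sensor_actuator_loop(cycles: int = 20) -> None:
    # The device's entire behavior: event in, fixed speech out.
    # There is no dialog state, context, or understanding of user input.
    for _ in range(cycles):
        if motion_detected():
            play_voice_label(VOICE_LABEL)
        time.sleep(0.05)

if __name__ == "__main__":
    sensor_actuator_loop()
```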
This chapter presents a new interaction paradigm for Ambient Intelligence, in which humans can conduct multi-modal dialogs with objects in a networked shopping environment. In contrast to the first generation of voice-enabled artifacts described above, the communicating objects in our framework provide a combined conversational and tangible user interface that exploits situational context, such as whether a product is in or out of a shelf, to compute the meaning of the user's input, as sketched below.
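The following is a minimal sketch of this idea, assuming hypothetical names, a two-state shelf context, and a toy disambiguation rule; none of it appears in the chapter. The point it illustrates is that the same vague utterance is resolved differently depending on whether the product is in or out of its shelf.

```python
# An illustrative sketch of context-dependent interpretation; the names,
# the Product class, and the rules are hypothetical, not the authors'
# implementation. The same utterance resolves to a different dialog move
# depending on the situational context (product in or out of its shelf).

from dataclasses import dataclass

@dataclass
class Product:
    name: str
    in_shelf: bool  # situational context; a real shelf might sense this via RFID

def interpret(utterance: str, product: Product) -> str:
    # Taking a product out of the shelf signals focused interest in it,
    # so the same request yields a different response strategy.
    if "tell me more" in utterance.lower():
        if product.in_shelf:
            return f"Give a brief overview of the {product.name} on the shelf."
        return f"Start a detailed dialog about the {product.name} in the user's hand."
    return "Ask a clarification question."

camera = Product(name="digital camera", in_shelf=False)
print(interpret("Tell me more", camera))  # detailed dialog: the product is in hand
```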
Item Details
Item Type: Research Book Chapter
Keywords: Multimodal Interaction
Research Division: Information and Computing Sciences
Research Group: Library and information studies
Research Field: Human information interaction and retrieval
Objective Division: Information and Communication Services
Objective Group: Information systems, technologies and services
Objective Field: Application software packages
UTAS Author: Wasinger, R (Dr Rainer Wasinger)
ID Code: 90122
Year Published: 2006
Deposited By: Information and Communication Technology
Deposited On: 2014-03-27
Last Modified: 2016-11-04
Downloads: 0