Multimodal Input for Perceptual User Interfaces

Abstract

The use of multiple modes of user input to interact with computers and devices is an active area of human–computer interaction research. With the advent of more powerful perceptual computing technologies, multimodal interfaces that can passively sense what the user is doing are becoming more prominent. In this chapter, we examine how different natural user input modalities – specifically speech, gesture, touch, eye gaze, facial expression, and brain input – can be combined, and the types of interactions they afford. We also examine strategies for combining these input modes, a process known as multimodal integration or fusion. Finally, we examine common usability issues with multimodal interfaces and methods for handling them.

Publication
In Interactive Displays, Wiley.
Corey Pittman
Computer Science, PhD

My research interests include augmented reality, novel user interfaces, and gesture recognition.
