MSc-IT Study Material
June 2010 Edition

Computer Science Department, University of Cape Town

Other Senses, Other Devices

So far, we have looked at how sound and vision can form parts of the interface between user and computer. We now ask whether some of our other senses and physical abilities can also be used to facilitate human-machine communication. The use of other senses is usually proposed for one of two reasons. The first is to replace a sensory channel that is more commonly employed in user interfaces, typically to make a system accessible to users with sensory impairments. For instance, if touch can do some or all of the work of vision, then a system becomes more accessible to blind users. The second reason is to augment the experience created by the senses already used. For example, if a cookery web site allowed users to smell the results of cooking, the effect might be a richer and more enjoyable experience for the user.

Touch

In everyday life, we use touch, and related senses (that give us information about temperature, texture, pressure, body position and orientation, and so on) as yet another way of gaining information and feedback about objects and events around us – especially those with which we are interacting most directly.

Interface designers currently make little use of users' sense of touch. However, a number of experimental prototypes and products meeting specialised requirements serve as excellent examples of what is possible and what may become more significant for interface design in the future.

Several devices have been designed that allow the computer to produce output in a form that can be sensed by touch and related senses. Such devices seem to be most successful in providing output that is tactile (i.e., amenable to the sense of touch, especially through distinctive texture or shape) and in providing force feedback (where the user experiences the computer's output as a force on their body). In some cases, these devices are intimately connected to the input devices through which the user issues commands to the computer, and are therefore capable of providing instant and very direct feedback to the user.

Tactile Output

Output devices exist that convert the output of the computer into tactile form. For example, the text normally displayed on a computer screen can be rendered in a tactile language, such as Braille, making it available to users with vision impairments.

One such product, "Braille 'n' Speak", allows blind or partially sighted users to take notes using a specialised personal organiser. Notes can be "read back" either via a synthesised voice or via a "refreshable Braille cell" that turns text into tactile form.
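To make the idea of rendering text in tactile form concrete, the sketch below converts plain text into Unicode Braille cells. It is an illustration only: it handles just the uncontracted letter forms a to j, and it does not reflect how any particular Braille device or driver actually works.

    # Minimal sketch: render plain text as Unicode Braille cells
    # (uncontracted, letters a-j only). Dot numbering follows standard
    # six-dot Braille; the Unicode Braille Patterns block starts at U+2800
    # and uses one bit per dot.

    DOTS = {            # which dots are raised for each letter (Grade 1 Braille)
        'a': [1], 'b': [1, 2], 'c': [1, 4], 'd': [1, 4, 5], 'e': [1, 5],
        'f': [1, 2, 4], 'g': [1, 2, 4, 5], 'h': [1, 2, 5], 'i': [2, 4], 'j': [2, 4, 5],
    }

    def to_braille(text):
        cells = []
        for ch in text.lower():
            dots = DOTS.get(ch)
            if dots is None:
                cells.append(' ')                     # unmapped characters left blank
                continue
            mask = sum(1 << (d - 1) for d in dots)    # dot n sets bit n-1
            cells.append(chr(0x2800 + mask))
        return ''.join(cells)

    print(to_braille("bad"))   # prints the three-cell Braille string for b, a, d

A refreshable Braille display would drive physical pins rather than print characters, but the mapping from text to cells follows the same pattern.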

Tactile output devices of this type essentially use touch as an alternative output medium. Other devices rely on an observation about the way most human interaction with the physical world takes place. The separation between input and output that is so clear-cut in many conventional computer interfaces is not so sharp in other contexts. As we take an action in the real world (such as taking hold of a cup and lifting it up) we get immediate feedback (about the texture, temperature, and weight of the cup), through the sensation of pressure in our muscles, joints, and fingers. Thus the actions we take and the feedback we get as a result are very closely connected.

A simple but effective way of combining input and output for user interfaces has been to allow a pointing device such as a mouse or joystick to provide physical feedback by resisting the user's actions in a context-sensitive way, and to provide tactile feedback as the mouse pointer moves over screen objects.

For example, the Moose is a mouse-like device that gives various kinds of physical feedback. If the user is dragging an object, the device will feel heavier than normal, and the user will be given tactile cues as the pointer moves over different kinds of screen object (for instance, moving across the edge of a window could produce a "click" sensation).
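A rough sketch of how such context-sensitive feedback might be driven is shown below. The HapticMouse class and its pulse and set_drag_resistance methods are purely hypothetical stand-ins for a real device driver's API; the point is simply that pointer events are mapped to tactile effects according to what is under the pointer and what the user is doing.

    # Illustrative sketch only: HapticMouse and its methods are hypothetical,
    # not the API of any real force-feedback product.

    from dataclasses import dataclass

    @dataclass
    class Rect:
        x: int
        y: int
        w: int
        h: int
        def contains(self, pos):
            px, py = pos
            return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

    class HapticMouse:
        def pulse(self, strength):
            """Brief tactile 'click' (hypothetical device call)."""
            print(f"pulse {strength}")
        def set_drag_resistance(self, level):
            """Make the device harder to move (hypothetical device call)."""
            print(f"resistance {level}")

    def on_pointer_moved(mouse, old_pos, new_pos, windows, dragging):
        """Context-sensitive feedback: heavier feel while dragging, and a
        'click' pulse whenever the pointer crosses a window edge."""
        mouse.set_drag_resistance(0.6 if dragging else 0.0)
        for win in windows:
            if win.contains(old_pos) != win.contains(new_pos):
                mouse.pulse(0.3)

    # Pointer moves from outside to inside a window while dragging an icon:
    on_pointer_moved(HapticMouse(), (5, 5), (15, 15),
                     [Rect(10, 10, 200, 100)], dragging=True)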

Such force-feedback mice are a relatively simple way of combining user input and feedback, but more sophisticated products exist. For example, the Phantom allows the user to move a pen-like stylus through space in order to interact with a computer using gestures. Feedback is provided by mechanisms attached to the stylus that make it easier or harder to move, or simulate the effects of different objects and textures that it comes into contact with.

A similar input/output technique that has been developed for use with virtual reality systems involves the use of a "data glove", which senses the position, orientation, and movement of the user's hand, allowing gestures to be used as inputs, and objects in a virtual reality system to be grasped. Several products exist that add force, pressure or vibration feedback to a data glove, allowing the user to "feel" virtual objects they touch or grasp.

Other Senses

So far, we have discussed how the senses of vision, hearing and touch might be employed by user interface designers to provide a richer and more compelling experience for the user. But that still leaves several senses that have not been utilised in human-computer interaction. The senses of smell and taste are unlikely to be very effective as ways to convey large amounts of structured information, but both these senses are highly evocative and profoundly shape our experience of real-life situations.

Surely, though, smell is not a feasible medium for computers to produce their output? At least one company is developing a product that allows computers to generate olfactory output: you can read about the "iSmell" device in an article in Wired magazine.

Activity 3 – Smell, taste and touch in the interface

It is clear how designers can make use of users' auditory and visual senses in their designs. Assuming the necessary hardware were available, how could the senses of touch, smell and taste be used, alongside vision and audition, for e-commerce applications over the Internet?

A Discussion on this activity can be found at the end of the chapter.

Multiple modalities

We have now discussed a number of modalities – or human sensory channels – that interface designers routinely require users to make use of (and a couple of modalities that aren't yet used in human-computer interaction, but may one day be). Each of these has properties that make its use more or less suitable for particular kinds of function in particular kinds of situation. A design challenge we have not yet mentioned, however, arises when an interface employs more than one sense, or more than one means of providing input to the computer, and requires the user to use them in co-ordination.

Several systems have been developed that allow the user to mix commands expressed in various ways. One example is MATIS, a system for finding information about air travel. The user may specify their constraints (such as origin, destination, and date of travel) using a mixture of typed information, pointing with the mouse, and speech. The design challenge for such a system is to be able to correctly interpret a command expressed in a simultaneous mixture of these forms. So the user may specify travel by speaking "from here to there", while pointing at origin and destination cities with the mouse, and the system must match the occurrence of words like "here" with the corresponding mouse actions.
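The heart of the problem is aligning deictic words such as "here" and "there" with the pointing actions made at roughly the same time. The sketch below illustrates one simple, assumed approach, pairing each deictic word with the nearest-in-time mouse click; a real system such as MATIS performs more sophisticated fusion, so the event formats and threshold used here are illustrative assumptions only.

    # Illustrative fusion sketch: pair deictic speech tokens with mouse clicks
    # by temporal proximity. Not the actual MATIS algorithm.

    from dataclasses import dataclass

    @dataclass
    class SpeechToken:
        word: str
        time: float        # seconds since start of utterance

    @dataclass
    class MouseClick:
        city: str
        time: float

    DEICTIC_WORDS = {"here", "there", "this", "that"}
    MAX_GAP = 1.0          # assumed limit on how far apart speech and pointing may be

    def fuse(speech, clicks):
        """Pair each deictic word with the nearest-in-time unused mouse click."""
        pairs = []
        unused = list(clicks)
        for token in speech:
            if token.word not in DEICTIC_WORDS:
                continue
            best = min(unused, key=lambda c: abs(c.time - token.time), default=None)
            if best is not None and abs(best.time - token.time) <= MAX_GAP:
                pairs.append((token.word, best.city))
                unused.remove(best)
        return pairs

    # "from here to there" spoken while clicking London and then Paris:
    speech = [SpeechToken("from", 0.0), SpeechToken("here", 0.4),
              SpeechToken("to", 0.9), SpeechToken("there", 1.3)]
    clicks = [MouseClick("London", 0.5), MouseClick("Paris", 1.2)]
    print(fuse(speech, clicks))   # [('here', 'London'), ('there', 'Paris')]

Even this toy version shows why timing matters: if the user points well before or after speaking, or points fewer times than they use deictic words, the system must fall back on asking for clarification or on other contextual cues.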