MSc-IT Study Material
June 2010 Edition

Computer Science Department, University of Cape Town

Evaluation and User Feedback

Landauer states that user evaluation is the 'gold standard' for usability. In the previous section we mostly discussed ways that designs can be analysed for usability, largely in the absence of users themselves. Landauer argues that methods for predicting usability are all well and good, but the only reliable way of evaluating usability is to give users finished products and analyse how they interact with them.

Evaluating user behaviour is one of the most mature and thoroughly researched areas of HCI.

The classic example of designing with user feedback is IBM's Olympic Messaging System, built for the 1984 Olympic Games (see Landauer for a detailed description).

IBM had to develop a messaging system quickly for use by the competitors in the 1984 Olympic Games. Because of the huge diversity among users (different languages spoken, different levels of IT competence, different expectations of the system, different cultural backgrounds, etc.), there was no way the designers could accurately predict the usability of the system before the athletes arrived at the Olympic village and started using it. Furthermore, the Games lasted only a few weeks, so there would be no time to correct the system while they were under way; it had to be right the first time.

The designers therefore conducted initial user studies with passers-by at the IBM research centre, followed by more extensive trials at a pre-Olympic event with competitors from sixty-five countries. The system was then run on a large scale with American users before the opening of the Games. Each of these tests identified errors in the system, which the designers did their best to fix. The final system used at the Games was robust and saw extensive use without major problems.