Software Engineering for Usability
5. HCI methods
- 5.1 Maturity of HCI
- 5.2 Principles for a usability design process
- 5.3 Early Focus on Users
- 5.4 Integrated Design
- 5.5 Early - And Continual - User Testing
- 5.6 Iterative Design
Human-Computer Interaction is a mature field of study. There is a large body of research literature and a general agreement on principles and effective procedures. The field is easily accessible via a large number of comprehensive textbooks and survey articles: (Gould 1988; Bass and Coutaz 1991; Mayhew 1992; Shneiderman 1992; Dix, Finlay et al. 1993; Hix and Hartson 1993; Wiklund 1994; Baecker, Grudin et al. 1995; Butler 1996).
John Gould and Clayton Lewis proposed four principles that should guide any development process where usability is important (Gould and Lewis 1985; Gould, Boies et al. 1991):
"Early Focus on Users: Designers should have direct contact with intended or actual users - via interviews, surveys, participatory design. The aim is to understand users' cognitive, behavioral, attitudinal, and anthropometric characteristics - and the characteristics of the jobs they will be doing.
Integrated Design: All aspects of usability (e.g. user interface, help system, training plan, documentation) should evolve in parallel, rather than be defined sequentially, and should be under one management.
Early - And Continual - User Testing: The only presently feasible approach to successful design is an empirical one, requiring observation and measurement of user behavior, careful evaluation of feedback, insightful solutions to existing problems, and strong motivation to make design changes.
Iterative Design: A system under development must be modified based upon the results of behavioral tests of functions, user interface, help system, documentation, training approach. This process of implementation, testing, feedback, evaluation, and change must be repeated to iteratively improve the system."
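The fourth principle describes a repeating cycle: implement, test with users, evaluate the feedback, change the design, and go around again until the usability goals are met. As a purely schematic illustration, the cycle can be written as a loop; everything here (the problem list, the simulated "fixing" of half the problems per round) is invented so that the sketch is runnable, and stands in for what are of course human activities.

```python
# Schematic sketch of the implement-test-evaluate-change cycle.
# The problem names and the simulated fixing are invented examples;
# in reality each round is a behavioral test with real users.

def run_iterations(initial_problems, max_rounds=10):
    """Repeat the cycle until behavioral tests surface no more problems.

    Returns the number of design rounds that were needed."""
    problems = initial_problems
    for round_no in range(1, max_rounds + 1):
        if not problems:
            return round_no - 1          # usability goals met: stop iterating
        # Simulate one round: the team fixes (at least) the worst half.
        fixed = problems[: max(1, len(problems) // 2)]
        problems = [p for p in problems if p not in fixed]
    return max_rounds

rounds_needed = run_iterations(["menu wording", "slow search", "no undo"])
print(rounds_needed)  # number of rounds until no problems remain
```

The point of the sketch is only that iteration count is an output of user testing, not something that can be fixed in a project plan up front.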
Gould and Lewis comment:
"'Getting it right the first time' plays a very different role in software design which does not involve user interfaces than it does in user interface design. This may explain, in part, the reluctance of designers to relinquish it as a fundamental aim. In the design of a compiler module, for example, the exact behavior of the code is or should be open to rational analysis. ... Good design in this context is highly analytic, and emphasizes careful planning. Designers know this. Adding a human interface to the system disrupts this picture fundamentally. A coprocessor of largely unpredictable behavior (i.e. a human user) has been added, and the systems algorithms have to mesh with it. There is no data sheet on this coprocessor, so one is forced to abandon the idea that one can design one's own algorithms from first principles. An empirical approach is essential. The involvement of human users escalates the need for an empirical approach well above the usual requirements for testing to make sure a system works." (Gould and Lewis 1985).
"What is the optimal degree of user participation in development? If you are developing a compiler, users' involvement will be minimal. If you are copying features from an existing product in a mature application area, limited contact with potential users can be adequate. If you are developing an interactive system in a new domain, full collaboration with users can be essential." (Grudin 1991b).
Lewis and Rieman (1994) offer this advice on getting access to users:
"... go to a professional meeting and offer a unique T-shirt to people who'll talk with you (yes, there are people whose time is too expensive for you to buy for money who will work with you for a shirt or a coffee mug)."
Participatory design and JAD (Joint Application Design) are approaches to design based on the idea that the users are the domain experts; they know what is involved in their work -- and so they should be participants in designing the computer programs that they will use. One difference between participatory design and JAD is the amount of structure imposed on the design process by the HCI people. Some recent references on participatory design are: (Greenbaum and Kyng 1991; Schuler and Namioka 1993) and on JAD: (Wood and Silver 1995). These approaches are compared in (Carmel, Whitaker et al. 1993).
Task analysis is the study of the work to be done by the users. There are many forms of task analysis which differ according to their scope (some study the whole organization and social aspects, others are very fine-grained, focusing on individual physical actions), their goals, and the format (e.g. notation) in which the results are recorded. Some current references on task analysis are: (Diaper 1989; Stammers, Carey et al. 1990; Kirwan and Ainsworth 1992; Preece 1994).
Bruce Tognazzini tells companies:
"Stop spending money assembling fat manuals. Spend it on sleek software instead. ... Our manuals tend to be thick, clumsy testaments to [the] lack of early planning." ... "The primary job of a technical writer should be to work early with the designers and engineers to reduce drastically the amount of material that has to be explained to users. Writers should be judged heavily on their skill at working with the team to reduce the need for writing ..." (Tognazzini 1995)
I note that for this idea to work, the money for writers and programmers must come from the same budget. I recall my own experience in programming a BASIC compiler where I also had to write the manual - it was difficult to explain which expressions got optimized by the compiler, so I spent the time instead on improving the compiler so that no explanation was necessary.
Technical writers are often the first to notice usability problems. One writer said:
"... I keep trying to tell them there's no way in the world I can describe this so it seems sensible, but they won't listen. They say they know there are rough spots but I should just explain them in the manuals." (Lewis and Rieman 1994)
At one company whose development process was studied, Poltrock and Grudin noted:
"interface designers reported that the technical writers helped find problems in the specifications. `Once they started writing the books, it became obvious where the holes were even before the developers had gotten to a lot of things' ". (Poltrock and Grudin 1994)
(Gould, Boies et al. 1991) emphasize the importance of defining measurable usability goals for products and tracking the improvement of usability with time in a similar manner as hardware performance is tracked:
"The lack of general metrics, and their use, suggests that the usability and productivity of people and organizations who use computers is not a serious goal for much application development."
They suggest that we need such metrics to "get the focus on 'how are we doing' with respect to 'users'".
Some aspects commonly tested include learnability, speed of task execution, errors made, and likeability (attitude towards the software).
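The goal-tracking approach Gould, Boies et al. advocate can be sketched concretely: define numeric targets for the tested aspects, compute the metrics from each round of user testing, and check whether the goals have been met yet. Everything below is a hypothetical illustration, not from the cited sources; the goal thresholds, the session log format, and the 1-7 satisfaction scale are invented for the example.

```python
# Sketch of tracking measurable usability goals across test rounds.
# The goal values, log format, and session data are hypothetical examples.

def summarize(sessions):
    """Compute simple usability metrics from a list of test sessions.

    Each session is a dict with: task_seconds, errors, satisfied (1-7)."""
    n = len(sessions)
    return {
        "mean_task_seconds": sum(s["task_seconds"] for s in sessions) / n,
        "mean_errors": sum(s["errors"] for s in sessions) / n,
        "mean_satisfaction": sum(s["satisfied"] for s in sessions) / n,
    }

# Invented goals: at most 2 minutes per task, at most 1 error,
# satisfaction of at least 5 on a 7-point scale.
GOALS = {"mean_task_seconds": 120, "mean_errors": 1.0, "mean_satisfaction": 5.0}

def goals_met(metrics, goals=GOALS):
    """Time and errors must not exceed the goal; satisfaction must reach it."""
    return (metrics["mean_task_seconds"] <= goals["mean_task_seconds"]
            and metrics["mean_errors"] <= goals["mean_errors"]
            and metrics["mean_satisfaction"] >= goals["mean_satisfaction"])

round2 = [
    {"task_seconds": 95, "errors": 0, "satisfied": 6},
    {"task_seconds": 130, "errors": 2, "satisfied": 5},
    {"task_seconds": 110, "errors": 1, "satisfied": 6},
]
print(goals_met(summarize(round2)))  # → True
```

Comparing such summaries across releases gives exactly the "how are we doing" trend line that the authors find missing from most application development.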
(Grudin, Ehrlich et al. 1987) point out that some human factors involvement may be too late to be useful for the current release but it may be very useful to higher level management who are planning for support of the product and for future releases:
"User studies of the final product in real settings can provide exactly the information these managers need (the real causes of the customer complaints, and recommendations for dealing with them), when they need it."
(Nielsen 1994) is the definitive reference on techniques that can be used by HCI specialists to evaluate an interface design.
In "heuristic evaluations", a few usability specialists evaluate an interface design by judging its compliance with a small set of very general design guidelines.
A more thorough (hence more expensive) way of evaluating the usability of an interface design is called a "cognitive walkthrough". It corresponds to the code walkthroughs that are used for checking the correctness of computer programs. In a cognitive walkthrough, the evaluators imagine executing a set of representative tasks using the proposed interface, keystroke by keystroke, mouse click by mouse click.
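One way to make a walkthrough systematic is to ask, at every step of the scripted task, a fixed set of questions about whether the user will find and understand the correct action, and to record a "failure story" wherever the answer is no. The sketch below illustrates that bookkeeping only: the task steps and the recorded answers are invented, and the question wording loosely follows the walkthrough literature rather than quoting any particular source.

```python
# Sketch: driving a cognitive walkthrough from a scripted task.
# Task steps and answers are invented; the questions loosely follow
# the cognitive-walkthrough literature.

QUESTIONS = [
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user connect the action with the effect they want?",
    "If the correct action is taken, will the user see progress?",
]

task = [
    "Pull down the File menu",
    "Choose 'Export...'",
    "Pick the PDF format from the drop-down list",
]

def walkthrough(steps, answer):
    """Ask every question at every step; collect the failure stories.

    `answer(step, question)` returns True (success story) or False
    (failure story) - in a real session the evaluators judge this."""
    failures = []
    for step in steps:
        for q in QUESTIONS:
            if not answer(step, q):
                failures.append((step, q))
    return failures

# Invented judgment: evaluators doubt users will notice 'Export...'.
problems = walkthrough(
    task,
    lambda step, q: not (step == "Choose 'Export...'" and "notice" in q),
)
for step, q in problems:
    print(f"PROBLEM at '{step}': {q}")
```

The step-by-step discipline is what distinguishes a walkthrough from a general design review: every keystroke and mouse click gets the same scrutiny.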
(Grudin 1991b) points out that it is difficult to judge the usability from a design specification:
"A user's dialogue with the computer is narrowly focused and extends over time, in contrast to the static, spatially distributed written design." ... "Paper presentation may disguise interface problems that users will stumble over repeatedly. Consider an entire set of pull-down menus displayed on one page. Readers searching for a particular item can find it in seconds, even if it is not under the most obvious heading. But actual users never see the entire set of menus simultaneously. If an item is under the wrong heading, they may wander around, inspecting possible synonyms or dropping down to search lower menu levels fruitlessly. They may do this several times before finally learning the location of the item."
(Rudd and Isensee 1994) give tips on effective prototyping, a few of which are listed here:
* Obtain upper-level management support: essential to communicate the power of prototyping.
* Start early: "... an initial draft of the prototype should be available before the product objectives are published. ... The initial prototype serves as a straw man to get the customers thinking and talking about their requirements."
* Make the prototype the functional specification: a "living spec" which is easy to understand and review.
* The customer is king: "... customer involvement is essential." When customers try out prototype software, they often make "suggestions that we never would have thought of. Customers are a powerful driving force in setting requirements."
Karat and Dayton comment that most development companies don't find enough time in their schedules for the iterations of prototyping and usability testing, and that the big problem is figuring out how to solve usability problems once they are found: "... evaluation is not how best to influence product usability - design is where the action is!" (Karat and Dayton 1995) Without some intuition as guidance, a designer may be faced with a long sequence of iterations before the "right" design solution is found. Marvin Minsky calls this "The Puzzle Principle: The idea that any problem can be solved by trial and error - provided one already has some way to recognize a solution when one is found." (Minsky 1994)
(Brooks 1995) discussed the advantages and disadvantages of using formal languages for software specifications and concluded that it is best to have both formal and informal (natural language) specifications but it must be clear which one is the primary specification and which one is derivative. A similar recommendation could be made for the specifications of the user interface: the best situation is probably to have a "living spec" prototype complemented by a written specification.
It is important to document the reasons for a particular interface design. This "design rationale" is needed to ensure that subsequent designers understand the design well enough to be able to extend it. A design rationale document may also be useful in defending a design to higher management who are likely to have their own ideas as to the ideal interface. By capturing the reasoning behind its interface designs, a company builds up a storehouse of design experiences that will aid future efforts. Some references on design rationale are: (Carroll and Moran 1991; Preece 1994; Moran and Carroll 1995).