
Software Engineering for Usability


4. Traditional software engineering


4.1 Historical context

Software engineering had its start in the context of large-scale military software (i.e. the "contract development context" of the previous section). It was in this context that the "waterfall model" arose, and the influence of this model underlies most work on methodologies. The waterfall model describes software development as a series of stages: feasibility study, requirements specification, preliminary design, detailed design, coding, testing, integration, installation, operation, maintenance. Each stage is presumed to finish before the next stage starts, and the output (documents) from each stage falls down to become the input for the next stage.
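The strict one-way handoff can be made concrete with a small illustrative sketch (the stage handling below is, of course, hypothetical): each stage runs to completion, and the only thing the next stage ever receives is the document the previous stage produced.

    # Illustrative sketch only: each "stage" is reduced to a function that sees
    # nothing but the document handed down from the stage above it.

    STAGES = [
        "feasibility study", "requirements specification", "preliminary design",
        "detailed design", "coding", "testing", "integration", "installation",
        "operation", "maintenance",
    ]

    def run_stage(stage: str, upstream_document: str) -> str:
        """Stand-in for an entire stage; its only input is the previous document."""
        return f"{stage} document (derived only from: {upstream_document})"

    document = "initial contract"
    for stage in STAGES:
        document = run_stage(stage, document)   # no feedback path to earlier stages
    print(document)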

The waterfall model is a reasonable fit for contract development (Grudin 1991a), especially for the large non-interactive systems common in the early years of the computer age. The contracting organization prepares the requirements specification (it knows what it wants) and then contracts out the succeeding stages of development. But since the only communication between the users (in the contracting organization) and the developers is a written specification document, user needs are often not met and there is a "wall" between the users and the developers.

(Strong 1994) points out the inadequacy of the waterfall model:

"Some organizations have responded to the challenge of diversity in skill sets by establishing well-defined sequential models for analysis, design, implementation, delivery, and maintenance. Waterfall models provide a well-known class of examples. The contributions of distinct disciplines tend to occur within single steps of such a model. In practice, this separation of functions can fatally limit the bandwidth of communications between successive stages and different disciplines. There is a tendency to push problems off to later stages. One anonymous description of this tendency follows: 'Inadequacies in the analysis are left for implementation to resolve. Implementation creates a set of usability problems. Human factors workers do what they can with these after the design is relatively fixed. What they can't fix is given to technical writers to repair in documentation. Anything that the documentation doesn't fix is left to trainers. If the training curriculum can't fix a problem, then the hot-line takes it on. If we're very lucky, some record of these problems gets into the hands of the next iteration.' "

It was only recently that the MIL specs were modified to permit anything other than a strict waterfall model. The waterfall model still has enormous influence even though it has been discredited. The military context of much early large-project software development, and the focus of the associated software engineering research, have produced a subconscious feeling of "one true path to salvation" that underlies a lot of methodological thinking.

It is worth noting that the Software Engineering Institute and its Capability Maturity Model (Paulk, Curtis et al. 1995) come out of the military tradition as well, so the influence of this heritage leads them to emphasize processes that are effective for the contract development context, with perhaps less concern for users.

4.2 Disdain for human factors

The problem with usability really predates the era of user interfaces. The true problem is a disdain for human issues - a technology-first attitude that is forgivable when the technology is new and expensive and the comparison is to doing it "by hand". We can see the mind-set - that users and their needs are just another bother to be dealt with - in DeMarco's advice about getting the user involved in requirements specification:

"This will give him a sizable measure of responsibility for the system when it is delivered, and will make him feel every deficiency is at least partly his fault. That frame of mind makes him doubly helpful during analysis and more than normally docile at acceptance time." (DeMarco 1978).

(Bansler and Bødker 1993) note that "The basic ideas and procedures of Structured Analysis are in many respects identical with the ideas expressed by Taylor in his 'Principles of Scientific Management' from 1911". They lament that structured analysis "treats workers as all-purpose human information processors, programmed and manipulated by the systems department", and that it neglects human considerations such as the amount of judgment often required on the job, the importance of error and exception handling, and the significance of work organization and informal communication among the workers.

4.3 The fallacy of a Cartesian dichotomy

There is a common (fallacious) conception of application software existing independently of the user interface -- in fact, the "internals" live and die according to the needs of the user interface. If the user interface provides no way to access a certain piece of internal functionality, that functionality is "dead code" and it might as well not exist. There is no (need for) functionality except what is needed (by the user). The term "user interface" is perhaps one of the underlying obstacles in our quest for usable programs since it gives the impression of a thin layer sitting on top of the other software, which is the "real" system.

"This is the type specimen of the `peanut butter theory of usability', in which usability is seen as a spread that can be smeared over any design, however dreadful, with good results if the spread is thick enough. If the underlying functionality is confusing, then spread a graphical user interface on it. ... If the user interface still has some problems, smear some manuals over it. If the manuals are still deficient, smear on some training which you force users to take." (Lewis and Rieman 1994).

Even HCI researchers fall prey to this fallacy - I quote from a recent article:

"In general, an interface based on the engineering model allows full access to the system's capabilities, whereas an interface based on the task model is easier to learn and use but provides access to only a subset of the system capabilities." (Gentner and Grudin 1996)

In software (in contrast to the industrial engineering examples given in that article), the system ought to be designed to support specific tasks and has no business having additional functionality beyond what is needed for those tasks.
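The "dead code" point above can be made concrete with a small sketch (the class and method names are invented for the example): a perfectly good piece of internal functionality that no user-interface action ever reaches might as well not exist.

    # Illustrative sketch: export_to_xml() is never reachable from any UI action,
    # so however well it is engineered, it is dead code in practice.

    class ReportModel:
        def export_to_csv(self) -> str:
            """Reachable: the 'Save as CSV' action below calls this."""
            return "id,total\n1,42\n"

        def export_to_xml(self) -> str:
            """No menu item, button, or command ever calls this: dead code."""
            return "<report><row id='1' total='42'/></report>"

    # The only entry points the user ever sees:
    UI_ACTIONS = {
        "File > Save as CSV": lambda model: model.export_to_csv(),
    }

    def handle(action: str, model: ReportModel) -> str:
        return UI_ACTIONS[action](model)

    print(handle("File > Save as CSV", ReportModel()))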

4.4 Risk management and the spiral model

(Boehm 1988) notes that the waterfall development model "does not work well for many classes of software, particularly interactive end user applications". He proposes a "spiral model" in which the risks of the various software components are evaluated at each stage and decisions are made to change the development process so as to minimize the most important risks. For example, the high-risk areas of the project are investigated and specified in more detail than the low-risk areas. Typically, the spiral model features higher-than-usual levels of user involvement, prototyping, and iterative design. Depending on the evaluation of risks that arise during the development process, a project using the spiral model may end up moving toward one of the other models. For example, Boehm comments that if a project has a low risk of user-interface problems but a high risk of schedule unpredictability, it would move toward the traditional waterfall model. Two recent books by Capers Jones provide empirical data on software risks (Jones 1994; Jones 1996).
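As an illustration of the risk-driven decision at the heart of each spiral cycle, the following sketch assumes a simple probability-times-impact measure of risk exposure; the risk names and mitigations are invented for the example, not taken from Boehm.

    # Illustrative sketch: rank the project's risks by exposure (probability x impact)
    # and let the largest one decide what the next cycle of development focuses on.

    RISKS = [
        # (risk, probability of loss, relative impact, candidate mitigation)
        ("user interface misunderstood", 0.6, 9, "build a throwaway UI prototype"),
        ("schedule unpredictable",       0.4, 7, "freeze requirements, plan waterfall-style"),
        ("database too slow",            0.2, 5, "run a performance benchmark"),
    ]

    def next_cycle_focus(risks):
        """Return the mitigation for the risk with the highest exposure."""
        risk, p, impact, mitigation = max(risks, key=lambda r: r[1] * r[2])
        return f"highest exposure ({p * impact:.1f}): {risk} -> {mitigation}"

    print(next_cycle_focus(RISKS))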

