CHAPTER 7: Putting the User in Charge – User Interface Design for Programmers

The history of user interfaces—from the early 1970s when interactive systems first appeared, to today's modern GUIs—has followed a pendulum. Each generation of user interface designers collectively changes its mind about whether users need to be guided through a program or whether they should be left alone to control the program as they see fit. Following trends in user control is a bit like following the hemlines at the Milan fashion shows. Plus ça change, plus c'est la même chose. Here's a bird's-eye view of what happened.

The first computer systems weren't very interactive at all. You created a program by punching holes on eighty-column cards using a giant hole-punching machine that looked like something from the ship in Lost in Space and made an incredibly satisfying clacking sound. Of course, there was no way to fill in a hole you made by mistake—so if you made even one mistake you had to repunch the whole card. Then you carefully took your deck of cards over to a large machine called a hopper and piled the cards in. (It was called a hopper because it would hop all over the floor doing a happy overstuffed-washing-machine dance unless you bolted it down.)

The hopper ate most of your cards, choking on a few, but eventually, begrudgingly, it accepted your program. On a good day, it wouldn't even chew up any of your cards, forcing you to painstakingly repunch them.

Once the hopper successfully choked down your card deck, you walked across campus to the student union and got some lunch. If you lingered a bit in the comic book store after lunch, by the time you got back to the Computer Center your program would have worked its way halfway up the queue. Every ten minutes or so, the computer operator printed out the status of the queue and pinned it up to the bulletin board near the Card Mangler. Eventually your program would run and a printout would appear in your cubbyhole telling you that there was a syntax error on line 32, that it took four seconds of CPU time to run, and that you now had fourteen seconds of CPU time left out of your monthly budget.

Interactive Computing

All of this changed dramatically when the first interactive computer systems started showing up. They introduced the famous command-line interface (CLI). You literally sat down, typed a one-line request to the computer, and when you hit the enter key, you got your response right then and there. No more time for lunch. No comic books. It was a sad day. When you sat down with a command-line interface, you stared at a prompt. "READY," said some of the systems, "C:\>," said others (that's a picture of an ice-cream cone that fell over). In a fit of stinginess, some systems managed to squeeze their prompt down to one character. "$," said the UNIX shell. Presumably, UNIX programmers had to pay for their computer time by the letter.

Now what do you do? Well, that's entirely up to you. You can ask for a listing of files; you can look at the contents of a file; you can run a program to calculate your biorhythms; whatever you want. The method by which you completed tasks as disparate as sending an email or deleting a file was exactly the same: you typed in a previously memorized command.

The CLI was the ultimate example of an interface where the designer gets out of the way and lets the user do whatever they want. CLIs can be easy to use, but they're not very learnable. You basically need to memorize all the frequently used commands, or you need to constantly consult a reference manual to get your work done. Everybody's first reaction to being sat down in front of a CLI is, "OK, now what do I do?" A typical computer session from 1974 is shown in Figure 7-1.

Soon another style of interface developed: more of a question-and-answer model. When you sat down in front of a program, it asked you questions. You never had to remember a thing. See Figure 7-2 for an excellent piece from this period.


Figure 7-1: If you didn't have the manual, you had to guess or ask a guru.


Figure 7-2: An excellent example of one of the great, lost interactive computer programs of the early 1970s, reconstructed here from memory. Notice that the program helpfully asks you questions, so you never need to remember a command.

Interface designers of the Middle Command Line Era eventually realized that people didn't want to sit with a manual in their lap to get things done. They created question-and-answer programs, which basically combined the manual with the program itself by showing you what to do as you went along.

Soon, programs started sprouting more and more features. The silly biorhythm programs sprouted features to tell you what day of the week you were born on. The more serious Star Trek games (where you chased a little Klingon ship around a 10 × 10 grid) gave you choices: you could fire photon torpedoes or move your ship. Pretty soon, the newest innovation was having a menu-driven program. This was thought to be the height of UI coolness. Computer software advertisements bragged about menu-driven interfaces (see Figure 7-3).


Figure 7-3: A screenshot from WordStar, a bestseller in 1984.

Around the peak of menu-mania, with office workers everywhere trapped in a twisty maze of complicated menus, all alike, an old philosophy swung back into fashion: suddenly, it once again became popular to let the user be in control. This philosophy was sharply expounded by the designers of the original Apple Macintosh who repeated again and again, let the user decide what to do next. They were frustrated by the DOS programs of the day, which would get into nasty modes where they insisted, no, demanded that the user tell them right now what file name they want for their new file, even if the user couldn't care less at that moment and really, really just wanted to type in that stupid toll-free phone number they saw on TV to order a combination vegetable shredder–clam steamer before they forgot it. In the eyes of the Macintosh designers, menu-based programs were like visiting Pirates of the Caribbean at Disneyland: you had to go through the ride in the exact order that it was designed; you didn't have much of a choice about what to do next; it always took exactly four minutes; and if you wanted to spend a bit more time looking at the cool pirates' village, well, you couldn't. Whereas the sleek new Macintosh interface was like visiting the Mall of America. Everything was laid out for you, easily accessible, and brightly lit, but you got to make your own choices about where to go next. A Macintosh program dumped you in front of a virtually blank white screen where the first thing you did was start poking around in the menus to see what fun commands were available for you. Look! Fonts!

This is still how many Windows and Macintosh programs work. But around 1990, a new trend arose: usability testing. All the large software companies built usability labs where they brought in innocent "users," sat them down in front of the software to be tested, and gave them some tasks to do.

Alas, usability testing does not usually test how usable a program is. It really tests how learnable a program is. As a result, when you plunk a wide-eyed innocent down in front of a typical program they've never seen before, a certain percentage of them will stare at the screen googly-eyed and never even guess what it is they are supposed to do. Not all of them. Teenagers will poke around at random. More experienced computer users will immediately start scanning the menus and generally gefingerpoken und mittengrabben around the interface and they'll soon figure it out. But some percentage of people will just sit there and fail to accomplish the task.

This distresses the user interface designers, who don't like to hear that 30% of the "users" failed to complete the task. Now, it probably shouldn't. In the real world, those "users" either (a) wouldn't have to use the program in the first place because they are baristas at coffee shops and never use computers; or (b) wouldn't have to use the program in the first place because they aren't project managers and they don't use project management software; or (c) would get somebody to teach them how to use the program, or would read a manual or take a class. In any case, the large number of people that fail usability tests because they don't know where to start tends to scare the heck out of the UI designers.

So what do these UI designers do? They pop up a dialog box like the one in Figure 7-4.


Figure 7-4: The first screen you see when you run Microsoft PowerPoint. Almost anyone could figure out how to open and create files using the menu commands; this dialog really only helps absolute beginners.

As it turns out, the problems people were having in usability tests motivated Karen Fries and Barry Saxifrage, talented UI designers at Microsoft, to invent the concept of wizards, which first appeared in Microsoft Publisher 1.0 in 1991. A wizard is a multipage dialog that asks you a bunch of questions in an interview format and then does some large, complicated operation based on your answers. Originally, Fries conceived of the wizard as a teacher that merely taught you how to use the traditional menu-and-dialog interface. You told it what you wanted to do and the wizard actually demonstrated how to do it using the menus. In the original design, there was even a speed control to adjust how fast the wizard manipulated the menus: at its highest speed, it basically just did the work for you without showing you how to do it.

The wizard idea caught on like wildfire, but not the way Fries envisioned it. The teaching functionality rapidly went out the door. More and more designers started using wizards simply to work around real and perceived usability problems in their interfaces. Some wizards were just out of control. Intuit's wizard for creating a new company with their QuickBooks accounting package seems to go on for hundreds of screens and asks you questions (like your employees' Social Security numbers) that you aren't likely to know the answers to right now but would be happy to input later. The Windows team decided that wizards were so cool that they created wizards for everything, even some silly one-screen-long wizards (see Figure 7-5).


Figure 7-5: The Windows team liked wizards so much that they went a bit overboard making some degenerate, one-screen wizards with a vestigial Back button that is perpetually greyed out.

The thing about wizards is that they're not really a new invention, they're just a fashionable swing back to guiding people through things step-by-step. The good thing about taking people by the hand like this is that when you usability test it, it works. The bad thing about taking people by the hand is that if they have unusual needs or if they want to do things in a different order than you conceived, they get frustrated by the maze you make them walk through.

I think it's time to find a happy middle. As humane designers, we need to remember to let users be in charge of their environment; control makes people happy. A modern word processor is perfectly happy to let you type all your text without any formatting, then go back and reformat it. Some people like to work this way. Other people like to have every word appear on screen in its final, formatted form. They can do this, too.

Let people do things in whatever order they like. If they want to type their name and address before they provide a credit card number, that should be just as easy as providing the credit card number first. Your job as designer is not to have a conversation with the user; your job is to provide a well-stocked, well-equipped, and well-lighted kitchen that the user can use to make their own masterpieces.