When I was in college many years ago, a friend of mine down the hall pulled an all-nighter. A critical term paper was due the next day, and he stayed up until 6 A.M. banging away on his Macintosh. Finally, bleary-eyed, he turned off the computer and tried to catch a couple of hours of sleep before the paper was due.
He turned off the computer.
Notice I didn't say that he saved his work and turned off the computer. At 6 A.M., he forgot about that little thing.
At about 7:45 A.M., he came knocking on my dorm room door in despair. "Um, you know computers," he was practically crying. "Can't I get my paper back?"
"You didn't save it at all?" I asked.
"Never? All night long you never once hit 'Save'?"
"No. It was still called 'Untitled.' But it's in there somewhere, isn't it?"
The Macintosh, in all its WYSIWYG glory, simulated the act of typing on a piece of paper so perfectly that nothing interfered with my friend's sad idea that his paper was in there, somewhere. When you write on a piece of paper, that's it! Done! The paper is now written. There's no Save operation for paper.
A new user who sits down to use a program does not come with a completely blank slate. They have some expectations of how they think the program is going to work. This is called the user model: it is their mental understanding of what the program will do for them.
If they've never used a computer before, and the computer shows them what looks like a piece of paper and lets them type on it, then they are completely justified in assuming that they won't need to save their work.
Experienced users have user models, too: if they've used similar software before, they will assume it's going to work like that other software. If you've used WordPerfect but not Word, when you sit down to use Word, you assume that you must save.
The program, too, has a model, only this one is encoded in bits and will be faithfully executed by the CPU. This is called the program model, and it is The Law. Nothing short of electrical storms and cosmic rays can convince a CPU to disobey the program model.
Now, remember the cardinal axiom from Chapter 1? You should have memorized it by now:
A user interface is well designed when the
program behaves exactly how the user
thought it would.
Another way of saying this is:
A user interface is well designed when
the program model conforms to
the user model.
That's it. Almost all good user interface design comes down to bringing the program model and the user model in line. The Macintosh UI would have been more successful (especially for my poor friend) if it saved your "unsaved" work for you. Of course, in 1985, the slow speed of floppy disks made this impractical. But in 1988, by which time everybody had hard drives, this became inexcusable. To this day, most popular software doesn't automatically save your work.
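The automatic-save idea can be sketched in a few lines. This is a hypothetical illustration, not how any particular word processor actually implements it; the `.autosave` suffix and function names are invented for the sketch.

```python
import os

def autosave(text, document_path):
    """Periodically write unsaved work to a recovery file next to the
    document, so a forgotten Save doesn't lose a night's work.
    The '.autosave' suffix is an arbitrary choice for illustration."""
    recovery_path = document_path + ".autosave"
    with open(recovery_path, "w") as f:
        f.write(text)

def recover(document_path):
    """On startup, return any recovery text left behind by a crash or
    an unsaved shutdown, or None if there is nothing to recover."""
    recovery_path = document_path + ".autosave"
    if os.path.exists(recovery_path):
        with open(recovery_path) as f:
            return f.read()
    return None
```

Run `autosave` on a timer and check `recover` on startup, and the program model moves toward the paper-like user model: the work is simply "in there."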
Let's look at another example. In Microsoft Word (and most word processors), when you put a picture in your document, the picture is actually embedded in the same file as the document itself. You can create the picture, drag it into the document, then delete the original picture file, but the picture will still remain in the document.
Now, HTML doesn't let you do this. HTML documents must store their pictures in a separate file. If you take a user who is used to word processors and doesn't know anything about HTML, then sit them down in front of a nice WYSIWYG HTML editor like Microsoft FrontPage, they will almost certainly think that the picture is going to be stored in the file. Call this user model inertia, if you will.
So, we have an unhappy conflict of user model (the picture will be embedded) versus program model (the picture must be in a separate file), and the UI is bound to cause problems.
If you're designing a program like FrontPage, you've just found your first UI problem. You can't really change HTML; after all, it's an international standard. Something has to give to bring the program model in line with the user model.
You have a couple of choices. You can try to change the user model. This turns out to be remarkably hard. You could explain things in the manual, but everybody knows that users don't read manuals, and they probably shouldn't have to. Or, you can pop up a little dialog box explaining that the image file won't be embedded, but this has two problems: it annoys sophisticated users, and users don't read dialog boxes, either. We'll talk more about this in Chapter 9.
So, if the mountain won't come to Muhammad, Muhammad must go to the mountain. Your best choice is almost always going to be to change the program model, not the user model. Perhaps when the user inserts a picture, the program should make a copy of the picture in a subdirectory beneath the document file—this, at least, conforms to the user's idea that the picture is copied (and the original can safely be deleted).
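A minimal sketch of that program-model change, with made-up names (the `images` subdirectory is an assumption for illustration, not FrontPage's actual behavior):

```python
import os
import shutil

def insert_picture(picture_path, document_path):
    """Copy the picture into a subdirectory next to the document, so the
    user can safely delete the original file -- matching the user model
    that inserting a picture 'embeds' it in the document.
    The 'images' directory name is an arbitrary choice."""
    doc_dir = os.path.dirname(os.path.abspath(document_path))
    images_dir = os.path.join(doc_dir, "images")
    os.makedirs(images_dir, exist_ok=True)
    copied = os.path.join(images_dir, os.path.basename(picture_path))
    shutil.copy2(picture_path, copied)
    return copied  # the path the generated HTML would reference
```

The point of the design is in the `shutil.copy2` call: once the picture is copied, the original file no longer matters, which is exactly what the word-processor-trained user already believes.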
How do you find out what the user model is? This turns out to be relatively easy. Just ask some users! Pick five random people in your office, or friends, or family, and tell them what your program does in general terms ("it's a program for making Web pages"). Then describe the situation: "You've got a Web page that you're working on and a picture file named Picture.JPG. You insert the picture into your Web page." Then ask them some questions to try to guess their user model: "Where did the picture go? If you delete the Picture.JPG file, will the Web page still be able to show the picture?"
A friend of mine is working on a photo album application. After you insert your photos, the application shows you a bunch of thumbnails: wee copies of each picture. Now, generating these thumbnails takes a long time, especially if you have a lot of pictures, so he wants to store the thumbnails on the hard drive somewhere so that they only have to be generated once. There are a lot of ways he could do this. They could all be stored in one large file called Thumbnails in someplace annoying like C:\. They could all be stored in separate files in a subdirectory called Thumbnails. They might be marked as hidden files in the operating system so that users don't know about them. My friend chose one way of doing it that he thought was the best tradeoff: he stored the thumbnail of each picture, picture.JPG, in a new file named picture_t.JPG within the same directory. If you made an album with thirty pictures, when you were finished, there would be sixty files in the directory, including the thumbnails!
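The naming scheme my friend chose can be stated precisely. This is a hypothetical sketch of that scheme, not his actual code:

```python
import os

def thumbnail_path(picture_path):
    """picture.JPG -> picture_t.JPG in the same directory: the scheme
    that doubled the number of files in the album directory."""
    root, ext = os.path.splitext(picture_path)
    return root + "_t" + ext
```

For a thirty-picture album, applying this to every picture yields thirty extra files side by side with the originals, which is precisely what surprised users.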
You could argue for weeks about the merits and demerits of various picture-storing schemes, but as it turns out, there's a more scientific way to do it. Just ask a bunch of users where they think the thumbnails are going to be stored. Of course, many of them won't know or won't care, or they won't have thought about this. But if you ask a lot of people, you'll start to see some kind of consensus. As it turns out, not very many people expected the picture_t.JPG files, so he changed the program to create a Thumbnails subdirectory instead.

The popular choice is the best user model, and it's up to you to make the program model match it.
The next step is to test your theories. Build a model or prototype of your user interface and give some people tasks to accomplish. The model can be extremely simple: sometimes it's enough to draw a sloppy picture of the user interface on a piece of paper and walk around the office asking people how they would accomplish x with the "program" you drew.
As they work through the tasks, ask them what they think is happening. Your goal is to figure out what they expect. If the task is to "insert a picture," and you see that they are trying to drag the picture into your program, you'll realize that you had better support drag and drop. If they go to the Insert menu, you'll realize that you had better have a Picture choice in the Insert menu. If they go to the Font toolbar and replace the word "Times New Roman" with the words "Insert Picture", you've found one of those old relics who hasn't been introduced to GUIs yet and is expecting a command-line interface.
How many users do you need to test your interface on? The scientific approach seems like it would be "the more, the better." If testing on five users is good, testing on twenty users is better!
But that approach is flat-out wrong. Almost everybody who does usability testing for a living agrees that five or six users is all you need. After that, you start seeing the same results again and again, and any additional users are just a waste of time. The reason is that you don't particularly care about the exact numerical statistics of failure; you simply want to discover what "most people" think.
You don't need a formal usability lab, and you don't really need to bring in users "off the street"—you can do "fifty-cent usability tests" where you simply grab the next person you see and ask them to try a quick usability test. Make sure you don't spill the beans and tell them how to do things. Ask them to think out loud and interview them using open questions to try to discover their mental model.
When I was six and my dad brought home one of the world's first pocket calculators, an HP-35, he tried to convince me that it had a computer inside it. I thought that was unlikely. All the computers on Star Trek were the size of a room and had big reel-to-reel tape recorders. I tried to convince my dad that the calculator worked simply by having a straightforward correlation between the keys on the keypad and the individual elements of the LED display, which happened to produce mathematically correct results. (Hey, I was six.)
An important rule of thumb is that user models aren't very complex. When people have to guess how a program is going to work, they tend to guess simple things rather than complicated things.
Sit down at a Macintosh. Open two Excel spreadsheet files and one Word document file, as shown in Figure 2-1.
Almost any novice user would guess that the windows are independent. They look independent.
The user model says that clicking on Spreadsheet 1 will bring that window to the front. What really happens is that Spreadsheet 2 comes to the front, as shown in Figure 2-2, a frustrating surprise for almost anybody.
As it turns out, Microsoft Excel's program model says, "you have these invisible sheets, like cellophane, one for each application. The windows are 'glued' to those invisible sheets. When you bring Excel to the foreground, you are really clicking on the cellophane, so all the other windows from Excel should move forward too without changing their order."
Riiiiiiiiight. Invisible sheets. What are the chances that the user model included the concept of invisible sheets? Probably zero. The user model is a lot simpler: "The windows are like pieces of paper on a desk." End of story. So new users are inevitably surprised by Excel's behavior.
Figure 2-1: Guess what happens when you click on Spreadsheet 1?
Figure 2-2: Wrong! Microsoft Excel's program model includes the bizarre and unlikely concept of an invisible sheet that all the other sheets are glued onto.
Another example from the world of Microsoft Windows concerns the Alt+Tab key combination, which switches to the "next" window. Most users would probably assume that it simply rotates among all available windows. If you have windows A, B, and C, with A active, Alt+Tab should take you to B. Pressing Alt+Tab again would take you to C. Actually, what happens is that the second Alt+Tab takes you back to A. The only way to get to C is to hold down Alt and press Tab twice. It's a nice way to toggle between two applications, but almost nobody figures it out, because it's a slightly more complicated model than the rotate-among-available-windows model.
Users will assume
the simplest model possible.
It's hard enough to make the program model conform to the user model when the models are simple. When the models become complex, it's even more unlikely. So pick the simplest model possible.