New Users

I just watched two usability study participants use our tools (Microsoft Visual Studio 2005 Tools for the Microsoft Office System). Neither of them had previously done any Microsoft Office programming, which is what our tools are for. So I got to see them using the documentation and trying to figure things out based on the UI. That is always humbling.
One user (call him the reader) went straight to the docs whenever he wanted to figure something out, but the other (the experimenter) used tools in Visual Studio, notably IntelliSense and the Object Browser, to try to work things out from the bottom up. Eventually the experimenter went to the docs too, and because he found what he was looking for on the first try, he started using Help more often. Both of them were successful at the tasks they were given, but I think they came away with different ideas of what our tools are and what they can do. The reader has more of an overview and a familiarity with the tools and what they were designed for, and the experimenter has more knowledge of the Office object model.
Both participants thought in terms of tasks. We knew this in theory before, but it was something else to see it in action. The first part of the study was just to become familiar with the tools and poke around, without any leading or prompting from the usability engineer. The reader went to the table of contents in the docs, found the Getting Started section, and started looking for tasks he could do. He kept going back to the "common tasks" topic and another called "Getting Started Writing Code," but he didn't really seem to find what he was looking for. I think the common tasks were too granular and the code-writing topic was too conceptual. The experimenter looked through the Office object model in the Object Browser, examining the classes to see what Excel could do. Presumably, if he had found an interesting method, he would have come up with a small task to test it, but the Excel object model is very big and we moved on before he found anything specific.
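For concreteness, here is a sketch of the kind of tiny experiment the experimenter might have written if he had found something interesting. This isn't from the study; the class and handler names (Sheet1, Sheet1_Startup) are just the defaults from the VSTO Excel project template, and the specific members (Range.Value2, Range.EntireColumn.AutoFit) are only plausible examples of what you might spot while browsing the Excel object model.

```csharp
// Hypothetical VSTO 2005 worksheet code-behind; Sheet1 and
// Sheet1_Startup are the default names from the Excel project template.
using Excel = Microsoft.Office.Interop.Excel;

public partial class Sheet1
{
    private void Sheet1_Startup(object sender, System.EventArgs e)
    {
        // Write a value into A1 -- the sort of one-liner you might try
        // after spotting Range.Value2 in the Object Browser.
        Excel.Range cell = (Excel.Range)this.Cells[1, 1];
        cell.Value2 = "Hello from the object model";

        // Then test a method found while browsing, e.g. AutoFit.
        cell.EntireColumn.AutoFit();
    }
}
```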
When the participants had a specific task to do as part of the study, they both searched the docs (eventually) and found topics right away that answered their questions. Sometimes the participants didn't know they had the answer; it was hard to watch them stare at a topic that contained the answer (or a link to it), skim past it, and jump to a different topic because they didn't see what they were looking for. We use lots of headings, tables, and bullet points to make information easier to skim, but some information has to go in paragraphs, and it looks like it just isn't read there.
We are making some changes to the docs because of what we saw in the study, and we'll keep the results in mind as we plan the next version of the documentation. It would be great if we could do more studies like this throughout the writing cycle.

Comments

  • Anonymous
    September 23, 2005
Usability tests are eye-openers, all right, but I would hesitate to draw conclusions too broadly from some of what you're seeing. Searchability and discoverability, yes; people's use of narrative (conceptual) text, maybe not. No matter how much you reassure users, most people can't help but think that they're being tested, and moreover, that they're being timed. Thus during a usability test they do not stop and read hunks of text. Never. Real-world usage suggests otherwise -- even if they don't read the text in the docs, they do read articles they find on the Internets. So someone somewhere is reading conceptual information. Just never in usability tests. :-)