HOWTO: Design a task list for an API usability study
In my previous post I talked about setting up an API usability study. In this post, I'll talk about how I design the tasks that participants work on in a study.
After getting some level of familiarity with the API, I tend to start designing a task list by asking the API team what they want to learn. Sometimes teams have specific research questions they would like answered, such as 'Will users be more successful doing file I/O with the FileObject class than with the StreamReader and StreamWriter classes?'. Very often, though, teams simply tell me that they want to know whether the API is usable and what the usability problems are.
In that case, I ask the team what scenarios the API is designed to help the user accomplish and which of the three personas the API is designed for. I then start working on the scenarios myself, trying my best to think and act like that persona (some personas are easier to act like than others :-) ). As I work on the scenarios, I take notes about how I expected each scenario to work, what I actually had to do to make it work, naming issues that tripped me up, conceptual issues that caused me difficulties, and so on. Each of these notes turns into a research question that I might try to answer during the study. For example, if I thought that a particular class was named poorly, one research question would be whether that class name makes sense to participants. I'd design a task that required participants to use that class and then measure the effect that the name of the class had on participants' abilities to complete the task.
I also look for assumptions that the team has made about how the API will be used so that I can design tasks that verify those assumptions. For example, in the Indigo study I completed recently, one assumption the team had made was that separating the configuration of a service from its implementation would make users more successful. This separation was achieved by using configuration files to set things like the address of a service, and using the implementation of the service to set things like its instance mode. So I wanted to design tasks that would show whether participants knew when to work with the configuration files and when to work with the implementation of the service: some tasks that required the user to modify the configuration of the service, and some that required the user to modify its implementation.
After coming up with a list of tasks that participants will work on, you then have to think about the order in which you present those tasks. This is a critical aspect of designing a good usability study and, if done wrong, can really ruin the results that you get. When thinking about the order of the tasks, you need to be aware of the side effects of each task (the additional things that the user will learn or do while working on a task that aren't critical to that task), the dependencies between tasks, and the degree of difficulty of each task. You don't want participants to learn something critical about task B while working on task A, since that will compromise the data you collect about task B. It effectively means that the behavior you observed while participants were working on task B will only generalise to users who work on task B after working on task A; it tells you nothing about how users will work on task B alone. It's also a good idea to start participants off with easier tasks, since being successful in the usability lab early on makes participants a lot more comfortable.
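To make those ordering constraints concrete, here is a minimal sketch of how you might sequence a task list so that each task comes after anything it depends on (or anything whose side effects would spoil it), with easier tasks favoured when there's a choice. It's purely illustrative: the task names, difficulty ratings, and comes_after lists below are made up for the example, not taken from a real study.

```python
import heapq

# Illustrative only: each task lists the tasks it must come after (hard
# dependencies, or tasks whose side effects would otherwise teach
# participants too much), plus a rough difficulty rating.
tasks = {
    "read the service configuration": {"difficulty": 1, "comes_after": []},
    "change the service address":     {"difficulty": 2, "comes_after": ["read the service configuration"]},
    "change the instance mode":       {"difficulty": 3, "comes_after": ["read the service configuration"]},
    "repeat: change another address": {"difficulty": 2, "comes_after": ["change the service address",
                                                                        "change the instance mode"]},
}

def order_tasks(tasks):
    """Order tasks so dependencies come first, preferring easier tasks early."""
    waiting = {name: set(info["comes_after"]) for name, info in tasks.items()}
    # Tasks with no dependencies are immediately available; a heap keyed on
    # difficulty means the easiest available task is always presented next.
    ready = [(info["difficulty"], name) for name, info in tasks.items()
             if not info["comes_after"]]
    heapq.heapify(ready)
    ordered = []
    while ready:
        _, name = heapq.heappop(ready)
        ordered.append(name)
        # Releasing this task may unlock tasks that were waiting on it.
        for other, deps in waiting.items():
            if name in deps:
                deps.remove(name)
                if not deps:
                    heapq.heappush(ready, (tasks[other]["difficulty"], other))
    if len(ordered) != len(tasks):
        raise ValueError("circular dependency between tasks")
    return ordered

for position, task in enumerate(order_tasks(tasks), start=1):
    print(position, task)
```

In practice I do this by hand rather than with a script, but the same two rules apply: never let a later task be answered by an earlier one, and save the harder tasks for when participants have settled in.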
The last thing I work on is the wording of the tasks. There are two things that you need to make sure of when wording the tasks:
- The description of the task will not lead participants to attempt a particular solution unless that is what you want them to do.
- The description of the task is clear enough that participants know what you would like them to do, but not why you are asking them to do it.
Obviously you don't want to bias the way that participants work while in the lab. The best study is one that mimics how real users work in the real world as closely as possible. You also don't want participants to know why they are doing what you are asking of them, nor do you want them to know about the measures that you are taking. Otherwise participants (whether consciously or subconsciously) will start behaving in a way that they think will give you the measures or data that you need, instead of behaving in a way that helps them accomplish the task you have asked them to work on.
You might also want to consider repeat tasks, so that you can compare the observations you made the first time a participant worked on a task with their behavior the next time they work on something similar. I never repeat a task word for word, but I do ask participants to work on something that is very similar to something they have done earlier. Most often I'll make sure that there is a reasonable gap between the repeat tasks: at least an hour, but sometimes a couple of days or more.
When I complete the task list, I review it by asking myself whether the data that I expect to collect from each task will help me answer the research questions that I gathered at the beginning. I also try to picture myself presenting the results from the study to the team and defending the way that I designed the study - sometimes people can be a bit defensive when hearing the results of a usability study on their API, and they will try to explain away the results by attacking the study design rather than looking at the API. If I can't explain why I designed a task the way that I did, I'll take another look at it to see how I might do a better job.
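As a rough illustration of that review step, here is a small sketch that checks whether every research question is covered by the data at least one task is expected to produce. Again, the question text and task names are invented for the example; they are not from an actual study.

```python
# Illustrative only: the research questions gathered at the start of the
# study, and the questions each task is expected to produce data for.
research_questions = {
    "Does the FileObject class name make sense to participants?",
    "Do participants know when to edit configuration vs. implementation?",
    "Can participants change the instance mode without help?",
}

task_data = {
    "change the service address": {
        "Do participants know when to edit configuration vs. implementation?",
    },
    "change the instance mode": {
        "Do participants know when to edit configuration vs. implementation?",
        "Can participants change the instance mode without help?",
    },
}

# Any research question not covered by some task needs either a new task
# or a rethink of whether the question still matters.
covered = set().union(*task_data.values())
for question in sorted(research_questions - covered):
    print("No task produces data for:", question)
```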