What's better than usability testing?

Catching up on my reading, today I finally read this Microsoft Watch article on the Office 2007 redesign: How Microsoft Wrapped the "Ribbon" in a Bow.

Among other things, it talks about how the Office team placed early (way pre-Beta) copies with testers in Fortune 500 companies in the Seattle area, with the aim of evaluating how people adapt to the new "Ribbon" UI in Office 2007 over time (what we call in the biz a 'longitudinal study'). On top of this, consider the 1.3 billion sessions of user data Microsoft has collected about how users make use of the features of Office 2003.

Now consider how much usage data you are able to collect when considering the UI design for your application or product. ... Sheesh, it's OK if you're a global mega-company, huh?

For most of us, the best usage data we get to influence our designs is the humble usability test. Microsoft does heaps of usability testing too, but through the sorts of activities above, they (we - must remember that) have been able to address a couple of fundamental issues with usability testing:

  1. Usability testing almost always only tests people's initial reactions to a user interface (because we don't have time/money to watch people use our products for a long time). This is problematic for systems like Office, where people's long-term performance is arguably more important than their initial reactions.
  2. Sample sizes for usability testing are usually very small. We justify this by pointing out that even if we only test with a small number of users, the problems we see are still real problems. We just don't know how representative they are.
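On the sample-size point, there is a common back-of-envelope model (the 1 - (1 - p)^n estimate popularised by Nielsen and Landauer) that shows why a handful of participants still finds most problems: if each participant independently uncovers a given problem with probability p, the expected share of problems found after n sessions grows quickly. A minimal sketch in Python; the 0.31 detection rate is an often-quoted average, not a figure from this post:

```python
def problems_found(p: float, n: int) -> float:
    """Expected fraction of usability problems seen after n test sessions,
    assuming each problem is detected independently with probability p
    per participant (a simplifying assumption)."""
    return 1 - (1 - p) ** n

# With an average per-user detection rate of 0.31, five participants
# surface roughly 84% of the problems:
print(round(problems_found(0.31, 5), 2))  # 0.84
```

The model says nothing about how *representative* the problems are, which is exactly the caveat above; it only explains why diminishing returns set in after a few users.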

Now there are those who are not big fans of usability testing, for these and other reasons. Let me say that I am not one of them. Even when you think you are a genius designer, there is still stuff to learn from usability testing. Teena Harkins taught me this lesson many years ago. I was designing the UI for part of an application for a federal government department. As usual, the timeframes were ridiculous, and I decided that I needed to spend the time available designing, not testing. So Teena sat down next to my desk with a couple of sample users and printouts of my PowerPoint screen mockups. She ran a couple of users through the UI design in an informal usability test while I half-listened from my desk. The stuff Teena discovered from this simple usability testing exercise caused me to make some fundamental changes to the UI design.

So, by all means go forth and usability test, but where you have a chance, consider how you can address the concerns of sample size and long-term usage.

If you are working on a web site, the news is good. Firstly, most websites are designed with an emphasis on the 'first use' scenario, since usage of almost all websites is discretionary. Secondly, you can do as many organisations do today and trial design alternatives, and completely new ideas, with actual users in live sessions, and observe their behaviour through logging.
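One simple way to run that kind of live trial is to bucket each visitor into a design variant deterministically (so a returning user always sees the same design) and tag every logged event with the variant they saw. A rough sketch with hypothetical helper names; this is my illustration, not anything from the article:

```python
import hashlib

VARIANTS = ("A", "B")  # the design alternatives on trial

def assign_variant(user_id: str) -> str:
    """Hash the user's ID into a stable bucket so each person
    sees the same design on every visit."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

def log_event(log: list, user_id: str, event: str) -> None:
    """Record the action together with the variant shown, so behaviour
    can later be compared across the designs."""
    log.append({"user": user_id,
                "variant": assign_variant(user_id),
                "event": event})

events: list = []
log_event(events, "user-42", "clicked_signup")
log_event(events, "user-42", "completed_signup")
```

Because assignment is a pure function of the user ID, no per-user state needs to be stored, and the logged events can be sliced by variant afterwards to compare behaviour across designs.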

Designers of line-of-business applications (like Office) have it worse. One advantage of an in-house audience, however, is that you might at least be able to do some longitudinal studies by getting the same people to come back for a series of usability tests over time (make sure you keep track of how many sessions each user has attended so you can interpret your observations properly). On the other hand, you may not be able to try out multiple versions of your UI 'live' on users - either because you don't have the resources to make multiple field-ready versions, or because you have a relatively small audience of users to start with. If you have a small pool of users to draw from, then you have less scope to try out multiple design concepts, simply because too many different versions will add to the overall confusion. That is, in the interest of finding the most usable UI, you can find yourself adversely affecting usability by confusing people.

Sheesh, I knew I should have become a bicycle courier instead...

What experiences have you guys had trying to address these usability testing issues of sample size and long-versus-short-term usage?

Comments

  • Anonymous
    March 06, 2007
    '1. Usability testing almost always only tests people's initial reactions to a user interface (because we don't have time/money to watch people use our products for a long time)' - I call it 'benchmarking' the existing system: I go usability test what they're currently using, before beginning design of the new system. I often do this. '2. Sample sizes for usability testing are usually very small' - We can afford for them to be small because they're targeted: 5 or 6 participants per key user role. If there are a few user roles, then I look at the common activities across roles, and target the exception activities for an individual user role with a couple of extra participants. It's not an exact science.

  • Anonymous
    March 07, 2007
    An additional source of information to increase sample size is web analytics, or web forensics. Solid analysis of user flows through the site, drop-off points etc. can help you target appropriate areas for more extensive usability testing / improvement. In my current work (at a large news organisation) our Web Analytics Guru (as we like to call her) is an important part of the team and can produce all sorts of interesting numbers and information on what has been effective and what has not been performing.
