
I've just completed a round of lab-based, co-present usability tests as part of a usability audit for a web-based map application. I have ten tasks for the participants, each requiring them to perform a typical map activity (zoom in on this area, find this location, measure the distance between x and y, etc.).

The first task is to zoom in on the map at a location of their choosing (there is a little magnifying glass in the toolbar that lets the user do this).

But one of the participants simply couldn't find the zoom tool. This is good information, because we can take actionable steps to make it more visible. However, a lot of the subsequent tasks required the use of this zoom tool; without knowing how to zoom, the participant couldn't complete the test. So I pointed out to the participant where the zoom tool was. I realize this is a big no-no in usability testing, but I felt it was necessary in order to complete the test.

My question is: should I have intervened and helped the user, or should I have ended the test? At what point do you abandon a usability test for the sake of keeping a more realistic context?

7 Answers


There are plenty of folks who will say that your test should have ended at that first step -- when you rightly noted that the participant could not get past step one -- and that you should just record that data point and move on to the next participant. There's nothing wrong with that.

Once I was that tester who couldn't find the zoom tool -- except it was "find the (some esoteric icon representing a) tool in Google Toolbar", many years ago when Google Toolbar was a browser add-on and Google brought a lot of people on campus for usability tests. It was the first question in their test. I said, "Really, I can't see it. I would imagine it would be here, or here, or here." They pointed it out to me, much like you did with the zoom tool, and we continued with the test. I don't know how they used the data, but they did carry on with the same script.

For the sake of ensuring sane data (i.e. comparable data from all users completing all tasks), I wouldn't consider the rest of this user's data (or mine, in the situation above) when calculating results, because to my mind it would be tainted/biased. But there's nothing wrong with turning the rest of your time with the user (or potential user) into something from which you can get additional beneficial data. Maybe that means going through the rest of the test, getting answers and observing actions, and gathering that data without calculating it or weighing it in the same way; or maybe it means shifting the remaining time to a different test.

I would consider your situation similar to the one posed in How to rescue a usability test whose participant is lacking confidence?, in which responses included ideas like ditching the canned tasks for a self-defined one, turning the "test" into an "interview," and so on.

Yes, I'm essentially answering "it depends." While I would probably have gone with turning the test into something else, I might also continue the test but not weigh the results quite so much; it depends on how many testers I had in the queue and where I was in the testing process.

  • Thanks for the feedback. So should I annotate the participant's results to indicate that I helped them? I like the idea of turning the test into an interview. That way I can gather some legitimate feedback. Commented Apr 20, 2012 at 14:34
  • @MitchMalone I would annotate the results like that, yes.
    – jcmeloni
    Commented Apr 20, 2012 at 14:47

You don’t end the test, but you don’t point out the tool either.

If someone is hopelessly stuck, then obviously for summative purposes you score that session as “Unable to complete task (without help)” and include it in the No Joy category for statistical purposes. As long as you have consistent rules for judging when the user cannot continue on his/her own, your quantitative data will be perfectly valid, and you can continue the session and collect more data to help inform the design.
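To make the bookkeeping concrete: if the judging rules are consistent, the quantitative side can be as simple as coding each session per task and reporting assisted or abandoned attempts separately from unassisted completions. Here is a minimal sketch in Python, with made-up task names, category labels, and data purely for illustration (not from any particular testing tool):

```python
from collections import Counter

# One outcome per participant per task, coded with consistent rules:
#   "completed" = finished the task without help
#   "assisted"  = finished only after the moderator intervened
#   "no_joy"    = unable to complete the task at all
outcomes = {
    "task_1_zoom":          ["completed", "completed", "assisted", "completed", "no_joy"],
    "task_2_find_location": ["completed", "completed", "completed", "assisted", "completed"],
}

for task, results in outcomes.items():
    counts = Counter(results)
    n = len(results)
    # Only unassisted completions count toward the headline success rate;
    # assisted sessions still yield qualitative data but are reported separately.
    rate = counts["completed"] / n
    print(f"{task}: {rate:.0%} unassisted "
          f"({counts['assisted']} assisted, {counts['no_joy']} no joy, n={n})")
```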

The reason for not pointing out the tool is, ironically, to collect more data. You know the user couldn’t find the tool, but you probably don’t know why. In general, an inability to find something on a page may be because:

  1. Users looked at it, but they didn't recognize the label/icon.

  2. Users looked towards it but didn’t see it because it was lost in clutter.

  3. Users were looking for it somewhere other than where you put it.

  4. Users were looking for something entirely different than what you used.

Each of these reasons has very different design responses. Eye-tracking data can help narrow the possible reasons, but interview data is also often necessary.

So don’t show the user the tool. First, ask questions to diagnose the problem:

  • What are you trying to do? (I need to zoom in)

  • What are you looking for to zoom? (A sliding thingy, like in Google Maps.)

  • Okay, that’s a good way of doing it, but we’re trying out a different method. What label or icon would Zoom have? (I don’t know. Usually it’s a magnifying glass)

  • (pause)

  • Where are you looking for it? (Right here at the top.)

  • What do you see there? (A printer for printing, the Save icon, a push-pin to mark a point.)

  • Oh. The push-pin is supposed to be a magnifying glass. We’ll work on that.

There. Now you know two ways to improve the design (use a slider if feasible, re-work the magnifying glass image), and four ways not to improve it (changing from a magnifying glass to some other object, making the control bigger or bolder, moving it somewhere else).

This is the general rule for usability testing. For each problem the user encounters, avoid giving the solution. But don't just give up. Instead, ask questions to gather data and, in the process, guide the user progressively closer to the solution. And then continue with the usability test.

7

Of course you should help the user, so that you can get as much information out of the usability session as possible. You found one usability issue, but there could be others, and those issues could be quite independent (i.e. they can be addressed individually without stepping back and redesigning the whole thing).

You don't want to engage in N usability tests and N rounds of fixing to fix N usability problems one at a time.

That is time-consuming and expensive.

A usability test is not 100% scientific. If it were, you would make it double-blind or something; you would not even be present in the room, so that you could not influence the subjects in any way (body language, etc.).

You're not trying to publish something in a scientific journal or trying to get a government research grant, etc. (And those people fudge plenty.)

  • +1 for "A usability test is not 100% scientific." A usability test is a tool, unless it's done in a scientific context. Commented Apr 19, 2012 at 10:59
  • +1 again for 'A usability test is not scientific'. And yes, 'help them out', because you're trying to get as much data as possible, and throughout the test you should be making a judgement about whether to give them a nudge in the right direction or just sit there for an hour watching them fail to work out how to do one thing.
    – PhillipW
    Commented Apr 19, 2012 at 15:12
  • Footnote: this constant 'making a judgement about intervening' is why good moderation is hard work... particularly as you need to investigate why they didn't see the search box in the top right corner, without giving them the verbal clue about 'the search box in the top right hand corner'.
    – PhillipW
    Commented Apr 19, 2012 at 15:17
  • You clearly have to find a balance between holding the users' hands, so to speak, which tends to invalidate the test, and letting them struggle with some buggy UI element for 30 minutes!
    – Kaz
    Commented Apr 19, 2012 at 23:31
  • 1
    I have to disagree with "A usability test is not scientific." It's not quantitative; you're not looking for statistical significance. It's a qualitative study, but still scientific. You're gathering data about where users have trouble, and the nature of that trouble. I bring this up because I've seen clients (and coworkers) with unrealistic expectations of usability tests make a big mess of things. Commented Mar 16, 2017 at 16:19

We had this exact same issue. The user had been given a task and couldn't figure out how to do it. I patiently waited a couple of minutes for them to try to find it. Eventually, I did what you did and just told them, so they could get on with the rest of the test.

I think that when people say "Don't help the person doing the usability test," what they really mean is that you shouldn't sit there and hand-hold them, guiding them through it step by step. You're trying to simulate what will happen when your program or website is being used out in the wild.

But, if they get into a situation like you're describing, out in the wild, they're going to give up, go to a different site, uninstall the program, or otherwise discard your product.

As you mention, if this happens, you need to note that problem, because it's a big deal, especially if it happens with lots of users.

But once you get to that point, there's nothing wrong with cutting your losses, telling them just enough to get started again, and moving on to do usability testing on something else while you've got them there. There's no sense throwing away the chance to learn additional things just because they got stumped on one thing.


Maybe intervening was a good decision this time, if only to see the results of further testing, but learn from what happened, because either:

  • it's not good that all of your tasks rely on one piece of functionality; or
  • if your application really does rely mostly on that functionality, it's disastrous that any part of your test group was unable to find and/or use it.

In other words, you should either make your application more usable when that functionality isn't used (even if you make it clear where it is accessed, there may always be people who can't operate it properly or without much difficulty), or make that functionality stand out more in the UI by making it draw attention (size, colour, motion, whatever you find appropriate). And then make sure that working with it is child's play.


Our protocol was very simple. Some users start asking questions before they even start to look at the screen (sad but true :) ). So the first time the user asks, you answer: "Please try and find the solution yourself by looking at the screen" (after a while you can say this quite convincingly). If the user is still stuck (keep a quiet eye on your timer, and give them one minute or whatever time interval you agreed with the rest of your team), then point out to them the feature they should be using, with a gentle but not patronising smile.

But before you let them carry on, ask them why they had a problem here. Your user testing results should include the information that the user had a problem [here] and that the reason for the problem was [this]. Don't rely on video or audio recordings; they take hours to review and code (usually 3x the session length). Just talk to the user and make a note on your clipboard.

At the end of the session, ask your user to reflect for a minute and to tell you where and why they had the biggest problems, and where they felt they were really shooting along (AKA 'critical incident' gathering). Ahem... make a note of this data: it's your gold mine.

Of course, this will not give you good data for 'time on task' or any other measures based on performance, but that's a different kind of evaluation.


Facilitator intervention is something that crops up because people are unpredictable! This example also illustrates the most valuable aspect of testing with actual people: assumptions can be blown away...

Intervening was necessary in this case, and no, you shouldn't have ended the test; ending it would be unrealistic given the logistics of getting participants. But instead of pointing out the location of the tool, you can prompt them by asking "Where do you expect to find this?", "Where would you normally look for this?", or "Where is this located in the application that you usually use?"

This helps guide them to locating the tool (or piece of information etc) without making them feel "silly" or stressed for not being able to figure it out for themselves. Remember, it's stressful enough to be observed as a participant!

I once took part in a session where I was the participant and the facilitators did not intervene as I got stuck trying to complete a task; they got nothing out of the session except 20 minutes of me getting increasingly frustrated.
