
Here's the situation -

I'm at a new (for me) company and I'm running tests with some users of our product. Due to the nature of our product and users, we can't really recruit so much as ask the clients to select users from their small user base for us. From the first session, it was obvious they had come with a list of things they wanted to address, feedback-wise. It wasn't ideal, but we did get some good feedback, and I was able to run some task-based tests during the session.

The issue arose when administering SUS and NPS questions towards the end. I want to establish a baseline for how UX is performing and (hopefully) improving over time with my involvement, and NPS is important for our CEO, so I need to ask these quick questions at the end of a session.

One of the users was a little hostile to them and pretty reluctant to answer. I explained that we were using a standardized set of scaled questions to better evaluate the product's performance and to give us a baseline to judge against as we continue to help our users (him) through the design of new features and improvements. Still, it was like pulling teeth and a bit frustrating, and it took much longer than the few minutes it should have to get these 1-10 answers.

What's the best strategy for dealing with users like this? They may consider themselves experts in their opinions and usage, and look down on this line of questioning. Because of what our product is and the small pool of users, I'll likely be dealing with this same participant again in future tests. Anyone had a similar experience?

3 Answers


One of the users was a little hostile to them and pretty reluctant to answer.

It's not uncommon to have the occasional participant with a more uncooperative attitude, but it's nothing personal and shouldn't be taken to heart.

Anyone had a similar experience?

Yes, everyone who has done a lot of interviews or testing with people runs into this at some point.

The solution is just getting more experience. Do more and more sessions. You'll get practice handling people and personalities and navigating those moments a little more smoothly.

Every case is unique and should be handled with tactics appropriate to the circumstances; there's no one-size-fits-all script, but you can have a uniform general approach.

For example, if I encountered something like what you described, I'd appeal to the participant's expertise by making it clear that we need their input regardless, just so we have a uniform approach. I'd acknowledge their point but try to get past it: "Yeah, you may actually be right about that, but for the sake of consistency it'll still help just to have your best guess."

And also - remember that with more experience comes more confidence, and that confidence will help put even the hostile participants more at ease. They're way less likely to give you crap over stuff like that when you direct the session in a way that is very practiced and easy for them to follow.

  • I feel fairly confident as a tester, but I'll agree that this one in particular threw me off because of the simplicity of the questions I was asking at that point (SUS). The approach I took after that was to emphasize that we were trying to measure how well we were serving them, that we know we can do better, and that it was to everyone's benefit. These were scientists, after all; they should appreciate the importance of data!
    – jukesyukes
    Commented Mar 15, 2018 at 19:39

If you're mixing tests in one session, chances are you'll run into some hostility, especially if you're dealing with users of an existing system you want to change (this is called aversion to change, and it's a huge issue when revamping old systems).

Now, remember that for EACH test, you need to perform a screening survey to select the candidates. From what I understand, you're performing at least 3 tests. If so, there are some things to consider (the length of the tests, the cognitive load of each, the difficulty, and so on). For example: you may have 3 short tests, but all 3 of them with a huge cognitive load. In that case, I can guarantee you'll find users complaining.

As an example, here's a short snippet of a SUS test screener:

If Not Eligible to Participate “At this time, it appears that your interests and experience are different from the profile we are seeking for this project. We will keep you in mind for future opportunities. Thank you for your time today.”

As you can see, you would have to run a screener beforehand, and it may happen that the candidate is not eligible for some reason.

Bottom line: did you screen users beforehand, explaining the TESTS they'll need to complete, their length, and the time they will take? On top of that, did you compensate them for each of those tests?

If the answer to any of these questions is NO, then there you have the reason, and one or more users will complain. After all, this is to benefit you, so you're the one who has to accommodate them, not the other way around.

So, what to do?

Assumptions aside, it doesn't matter how or why this happens: if a user is reluctant to answer the test as planned, then they have to leave, or you simply don't count their answers. But remember: they should never have been there to begin with.

  • Unfortunately, one of the constraints of these tests is that we can't screen. This is due to client relationships and small user base. It's just not possible with the product and environment. Test explanations - All users knew the format and length ahead of time, and had that re-stated during the test session intros. Compensation - again, client agreement. Compensation is not feasible in this scenario, because these are client users. As such, there's no possibility of dismissing them during a session. User research requires some flexibility, and I'm not sure those rules apply here.
    – jukesyukes
    Commented Mar 14, 2018 at 19:21

SUS and NPS are quantitative attitudinal research methods, while usability testing is a qualitative behavioural research method. It is possible that mixing the two methods felt jarring.

One option is to show the participants the SUS at the end, offer them a link, and ask them to fill it out online (although you may run the risk of non-compliance and may have to chase participants up).

It is also possible that the questions feel light and fluffy, especially coming at the end of the behavioural tasks. To counter this, you could explain why you are asking for scores (to establish a benchmark) and why these particular questions (they have been tested and selected from a larger set to elicit the most definitive scores).

Since the participants are scientists, they may want to know why.
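
If it helps with that explanation, the arithmetic behind both numbers is simple enough to show a data-minded participant. Here is a minimal, purely illustrative sketch in Python; the function names and sample responses are my own, but the item weighting follows the standard SUS scoring convention and the usual promoter/detractor cut-offs for NPS:

```python
def sus_score(responses):
    """Convert ten 1-5 SUS item responses into a single 0-100 score.

    Odd-numbered items are positively worded (contribute response - 1),
    even-numbered items are negatively worded (contribute 5 - response);
    the summed contributions are scaled by 2.5.
    """
    assert len(responses) == 10
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd-numbered item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5


def nps(scores):
    """Net Promoter Score from 0-10 'how likely are you to recommend?' answers."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)


# Hypothetical example: one participant's SUS responses and a handful of NPS answers
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
print(nps([9, 10, 7, 6, 8]))                      # -> 20.0
```

Averaging the per-participant SUS scores for each round of testing, and recomputing NPS as more users answer, would give the kind of baseline the question is after.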

  • Possibly. Though isn't it standard practice to issue a post-test satisfaction questionnaire? SUS falls into this category: conversionxl.com/blog/8-ways-to-measure-ux-satisfaction. I do agree with you that the "why" is important here (and everywhere!)
    – jukesyukes
    Commented Mar 15, 2018 at 21:31
  • The survey questions you refer/link to are specific to the tasks the user has just completed. In this sense, they are not jarring, but fit naturally into the flow of the usability test. They are questions about the efficacy of the test itself. The System Usability Scale, by contrast, is a whole other test, with a different purpose and goal: to establish a benchmark. Commented Mar 15, 2018 at 21:39
  • Nope, scroll down to "Test Level Satisfaction": "If task level satisfaction is measured directly after each task is completed (successfully or not), then test level satisfaction is a formalized questionnaire given at the end of the session. ... There are, again, a variety of questionnaires used, but I'm going to focus on two popular ones: SUS: System Usability Scale (10 questions); SUPR-Q: Standardized User Experience Percentile Rank Questionnaire (13 questions)."
    – jukesyukes
    Commented Mar 15, 2018 at 21:41
  • Fair point. I didn't scroll that far! I guess, in my own experience, the only questions I've seen asked have been task-related or general post-test 'any other feedback?' questions, and even then they have been asked informally rather than as formal quantitative surveys. So to your previous question of 'isn't it standard practice...' I can only tell you that I personally have never seen SUS administered during a usability test. But I would be interested in other perspectives. Commented Mar 16, 2018 at 8:33
  • Agreed @michael-heraghty. I appreciate your input! Like most things, I suppose, the answer is "it depends." The reason I'm pushing for a quantitative measurement in addition to the qualitative aspects of the user tests in this case is to establish a baseline and measure UX progress for the company. This venture has never had an in-house UX person before, so it's of real importance to show quantifiable progress IMO.
    – jukesyukes
    Commented Mar 16, 2018 at 16:36
