
I am planning to conduct a number of task-based, moderated think-aloud tests for a service that I am designing. I will be repeating the test with the same setup regularly throughout the year, and I am looking for a framework of questions that I can use to compare the results.

I would like to be able to track the changes made in relation to five different parameters, e.g. Ease of Use, Language, Functionality, Content, and Information Architecture, after each test.

Can anyone propose a simple and quick framework of questions that I can use to measure these parameters after each user test? I would like to translate the results into radar diagrams so that I can communicate the changes to the stakeholders.

As an example of such a diagram: the blue polygon would be the outcome of the first user test, the target polygon the desired result, and the green polygon the second user test.
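As a rough illustration, a minimal matplotlib sketch of the kind of chart I mean could look like this (the scores below are made-up placeholders, not real data):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical per-parameter scores on a 0-5 scale
labels = ["Ease of Use", "Language", "Functionality",
          "Content", "Information Architecture"]
test1  = [2.5, 3.0, 2.0, 3.5, 2.5]   # first user test (blue)
test2  = [3.5, 3.5, 3.0, 4.0, 3.0]   # second user test (green)
target = [4.5, 4.5, 4.5, 4.5, 4.5]   # desired result

# One angle per parameter, then repeat the first point to close the polygon
angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for scores, name, color in [(test1, "Test 1", "blue"),
                            (test2, "Test 2", "green"),
                            (target, "Target", "gray")]:
    values = scores + scores[:1]
    ax.plot(angles, values, color=color, label=name)
    ax.fill(angles, values, color=color, alpha=0.1)

ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_ylim(0, 5)
ax.legend(loc="upper right", bbox_to_anchor=(1.3, 1.1))
plt.show()
```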

I have already looked at the SUS, but I feel it is missing questions regarding content and language, and that ten questions are quite a lot to ask in relation to only one parameter, usability.

  • I'm not really sure what you are asking. Could you refine your question, maybe show some examples of what you are already using for each parameter? Commented Feb 19, 2014 at 23:38
  • I tried to clarify my question a bit more, but what I am mainly looking for is a widely tested set of questions that tracks a number of different parameters (not just usability, as the SUS does) and can be used to quantify the results of recurring user tests.
    – FoF
    Commented Feb 20, 2014 at 10:54
  • I think you have 1) too high an expectation of the current state of measurement frameworks and 2) a very strange theoretical model (most models treat some of your variables as metrics of the others).
    – Rumi P.
    Commented Feb 20, 2014 at 12:20
  • Typologies are essentially vague and broad-brush (and can just end up in endless arguments about what fits under which category). I think you're in danger of trying to make something look accurately measurable that actually isn't. (There are "lies, damned lies, and typologies"...)
    – PhillipW
    Commented Nov 21, 2014 at 15:03

1 Answer


The ten questions of the SUS are not "quite a lot to ask" if you only administer the questionnaire once, at the end of all the tasks (which is how it is meant to be used). Although there are ten questions, they all fit the same basic format ("here is a statement, and a 5-point scale from strongly agree to strongly disagree"), so the cognitive load is fairly light. (I recently conducted a series of usability test sessions; the SUS took maybe two minutes each, compared to an hour for the whole session.)
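Scoring the SUS is also mechanical, so it adds essentially no analysis overhead. A minimal Python sketch of the standard scoring (odd-numbered items contribute response − 1, even-numbered items contribute 5 − response, and the sum is scaled by 2.5 to give a 0-100 score):

```python
def sus_score(responses):
    """Compute a SUS score (0-100) from ten responses on a 1-5 scale.

    Standard SUS scoring: odd items (positively worded) contribute
    (response - 1); even items (negatively worded) contribute
    (5 - response); the sum is multiplied by 2.5.
    """
    assert len(responses) == 10, "SUS has exactly ten items"
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Example: one participant's responses to items 1-10
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```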

For Ease of Use you should look at using the SEQ (Single Ease Question), and ask it after each task.
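Since the SEQ is a single 7-point rating collected once per task, reporting is usually just a per-task mean, which also gives you a number to plot per test round. A trivial sketch (the task names and ratings here are made up):

```python
# Hypothetical SEQ ratings (1 = very difficult, 7 = very easy),
# one list per task, one rating per participant
seq_ratings = {
    "find a product": [5, 6, 4, 7, 5],
    "check out":      [3, 4, 2, 5, 4],
}

for task, ratings in seq_ratings.items():
    mean = sum(ratings) / len(ratings)
    print(f"{task}: mean SEQ = {mean:.1f} (n={len(ratings)})")
```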

Jeff Sauro also has some suggestions for measuring findability.

However, you might be trying to measure too many different things in one test session, and to measure some of them with a less-than-ideal method. For example, to test Information Architecture you might be better off conducting card sorts or tree tests.

  • Regarding the ten SUS questions: "quite a lot to ask" was in relation to acquiring results for only one parameter, usability. Also, I am not sure that information architecture is indeed one of the five parameters that I need to track with questions (it was just an example). I will make sure to check the proposed links, but I was trying to avoid creating my own "framework" and was looking for a more widely used set of questions/solutions.
    – FoF
    Commented Feb 20, 2014 at 10:46
