
I work at a small web development agency and I'm preparing to run a usability test on a web application we've built with a participant using a screenreader. Our development team just made a bunch of updates to the site to move it closer to ADA (Americans with Disabilities Act) compliance, so we're trying to find out if our first pass actually improved the site's accessibility and what work still needs to be done.

The problem is: I've never run a usability test with a participant using a screenreader. I have basic experience running usability tests, so I have an ok handle on how to moderate a test session, but I want to learn the basics of testing the user-friendliness of web accessibility features so I at least sort of know what I'm doing here :)

Specifically, I'm wondering:

  • Does anyone have any advice on how to test the usability of a site's accessibility features?

  • What adaptations, if any, should I make to my typical usability test setup?

    • The participant and I will be connecting remotely, ideally over a video call with him sharing his screen. I have no idea if this will work, or whether asking him to navigate a video conferencing app (Google Hangouts) could complicate the test unnecessarily.
  • Should I provide the participant with instructions for using the site, or should I leave them in the dark and let them figure out the site on their own?
    • For a typical usability test, I'd want the participant to know as little as possible about the site under test, but I don't know if omitting usage instructions (including any mention of the dev team's accessibility work) would prevent the user from even interacting with the site.

Sorry if my questions show my ignorance of web accessibility, i.e. if anything sounds goofy or dumb. I'm totally new to the topic. Thanks!


1 Answer


My general advice on running any type of user testing is to have a practice run or two with someone so you can be confident with the process and procedures, especially if you are not that familiar with it. Steve Krug's Advanced Common Sense website is a good starting point to consolidate your existing knowledge on usability and testing in general. In Australia we have a different set of criteria that government agency websites have to conform to, but it is probably not all that different from the ADA requirements.

The other advice on running user testing is to try and put yourself in the user's shoes, just to see if you've missed anything obvious from their perspective. So, for instance, if the point of doing screenreader testing with a participant is to cater for blind users, why not try using the website blindfolded yourself and see how easy it is (or isn't)? The caveat is that someone who is blind has probably adapted the way they use websites, so you should try to observe and capture these behaviours and treat your test session as research time as well.

Now to your questions:

  • Does anyone have any advice on how to test the usability of a site's accessibility features? --> As with all forms of testing, you need to have specific goals, objectives and outcomes, so I guess the focus of your testing on the accessibility features would be trying to tick off the criteria for ADA? If so, then you need to structure the testing around those points (one way to tick off the mechanical criteria ahead of the session is sketched after this list).
  • What adaptations, if any, should I make to my typical usability test setup? --> Definitely try to run through this with a few different people to simulate issues you might come across and to pick up scenarios you might have to cater for well before the actual date, whether it is technology issues or participant behaviour/understanding of the process.
  • Should I provide the participant with instructions for using the site, or should I leave them in the dark and let them figure out the site on their own? --> Again, I think it depends on what your goals and objectives for the testing are. If this is a first pass to try and pick up general issues with the accessibility features, then try not to make things too complicated, because then you are introducing lots of factors into the analysis. If you already have a very polished tool and want to test very specific features, then you can be very prescriptive with the instructions so you can put the participant in the exact scenario or context you want to test. If you are just doing some basic research and testing and have no specific plans in mind, then you could keep things open, but be aware that the results can then be a little bit vague too.
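
If ticking off the ADA criteria is part of the goal, one thing you could do before the moderated session is run an automated checker over the key pages, so that the session time goes to the things a tool cannot judge. This is only a supplement, never a replacement for testing with a real screenreader user. A minimal sketch, assuming a Node.js setup with the puppeteer and @axe-core/puppeteer packages installed; the URL is a placeholder for your own site:

```typescript
// Minimal sketch: automated accessibility pass with axe-core + Puppeteer.
// Assumes: npm install puppeteer @axe-core/puppeteer
// "https://example.com" is a placeholder for the site under test.
import puppeteer from "puppeteer";
import { AxePuppeteer } from "@axe-core/puppeteer";

async function auditPage(url: string): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);

  // Run axe-core's bundled accessibility rules against the rendered page.
  const results = await new AxePuppeteer(page).analyze();

  // Print each violation with its impact and the elements it flags,
  // so the moderated session can focus on judgement calls instead.
  for (const violation of results.violations) {
    console.log(`[${violation.impact}] ${violation.id}: ${violation.help}`);
    for (const node of violation.nodes) {
      console.log(`  ${node.target.join(", ")}`);
    }
  }

  await browser.close();
}

auditPage("https://example.com").catch(console.error);
```

Anything this kind of pass flags (missing alt text, unlabelled form fields, and so on) can be fixed before the session, and the participant's time can then go to the problems only a human can surface.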
  • Hi Michael, thanks so much for your comment. And sorry for my seriously delayed response; I've been a bit slammed the past week. I ran the test and it went pretty well, though there were some struggles for sure. One of the biggest challenges for me was not instructing the participant when they ran into trouble. I found I helped the participant when they seemed blocked on a test task rather than asking them how they'd attempt to solve the problem, as I would on a non-screenreader test. Any advice on dealing with this instinct? Did I undermine my test by doing this? Commented Jul 26, 2016 at 18:41
  • However you conduct your testing, the key thing is to be as consistent as you can with the procedure so that you can eliminate potential variables that you introduce into the results. Depending on how many participants you have, you can either end the test when they get stuck (hopefully not right at the beginning) or have an alternate strategy, such as providing a cue rather than a direct answer. Then, if they are still stuck and you need them to move on, direct them to what they should do, but ask them why they got stuck.
    – Michael Lai
    Commented Jul 26, 2016 at 21:17
  • Oh gotcha. That makes a lot of sense. Thanks so much for all of that, Michael. Commented Jul 27, 2016 at 20:32
