Usability Tests

What is a Usability Test?

A Usability Test is a method used to evaluate how usable or intuitive your product is. This is done by observing and analyzing representative users as they perform specific tasks in your product. Usability Tests allow you to identify any usability problems, collect qualitative and quantitative data, and determine your users’ satisfaction with your product. It is a key part of the design process because it gives you direct feedback on how real users use your product.

There are two different types of Usability Tests: Quantitative Usability Tests and Qualitative Usability Tests. With Quantitative Usability Tests, you measure users’ performance on tasks to produce baseline metrics that serve as a benchmark, which can then be compared to the outcome of any new updates. With Qualitative Usability Tests, you note users’ actions and feedback as they work through tasks on the existing product, or even on early prototypes, giving you plenty of time to make big changes to your designs before any coding takes place.

Traditionally, you would observe the user in person while they perform these tasks so you could better see their reactions and expressions. Nowadays, since screen-sharing and video chatting have become standard, remote usability testing is becoming more popular, as it makes scheduling much easier (often the biggest challenge with this method).

For this method, we are just going to discuss usability tests in general. After you have read this article, I recommend reading the dedicated articles on Quantitative Usability Tests and Qualitative Usability Tests for the specific details of each.

What do you need for a Usability Test?

TIME

  • A few days for scheduling participants and setting up the tasks
  • 15 minutes to 1 hour to conduct the test

MATERIALS

  • A prototype or existing product
  • Recording equipment
  • Something to take notes on

How do you conduct a Usability Test?

Step 1: Determine what you want to test and why

You should always have a goal for your tests. Do you want feedback on an existing flow to establish a baseline of performance metrics? Or do you want feedback on a new flow you designed so you can make some informed changes before passing it off to the devs?

Step 2: Decide if you want moderated or unmoderated

Your next decision is whether the test will be moderated or unmoderated. A moderated test involves the active participation of a trained facilitator, while an unmoderated test is completed by participants in their own environment, without a facilitator present.

Step 3: Create your test plan and tasks

Next, you need to create your test script and tasks. This is where the biggest difference between Quantitative Usability Tests and Qualitative Usability Tests lies.

For Quantitative Usability Tests, your tasks should be specific and controlled. An example would be “Find the link to the support centre”, and could be measured by Success Rate and Time on Task. For Qualitative Usability Tests, tasks should be more open-ended. The comparative example would be “Find a way to get help on the site”, in which case users may look for a link to a support centre, or some may look for a phone number or chat dialog window.

Some other examples of a quantitative task would be:

  • Submit a help request from the checkout page
  • Go through the “forgot password” flow

And if we turn those quantitative tasks into qualitative tasks, the examples would be:

  • We would like you to pretend that you’re having problems paying for your item.
  • Let’s pretend that you have forgotten your password

Whatever type of test you create, come up with as many tasks as you need feedback on, as long as they fit within your timeframe.

Step 4: Schedule and recruit participants

Once you know what you want to test, you will want to find participants. Ideally, they are representative of your target audience and have the right amount of experience for the flows you are testing. For example, if you’re testing a new sign-up flow, you may want participants who have never used your product before. If you’re testing an advanced feature, you may need participants with a lot of experience with your product.

When it comes to the number of participants you need to recruit, this is where Quantitative Usability Tests and Qualitative Usability Tests differ again.

For Quantitative Usability Tests, you need a sample size of at least 35 to 40 participants to get statistically significant data; otherwise, the metrics will vary too much to be insightful.
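To see why small samples are a problem for quantitative metrics, consider the confidence interval around a measured success rate. The sketch below (my own illustration, not part of the method itself) uses a simple normal-approximation interval to compare the same observed 80% success rate measured with 5 participants versus 40:

```python
# Illustrative sketch: how sample size affects the reliability of a
# measured Success Rate. Uses a 95% confidence interval based on the
# normal approximation; the numbers are made up for demonstration.
import math

def success_rate_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% confidence interval for a success rate (normal approximation)."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - margin), min(1.0, p + margin)

# Same observed 80% success rate, very different certainty:
low5, high5 = success_rate_ci(4, 5)      # 5 participants
low40, high40 = success_rate_ci(32, 40)  # 40 participants
print(f"n=5:  {low5:.0%} to {high5:.0%}")    # a very wide range
print(f"n=40: {low40:.0%} to {high40:.0%}")  # a much tighter range
```

With 5 participants, the true success rate could plausibly be anywhere from roughly 45% to 100%; with 40, the interval shrinks to roughly 68% to 92%, which is precise enough to compare against a benchmark.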

For Qualitative Usability Tests, a good rule of thumb is to conduct your usability test with 5 participants. After that, the feedback you get will start to get too repetitive and it won’t be worth the time anymore. You can later iterate on your designs and test them again with 5 more participants.

Step 5: Have them go through the tasks and note their actions

Now is the big moment. Have your participant go through the tasks and observe their behaviour and actions along the way. Take note of the standard metrics, but also anything else that stands out. For Qualitative Usability Tests, you should also ask them to “think out loud”. This will give you better insight into what they are doing or trying to do.

Some examples of things to make note of can be:

  • Task completion time
  • Task success
  • Where they pause or struggle
  • Where they express delight
  • When they wanted help
  • What questions they asked you during the test
  • And anything else that seems interesting to you
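The observations above are easier to cross-reference later if you capture them in a consistent structure. Here is a minimal sketch of one way to record a single participant’s attempt at a single task; the field names are my own, not a standard template:

```python
# Illustrative sketch of a structured observation note for one
# participant's attempt at one task. Field names are my own choice.
from dataclasses import dataclass, field

@dataclass
class TaskObservation:
    participant: str
    task: str
    completed: bool                 # task success
    seconds: float                  # task completion time
    struggles: list[str] = field(default_factory=list)  # pauses, confusion
    delights: list[str] = field(default_factory=list)   # moments of delight
    questions: list[str] = field(default_factory=list)  # questions asked during the test
    notes: str = ""                 # anything else that stood out

# Example entry from a hypothetical session:
obs = TaskObservation(
    participant="P1",
    task="forgot-password flow",
    completed=True,
    seconds=94.0,
    struggles=["hesitated on the email confirmation screen"],
)
```

Keeping every note in the same shape means that, in Step 7, you can sort and filter them by task or participant instead of rereading free-form notes.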

Step 6: Ask questions

In between each task, or once all tasks are complete, you can bring in other research methods, like Quantitative Surveys and Qualitative Surveys, to ask some final questions. You can even conduct a small research interview with them; however, this can only be done in moderated tests.

When listening to their answers, it’s important to prioritize their actions in the tasks, rather than what they said in their responses. For example, they may tell you that they found the task easy, but in reality, they may have struggled for a long while at various points in the task.

Step 7: Analyze and report your findings

Analyze the findings from each task. Once you have conducted usability tests with all of your participants, you can cross-reference each participant’s findings with everyone else’s. This will surface any patterns or trends in the feedback and possibly point to deeper insights. It will also allow you to make design proposals based on hard numbers, instead of just your opinions.
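As a sketch of that cross-referencing step, assuming you logged success and time on task for each participant, the aggregation that produces those hard numbers can be as simple as this (the session data is made up):

```python
# Illustrative sketch: aggregating per-task findings across participants
# into per-task Success Rate and median Time on Task. Data is fabricated.
from statistics import median

# (participant, task, completed, seconds)
sessions = [
    ("P1", "submit help request", True,  62),
    ("P2", "submit help request", False, 140),
    ("P3", "submit help request", True,  75),
    ("P1", "forgot password",     True,  40),
    ("P2", "forgot password",     True,  55),
    ("P3", "forgot password",     True,  48),
]

def summarize(rows):
    """Group sessions by task, then compute success rate and median time."""
    by_task = {}
    for _, task, completed, seconds in rows:
        by_task.setdefault(task, []).append((completed, seconds))
    return {
        task: {
            "success_rate": sum(c for c, _ in attempts) / len(attempts),
            "median_seconds": median(s for _, s in attempts),
        }
        for task, attempts in by_task.items()
    }

for task, stats in summarize(sessions).items():
    print(f"{task}: {stats['success_rate']:.0%} success, "
          f"median {stats['median_seconds']}s")
```

A summary like this makes it obvious which task to prioritize in your report: here, the help-request task has both a lower success rate and a slower median time.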

Once you have this all in an easy-to-digest report, share your findings with your team and stakeholders. Make sure to include the main insights at the top, alongside any proposals you have on how to improve the designs before your next round of usability tests, or before you hand the designs over to the dev team.

Tips for a great Usability Test

  • If they ask you questions or for help during the tasks, try to avoid answering them until the end of the test. You can simply say “I’m sorry, but I can’t answer that right now, as I would like to see what you would do if I wasn’t here to help you, but I would be more than happy to answer any questions you have at the end of the test.”
  • Do a rehearsal usability test before you run one with an actual user. This lets you test all your equipment and see whether your script or tasks need adjusting to better fit within your timeframe. For the rehearsal, you can use anyone from your company.
  • Try your best to end on time. Some of these usability tests can be quite long, and running over the agreed time may deter participants from signing up again.
  • Schedule the test around your participant’s schedule. This should go without saying, but it’s better to inconvenience yourself instead of the participant, because you may want to use them again in the future.
  • Always remember that it’s more important to observe what they do, rather than what they say in response to your questions. Usability tests are about gathering observations over opinions, and what they say may drastically contradict what they actually did in the test.
  • If you don’t have time to design for each possible edge case, you may need a generic “under construction” page to present to the user; you can then steer them back on track if they hit that screen. Just make sure to note each time it happens, and if you can, ask them why they chose to go there.
  • Pressed for time? Nowadays, there are great online tools that facilitate quick usability testing. They range in pricing but they are definitely worth exploring.
  • Record everything! You will probably want to go over the recordings as you compile your final report, and you can also share snippets of the tests with others in the company since everyone loves user feedback!
  • If it can be done discreetly, invite others from your team to observe the session. Different people will get different observations out of the test and may catch things you didn’t.

More resources for Usability Tests