What is a Usability Test?
A Usability Test is a method used to evaluate how usable or intuitive your product is. This is done by observing and analyzing representative users as they perform specific tasks in your product. Usability Tests allow you to identify any usability problems, collect qualitative and quantitative data, and determine your users’ satisfaction with your product. It is a key part of the design process because it gives you direct feedback on how real users use your product.
There are two types of Usability Tests: Quantitative Usability Tests and Qualitative Usability Tests. With Quantitative Usability Tests, you measure participants’ performance on tasks to establish baseline metrics that serve as a benchmark, which can then be compared against the outcome of any new updates. With Qualitative Usability Tests, you note participants’ actions and feedback as they work through tasks on the existing product or even on early prototypes, giving you plenty of time to make big changes to your designs before any coding takes place.
Traditionally, you would observe the user in person while they perform these tasks so you could better see their reactions and expressions. Nowadays, since screen-sharing and video chatting have become standard, remote usability testing is becoming more popular, as it makes scheduling much easier (scheduling is often the biggest challenge with this method).
For this method, we are just going to discuss usability tests in general. After you have read this article, I recommend looking up the specific details of Quantitative Usability Tests here, and the details on Qualitative Usability Tests here.
What do you need for a Usability Test?
TIME
MATERIALS
How do you conduct a Usability Test?
Step 1: Determine what you want to test and why
You should always have a goal for your tests. Do you want feedback on an existing flow to establish a baseline of performance metrics? Or do you want feedback on a new flow you designed so you can make some informed changes before passing it off to the devs?
Step 2: Decide if you want moderated or unmoderated
Next, decide whether the test will be moderated or unmoderated. A moderated test involves the active participation of a trained facilitator, while an unmoderated test is completed by participants in their own environment without a facilitator present.
Step 3: Create your test plan and tasks
Next, you need to create your test script and tasks. This is where the biggest difference between Quantitative Usability Tests and Qualitative Usability Tests lies.
For Quantitative Usability Tests, your tasks should be specific and controlled. An example would be “Find the link to the support centre”, and could be measured by Success Rate and Time on Task. For Qualitative Usability Tests, tasks should be more open-ended. The comparative example would be “Find a way to get help on the site”, in which case users may look for a link to a support centre, or some may look for a phone number or chat dialog window.
Some other examples of a quantitative task would be:
- Submit a help request from the checkout page
- Go through the “forgot password” flow
And if we turn those quantitative tasks mentioned above into qualitative tasks, the examples would be:
- We would like you to pretend that you’re having problems paying for your item.
- Let’s pretend that you have forgotten your password
Whichever type of test you create, come up with as many tasks as you need feedback on, within the limits of your timeframe.
Step 4: Schedule and recruit participants
Once you know what you want to test, you will want to find participants. Ideally they are representative of your target audience and have the right amount of experience for the flows you are testing. For example, if you’re testing a new sign-up flow, you may want participants that have never used your product before. If you’re testing an advanced feature, you may need participants with a lot of experience with your product.
When it comes to the number of participants you need to recruit, this is where Quantitative Usability Tests and Qualitative Usability Tests differ again.
For Quantitative Usability Tests, you need a sample size of at least 35 to 40 participants to get statistically significant data; with fewer, the data will vary too much to be insightful.
For Qualitative Usability Tests, a good rule of thumb is to conduct your usability test with 5 participants. After that, the feedback you get will start to become too repetitive and it won’t be worth the time anymore. You can later iterate on your designs and test them again with 5 more participants.
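The sample-size guidance above can be illustrated with a quick margin-of-error calculation: for a binomial metric like task success rate, the confidence interval narrows roughly with the square root of the number of participants. A minimal Python sketch, using a normal-approximation (Wald) interval at a 90% confidence level (the function name and the chosen confidence level are illustrative assumptions, not a prescribed method):

```python
import math

def ci_half_width(success_rate: float, n: int, z: float = 1.645) -> float:
    """Half-width of a normal-approximation (Wald) confidence interval
    for an observed task success rate; z = 1.645 gives ~90% confidence."""
    return z * math.sqrt(success_rate * (1 - success_rate) / n)

# Margin of error for a 70% observed success rate at various sample sizes.
for n in (5, 20, 40):
    print(f"n={n:2d}: \u00b1{ci_half_width(0.7, n):.0%}")
```

With 5 participants the margin of error is around ±34 percentage points, while 40 participants bring it down to roughly ±12 — which is why quantitative tests need the larger sample.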
Step 5: Have them go through the tasks and note their actions
Now is the big moment. Have your participant go through the tasks and observe their behaviour and actions along the way. Take note of the standard metrics, but also anything else that stands out. For Qualitative Usability Tests, you should also ask them to “think out loud”. This will give you better insight into what they are doing or trying to do.
Some examples of things to make note of can be:
- Task completion time
- Task success
- Where they pause or struggle
- Where they express delight
- When they wanted help
- What questions they asked you during the test
- And anything else that seems interesting to you
Step 6: Ask questions
In between each task, or once all tasks are complete, you can bring in other research methods, like Quantitative Surveys and Qualitative Surveys, to ask some final questions. You can even conduct a small research interview, though this is only possible in moderated tests.
When listening to their answers, it’s important to prioritize their actions in the tasks, rather than what they said in their responses. For example, they may tell you that they found the task easy, but in reality, they may have struggled for a long while at various points in the task.
Step 7: Analyze and report your findings
Analyze the findings from each task. Once you have run usability tests with all of your participants, cross-reference each participant’s findings with everyone else’s. This will surface patterns or trends in the feedback and possibly point to deeper insights. It will also allow you to make design proposals based on hard numbers, instead of just your opinions.
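The cross-referencing step above can be sketched as a small aggregation script. The data shape here is a hypothetical example (one record per participant, keyed by task name), not a prescribed format:

```python
from statistics import median

# Hypothetical session notes: one dict per participant, keyed by task name.
sessions = [
    {"forgot_password": {"success": True,  "seconds": 48}},
    {"forgot_password": {"success": False, "seconds": 141}},
    {"forgot_password": {"success": True,  "seconds": 62}},
]

def summarize(task: str, sessions: list[dict]) -> dict:
    """Aggregate one task's results across all participants."""
    results = [s[task] for s in sessions if task in s]
    return {
        "participants": len(results),
        "success_rate": sum(r["success"] for r in results) / len(results),
        "median_seconds": median(r["seconds"] for r in results),
    }

print(summarize("forgot_password", sessions))
```

Running the same summary for every task makes patterns easy to spot, and gives you the hard numbers to back your design proposals in the report.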
Once you have this all in an easy-to-digest report, share your findings with your team and stakeholders. Make sure to include the main insights at the top, alongside any proposals you have on how to improve the designs before your next round of usability tests, or before you hand the designs over to the dev team.