Quantitative Surveys

Before we talk about quantitative surveys, we first need to talk about what surveys are in general.

Surveys are sets of questions distributed to users to gather answers about their behavior, background, or opinions. Surveys are typically unmoderated, quick, and cheap, and they are a relatively easy way to get responses from a large sample of your users with less effort than a full Usability Test. You can also distribute them in a variety of ways.

Surveys can be quantitative, qualitative, or a mix of both. For this method, we are going to talk about quantitative surveys, but you can find information on qualitative surveys here.

What is a Quantitative Survey?

Quantitative surveys are focused on getting you basic data points that will allow you to perform a statistical analysis of your respondents, which you can then use for reporting or for a comparison later on. For UX, quantitative surveys are a quick way of measuring the overall usability of your product, or the usability of a specific task or area within your product. Responses are quick to collect with large sample sizes, and the data obtained can be easily visualized and analyzed, which also makes Benchmarking an easy task.

There are some cons, though: quantitative surveys will not give you context, motivation, or the cause behind a response, which is why many people combine quantitative and qualitative questions in their surveys.

What do you need for a Quantitative Survey?

TIME

  • 15 minutes to a few hours to create a proper survey
  • Several days, possibly weeks, to collect responses

MATERIALS

  • A survey creation tool
  • A method to deliver the survey, usually found in the same survey creation tool

How do you conduct a Quantitative Survey?

Step 1: Understand the goals

First you have to decide what you want to collect data on. Are you trying to collect quantitative data on the overall usability of your entire product, or just on a specific task? Why are you trying to collect that specific type of data? Is this the first time you will be collecting data for your product? Maybe you plan on using this new data as a Benchmark, or maybe you already have some Benchmark data and plan on comparing this new data against it. Understanding this can affect the way you shape the questions or determine the right method of delivery.

Step 2: Decide on your target audience

Part of understanding your goals is also understanding who you want to collect this data from. There may be requirements in your goals that specify the type of user you should collect data from. They can be users that have gone through a specific flow in your app, users that have just recently signed up, or maybe users that match a specific demographic. Again, this can affect the way you shape the questions or determine the right method of delivery.

Step 3: Decide on the quantitative data you want to collect

Before you write out your questions, you should know what specific metrics you want to collect data for. Different goals will result in different metrics. Decide on what are the most important metrics for you and go from there. If you have Benchmark data already, you will want to collect data for the same metrics as that first benchmark.

Some examples of metrics for surveys that are focused on the overall usability of your product are:

  • Perceived ease of use/complexity
  • Perceived intuitiveness
  • Perceived consistency
  • Perceived learnability
  • Likelihood to use it again
  • Perceived performance
  • Satisfaction
  • And more…

Some examples of metrics for surveys that are focused on the usability of specific tasks are:

  • Perceived ease of use, per task. (Generally, that's it; however, this question is asked at the end of each task, which helps surface low-scoring tasks that may need adjusting.)

Step 4: Choose how to distribute the survey

Now you need to decide on how you’re going to get the data for these metrics from your target audience. The most popular methods are through a short intercept survey on a live website, via email, or after a usability test.

Depending on your goals and the audience you chose, some distribution methods may make more sense than others. For example, if you’re targeting users that have gone through a specific flow in your product, you may be able to get a list of everyone that has completed that flow and send them several questions via a mass email, or you may be able to set a trigger in your product that prompts them with an intercept survey the moment they complete that flow.

It’s also important to consider distribution before you write the questions because it could affect your questions too. For example, with intercept surveys, you may want to ask only one question; otherwise the survey may seem too lengthy and cause potential participants to close it immediately.

Step 5: Prepare the questions

To ensure the most accurate results with quantitative surveys, the questions must be asked in exactly the same way for all respondents and should be Close-Ended Questions. These are questions that ask respondents to choose from a distinct set of predefined responses (like ratings, scales, or yes/no responses).

Some examples of quantitative questions would be:

  • Question 1: “I thought the system was easy to use”
  • Response: 1 (strongly agree) | 2 | 3 | 4 | 5 (strongly disagree)
  • Question 2: “Overall, this task was:”
  • Response: 1 (very easy) | 2 | 3 | 4 | 5 (very difficult)

With questions like these, you can easily get measurable numbers; for example: “Amongst our 400 responses, this task had an average difficulty score of 4.2”.
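As a minimal sketch of how such a number falls out of close-ended responses, here is a short Python example. The response values are hypothetical, standing in for an export from your survey tool:

```python
# Hypothetical example: averaging close-ended responses for one task,
# rated on a 1 (very easy) to 5 (very difficult) scale.
from statistics import mean

# Simulated ratings from eight respondents; real data would come from
# your survey tool's export.
responses = [5, 4, 4, 5, 3, 4, 5, 4]

avg_difficulty = round(mean(responses), 1)
print(f"Amongst our {len(responses)} responses, this task had an "
      f"average difficulty score of {avg_difficulty}")
```

The same one-line aggregation scales to hundreds of responses, which is what makes quantitative questions so quick to analyze.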

There are also pre-made surveys you can use that help with Benchmarking a usability score for your product. They also allow you to compare that score against Industry Benchmarks or competitor scores, and you can repeat them over time to see if your product is improving. The SUPR-Q and SUS surveys are two that focus mainly on Usability and are worth checking out.
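To make the scoring concrete, here is a sketch of how a standard SUS questionnaire is scored: ten items answered on a 1 (strongly disagree) to 5 (strongly agree) scale, where odd-numbered items are positively worded and even-numbered items negatively worded, combined into a 0–100 score. The example answers are hypothetical:

```python
def sus_score(answers):
    """Convert one respondent's 10 SUS answers (each 1-5) to a 0-100 score."""
    if len(answers) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, a in enumerate(answers, start=1):
        # Odd items are positively worded: contribution is (answer - 1).
        # Even items are negatively worded: contribution is (5 - answer).
        total += (a - 1) if i % 2 == 1 else (5 - a)
    # The 0-40 raw total is scaled to a 0-100 score.
    return total * 2.5

# One hypothetical respondent's answers:
print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 4, 2]))  # a strong score
```

Averaging `sus_score` across all respondents gives the single usability number you can track over time or compare against published benchmarks.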

Step 6: Send the survey to your audience

At this point you have your goals, your target audience, your method of distribution, and your refined questions. Now you’re ready to send the survey out; just make sure you allow enough time to collect the results. Remember, you need a large enough sample size to achieve statistical significance (aim for over 100 participants), otherwise your data may vary too much to be credible. So send away!
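A rough way to see why sample size matters: the 95% margin of error for a sample proportion shrinks with the square root of the number of responses. The sketch below uses the standard worst-case formula (p = 0.5, z = 1.96); the sample sizes shown are just illustrative:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a sample proportion of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Quadrupling the sample size roughly halves the margin of error.
for n in (30, 100, 400):
    print(f"n={n}: \u00b1{margin_of_error(n):.1%}")
```

At around 100 responses the worst-case margin is roughly ±10%, which is why small samples can swing too much to be credible.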

Step 7: Analyze the responses

Once you have enough responses to achieve statistical significance, you can move on to analyzing the responses. There are many tools out there that can help you plot this information visually which will help with your analysis.
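Even without a dedicated tool, a basic analysis is just counting and summarizing. Here is a minimal sketch, with hypothetical 1–5 ratings, that prints summary statistics and a text histogram of the response distribution:

```python
from collections import Counter
from statistics import mean, median

# Hypothetical 1-5 ratings exported from a survey tool.
responses = [5, 4, 4, 5, 3, 4, 5, 4, 2, 4, 5, 3]

counts = Counter(responses)
print(f"mean={mean(responses):.2f}  median={median(responses)}")
for rating in range(1, 6):
    # One '#' per response at this rating.
    print(f"{rating}: {'#' * counts[rating]} ({counts[rating]})")
```

A distribution view like this surfaces things an average hides, such as a polarized split between very easy and very difficult ratings.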

Step 8: Report and share

Finally, turn this into an easy-to-digest report and share it with your team and stakeholders. Make sure to explain why this data is useful and what you plan on doing with it next. You can even debrief with your team or stakeholders and start coming up with plans to improve the metrics you collected.

Tips for a great Quantitative Survey

  • Don’t make your surveys long! Generally, the longer the survey, the less chance a participant will complete it. They may not even start it if it looks too long to begin with. There are exceptions, like if you have a pre-existing relationship with your target audience, or if they are die-hard users of your product, in which case they may take the time to complete a lengthy survey, but it’s rare. Just use your best judgment and try to keep it short.
  • If you can, first test your Survey with a small sample size to gauge your response rates. If response rates are low, consider iterating on the questions.
  • If you’re looking for responses from a specific persona, you may want to send out a few screener questions first. Then you can send out the actual Survey to anyone that meets your criteria, who should be more representative of your target audience.
  • In your analysis and report, remember that these numbers won’t give you context, and they may contain a response or recency bias. For example, respondents may say they found your product extremely easy to use when in reality they struggled in many places. So use your best judgment and take the data with however much salt you feel is needed.
  • You can also collect qualitative feedback in a quantitative survey, but it will add to the number of questions on your survey. Remember, quantitative surveys do not give you context, so a common method of getting that context is by putting a follow-up text field box after each question for the participant to fill in.

For example:

  • Question 1: “Overall, this task was:” (measures the usability of a task)
  • Response: 1 (very easy) | 2 | 3 | 4 | 5 (very difficult)
  • Question 2: “Please explain why you chose that response” (qualitative follow-up)
  • Response: (text field box)

Just remember that the longer your survey is, the less likely participants are to complete it, so make sure your questions are deliberate and necessary to achieve your goals.

  • It’s also good practice to randomize the options for responses in each question. This is typical functionality in most survey tools, and prevents results from being skewed by participants who select the first response for each question in order to get through the survey quickly.
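While survey tools usually handle this for you, the idea behind option randomization can be sketched in a few lines. Note that it applies to unordered multiple-choice options; ordered rating scales should keep their order. The options and function name below are hypothetical:

```python
import random

# Hypothetical unordered options for a multiple-choice question.
options = ["Search", "Navigation menu", "Homepage links", "Bookmarks"]

def options_for_respondent(opts, seed=None):
    """Return the options in a fresh random order for one respondent."""
    shuffled = list(opts)  # copy so the master list is untouched
    random.Random(seed).shuffle(shuffled)
    return shuffled

print(options_for_respondent(options))
```

Because each respondent sees an independent ordering, no single option systematically benefits from being listed first.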

More resources for Quantitative Surveys