A/B Testing

What is A/B Testing?

A/B Testing, or Split Testing, is a method of testing one change on a page to see how it performs compared to the original. The idea is that you change only one variable, show the two versions to two similarly sized audiences, and analyze which one performed better over a specific period of time (long enough to draw accurate conclusions from your results).

Common things to A/B test are the wording of a headline, the color or placement of a button, or an image on the page. The change can be even larger, such as adding a tutorial flow to the page. The point is that your test has to be controlled and limited to the one thing you are measuring, so you can compare the metrics against the original version and keep the changes that have a positive impact. A/B tests are valuable to a business because they're low in cost but can be high in reward.

Two similar highlighter caps, side by side, positioned in the same way, but one is pink and the other is yellow.

What do you need for A/B Testing?

TIME

  • Preparation: 1 Day
  • Testing Time: 1-3 Weeks
  • Analysis: 30 minutes

MATERIALS

  • Your live site, product, or app
  • Software that facilitates A/B tests and that is compatible with your site (see below for links)

How do you run an A/B Test?

A person writing down notes in between two laptops displaying two different screens.
  1. Step 1: Determine your success metric

    This is very important, which is why it is the first step. It requires an understanding, or a baseline, of your current metrics. Look for areas that need improvement and target one of them. This may be time spent on your site, clicks on a button, sales, conversions, email signups, or the number of support tickets submitted. Keep it simple and pick only one. This metric is also known as your "dependent variable".
  2. Step 2: Determine what needs to be altered and tested

    Remember, to truly evaluate how effective a change is, you will want to isolate one variable and measure its performance; otherwise, you can't be sure which change was responsible for the difference in results. Some common changes are the wording of a headline or paragraph, button visibility or color, images, adding a tutorial flow, or anything else that you feel has been ambiguous to users. The element you decide to change here is also known as the "independent variable".
  3. Step 3: Formulate a hypothesis

    Formulate a hypothesis regarding how to fix your problematic page element. What do you think will happen to the metric you want to affect if you change X to Y? Write that out.

    Example:
    We believe that by changing our blue "buy now" button to a bright red, we will increase clicks on it by 15%.
  4. Step 4: Create a "control" and "challenger"

    Now you need to make two copies of your site, and apply the change to only one of them. The unaltered version of your site is called the "Control", and the one with the proposed fix from your hypothesis is called the "Challenger".
  5. Step 5: Split your sample groups equally and randomly

    Now you will want to split your sample groups equally and randomly to ensure you get the most unbiased and conclusive results. There are many tools out there that will help make this an easy step for you (see below for links).
  6. Step 6: Run the test on both versions

    You should run the test for however long it takes to reach a statistically significant number of users. Sites with heavy traffic may not need to run the test as long as sites with low traffic. Some A/B testing tools also let you cap the number or percentage of users who will see the altered version of your site. All of this can change how long your test runs, but when in doubt, run it for around 2 weeks. It's important to run the Control and the Challenger at the same time so the results are not skewed by timing or seasonality, for example.
  7. Step 7: Analyze the data and take action

    Now that the test is over, look at the data and compare it to your hypothesis. If the Challenger shows a meaningful positive change, keep it. If not, discard it and keep the Control.
  8. Step 8: Start another A/B Test!

    You're a pro now, so kick off another A/B test. Remember, these are easy to run, low risk, and high reward.
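
The 50/50 split described in Step 5 is normally handled by an A/B testing tool, but the underlying idea is simple enough to sketch. The snippet below is an illustrative Python sketch, not any specific tool's API; the experiment name and hashing scheme are assumptions made for the example:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "buy-button-color") -> str:
    """Deterministically assign a user to the Control or the Challenger.

    Hashing the user ID together with the experiment name gives each
    user a stable bucket (they see the same version on every visit)
    while spreading users evenly and pseudo-randomly across the groups.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a number from 0 to 99
    return "control" if bucket < 50 else "challenger"

# Assignments are stable: the same user always gets the same variant.
assert assign_variant("user-42") == assign_variant("user-42")
```

Hash-based assignment like this keeps each individual user in one group for the whole test while splitting traffic roughly 50/50 in aggregate, which is exactly the "equal and random" property Step 5 asks for.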

Tips for great A/B Testing

  • Make sure you're only running one test at a time on any campaign. If you run more than one, you may not be able to trace which one is affecting the metric you are tracking.
  • Do not stop the test as soon as you see a statistically significant difference between the two pages. Continue the test until its scheduled end. This is especially important if you plan to publish results or your research is funded with grant money.
  • Serve both variations simultaneously in order to account for seasonal variables.
  • Use an A/B testing tool. There are many out there that facilitate small changes on your site, with little-to-no dev work. You might even have some A/B Testing functionality on tools you are already using.
  • Serve the original version of the page to Google’s bots to avoid being penalized in search results.
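
The "statistically significant" judgment calls in Steps 6 and 7 and in the tips above can be made concrete with two standard formulas: a required sample size per variant, and a two-proportion z-test on the final counts. The sketch below uses only the Python standard library and assumes your success metric is a conversion rate; all of the numbers in the example are made up:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(base_rate, lift, alpha=0.05, power=0.8):
    """Visitors needed in EACH group to detect a relative lift."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. ~1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. ~0.84 for 80% power
    p1, p2 = base_rate, base_rate * (1 + lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Made-up planning example: baseline converts at 4%; we want to detect a 25% lift.
print(sample_size_per_variant(0.04, 0.25), "visitors per variant")

# Made-up results: Control converted 200/5000, Challenger 250/5000.
p_value = two_proportion_z_test(200, 5000, 250, 5000)
print(f"p-value: {p_value:.4f}")  # under 0.05, so the lift is unlikely to be chance
```

If the p-value comes out above your threshold (commonly 0.05), treat the result as inconclusive rather than as proof the change did nothing. Most A/B testing tools run an equivalent calculation for you.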
Two handheld gaming devices, side by side, positioned in the same way, but one is pink and the other is black.

More resources for A/B Testing
