Brief Guide to Iterating Effectively with A/B Testing

A/B testing lets you quickly compare different versions of a page to find which variation resonates best with your visitors. This guide will show you how to get started with A/B testing on your site.

A note on statistical significance

To get accurate and reliable results from A/B testing, you’ll need a certain number of visits and successful conversions per variation before a true winner can be declared. There’s plenty of literature on the topic, and a sample-size calculator can give you an idea of the numbers involved. If your site is small, be prepared to wait some time for accurate results.
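To get a feel for the numbers, here is a minimal sketch of the standard normal-approximation formula that sample-size calculators use under the hood. The baseline rate and hoped-for lift below are made-up example figures, not recommendations.

```python
import math

def sample_size_per_variation(baseline_rate, expected_rate):
    """Approximate visitors needed per variation for a two-sided
    two-proportion z-test at alpha = 0.05 with 80% power."""
    z_alpha = 1.96    # critical z for a two-sided 5% significance level
    z_beta = 0.8416   # z for 80% statistical power
    variance = (baseline_rate * (1 - baseline_rate)
                + expected_rate * (1 - expected_rate))
    delta = expected_rate - baseline_rate
    return math.ceil((z_alpha + z_beta) ** 2 * variance / delta ** 2)

# Example: a 5% baseline conversion rate, hoping to detect a lift to 6%
n = sample_size_per_variation(0.05, 0.06)
print(n)  # on the order of 8,000 visitors *per variation*
```

Note how quickly the required sample grows as the lift you want to detect shrinks — halving the detectable lift roughly quadruples the visitors you need.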

Typically when starting out it’s a good idea to run the Null Hypothesis Experiment (often called an A/A test). This is a quick and easy way to make sure that there are no unaccounted-for variables in your testing platform that could spoil your data. Create an experiment with only 1 variation: nothing. The purpose of this experiment is to have a variation that’s the exact same page as the control page, so the two groups should show no statistically significant difference. If an experiment with no change consistently reports a significant winner, there is something wrong with your testing platform!
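You can simulate this idea to see what a healthy A/A result looks like. The sketch below (assumed conversion rate and sample sizes are arbitrary) draws two groups from the same conversion rate and runs a two-proportion z-test on each simulated experiment:

```python
import math
import random

def aa_test_is_significant(rate, n):
    """Simulate one A/A test: two groups drawn from the SAME
    conversion rate, compared with a two-proportion z-test."""
    conv_a = sum(random.random() < rate for _ in range(n))
    conv_b = sum(random.random() < rate for _ in range(n))
    pooled = (conv_a + conv_b) / (2 * n)
    se = math.sqrt(pooled * (1 - pooled) * (2 / n))
    z = (conv_b / n - conv_a / n) / se
    return abs(z) > 1.96  # "significant" at the 5% level

random.seed(42)
trials = 200
false_positives = sum(aa_test_is_significant(0.10, 2000)
                      for _ in range(trials))
# By design, roughly 5% of A/A tests come up "significant" by
# pure chance; seeing far more than that on a real platform
# suggests the testing setup itself is skewing the data.
print(false_positives)
```

In other words, even a perfect platform will occasionally declare a false winner at the 5% level — the A/A test is about spotting rates far above that baseline.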

Figure out what’s worth testing

Come up with some questions you want to answer with the help of solid data. For example:

  • Would a value-proposition based headline lead to more conversions compared to a concise explanation of my product?
  • Would including my prices up front help anchor my prospective clients’ expectations or scare them away before I have a chance to make my case?
  • Should I move this big paragraph about merchandise returns to the bottom of the product page, giving more attention to our free shipping policy?

Inspectlet’s user session recording tool can be handy here, allowing you to record and watch your real visitors. Watching a few people use your site can help you find areas causing confusion and allow you to witness unexpected behavior that you want to correct.

Give it a go!

It’s time to run our experiments! We like to use Optimizely at Inspectlet, but you can use any tool you fancy. When you’re creating the experiments, keep in mind that more variations per experiment means you’ll have to wait longer before reaching statistical significance. Set up some goals that align with your business metrics to track how well each variation performs.
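The cost of extra variations can be sketched with back-of-the-envelope arithmetic. The visitor counts and daily traffic below are illustrative assumptions, not real figures:

```python
import math

def days_to_significance(visitors_per_variation, num_variations,
                         daily_visitors):
    """Rough wait time when traffic is split evenly across all
    variations (the control counts as one of them)."""
    total_needed = visitors_per_variation * num_variations
    return math.ceil(total_needed / daily_visitors)

# Assuming ~8,000 visitors needed per variation and 1,000 visitors/day:
print(days_to_significance(8000, 2, 1000))  # control + 1 variation: 16 days
print(days_to_significance(8000, 4, 1000))  # control + 3 variations: 32 days
```

Doubling the number of variations doubles the wait, so it often pays to test one focused change at a time.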

Iterate based on results

[Screenshot: split testing results]

After some time you should have empirical data on each question you wanted to answer. You can also use Inspectlet to watch user sessions of visitors experiencing different variations of your site.
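When reading the numbers yourself, a two-proportion z-test is a common way to check whether a variation’s lift is real or noise. This is a stdlib-only sketch; the conversion counts are invented for illustration:

```python
import math

def two_proportion_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing the conversion rates of the
    control (a) and a variation (b). Returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # p-value from the standard normal CDF via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Control:   400 conversions from 8,000 visits (5.0%)
# Variation: 480 conversions from 8,000 visits (6.0%)
z, p = two_proportion_test(400, 8000, 480, 8000)
print(z, p)  # a small p-value (< 0.05) means the lift is significant
```

If the p-value is above your significance threshold, keep the experiment running or treat the result as inconclusive rather than shipping the variation.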

Watching user sessions not only helps you understand why a specific variation performed the way it did in the numbers, but also lets you witness any other changes to the user experience caused by the variation.

Rinse, repeat

Congratulations on running an experiment! Once you’ve found an improvement over the baseline, you can push that change to all your visitors with confidence. Then form your next hypothesis and start the next experiment — the biggest gains come from iterating.
