The word testing can mean many things, from RSpec or Cucumber, which exercise the validity and robustness of a code base, to visual testing tools that find the hot spots on a website. The kind of testing I am going to touch on here is A/B testing.
So what is it?
Some of the major players in the web industry practice this on a regular basis. Amazon.com and Netflix are two that come to mind; both regularly put different layouts or calls to action in front of a given user. They have gotten so good at it that, as a regular user of both services, I find it hard to tell when I am in a test bucket.
In short, it is the process of showing two different versions of a product and seeing which one performs better. Performance is tracked through conversions, which can be anything from click-throughs to increased traffic to a lower bounce rate. Whatever the measurement is, it pays to agree up front on the key metric that decides which version wins.
A/B testing can go even deeper through multivariate testing, that is, adding a C version, a D version, and so on. The catch there is understanding how the individual variations combine to affect the overall result. For this blog entry, I am just going to cover a quick how-to guide on getting up and running with A/B testing.
Sounds great, where do I start?
Before you start changing up the layout of your application or website, it is best to have a plan in place. A plan helps get everyone's buy-in on how you will decide which option wins. A good starting point is to create a decision tree of the testing process. Once your plan is set, think about how you will know when to mark a test as complete. And of course, you will need a good testing framework to make all of this run.
Set guidelines
At Real Travel, I created a simple decision tree that outlined the steps in the process: forming a new hypothesis, testing it, gathering results, and making a decision. This helped not only the team but also the CEO understand why we were choosing option A or B. Creating a good visual representation that everyone can use for reference saves a lot of back-and-forth time.
Find your result set
Figure out how and when you will know that a test is complete. At Real Travel, our money-making pages are also our highest-traffic pages, so it made sense to test those. We would mark a test complete if one of the following was met:
- Enough participants in the pool — about 1,000
- A test runs for 24 hours
- Our testing framework tells us that the results are statistically significant — this must always be met
Of course, we ran some tests that never met the significance requirement because our conversion rate was so low, even though the 24-hour window had passed or we had enough participants. When that happened, we decided to go with our hypothesis rather than fall back on the null hypothesis, knowing we would revisit the test at another time. That is why having a statistically significant result is a major factor in determining a winner.
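Our framework made the significance call for us, but to make that third requirement concrete, here is a rough sketch of the kind of two-proportion z-test that sits behind it. This is plain Ruby written for illustration; the method name and the sample numbers are made up.

```ruby
# Rough sketch of a two-proportion z-test, the kind of check an A/B testing
# framework runs before declaring a result statistically significant.
# Method name and numbers are illustrative only.
def z_score(conversions_a, participants_a, conversions_b, participants_b)
  rate_a = conversions_a.to_f / participants_a
  rate_b = conversions_b.to_f / participants_b
  pooled = (conversions_a + conversions_b).to_f / (participants_a + participants_b)
  standard_error = Math.sqrt(pooled * (1 - pooled) *
                             (1.0 / participants_a + 1.0 / participants_b))
  (rate_b - rate_a) / standard_error
end

# |z| >= 1.96 is roughly 95% confidence for a two-sided test.
z = z_score(40, 1000, 62, 1000)
puts "z = #{z.round(2)}, significant at 95%? #{z.abs >= 1.96}"
```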
Figure out your conversion trigger
For an A/B test to be successful, there has to be a measurable conversion. A conversion can be anything from the click of a button to increased traffic to more signups.
Again, using Real Travel as an example: we make money when people click our check rates buttons, so that click was our conversion trigger. Tests of different layouts, presentations, colors, and so on were all tracked through clicks on those buttons. In many of our A/B tests we were able to achieve up to a 4x improvement in conversion rate.
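In practice, the trigger usually maps to a single controller action or click handler. Here is a hedged sketch of the shape this took for us; the controller, route helper, and test name are made up, and `bingo!` is the conversion-scoring helper from ABingo, the framework covered in the next section.

```ruby
# app/controllers/rates_controller.rb -- controller, route helper, and test
# name are placeholders. The check rates button points at this action, so
# every click through it can be scored as a conversion before the redirect.
class RatesController < ApplicationController
  def check
    bingo!("check_rates_copy")  # score a conversion for this test (ABingo)
    redirect_to hotel_partner_rates_path(params[:hotel_id])  # placeholder route helper
  end
end
```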
Install a testing framework
There are many frameworks available depending on your code base of choice. Since we develop in Ruby on Rails, we decided to go with ABingo because of its simplicity and quick statistics. I say it is a simple install, but we did have to make a few changes to get it working with our development environment.
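To give a feel for it, here is a minimal sketch of declaring a test in a view. `ab_test` is ABingo's helper for picking (and sticking with) an alternative per visitor; the view, test name, alternatives, and route helper are made up for illustration.

```erb
<%# app/views/hotels/show.html.erb -- test name, copy, and route are examples %>
<% ab_test("check_rates_copy", ["Check rates", "See tonight's deals"]) do |label| %>
  <%= button_to label, check_rates_path(@hotel) %>
<% end %>
```

Pair that with the `bingo!` call from the earlier snippet and ABingo tracks participants and conversions for each alternative, along with a read on how confident you can be in the difference.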
A few other frameworks for Ruby on Rails include:
- Vanity
- Google Analytics - a little more involved to set up
Last, but probably pretty important
To really make these tests work to your advantage, the key point to remember is to have an environment set up for continuous release, that is, one that lets you push your code to production at a whim.
My good friend Paul implemented this at Real Travel with Hudson and deploy scripts. Without it, we would not have been able to turn tests around in 24 hours.
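The exact setup will vary, but the shape of it is just a CI job that runs the test suite and then calls a deploy task on a green build. As a rough sketch only (the host, paths, and task name below are placeholders, not our actual scripts):

```ruby
# lib/tasks/deploy.rake -- a stripped-down example of the kind of task a
# Hudson job might invoke after a green build. Host and paths are placeholders.
namespace :deploy do
  desc "Push the latest code to production and restart the app"
  task :production do
    host = "deploy@app.example.com"
    sh %(ssh #{host} "cd /var/www/app && git pull origin master && touch tmp/restart.txt")
  end
end
```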
Testing can be fun
Once everything is set up, dream up some interesting A/B tests to run. You will be surprised at what you learn from them. As I mentioned, we were able to increase conversion rates with simple changes to the UI. These tests might also put to rest the crazy ideas from your marketing team when they say they want the logo bigger…