Designing a website is more of an art than a science: there are countless ways to design a site that achieves a particular goal. We want our websites to become popular and make money, and once a site is launched, it cannot stay stagnant for long. But how do we know whether users will like a new design? The user base is critical, and losing it is risky; once users lose trust, it is very difficult to earn back. We want to take the guesswork out of website optimization and make decisions based on real data. By measuring the impact of each change, you can ensure that every change produces positive results. So how do we do it?
What is A/B testing?
A/B testing is a simple way to test changes to your webpage against the current design and determine which one produces better results. It is a method to validate that a new design, or a change to an element on your page, improves your conversion rate before you commit the change to your site code. A/B testing lets you show visitors two versions of the same page and let their behavior determine the winner. Constantly testing and optimizing your pages can increase revenue, donations, leads, registrations, downloads, and user-generated content, while providing valuable insight about your site visitors.
At its core, A/B testing is exactly what it sounds like: you have two versions of an element (A and B) and a metric that defines success (purchases, sign-ups, and so on). To determine which version is better, you run both versions simultaneously, splitting live traffic between them at random. At the end of the test, you measure which version was more successful and select that version for real-world use.
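To make this concrete, here is a minimal sketch of the two core mechanics described above: assigning each visitor to version A or B, and picking the winner by success rate. The function names and the hash-based assignment scheme are illustrative assumptions, not part of any particular testing product.

```python
import hashlib

def assign_variation(visitor_id: str) -> str:
    """Bucket a visitor into version "A" or "B".

    Hashing the visitor ID (rather than flipping a coin on every
    request) is one common choice: it keeps the split roughly 50/50
    while guaranteeing a returning visitor always sees the same version.
    """
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def pick_winner(trials: dict, successes: dict) -> str:
    """Given per-version trial and success counts, return the version
    with the higher conversion rate (successes / trials)."""
    rates = {v: successes[v] / trials[v] for v in trials}
    return max(rates, key=rates.get)

# Example: 1,000 visitors per version; B converted 65 vs. A's 50.
print(pick_winner({"A": 1000, "B": 1000}, {"A": 50, "B": 65}))  # → B
```

In a real test you would also check that the difference is statistically significant before declaring a winner, rather than comparing raw rates alone.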
Real world example
Imagine a company, XYZ, that operates a web store selling apparel. The company’s ultimate goal is to sell more apparel and increase revenue, so the checkout funnel is the first place XYZ will focus its optimization efforts. The “buy” button on each product page is the first element visitors interact with at the start of the checkout process. The design team hypothesizes that making this button more prominent on the page will lead to more clicks and therefore more purchases. The team makes the button red in variation 1 and leaves it grey in the original, then sets up an A/B test that pits the two variations against each other.
As the test runs, every visitor to the XYZ site is bucketed into a variation: visitors are divided equally between the red-button page and the original page. The design team measures the number of visitors who saw each version of the button and clicked it, as well as the number of visitors who completed the purchase funnel and landed on the final confirmation page.
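The measurement step above amounts to computing two conversion rates per variation: clicks on the button, and completed purchases. A short sketch, using made-up counts (the original text gives no numbers):

```python
# Hypothetical results from XYZ's test; all counts are illustrative.
results = {
    "original":   {"visitors": 5000, "button_clicks": 400, "purchases": 120},
    "red_button": {"visitors": 5000, "button_clicks": 520, "purchases": 150},
}

def rates(r: dict) -> tuple:
    """Return (click-through rate, purchase rate) for one variation."""
    return (r["button_clicks"] / r["visitors"],
            r["purchases"] / r["visitors"])

for name, r in results.items():
    ctr, purchase_rate = rates(r)
    print(f"{name}: click-through {ctr:.1%}, purchase {purchase_rate:.1%}")
```

With these illustrative numbers, the red button wins on both metrics; tracking both matters, because a button that attracts more clicks but not more completed purchases has not actually improved the funnel.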
Why do we need it?
Over the past decade, the power of A/B testing has become an open secret of web development. It is now the standard means by which websites improve their online products. With A/B testing, new ideas can essentially be tested in real time: without being told, a fraction of users are diverted to a slightly different version of a given web page, and their behavior is compared against the mass of users on the standard site. If the new version proves superior (more clicks, longer visits, more purchases, etc.), it replaces the original. If it is inferior, it is quietly phased out without most users ever seeing it. A/B testing allows seemingly subjective questions of design, such as color, layout, image selection, and text, to become matters of data-driven social science.
The results of A/B tests have lasting impact. Knowing which design patterns work best for your users lets you repeat winning test results across the site. Whether you learn how users respond to the tone of your content, your calls to action, or your layout, you can apply those lessons as you create new content. Data is also persuasive to decision-makers who are not designers. Finally, A/B tests can help prevent drops in conversion rate, alienation of the user base, and decreases in revenue.