A/B testing guide
What is A/B Testing?
The purpose of A/B testing is to estimate the effectiveness of different approaches running simultaneously, giving you enough statistical information to make a data-backed decision on which strategy works best. An A/B test compares a control version of a customer experience against an optimized variant and measures the success of the newly introduced changes. With version A and version B of the same shopper experience running side by side, you can identify the superior tactic, since the success metrics of each strategy are available for you to analyze.
2Checkout enables you to compare different aspects of the shopper experiences you offer to your customers, such as:
- Purchase flows
- Promotions
- Up-selling campaign placements
- Download Insurance Service
What should you test?
We advise you to test everything from purchase flows to designs, styles, copy text, forms, content, navigation, links, images, pricing strategies, cross-selling campaigns, and so on. All of these are fair game when it comes to A/B testing.
Equally important, this tactic helps you avoid rolling out an update that could negatively impact conversion rates. Optimizations might look good on paper, but A/B testing can quickly tell you how well your shoppers respond to them. In the worst-case scenario, if the impact is not the one you expected, you have only sacrificed a portion of your overall traffic.
A/B testing best practices
- It's critical that you run the test scenarios simultaneously. Without parallel testing, the comparison results lose their value. Make sure to split traffic between the scenarios you're testing and you'll easily be able to tell what modifications generate the desired results.
- Minimize the number of changes between versions A and B. This way, A/B testing will provide accurate insight into the efficiency of a specific modification over another, and you'll find it easier to tell which variations work for your customers.
- Don't shy away from complexity if it's required. While keeping the rule above in mind, when comparing different cart designs and purchase flows you won't really be able to keep variables to a minimum, and you shouldn't try to if the scenarios you're testing are designed to measure more than just a specific modification.
- Take your time when A/B testing. While A/B tests can be time-limited, it's best to ensure that you gather enough statistics to draw the right conclusions. A/B testing data needs to reach statistical significance, the point at which you can comfortably shut the test down and start analyzing the data. Don't draw conclusions based on small sample sizes; allow your tests to run for at least a full week before making any decisions. This helps you rule out seasonality and out-of-your-control factors that might influence your traffic and, ultimately, the test's confidence level.
- Reach a consistent traffic volume. Along with the time factor, statistical significance also requires a volume of shoppers large enough to be relevant, so you can accurately measure the success of your test (see the sample-size sketch after this list).
- Restrict traffic when necessary. Classic A/B testing scenarios split traffic equally between two test scenarios, which makes it simpler to compare the results and to assess the accuracy of the success metrics and the efficiency of the experiment against the baseline control version. Still, you can redirect only a portion of the overall traffic to the variable scenario. Restricting traffic might prove necessary in certain situations, such as limiting the impact of a change, or when you need to test a collection of variations simultaneously rather than just two.
- Run A/A/B tests. This type of test involves testing a variant against itself in order to identify whether the data you obtain from the test is accurate or not. Large differences between identical variations may point out problems with your tests.
- Define the control variation to be identical to your current setup. The best way to identify the most efficient changes is to compare them against your current scenario.
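To put a number on "large enough", a standard two-proportion power calculation can estimate how many visitors each variant needs before a given conversion-rate lift can reach significance. The sketch below is a generic estimate with illustrative inputs, not a 2Checkout calculation; the baseline rate, detectable lift, and thresholds are assumptions.

```python
# Generic sample-size estimate for a two-proportion test; the inputs below
# (baseline rate, detectable lift, significance, power) are illustrative
# assumptions, not 2Checkout defaults.
from scipy.stats import norm

def sample_size_per_variant(baseline_rate, relative_lift,
                            alpha=0.05, power=0.80):
    """Visitors needed in each variant to detect the given relative lift."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)   # conversion rate we hope to see
    p_bar = (p1 + p2) / 2
    z_alpha = norm.ppf(1 - alpha / 2)          # two-sided significance threshold
    z_power = norm.ppf(power)
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_power * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# A 3% checkout conversion rate and a 10% relative lift need on the order
# of 50,000 visitors per variant, hence the advice above to let tests run.
print(sample_size_per_variant(0.03, 0.10))
```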
A/B testing statistical significance
You can use one of the many free tools available online to calculate A/B testing significance. This helps you understand when the statistics gathered as part of a campaign become relevant.
The Split and A/B Testing Significance Calculator can calculate conversion rates and a significance score, and can even recommend sample sizes based on the volume of traffic and the number of conversions you're getting. When the significance score reaches at least 95% for the variations you're comparing to the control version, the statistics you've gathered become relevant.
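As an illustration of the kind of calculation such a tool performs, the sketch below compares a control and a variant with a chi-squared test on a 2x2 contingency table; the visitor and conversion counts are invented, and any given online calculator may use a different method.

```python
# Illustrative significance calculation: a chi-squared test on a 2x2 table
# of conversions vs. non-conversions. The traffic figures are made up.
from scipy.stats import chi2_contingency

def significance(control_visitors, control_conversions,
                 variant_visitors, variant_conversions):
    table = [
        [control_conversions, control_visitors - control_conversions],
        [variant_conversions, variant_visitors - variant_conversions],
    ]
    _, p_value, _, _ = chi2_contingency(table)
    return 1 - p_value                      # 0.97 reads as 97% significance

control_rate = 120 / 4000                   # 3.0% conversion on the control
variant_rate = 156 / 4000                   # 3.9% conversion on the variant
score = significance(4000, 120, 4000, 156)
print(f"{control_rate:.1%} vs {variant_rate:.1%}: {score:.1%} significance")
# Only act on the comparison once the score reaches at least 95%.
```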
2Checkout A/B Testing experiments
2Checkout's proprietary A/B testing technology enables you to run campaigns that fall under the following categories:
- Conversion rate experiments
- Average Order Value experiments
Some A/B testing campaigns are only available if the specific features involved in the testing process are enabled for your account, such as DIS.
Important: Returning visitors who have been redirected to one experiment will always land on the same experiment for as long as the A/B testing campaign is running. 2Checkout uses cookies to prevent them from accessing different variations of your site, which avoids confusion and keeps the statistics relevant.
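2Checkout handles this assignment automatically; purely as an illustration of the general sticky-assignment pattern, the sketch below maps a visitor deterministically to a variation so that return visits land on the same experience. The cookie name, traffic split, and function names are hypothetical.

```python
# Illustrative sticky-assignment pattern; 2Checkout implements this for you,
# and the variation names, split, and cookie name here are hypothetical.
import hashlib

VARIATIONS = [("control", 50), ("variant_b", 50)]   # traffic split in percent

def assign_variation(visitor_id: str) -> str:
    """Deterministically map a visitor to a variation.

    Hashing the visitor ID keeps the split stable, so a returning visitor
    always lands on the same variation for the lifetime of the campaign.
    """
    bucket = int(hashlib.sha256(visitor_id.encode()).hexdigest(), 16) % 100
    cumulative = 0
    for name, weight in VARIATIONS:
        cumulative += weight
        if bucket < cumulative:
            return name
    return VARIATIONS[-1][0]

# The result would typically be stored in a cookie (e.g. "ab_variation")
# so the same experience is served on every subsequent visit.
print(assign_variation("shopper-42"))
```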
Success Metrics
You can evaluate the test results by looking at the success metrics highlighted by the 2Checkout platform:
- Success rate
- Total revenue (sales volumes)
- Average order value
- Confidence level
What type of shopper interactions does 2Checkout track?
The reports produced by the tests refer to Finished orders and Success rate. The figures available under each of these categories represent only finished orders as interpreted by the A/B test.
Important: The finished orders interpreted by the A/B tests are not the same as the Complete orders in the 2Checkout platform.
Note: It's important to underline that 2Checkout A/B testing is focused on the purchase funnel starting with the checkout page (a worked example follows the list below):
- If a shopper reaches the checkout page but doesn't place an order, the A/B test counts the test and takes the order's value into account, but marks it with a 0% success rate.
- If a shopper completes the purchase process and actually places an order, the A/B test adds a new finished order and its corresponding value, regardless of whether the actual payment goes through or not.
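Here is a small worked example of these counting rules with made-up cart values; treating revenue and average order value as coming only from placed orders is a simplifying assumption of this sketch.

```python
# Made-up checkout sessions as (cart_value, order_placed); every session
# that reaches checkout counts toward the test, but only placed orders
# count as finished orders. Revenue and AOV below follow a simplified
# reading of the rules above.
sessions = [(49.0, True), (99.0, False), (49.0, True), (19.0, False)]

finished_orders = sum(1 for _, placed in sessions if placed)        # 2
success_rate = finished_orders / len(sessions)                      # 2 / 4 = 50%
total_revenue = sum(value for value, placed in sessions if placed)  # 98.0
average_order_value = total_revenue / finished_orders               # 49.0

print(success_rate, total_revenue, average_order_value)
```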
How to configure, edit and run A/B tests
- Go to the A/B Testing section under the Marketing tools menu.
- Click the New campaign button under the desired experiment type area, if available. Alternatively, you can configure an existing campaign by clicking the Edit button.
- Configure the campaign details in the Campaign info section. Give each campaign a unique name to make it easier to identify. Define the maximum number of tests you want to run, as well as an end date for the test. Entering 0 (zero) for the maximum number of tests and leaving the end date field empty allows the campaign to continue indefinitely until you manually stop it.
- The Unallocated traffic section informs you about the amount of available traffic that you can assign to the campaign's variants.
- Add variations to the campaign. Specify the volume of traffic that you want redirected to each variation, making sure to cover 100% of the unallocated traffic when splitting it between variations. You can create multiple variations per campaign, although most A/B testing campaigns compare only two. Click the Add variation button when you're done defining each variation. Before you can start a campaign, it's mandatory to define a control variation.
- Hit the Back to campaigns button once you're done configuring the campaign. You can edit the campaigns only before you start them. You cannot edit Running or Finished campaigns.
- When you're done configuring a campaign, hit the Start campaign button to give the green light to the testing process and watch the statistics pile up. All ongoing campaigns are available in the A/B testing area under the Active campaigns tab. You can run a single campaign for each of the available experiments.
F.A.Q.
- What type of traffic does 2Checkout count in A/B tests?
- 2Checkout's A/B testing mechanism counts traffic coming from desktop platforms. The only mobile traffic included in A/B testing campaigns is the traffic directed to A/B testing campaigns for promotions.
- Who can use 2Checkout's A/B testing features?
- The A/B testing feature is enabled by default for 2Monetize accounts. The feature is also available for 2Sell and 2Subscribe accounts (if they have add-ons), as well as for Growth Edition accounts.
- How does 2Checkout determine the winning variant?
- 2Checkout determines the winner of any test based on the confidence level of each variant. To qualify as a winner, a variant needs to reach a minimum confidence level of 95%. If multiple variants have an equal confidence level above 95%, the one with the higher average order value (AOV) is declared the winner. Winning variants are determined based on the Chi-squared distribution statistical model (a simplified sketch of these rules follows this FAQ).
- How does 2Checkout determine the confidence level of a variant?
- 2Checkout uses advanced statistical calculations to determine the confidence level of a variant by taking into account multiple metrics, such as your current conversion rate and the current sample size (the amount of traffic that lands on each variant). The calculations do not take into account any recommended sample size.
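Putting the last two answers together, the winner-selection rules can be summarized as in the sketch below; the confidence levels and AOV figures are invented, and the snippet is a simplified reading of the FAQ rather than the platform's internal implementation. The chi-squared confidence calculation itself is sketched in the statistical significance section above.

```python
# Simplified reading of the winner rules from the FAQ: a variant must reach
# a 95% confidence level to qualify, and equal confidence levels fall back
# to the higher average order value (AOV). All figures are illustrative.
variants = {
    "B": {"confidence": 0.97, "aov": 42.00},
    "C": {"confidence": 0.97, "aov": 47.50},
    "D": {"confidence": 0.91, "aov": 55.00},   # below 95%, cannot win
}

qualified = [name for name, v in variants.items() if v["confidence"] >= 0.95]
winner = max(
    qualified,
    key=lambda name: (variants[name]["confidence"], variants[name]["aov"]),
    default=None,   # no winner until at least one variant reaches 95%
)
print(winner)       # "C": tied with "B" on confidence, higher AOV
```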