
What A/B testing can’t tell you


Gal Zohar

5.29.2021

5 min read

According to research by VWO, most marketers see a statistically significant result from only one out of every seven A/B tests they run. By comparison, dedicated conversion rate optimization (CRO) agencies see significant results from one out of every three tests.


The reason? As Convert.com CEO Dennis van der Heijden explains in an article on CXL, dedicated CRO agencies have a better understanding of A/B testing’s role within the conversion optimization process than most marketers do.


Don’t get us wrong: A/B testing is a great tool for website optimization. Statistically, it’s one of the most reliable ways to validate the effectiveness of a specific page, CTA, or even a color story. But it’s not the cure-all many marketers think it is, and that misunderstanding costs marketers a great deal of time and money that could be spent on more valuable activities.


Learn the three common misconceptions about A/B testing and get more useful results from your tests.





A/B testing can’t tell you why users are or aren’t converting


An A/B test can tell you what action users are more likely to take when exposed to a certain variation of a webpage. But it doesn’t tell you why they’re more likely to take that action.


Let’s say you have an eCommerce store and decide to update the photos on one of your product pages to see if they convert better than your old photos. After a week of testing, total conversions (the number of people who click “Add to Cart”) go up 5% on the variation with newer photos.


This may seem like a win because you got more of the response you wanted. But what if conversions increased because of something circumstantial, such as an unusual number of your existing customers buying the product as a birthday gift for friends that same week?


Without understanding your users’ behavior, you can’t truly understand the value of a website change you made. It’s still possible that the conversion rate increase was caused by an external force:

  1. It could have been a coincidence. The conversion rate increase could be totally random and just appear to be caused by a change you made to your website (see the sketch after this list for how easily this happens).

  2. It could have been a deductive error. A website change you’re testing could have increased conversions, but not for the reason you think it did.

  3. It could have been a co-dependence. The conversion rate increase could have been the result of both a website change you made and a second variable you’re unaware of.
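To make the coincidence risk concrete, here’s a minimal sketch (in plain Python, with hypothetical traffic numbers) of a two-proportion z-test, a standard way to check whether a lift like that 5% could just be random noise:

```python
# A minimal sketch (hypothetical numbers) of a two-proportion z-test,
# the standard check for whether an observed lift is real or noise.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z-score and two-sided p-value for conversion rates A vs. B."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
    z = (conv_b / n_b - conv_a / n_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))    # normal CDF tail
    return z, p_value

# One hypothetical week: 2,000 visitors per variation, 10% vs. 10.5%
# conversion, i.e. a 5% relative lift like the product-photo example above.
z, p = two_proportion_z_test(conv_a=200, n_a=2000, conv_b=210, n_b=2000)
print(f"z = {z:.2f}, p = {p:.2f}")  # z = 0.52, p = 0.60
```

A p-value of 0.60 is nowhere near the conventional 0.05 cutoff, so that 5% lift is statistically indistinguishable from chance.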





A/B testing can’t tell you how to maximize conversions


A/B testing can tell you if one version of a webpage is better than another version. But what it won’t tell you is if either of those versions is the best version you could create.


For example, let’s say you A/B test the layout on one of your product landing pages and find that moving the FAQ section higher up on the page increases conversions by 3%. That’s great! But what if you could increase conversions by 15% by adding a vital question and answer that you didn’t know was missing from the FAQ?


The A/B test wouldn’t show you that — only customer research would.


This is especially important when you start talking about the element of a page that has the biggest impact on conversion rates: your offer. (“Your offer” is whatever a person gets in return for taking the action you want them to take, such as your product, your lead magnet, or the information you send them in an email.)


As Ramit Sethi, an online business expert, explains from his experience with A/B testing, “I learned that I can eke out an extra 5-15% from improving the subject line… or 500% from creating a better offer.”


In other words, the conversion rate you can achieve by optimizing the call to action (CTA) on a landing page is capped by how desirable the lead magnet you’re offering is. The same goes for email: even if your subject line is perfect, your open rate is limited by whether subscribers care about your email’s topic.




Simply put, an A/B test can help you validate the best optimization opportunities, but it doesn’t reveal them to you. In order to maximize conversions, you need to perform additional customer research and dig through historical data to find the right things to test.





A/B testing can’t always validate a website change


An A/B test might not be able to validate a website change at all, depending on the size of your business or the context of the results of your testing.


To run a successful A/B test, you need enough conversions to reach statistical significance; otherwise, you can’t determine whether your results are valid. But as Peep Laja from CXL explains, a lot of businesses run A/B tests when they shouldn’t because they don’t have enough conversions to validate their results:


“Roughly speaking, if you have less than 1,000 transactions (purchases, signups, leads, etc.) per month — you’re gonna be better off putting your effort in other stuff,” Laja says.
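To put a rough number on that rule of thumb, here’s a minimal sketch of the textbook sample-size formula for a two-variant test at 95% confidence and 80% power; the baseline rate and target lift below are hypothetical:

```python
# A minimal sketch (hypothetical baseline and lift) of the standard
# sample-size formula for a two-variant A/B test.
def visitors_needed(baseline_rate, relative_lift, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors required per variation at 95% confidence, 80% power."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2

# Detecting a 10% relative lift on a 2% baseline conversion rate:
print(round(visitors_needed(0.02, 0.10)))  # roughly 80,000 per variation
```

At a 2% baseline conversion rate, detecting even a 10% relative lift takes on the order of 80,000 visitors per variation, which is exactly why low-traffic sites rarely get valid results from A/B tests.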


However, even if you do reach statistical significance, an A/B test still might not validate a website change, because statistical significance doesn’t guarantee that the change actually helps your business.


For example, an A/B test on one of your product pages may show an increase in the number of people who add that product to their shopping cart. But that doesn’t mean those people went on to complete their purchase and increase sales, even if you have enough transactions to reach statistical significance.





You can’t rely on A/B testing alone


Research by MarketingSherpa shows that the businesses that use historical data to formulate their A/B tests see greater success with their testing than those that don’t.


Van der Heijden confirms this from his experience with his clients: “Many of the most successful agencies using [Convert.com] spend an extensive amount of time researching consumer behavior to understand what is most important to the customer & why certain behaviors take place.”


So if you want to ensure that your A/B tests actually help you answer business-critical questions, you need to conduct additional customer research before you start testing.


Some of our favorite ways to research user behavior include:

  1. Google Analytics. Website traffic and behavioral data that can help you understand how people navigate your website and where they may be getting hung up.

  2. Heatmaps. Seeing which parts of a page attract the most attention helps you better understand what customers care about on your website’s pages.

  3. Session Recordings. By recording a user’s website visit, you’ll be able to see exactly how they move around your site or a specific page and determine how you can better optimize it.

  4. Customer Surveys. Customers make purchase decisions for qualitative reasons, and engaging with them directly to learn about those reasons is often the best way to find opportunities to improve your site and offer(s).


