When Is A/B Testing Not Really A/B Testing? Understanding the Differences Between A/B Testing and Before-and-After Testing

There’s this tool out there – I won’t name names – that’s started claiming it can do A/B testing. Basically, you tell the tool when you’ve made a change to your content, and it monitors your Google Search Console data before and after that change. Then, it tells you how your content performed pre-change and post-change. They’re calling this an A/B test.

Now, I don’t want to knock what they’re doing entirely. In fact, at Keywords People Use, we do something quite similar. But here’s the difference: we don’t call it A/B testing because it’s not. It’s actually before-and-after testing. And there’s a significant difference between the two.

Understanding Before-and-After Testing

Let’s start with before-and-after testing because it’s probably the easier of the two to get your head around. Essentially, you make a change to your webpage – maybe you update a headline, redesign a button, or add some new content. Then, you compare performance metrics like click-through rates or conversions from before the change and after the change.

For example, suppose you add an image and some text to a page, and the following week, you notice your conversions have increased by 20%. Brilliant news, right? Well, possibly.
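
Mechanically, the comparison really is that simple. Here's a minimal Python sketch of the idea, using made-up daily conversion counts in place of real analytics data:

```python
from statistics import mean

# Hypothetical daily conversions, split around the date the change
# went live (real numbers would come from your analytics tool).
conversions_before = [50, 48, 52, 55, 47, 51, 49]  # week before
conversions_after = [60, 58, 62, 66, 57, 61, 59]   # week after

avg_before = mean(conversions_before)
avg_after = mean(conversions_after)
lift = (avg_after - avg_before) / avg_before * 100

print(f"Average daily conversions before: {avg_before:.1f}")
print(f"Average daily conversions after:  {avg_after:.1f}")
print(f"Change: {lift:+.1f}%")
```

With these invented numbers, the script prints a lift of about +20%. It looks convincing, but the code knows nothing about what else happened during those two weeks, and that's exactly the problem.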

The problem with before-and-after testing is that it doesn’t account for external factors that might influence your results. There are all sorts of variables that could be at play:

Seasonality: Your after-data might coincide with a holiday period or a major event in your industry.

Traffic Changes: Perhaps you ran an ad campaign, or maybe there was a spike in organic traffic because someone influential linked to your site.

Algorithm Updates: Google might’ve rolled out a core update or tweaked its algorithms in a way that affects how your content ranks, entirely unrelated to your changes.

Before-and-after testing assumes that the only variable impacting your results is the change you made. But in reality, the digital landscape is full of noise: variables that have nothing to do with the content or design of your page. So, while before-and-after testing can give you a basic indicator, you can't confidently attribute the success (or failure) to the changes you've implemented.

So, What About A/B Testing?

A/B testing, sometimes called split testing, is a bit more rigorous. In an A/B test, you create two versions of a webpage:

Version A: The original page as it currently exists.

Version B: A variation with the changes you want to test.

You then show these versions to different segments of your audience simultaneously. The key here is the simultaneous part. By running both versions at the same time, you control for external factors like seasonality or sudden traffic spikes, because both versions are exposed to the same conditions.
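
To make the simultaneous part concrete, here's a rough Python sketch of how a testing tool might split traffic. The assign_variant function and the visitor IDs are invented for illustration; this isn't any particular tool's API:

```python
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Deterministically assign a visitor to variant A or B."""
    digest = hashlib.sha256(visitor_id.encode("utf-8")).digest()
    return "A" if digest[0] % 2 == 0 else "B"

# Hypothetical visitor IDs; over many visitors the split approaches
# 50/50, and each visitor always gets the same variant.
for vid in ["visitor-101", "visitor-102", "visitor-103", "visitor-104"]:
    print(vid, "->", assign_variant(vid))
```

Hashing the visitor ID rather than flipping a coin on every request means a returning visitor keeps seeing the same version, which keeps the comparison clean.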

Now, here’s where it gets tricky in the realm of SEO and page rankings. When it comes to testing whether a change positively impacts your ranking on Google, it’s virtually impossible to do an A/B test at the individual page level. Google will only rank one version of a page at a time; it won’t have two different versions of the same page appearing in search results.

However, there’s a workaround if you’ve got a site with templates across many similar pages – say, an e-commerce site with numerous product pages. You can leave half of your pages on the current template and update the other half to the new template. Then, you wait for Google to re-crawl and index the updated pages and monitor how they perform as a group compared to the unchanged pages.
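
As a rough illustration of that workaround, here's a Python sketch that splits a set of hypothetical product URLs into a control group (kept on the old template) and a variant group (moved to the new one):

```python
import random

# Hypothetical product URLs on an e-commerce site.
urls = [f"/products/item-{i}" for i in range(1, 101)]

# Shuffle with a fixed seed so the split is random but reproducible,
# then assign half of the pages to each template.
random.seed(42)
shuffled = random.sample(urls, k=len(urls))
midpoint = len(shuffled) // 2
control_pages = shuffled[:midpoint]  # keep the current template
variant_pages = shuffled[midpoint:]  # get the new template

print(f"{len(control_pages)} control pages, {len(variant_pages)} variant pages")
```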

But even then, variables can creep in. Some pages might have better internal links or more backlinks than others, which can skew your results. Tools like SearchPilot can help with this kind of testing, but they can be pricey and somewhat complex to implement.

Key Differences Between Before-and-After Testing and A/B Testing

To sum up, here are the main distinctions:

Timing: Before-and-after testing happens sequentially (one after the other), whereas A/B testing happens simultaneously.

Control Groups: Before-and-after testing lacks a control group, making it harder to attribute changes directly to your modifications. A/B testing includes a control (Version A), allowing for a direct comparison.

Reliability: Before-and-after testing is more prone to false positives and negatives because it doesn’t account for other influencing factors. A/B testing can deliver statistically robust results, assuming you’ve got enough traffic and run the test for long enough (see the significance-test sketch after this list).

Complexity: Before-and-after testing is simpler to set up – no special tools required. A/B testing needs specific tools to split traffic and analyse results, which can add complexity and cost.
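
For the statistical side of that reliability point, here's a minimal two-proportion z-test in Python. The visitor and conversion numbers are entirely hypothetical:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical results for each version of the page.
visitors_a, conversions_a = 5000, 250  # Version A (control)
visitors_b, conversions_b = 5000, 300  # Version B (variation)

rate_a = conversions_a / visitors_a
rate_b = conversions_b / visitors_b

# Pooled two-proportion z-test.
pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
z = (rate_b - rate_a) / std_err

# Two-sided p-value from the standard normal distribution.
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"Conversion rate A: {rate_a:.2%}, B: {rate_b:.2%}")
print(f"z = {z:.2f}, p-value = {p_value:.4f}")
```

With these made-up numbers, the p-value lands just under 0.03, below the conventional 0.05 threshold, so you'd treat the lift as unlikely to be pure chance. A before-and-after comparison can't give you that kind of assurance.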

When Should You Use Each Method?

Before-and-after testing is still quite valuable in many scenarios, especially when:

– You can’t split traffic for technical reasons.

– You’re testing something obvious or significant, like launching a new website design or changing your pricing model.

– You need a rough idea of performance without requiring statistical rigour.

On the flip side, A/B testing is ideal when:

– You need precise, reliable insights about how specific changes impact user behaviour.

– You’re testing incremental changes, like tweaking button colours or headlines.

– You have sufficient traffic to split between versions and gather meaningful data.

Pitfalls to Avoid in Both Methods

Regardless of the method you choose, there are common pitfalls to watch out for:

Jumping to Conclusions: Don’t be too hasty in attributing results solely to your changes. Always consider other factors that might have influenced the outcome.

Stopping Tests Too Soon: Especially in A/B testing, make sure your test has run long enough to reach statistical significance; otherwise, you might be making decisions based on incomplete data. A rough sample-size estimate, like the sketch after this list, helps set expectations up front.

Testing Too Many Changes at Once: Keep it simple. If you test multiple changes simultaneously, you won’t know which one had the impact.

Ignoring Audience Segmentation: In A/B testing, ensure your traffic split is random and representative. Sending different types of users to each version can skew your results.
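
Here's that sample-size sketch. It uses the common normal-approximation formula for comparing two conversion rates; the baseline and target rates are hypothetical, and the output is a ballpark figure, not a guarantee:

```python
from statistics import NormalDist

def sample_size_per_variant(p_base, p_target, alpha=0.05, power=0.8):
    """Rough visitors-per-variant estimate for a two-proportion test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # significance level
    z_beta = NormalDist().inv_cdf(power)           # statistical power
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    return (z_alpha + z_beta) ** 2 * variance / (p_base - p_target) ** 2

# Hypothetical: detecting a lift from a 5% to a 6% conversion rate.
n = sample_size_per_variant(0.05, 0.06)
print(f"Roughly {n:,.0f} visitors needed per variant")
```

Needing roughly eight thousand visitors per variant to detect a one-point lift explains why smaller sites struggle to get reliable A/B results, and why before-and-after testing often ends up being the pragmatic choice.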

Understanding the difference between before-and-after testing and A/B testing is crucial. Each has its place, and knowing when to use which method can save you time and resources while providing more accurate insights.

At Keywords People Use, we rely on before-and-after testing for our Google Search Console integration. When you're changing content on a single page, with no pool of similar pages to test against, it's often the most practical method. You watch the graphs, see whether they trend upwards or downwards after your changes, and interpret the results with factors like seasonality and algorithm updates in mind.

So next time you’re looking to test changes to your site, take a moment to consider which method suits your needs best. It can make all the difference in understanding the true impact of your efforts.
