
A/B/n Testing: Trial-and-error Marketing

01/24/23

This or that? It’s a question we struggle with on a daily basis because choice is hard! But when it comes to deciding which campaign to run or which website layout to choose, A/B/n-testing gives you the hard data you need to make tough decisions. Here’s a top 10 list of dos and don'ts to help get you started.

What is A/B/n-testing? 

A/B/n-testing is a process that adds much-needed science to the way we engage with our audiences online. In essence, it is an experiment: two or more variants of a page are shown to different users. Throughout the testing period, data is collected and statistically analyzed to determine which version yields the greatest impact and drives business metrics. For example, this lets you accurately identify the best-performing version of a campaign landing page.
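To make the mechanics concrete, here is a minimal Python sketch, not tied to any particular testing tool and with purely illustrative names, of how visitors might be split across variants and how conversion rates are compared afterwards:

```python
# Minimal sketch of an A/B/n split: each visitor is deterministically bucketed
# into one variant, conversions are recorded, and conversion rates are compared
# per variant after the test period. All names are illustrative.
import hashlib
from collections import defaultdict

VARIANTS = ["A", "B", "C"]  # hypothetical page variants ("n" can be any number)

def assign_variant(visitor_id: str) -> str:
    # Hash-based bucketing so a returning visitor always sees the same variant
    bucket = int(hashlib.md5(visitor_id.encode()).hexdigest(), 16) % len(VARIANTS)
    return VARIANTS[bucket]

visits = defaultdict(int)
conversions = defaultdict(int)

def record_visit(visitor_id: str, converted: bool) -> None:
    variant = assign_variant(visitor_id)
    visits[variant] += 1
    if converted:
        conversions[variant] += 1

def conversion_rates() -> dict:
    # Compare variants only on data collected during the test period
    return {v: conversions[v] / visits[v] for v in VARIANTS if visits[v]}
```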

While A/B/n-testing can be time-consuming, its advantages are enough to offset the time investment. After all, if you are going to drive traffic to a page through paid media campaigns, you want to make sure that the page is optimized. If not, you might be wasting valuable resources. This kind of conversion rate optimization (CRO) activity goes hand-in-hand with activities that drive traffic to a site, such as paid search or other longer-term campaigns.

Do's

Set up test-specific tracking points if needed

A quantitative analysis is only as good as the data it is based on, and A/B/n-testing is no exception. Imagine going through an entire A/B/n-testing process – from a conversion hypothesis to analyzing the final results – and then realizing that the test KPIs weren't measured in a relevant way. At that point, the only option might be to redo the test altogether. To avoid this, make sure business-relevant KPIs, such as clicks on an important call-to-action button, can be tracked and set as the test goal in your testing tool.
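As an illustration, a test-specific tracking point can be as simple as a dedicated event for the CTA click. The sketch below is hypothetical Python logic, not the API of any specific analytics or testing tool:

```python
# Hypothetical sketch of a test-specific tracking point: the business-relevant
# KPI (a click on an important call-to-action) is captured as its own event so
# it can later be set as the goal metric in the testing tool. Names are made up.
from collections import defaultdict

events = []

def track(visitor_id: str, name: str, **props) -> None:
    events.append({"visitor": visitor_id, "event": name, **props})

# Fired whenever the important CTA button is clicked on a test variant
track("visitor-123", "cta_click", button_id="request-demo", variant="B")
track("visitor-456", "cta_click", button_id="request-demo", variant="A")

# Goal completions per variant can then be counted from the same event log
goal_counts = defaultdict(int)
for e in events:
    if e["event"] == "cta_click":
        goal_counts[e["variant"]] += 1
print(dict(goal_counts))  # {'B': 1, 'A': 1}
```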

Use heatmaps for qualitative test analysis

The quantitative results tell us which variant performed best, but sometimes the reason why isn't clear. At that point, the quantitative analysis can be complemented with a qualitative analysis based on heatmaps. A heatmap illustrates where most of the activity takes place on a webpage; for example, it shows how far people scroll down a page, which is particularly important if you have a conversion goal at the bottom of the page. Heatmaps provide qualitative data in the sense that they don't show the conversion rate, but they add context to why certain actions are or aren't occurring.

Heatmaps are quite easy to use in your analysis if you have the right tools in place. Therefore, this is something to consider when you are purchasing a testing tool, e.g. if there is a heatmap integration option.

Set up a multivariate test to narrow down potential conversion issues

When starting a test, we always want a clear test hypothesis. But what happens if you don't have one explaining why a certain page is underperforming? If a high-traffic page is underperforming for unclear reasons, a multivariate test can help you narrow down the likely causes. Unlike an A/B/n-test, which compares different versions of a whole page, a multivariate test compares changes on a more detailed level. You create different versions of several page sections, and the testing tool combines them and shows which combination performs best. This method can be used to discover which section of the page requires change, and it can therefore inspire future A/B/n-testing activities as well.
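To illustrate how the combinations grow, here is a small Python sketch with made-up section names; in practice, the testing tool generates and serves these combinations for you:

```python
# Minimal sketch of how a multivariate test expands into combinations: with two
# hero images and three headlines, the tool serves 2 x 3 = 6 combinations and
# reports which one converts best. Section names and variants are hypothetical.
from itertools import product

sections = {
    "hero_image": ["product_photo", "lifestyle_photo"],
    "headline": ["benefit_led", "feature_led", "question"],
}

combinations = [dict(zip(sections, combo)) for combo in product(*sections.values())]
for combo in combinations:
    print(combo)

print(f"{len(combinations)} combinations to test")  # 6
```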

Test your changes for different devices and internet speeds

We often set up A/B/n-tests from a powerful computer with a stable internet connection, but visitors to the test page arrive on a range of devices, sometimes without optimal internet speed. To find out how visitors on different devices will actually experience your test, preview the test variants and resize the browser window to match different devices. Slower internet connections and limited CPU performance can also be simulated in popular browsers such as Google Chrome. This ensures that your changes load as expected for the test participants, which in turn ensures a reliable test result.

Think about the test target group

When A/B/n-testing is performed, we often include all visitors who accept cookie consent. But what if we only aim to improve performance for certain target groups? Say conversion rates are low among mobile visitors overall, as well as among desktop visitors from certain paid search campaigns; in that case, we shouldn't necessarily target all visitors. To get quicker results, we can set up one test targeting mobile users and one targeting desktop visitors from paid search campaigns. These tests can have entirely different hypotheses, which lets us improve the mobile version of the website while also gathering results for future personalization towards paid search campaign target groups. In short, it allows us to build a website adapted to our target groups.
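As a rough illustration, the audience split described above might look like this in Python; the field names, values and audience labels are assumptions, not tied to any particular testing tool:

```python
# Hypothetical sketch of the audience split: one test targets all mobile
# visitors, another targets desktop visitors from paid search campaigns
# (here identified by a utm_medium of "cpc"). Field names are made up.
from typing import Optional

def audience_for(visitor: dict) -> Optional[str]:
    if visitor.get("device") == "mobile":
        return "mobile_test"
    if visitor.get("device") == "desktop" and visitor.get("utm_medium") == "cpc":
        return "paid_search_desktop_test"
    return None  # visitor is not included in either test

print(audience_for({"device": "mobile"}))                            # mobile_test
print(audience_for({"device": "desktop", "utm_medium": "cpc"}))      # paid_search_desktop_test
print(audience_for({"device": "desktop", "utm_medium": "organic"}))  # None
```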

Now that we have covered our top five do's, let's explore the don'ts!

Don'ts

Don’t formulate too broad a test hypothesis

When it comes to A/B/n-testing, it is very important to remember to only test one change or several very closely related changes at a time. Otherwise, it might be very difficult to draw a conclusion regarding why a certain version of a page is performing especially well. If you want to test more than one hypothesis, you can create a multivariate test and create one test hypothesis per change. See more about multivariate testing under point three of the “dos” list.

Don’t use the same best practices for all target groups

Don’t be afraid to challenge general best practices when formulating your A/B-test hypotheses for specific target groups – just as people are individuals, target groups are unique. Therefore, general best practices will not work for all target groups, in fact, they might harm performance.  Some websites are focused on selling a product, while others are tailored to building brand awareness, and this requires very different conversion strategies. For example, do you want your audience to stay on the website and engage, fill in their email address or quickly buy a product? These end goals require very different strategies.

Don’t end your test before statistical significance

A common mistake when A/B/n-testing is to end a test too early. At that point, even a small number of individual users can skew the test results. You need both a large enough sample of visitors and a large enough conversion rate difference between the variants to reach a statistically significant result. If you stop testing too soon, you are only making a qualified guess without statistical backing.

Some A/B-testing tools have built-in statistical significance calculators, but there are also many available online.
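If you want to sanity-check the numbers yourself, a two-proportion z-test is one common approach. Here is a minimal Python sketch using statsmodels; the visitor and conversion counts are made up:

```python
# Minimal sketch of a significance check with a two-proportion z-test
# (statsmodels). The visitor and conversion counts below are made up.
from statsmodels.stats.proportion import proportions_ztest

conversions = [120, 152]   # conversions for variant A and variant B
visitors = [4000, 4010]    # visitors exposed to variant A and variant B

stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"p-value: {p_value:.3f}")

if p_value < 0.05:
    print("The difference is statistically significant at the 95% confidence level.")
else:
    print("Not significant yet - consider letting the test run longer.")
```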

Don’t compare results from different date ranges

When evaluating results, one might be inclined to compare page results over time and attribute increasing or decreasing page conversions to specific changes. This can be a big mistake since pages do not exist in a vacuum, and general website performance as well as non-website specific factors can always have an impact on how individual pages perform. The starting point should be to A/B/n-test all major changes with an even traffic split and only compare data from the test date range to ensure that you are getting a fair comparison between page variants.
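As a small illustration, restricting the evaluation to the test window might look like this in Python with pandas (the column names and dates are assumptions):

```python
# Sketch of restricting the comparison to the test date range only, so both
# variants are evaluated under the same traffic conditions. Column names and
# example data are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "date": pd.to_datetime(["2023-01-02", "2023-01-10", "2023-02-01"]),
    "variant": ["A", "B", "A"],
    "converted": [1, 0, 1],
})

test_start, test_end = pd.Timestamp("2023-01-01"), pd.Timestamp("2023-01-31")
in_test = df[df["date"].between(test_start, test_end)]

# Conversion rate per variant, computed only from the test period
print(in_test.groupby("variant")["converted"].mean())
```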

Don’t decide the end date at test start

One of the most common mistakes when A/B/n-testing is to decide both the start and end dates before launching the test. As previously mentioned, ending a test without statistically significant results is a problem, and if an end date is fixed from the start, a significant result might never be achieved. So keep an open mind regarding the test end date and prioritize a reliable test result over meeting a strict timeline.

The importance of A/B testing can’t be overstated – it’s a marketing department’s best friend! After all, working out what your audience responds to, and whether you are meeting your end goal, is crucial for the success of your business.

Discover more about A/B testing, contact Nordic Morning

By Pontus Eklund Technical Digital Analyst