A/B testing, also known as split testing, is a method for finding out what engages recipients the most. An A/B test consists of two variants of the same asset (e.g. a page on the website or an email), each shown or sent to a randomly assigned group. What differs between the variants depends on what you want to test, but it could, for example, be the subject line of an email or the CTA text on a button on a website page.
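Most email tools handle the random split for you, but as a rough illustration of the principle, here is a minimal sketch of random assignment in Python (the recipient addresses and group labels are made up for the example):

```python
import random

def split_ab(recipients, seed=42):
    """Randomly assign recipients to variant A or B, roughly 50/50."""
    shuffled = list(recipients)
    random.Random(seed).shuffle(shuffled)  # fixed seed so the split is reproducible
    midpoint = len(shuffled) // 2
    return {"A": shuffled[:midpoint], "B": shuffled[midpoint:]}

groups = split_ab(["ann@example.com", "ben@example.com",
                   "cara@example.com", "dan@example.com"])
print(groups)
```

The point is simply that assignment is random rather than based on any property of the recipient, so the two groups are comparable.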
You can actually test most things, but here are some examples and suggestions for A/B testing of email content:
To optimize open rate:
Subject line
Use of dynamic tags in the subject line
Preview text
Day of the week or time of day to send the email
Sending frequency
Sender (personal vs. general)
To optimize click rate:
The text of the CTA
The color of different elements such as the button
Longer vs. shorter content
Use of images
Use of dynamic tags
When conducting an A/B test, it is important not to test more than one element per test. If you test several things at the same time, you will not know which change improved or worsened your results, and it becomes impossible to keep optimizing your mailings.
Another important aspect is measurability: how will you measure, and what data will you use?
This may seem obvious, but it is common to set up a complex test and then not know what the result actually was.
For example: you want to optimize clicks on emails. Or do you? What you really want is to optimize conversion; clicks from the email merely generate traffic to the landing page. You can make the email pure click-bait and generate lots of clicks, but those visitors won't buy. If, on the other hand, you measure against the right target, you can see which variant generated the most conversions, regardless of the clicks.
There is also something called a control group: a randomly selected group that does not receive a particular campaign, or any marketing at all. The control group can act as the B in the A/B test or simply be a third group.
With the help of the control group, you can see whether the marketing you do helps or hurts. Could it be that the control group that did not receive a certain campaign actually converted better?
Multivariate testing is another way to test and optimize conversion. The difference between multivariate testing and A/B testing is that you compare more variables and see how they interact with each other. For example, it can look like this:
Variant A: button 1 and subject line 1
Variant B: button 2 and subject line 1
Variant C: button 1 and subject line 2
Variant D: button 2 and subject line 2
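In other words, a multivariate test covers every combination of the variables. A small sketch of how the combinations above could be generated (the variable names and labels are just illustrative):

```python
from itertools import product

subject_lines = ["subject line 1", "subject line 2"]
buttons = ["button 1", "button 2"]

# Every combination of the variables becomes its own variant.
for label, (subject, button) in zip("ABCD", product(subject_lines, buttons)):
    print(f"Variant {label}: {button} and {subject}")
```

Note that the number of variants grows quickly with each added variable, so a multivariate test needs considerably more recipients than a simple A/B test to produce a reliable result.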
It is important not to treat the result as the answer without questioning its statistical significance.
Statistical significance is the likelihood that the difference in results between the versions is not due to error or chance.
To summarize: you want the result to be statistically significant so that you know you are choosing the right version from your test to proceed with.
There are several tools for this online; we usually use this one.
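For a rough idea of what such a significance calculator does, here is a minimal sketch of a two-proportion z-test in Python, one common way to compare conversion (or click/open) rates between two variants; all the numbers in the example are made up:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z-score and two-sided p-value for the difference
    between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p_value

# Made-up example: variant A converts 120 of 2400 recipients, variant B 156 of 2400.
z, p = two_proportion_z_test(120, 2400, 156, 2400)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A p-value below 0.05 is a common, if somewhat arbitrary, threshold for calling the difference statistically significant.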
Which test is best for your purpose? A/B or multivariate?
Should there be a control group?
What should be tested?
How will you measure?
What is the objective of the test?
What will you do with the results?
When is the end date of the test?
How will you calculate the statistical significance of the result?
Do you have good documentation of which tests have been conducted, so that you build on your strategy instead of repeating tests of the same hypothesis?