
A/B testing for results that matter


One of the best things about email marketing is the fast feedback. With A/B tests, you can get results back in as little as an hour, and the right tweaks can make your metrics soar. The trick is testing for the right results, which takes a little forethought. Follow this step-by-step guide to really increase those KPIs.

A/B split testing is the missing piece in your results puzzle

1. Start with what you know.

Look at your data. That’s your reports, surveys, visitor metrics and customer responses. What is your audience saying? Are click rates down? Fewer opens? Higher unsubscribes?

2. Pick a goal.

For example, getting more people to click through. You need to be able to compare results afterwards, which is why KPIs (key performance indicators) get used so much. Focus on just one area of dissatisfaction at a time. With email’s speed and responsiveness, it’s easy enough to run a series of tests without getting bogged down in multivariate testing.

3. Align your test with your goal.

Your open rates will be tied to your subject line and preheader, so you would focus on those aspects for that goal. If it’s the click rate, you would be looking at your layout and calls to action (typically a button or hyperlink). If you are concerned about people leaving your list, you might create an unsubscribe journey, asking them why, or giving them a “pause” option on your emails.

4. What’s your hypothesis?

For example, “I think fewer people are clicking through because we have a vague call-to-action.” Or, “I think fewer people are clicking through because they are not scrolling down to the hyperlink.” Be specific.

5. Build your A/B test.

This is the fun bit. Obviously, your test needs to address your hypothesis, but you also get to be creative about how you do it. You may well come up with a dozen ideas! Choose just one, but keep the rest for later. I won’t go into the nitty-gritty of building the actual test as it varies from platform to platform, but it’s a routine feature for popular email service providers, and help is readily available.

6. Run it.

How large should your sample size be for an accurate split test? This is where statistical analysis comes into play. I recommend reading Campaign Monitor’s excellent article on the A/B split testing ratio, but the short answer is: the smaller your goal number, the larger your sample size needs to be, to allow for false positives. So, if you are testing against an industry-average 21% open rate, you would need around 1,066 subscribers for a statistically significant result.
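
If you would like a feel for the arithmetic behind figures like that, here is a minimal sketch using the standard margin-of-error sample-size formula, n = z² × p(1 − p) / e². The 95% confidence level (z ≈ 1.96) and the ±3 percentage-point margin of error are assumptions chosen for illustration; your email platform or the Campaign Monitor article may use slightly different inputs, which changes the exact number.

```python
import math

def sample_size(p, margin=0.03, z=1.96):
    """Subscribers needed to measure a rate p to within +/- margin
    at roughly 95% confidence (z = 1.96).
    Margin-of-error formula: n = z^2 * p * (1 - p) / margin^2"""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# A 21% open rate measured to within +/- 3 percentage points
print(sample_size(0.21))   # 709
# The conservative worst case (p = 0.5) gives 1068, close to the ~1,066 quoted above
print(sample_size(0.5))    # 1068
```

Treat the result as a floor rather than a target; in practice you will often want at least this many recipients in each variant.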

7. Repeat.

Congratulations! You have a result (even if it turns out not to be the one you expected), and you can now repeat the cycle to increase your chances of success.
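
If you want to sanity-check a winner yourself before trusting it, a two-proportion z-test is one common way to ask whether the gap between variant A and variant B could just be noise. A minimal sketch, with made-up open counts purely for illustration:

```python
import math

def two_proportion_z(opens_a, sent_a, opens_b, sent_b):
    """How many standard errors apart are the two variants' open rates?"""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    return (p_b - p_a) / se

# Hypothetical test: variant A opened 224 of 1068 sends, variant B 267 of 1068
z = two_proportion_z(224, 1068, 267, 1068)
print(f"z = {z:.2f}")   # |z| above ~1.96 suggests significance at ~95% confidence
```

If |z| comes out well below 1.96, treat the “winner” as a coincidence and stick with your control.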

Your mantra: “Beat the Control”.

This was the watchword of Brian Kurtz, a highly successful direct mail giant of the 1980s. It was costly to post out promotional letters and brochures, so every single campaign was carefully balanced and tested. The best campaign for that product line became the control, and the marketing team’s aim was to beat the control’s results the next time around.

The speed, ease and economy of email marketing have made us all lazy. If you can switch to a direct marketer’s mindset, where every send is significant, your emails will improve.
