
Testing, testing … Let’s set up your email testing program


Why is your email marketing program so successful?

Before you say anything, know the answer is not “Because we send a lot of email and a lot of people buy our stuff.” Don’t say that to anybody ever, because it’s wrong.

The really good marketers say, “We tested a whole lot of stuff and we keyed in to what our customers want.”

Smart marketers – the people I call First-Person Marketers – have testing down. They test everything they deploy in a campaign. Everything, that is, but subject-line testing.

As I’ve said in other blog posts, subject-line testing is worthless when it’s the only thing you test. If you aren’t testing anything else, you’re not really learning anything. After all, subject lines can’t tell you why your email program is working, and that’s what matters most.

You need to know why the things you do are successful. The “why” is how you make money or achieve other goals. Build your testing regimen around understanding exactly why you got the results you did.

Four tactics for successful testing 

You have to get testing figured out because it’s your job. If you’re starting from absolute zero, here are four guidelines to follow:

1. Set a reasonable testing timeline.

Define the time period that’s long enough to give you meaningful results. I work with marketers who test for a month, for a quarter, for six weeks – there’s no set length, but it has to be long enough to generate reliable results. Also, specify the beginning point and the end point.

For example, suppose you want to test button placement. That’s your starting point. The end point is when you can say, “I have the button in the right place.” How long that takes depends on your product and your customers, so it’s different for everybody.

It certainly takes more than one A/B split test, which could give you great results once but maybe not if you were to test again. Any test has to have repeatable results.

If you really want to make me angry, tell me “We tried that once. It didn’t work, so we didn’t try it again.” Maybe you did it wrong!
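If you have an analyst (or a spreadsheet-averse afternoon), the “could it happen again?” question has a standard answer: a significance check. Here’s a minimal sketch in Python using a two-proportion z-test; the send sizes and conversion counts are made up for illustration, not taken from any real campaign.

```python
import math

def conversion_lift_significant(conv_a, n_a, conv_b, n_b, z_threshold=1.96):
    """Two-proportion z-test: is the difference between variants A and B
    large enough to trust, or could it just be noise from one send?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return abs(z) >= z_threshold                      # ~95% confidence

# Hypothetical split: 10,000 recipients per variant
print(conversion_lift_significant(conv_a=200, n_a=10000,
                                  conv_b=260, n_b=10000))   # prints True
print(conversion_lift_significant(conv_a=200, n_a=10000,
                                  conv_b=210, n_b=10000))   # prints False
```

The second call is the trap from the paragraph above: variant B “won” by 10 conversions, but the difference is well within noise, so a repeat test could easily flip the result.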

2. Write it down and print it out.

Buy a binder and a hole punch. Label the binder “2017 Test Results.” Every time you run a test, write down the test you ran, the results you got and what the winner was. Add it to the binder along with copies of the emails you tested.

Keeping detailed records like this will help you stay organized, track results and know where you’ve been and how far you’ve come.

Testing is an essential aspect of incremental innovation, which is all about driving your program forward by building on change. Documenting your testing results allows you to track your progress towards your goals.

Right now, without leaving this post, tell me what you tested 12 months ago, what results you got and why you’re testing that particular thing. I’m betting maybe 9.5 out of 10 of you can’t do it. That’s why you don’t depend on memory.

3. Develop a focus group.

I talked about this tactic in my blog post about setting up a strategy day, but I’ll provide more detail here.

Get permission from your Customer Service group or your CMO to pick five or 10 customers from your active subscriber file. Set up regular phone interviews where you can solicit feedback one-on-one instead of sending surveys. A small budget to compensate your panel members helps, too.

It always surprises me to see how many companies don’t know how subscribers consume their emails. Adestra compiles a report every year about consumer attitudes toward email, but you also need to know how your customers feel about your emails.

I know I’ve said in the past to take consumer feedback with a grain of salt, but it’s good to understand what consumers think of your product communications. Any testing plan has to include feedback from the people who actually receive your emails.

4. Set up control groups.

I know – this is the scary part. Control groups scare people, either because they don’t understand them or they freak out at not emailing a specific group of people. Their argument is that you’ll lose money because those customers won’t see the email and won’t buy.

That might be true, but in order to find out whether your email program is successful, you need to see what people do when they don’t get the email.

Your control groups can be set for the long term or a short term, across all channels or just in email. Someone in your CRM group can help you build good control groups that balance the need for revenue against the need to understand what people are doing.

By holding out people from a campaign, you learn what they do when your email isn’t there to prod them into action. The variance between the two – recipients versus non-recipients – tells you how good your email program is at creating revenue.

Any testing plan has to have good control groups, whether you set them up at the campaign level or more broadly based on a longer timeline. No matter what, control groups are vitally important.
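The holdout math itself is simple: compare revenue per person in the mailed group against revenue per person in the control group. Here’s an illustrative sketch in Python; the group sizes and revenue figures are invented for the example, and a real analysis would also test whether the gap is statistically reliable.

```python
def email_program_lift(test_revenue, test_size, control_revenue, control_size):
    """Revenue per person in the mailed group minus revenue per person
    in the held-out group. The difference is the incremental revenue
    your email program actually created."""
    rpp_test = test_revenue / test_size
    rpp_control = control_revenue / control_size
    return rpp_test - rpp_control

# Hypothetical month: 95,000 mailed, 5,000 held out of email
incremental = email_program_lift(47500.0, 95000, 2000.0, 5000)
print(f"${incremental:.2f} incremental revenue per subscriber")
# prints: $0.10 incremental revenue per subscriber
```

Note that the control group still generated $0.40 per person without any email; crediting your program with the full $0.50 per mailed subscriber would overstate its value by a factor of five.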

The takeaway

Ideas for testing programs are everywhere. The methodology can be overwhelming. Focus first on working out your top-level strategies about how and why to test.

Getting away from subject-line testing will actually make you a smarter marketer because nobody is judged or paid on getting more opens. You need to know why people buy, register or sign up, not why they open.

Focus on testing because that’s what’s going to increase your revenue, improve your email performance and KPIs, and get you to the promised land.
