A few weeks back, we released what we believe to be one of the most extensive pieces of research done into subject lines for marketing emails. If you’ve not read it yet, you can download it here.
When we undertook the research, we thought the results were interesting… but we had no idea just how interesting they would turn out to be.
As it turned out, the report generated a fair amount of coverage from news sites and blogs around the internet. For example, here, here, here, here, here, here, here, here, here, here, here, here, here… and perhaps most interestingly, here.
Some of the responders did indeed note that our results looked at correlations. And, as we all know, correlation is not causation.
I’m sure you’ve heard this phrase before, but let me explain what it means in case it’s unclear. Causation is when variable A causes B to happen. Correlation is when A and B tend to occur together, whether or not one actually causes the other. For example, umbrella sales in London are high in November, and there is also a lot of rain. Sales and rainfall are clearly correlated, and here the rain probably does cause the sales, but the correlation alone can’t prove that (although it often seems that the one time you leave your umbrella at home, it starts raining!)
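To make the umbrella example concrete, here is a minimal Python sketch of how you would measure that correlation. The monthly figures below are entirely made up for illustration; only the Pearson correlation formula itself is standard.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical monthly figures for London: rainfall (mm) and umbrella sales
rainfall  = [55, 40, 35, 45, 50, 45, 45, 50, 50, 70, 80, 75]
umbrellas = [310, 250, 230, 260, 280, 255, 250, 270, 275, 360, 420, 390]

print(f"r = {pearson_r(rainfall, umbrellas):.2f}")
```

Note that a correlation coefficient near 1 tells you the two series move together; it says nothing, on its own, about which one causes the other.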
So I will admit, our research looked at correlation. But this is not a bad thing. Let me explain why.
What email marketers ultimately care about is increased response. With the tools available to us, repeated and frequent testing is the best method to achieve this. But, in a competitive marketplace, there are so many potential causal variables that it is impossible to isolate the exact factors that will drive response. A certain level of educated guessing is ultimately required.
For example, our study found that longer subject lines, in general and across a massive sample size, tend to be correlated with increased open rates. This doesn’t mean that if you use longer subject lines your results will automatically go up. What it does mean is that by testing long vs. short subject lines you can work out whether subject line length is a causal or merely correlative variable for your list, and then carry on testing other elements.
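As an entirely hypothetical illustration of that kind of test, here is a minimal Python sketch of a one-tailed two-proportion z-test on open rates. The send and open counts are invented; the statistical machinery (pooled proportion, z-statistic, normal approximation via the error function) is standard.

```python
import math

def one_tailed_z_test(opens_a, sends_a, opens_b, sends_b):
    """One-tailed two-proportion z-test: is variant A's open rate higher than B's?"""
    rate_a = opens_a / sends_a
    rate_b = opens_b / sends_b
    # Pooled open rate under the null hypothesis of no real difference
    pooled = (opens_a + opens_b) / (sends_a + sends_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (rate_a - rate_b) / se
    # One-tailed p-value: chance of a difference at least this large
    # if the true open rates were actually equal
    p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return z, p_value

# Hypothetical split test: 50,000 sends per arm, long subject lines (A)
# opened slightly more often than short ones (B)
z, p = one_tailed_z_test(11_000, 50_000, 10_700, 50_000)
print(f"z = {z:.2f}, one-tailed p = {p:.3f}")
```

Even a p-value like this doesn’t prove length *caused* the lift on your list; it just tells you the difference is unlikely to be pure noise, which is exactly the kind of “good enough” evidence the rest of this post argues for.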
But I do feel that the most important element of our research was the part that focused on the content of the subject lines. Which words are positive, and which are negative? Once again, perhaps the results are correlative. But does it matter?
The key point is this: to run a truly rigorous testing programme, you would, in a perfect laboratory world, isolate all but one variable. In email marketing, however, this is inherently impossible.
Back when I was roaming the mean streets of suburban Vancouver, I majored in marketing statistics for my degree. I had a professor, Gary Mauser, who took our class to the gun range one day and had us shoot all sorts of wild firearms. But that’s not the point.
His point was, when you’re conducting clinical trials, perfect information is required, as it’s a matter of life or death.
When you’re conducting marketing research, imperfect information is expected, so it’s a matter of “marketing intuition.”
If you are able to test out correlative relationships and deliver increased results, great! If you’re able to act on a weak p-value from a one-tailed test (apologies for the nerdiness) and still make profitable decisions off the back of it, great!
If you do nothing because there are elements of uncertainty in the statistics… well, then you’ll never do anything.
If you take away one thing, I hope it’s this: you will never be able to make perfect decisions in email marketing. But making decisions with more information is better than making them with less. This report will help you think about different things to test, to see if they are causal factors in your lists’ response rates. If they aren’t, then that’s ok – but if they are, you’ll be laughing all the way to the bank.