Two weeks ago, I wrote a post on PPC ad copy testing that ended up being my most popular post for April. One of the recommendations I made was to write a lot of ads, but to test only 2 ads at a time, so you can reach statistical significance faster.
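For readers who want to see what "statistical significance" looks like in practice, one common way to compare two ads' conversion rates is a two-proportion z-test. This is a generic sketch, not necessarily the test I use, and all the numbers below are invented for illustration:

```python
import math

def two_proportion_z(conv_a, clicks_a, conv_b, clicks_b):
    """Z-statistic for the difference between two conversion rates."""
    p_a = conv_a / clicks_a
    p_b = conv_b / clicks_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / clicks_a + 1 / clicks_b))
    return (p_a - p_b) / se

# Made-up example: Ad A converts 40/1000 clicks, Ad B converts 22/1000
z = two_proportion_z(40, 1000, 22, 1000)
significant = abs(z) > 1.96  # roughly 95% confidence
```

The intuition behind testing only 2 ads at a time: each extra variation splits the same click volume further, so every ad needs longer to accumulate enough conversions for the test to call a winner.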
But Kirk Williams had another reason not to test multiple ad variations: profitability.
I’ll admit it had occurred to me that running too many ads could hurt profitability, but I’d never run the numbers. And Kirk’s numbers in the table above were made up. So I decided to dig through historical data to see if I had any actual figures to analyze.
We inherited a large account that had up to 12 ad variations running in some ad groups. It’s a high-volume account, so that many ads made some sense – except that most of this client’s conversions come in over the phone, and phone calls can’t be tracked back to individual ad variations. Looking at online form fills alone, each variation often had only 1-2 conversions, and some had none.
I decided to use the actual data to create hypothetical scenarios, where we assume that only the best 2 ads in the ad group ran at the same time.
Scenario 1, Actual
In this scenario, there are 6 ads with wildly varying statistics. I should note here that the previous agency also used “optimize for clicks” in some campaigns, but not others. Anyway, one version, Version 4, has a high conversion rate, but every variation had fewer than 10 conversions.
Scenario 1, Hypothetical
Here I took the total number of impressions for the ad group and split them evenly between the top 2 ads, then calculated the rest of the metrics from each ad’s actual CTR and conversion rate. It’s pretty clear which ad is the winner here – and it’s also clear, based on the actual statistics, that about $1,600 was wasted on ads that weren’t converting as well as the top 2.
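To make the mechanics of that projection concrete, here’s a minimal sketch in Python. All the inputs (the CTRs, conversion rates, flat $2.50 CPC, and impression total) are made up for illustration – they are not the client’s actual figures:

```python
def project(impressions, ctr, conv_rate, cpc):
    """Project clicks, conversions, and cost from an ad's observed rates."""
    clicks = impressions * ctr
    return {
        "clicks": clicks,
        "conversions": clicks * conv_rate,
        "cost": clicks * cpc,
    }

# Observed rates per ad variation (illustrative numbers only)
ads = {
    "Version 1": {"ctr": 0.020, "conv_rate": 0.02, "cpc": 2.50},
    "Version 2": {"ctr": 0.015, "conv_rate": 0.01, "cpc": 2.50},
    "Version 3": {"ctr": 0.018, "conv_rate": 0.05, "cpc": 2.50},
    "Version 4": {"ctr": 0.022, "conv_rate": 0.08, "cpc": 2.50},
}

total_impressions = 100_000

# Hypothetical: give ALL the ad group's impressions to the top 2 ads
# (ranked by conversion rate), split evenly, then project their metrics
top2 = sorted(ads, key=lambda a: ads[a]["conv_rate"], reverse=True)[:2]
share = total_impressions / len(top2)
for name in top2:
    rates = ads[name]
    stats = project(share, rates["ctr"], rates["conv_rate"], rates["cpc"])
    print(name, stats)
```

Comparing the projected conversions and cost for the top 2 against the actual spend across all variations is where the wasted-spend figure comes from.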
But was this ad group a fluke? I looked at a second example to be sure.
Scenario 2, Actual
Here we had 5 different ads. Version 1 had the most conversions, but also the lowest conversion rate. The ad that converted the best didn’t have many impressions. There’s no clear winner here either.
Scenario 2, Hypothetical
The winning ad wins by a landslide here. Cost for the 2 ads was similar, but the winner converted at more than twice the rate of the 2nd-best ad.
The caveat with Scenario 2 is that, in the actual scenario, the winning ad had so few impressions that I hesitate to extrapolate its performance over more impressions and clicks. I often see ads have “beginner’s luck,” performing very well initially and then settling into a more average performance. But even if the winner didn’t convert quite as well, it likely would have beaten the contenders in this instance. And in this case, about 80% of the budget was spent on losing ads. I’d hate to have to tell that to the client.
Based on these examples, it’s pretty clear that, at least hypothetically, running 5-6 ads wastes more money than running 2 ads. I’m willing to hear examples to the contrary, though. I know at least a few of my readers know a lot more about statistical theory than I do – what say you? Is this a legit analysis, or are there holes? Share in the comments!