PPC Ad Test Settings: The Great Debate

PPC ad testing is a topic that’s near and dear to my heart. Learning which ad copy performs best is one of the most enjoyable parts of managing PPC campaigns. It’s always fun to try something wild and crazy and have it perform well, or to prove an insistent client right or wrong with test data.

Lately, there have been several opinions thrown around regarding which PPC ad test settings should be used for best results. It’s been interesting to watch the debate play out in blog posts and on Twitter. And to add confusion to the mix, Google recently announced that they were reducing the options available for ad rotation to two: Optimize and Rotate Indefinitely. Google claims the change was rolling out in September, but I’m still seeing four options in my campaigns.

Anyway, there are a few PPC experts who have suggested that it’s better in the long run to use the Optimize settings, rather than Rotate Evenly. PPC Hero recommends running 3 or more ads per ad group, and letting Google choose the winner. This is also Google’s recommendation, incidentally. Their argument is based on a case study showing that clicks increased when they chose the Optimize setting and ran 3 or more ads.

If you’re optimizing for clicks, you probably have bigger problems than choosing ad rotation settings.

At HeroConf London, Marty Röttgerding gave a presentation on ad rotation. I wasn’t at the conference, but his deck is up on Slideshare. I strongly recommend you check it out – while paging through Slideshare isn’t the same as hearing the presentation in person, you can get the drift. He talks about statistical significance and essentially says it’s a red herring. He also points out that the search partner network, with its low CTRs, throws things off. So do ad position and the fact that Quality Score and other factors are determined at the time of the auction. Marty also advocates letting Google handle ad rotation.

I disagree.

Now before you dismiss me as a Luddite who wants to manually control all aspects of AdWords, let me remind you that I wrote a post not long ago advocating for bid management tools. I’m a big fan of automation. Just not when it comes to ad copy testing.

I’ve tried using Optimize for Conversions more than once. We’ve inherited accounts full of campaigns with that setting. And when we’ve evaluated results, we’ve always come to the same conclusion: Optimize for Conversions is flawed.

It’s flawed for the same reason that Facebook ad “rotation” is flawed. Both systems pick winners too soon. (To be clear, I’m not talking about the brand-new split testing feature that FB just announced.)

I’ve seen AdWords choose a winning ad that’s had 10-20 clicks. That’s just not enough clicks to be significant at any level. I’m not looking for 99% confidence, but when an ad could get 5-10 additional clicks and show a totally different result, that’s not a winning ad in my mind. There isn’t enough data to confidently say that the ad Google deems a “loser” won’t actually perform better with more clicks.
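
To put rough numbers on that intuition, here is a minimal sketch of a standard two-proportion z-test on CTR. The click and impression counts are hypothetical, and this is not the exact math Google or any testing tool uses; it just shows how weak a handful of clicks is as evidence:

```python
from math import sqrt

# Hypothetical test: the "winner" has 15 clicks, the "loser" 10,
# each on roughly 1,000 impressions.
clicks_a, imps_a = 15, 1000
clicks_b, imps_b = 10, 1000

ctr_a, ctr_b = clicks_a / imps_a, clicks_b / imps_b
pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
std_err = sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
z = (ctr_a - ctr_b) / std_err

print(round(z, 2))  # ~1.01, well short of the ~1.64 needed for 90% confidence (two-tailed)
```

A 50% relative difference in clicks sounds decisive, but at this volume the test can’t tell it apart from noise; give each ad a few more clicks and the “winner” could easily flip.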

I’m not a fan of the “run at least 3 ads” logic either. We inherited a client nearly 2 years ago that was running 5-10 ads in every ad group. Each ad had a handful of clicks. There was no way to see which ad was winning – and no tests would ever come close to statistical significance. Here’s what happened when we took over and started running systematic tests, 2 ads at a time:

Of course, we were doing other optimization here, but ad copy testing was a huge part of it.

Here’s the bottom line. I get that automation is great and helps us focus on strategic PPC management. But why hand all your automation over to Google? We all know Google has Google’s best interest at heart, not ours.

I prefer using third party tools. For bid management, I like Acquisio. For ad copy testing, I’m a huge fan of AdAlysis. AdAlysis tells you when you have statistically significant test results, and can even automate your ad testing. It’ll pause losing ads, based on the KPIs you choose:

You can also set up draft ads that will automatically start running when loser ads are paused:

You can test a whole new ad, or have AdAlysis pick up elements of the previous ad. In the example above, I’m testing descriptions, so I want to keep the headlines the same as before. Just check the box, and the tool will do that.

It takes some time and thought to set up the automation, but the same is true of setting up tests via Google. And Google won’t automatically pause losing ads unless you run a script telling it to do so. AdAlysis has many other features besides ad copy testing, but it’s worth it for the testing tools alone.
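
For anyone who does want to script this against Google directly, the underlying decision rule is simple in principle. Here is a rough Python sketch with hypothetical numbers; the actual AdWords Scripts environment is JavaScript and AdAlysis has its own logic, so treat this purely as an illustration of pausing a loser once a conversion-rate KPI clears a significance bar:

```python
from math import sqrt

def z_score(conv_a, clicks_a, conv_b, clicks_b):
    """Two-proportion z-test on conversion rate (ad A vs. ad B)."""
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    p = (conv_a + conv_b) / (clicks_a + clicks_b)
    std_err = sqrt(p * (1 - p) * (1 / clicks_a + 1 / clicks_b))
    return (p_a - p_b) / std_err

def pick_loser(ad_a, ad_b, min_clicks=300, min_z=1.64):
    """Return the ad to pause, or None if the test isn't conclusive yet."""
    if ad_a["clicks"] < min_clicks or ad_b["clicks"] < min_clicks:
        return None  # not enough data yet, keep rotating evenly
    z = z_score(ad_a["conv"], ad_a["clicks"], ad_b["conv"], ad_b["clicks"])
    if abs(z) < min_z:
        return None  # the difference could still be noise
    return ad_b if z > 0 else ad_a

# Hypothetical stats pulled from an ad report
ad_a = {"id": "Ad A", "clicks": 450, "conv": 27}
ad_b = {"id": "Ad B", "clicks": 430, "conv": 14}
loser = pick_loser(ad_a, ad_b)
print(loser["id"] if loser else "keep testing")  # prints "Ad B"
```

The minimum-clicks gate is the important part: it forces the test to keep collecting data instead of declaring a winner after a handful of clicks.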

When it comes to PPC ad test settings, I like to choose Rotate Indefinitely and make my own decisions on winners and losers.

What do you think? Are you in the automation camp for PPC ad test settings? If so, do you let Google automate, or are you using a tool? Share in the comments!

Comments

  1. Once again, we’re on the same page, Mel.

    Fact is there are many nuances that impact ad performance. Few of these nuances are accounted for in Google’s optimization feature, or other tools for that matter.

    One main one: if your lead quality is based on offline metrics, how can Google map this back to the ad’s optimization? Not their fault, but they can’t. We push all PPC variables into our lead scoring and full conversion data, which often tells us that the ads with fewer clicks and lower CTR perform the best. Why? Simple: the ads are pretty honest and disqualify low-interest prospects from clicking in the first place.

    Google’s general philosophy is to broaden with generic keywords, increase bids and use dynamic ads (again broadening the scope of intent). All of this can result in higher CTR, Quality Scores, impressions and clicks, but it does not correlate directly to quality prospects or full conversions.

    A well-structured PPC account that isolates variables, combined with a PPC pro who understands this structure/variable combination, is how you make the best decisions for your campaign.

    • Melissa Mackey says

      100% agree Jerry, and excellent point about lead quality. This is a big deal for our clients as well. There is no way Google can know this. CTR does not predict conversions, and conversions do not predict quality.

  2. Melissa, I agree with you 100%. To me, anyone who uses Optimize for their ads is either a newbie, lazy, or a SEMoron.

    While Optimize MAY give you the best results, the fact is that CTR has always been more or less of a factor since Google gave in to my nagging them for a “Rotate for conversions” option. I was happy when they did that, but they refused to exclude CTR as a factor. For us, that skewed the results away from conversion winners in most cases.

    Some people may want to get as many clicks (eyeballs/traffic) as possible, but we are always results/conversion driven.

    We should form our own advisory board to give Google the clear advice they need on this and so many other things. Don’t get me started on this new disaster of a beta UI they are pushing as “faster” (it’s not). Unfortunately, they would not pay us any attention…

  3. Hi Melissa,
    Thanks for including my presentation!

    The reason I advocated letting Google handle ad rotation (Disclaimer: not in all cases, but as a default) is that we cannot evaluate ad tests ourselves. We have very little data and it doesn’t reflect most of what is actually going on.

    Take the example on search partners from my presentation: An ad can have the better CTR overall, but if you segment by network, it actually loses on both Google and Search Partners. (The reason it has the better CTR is that it has fewer impressions on Search Partners, which drag down overall CTR.) This happened in 12% of all ad tests.

    The same thing can happen with regards to top and bottom placements on Google: In 6% of all ad tests, the overall winner actually loses on all fronts.

    In other words: Even if you ignore Search Partners, in 6% of all ad tests, the data points to the wrong winner.

    These are just some problems I could actually evaluate because it’s possible to segment these things. There are many more problems far beyond our reach.

    I also want to point out that ad rotation is not just about ad testing. In ad testing, we are usually looking for winners and losers. Ad rotation, on the other hand, is about the right ad at the right time. That means that there can be multiple good ads.

    A (lame) example: One ad might resonate better with women, another one does better with men. Standard ad testing is meant to find the best ad for the average user. Ad rotation is meant to pick the best ad for the current situation.

    I don’t mean to say that Google’s solution is perfect. It’s just better than the semi-automated approach that our industry has come up with.

    • Melissa Mackey says

      Interesting. What do you make of the fact that Google chooses winners too early, though? I’ve seen them choose a winner after only a couple days and a handful of clicks. New ads don’t stand a chance in this scenario.

      Also – if you are running ads in Google and Search Partners, do you really care about how performance differs between the networks? All ads are running in both places, so any differences in performance will affect all ads. Wouldn’t you want to choose the overall winner? What am I missing?

      • Google’s early focus on one or a few ads is a mystery to me, too. Still, I’ve seen new ads rising to the top as well. I feel this is too anecdotal, though… the answer should be in our data.

        Anyway, if Google picks winners too soon, it’s possible to force them to collect more data through even rotation before going back to the ‘optimize’ setting.

        About the networks, check out this slide: https://www.slideshare.net/MartinRttgerding/debunking-ad-testing/35
        Ad A has the best overall CTR. Ad B has the better CTR on Google AND Search Partners. B outperforms A on all fronts, yet A *looks* better if you only look at the overall numbers.

        B is the ad you would want to pick but A is the one you’ll end up with if you follow best practices.

        It’s counterintuitive, but it’s a real thing. According to my data, it happens in about 12% of all ad tests. This leads to the question: How can you ever be 90% sure of an ad test if 12% of the time the data points in the wrong direction?

      • To follow on from Marty’s comment, I’ve found the best way to explain the network difference is to use hypothetical extreme examples. Imagine:

        Ad A:
        Google Main – Clicks 50, Impressions 100. CTR 50%
        Search Partners – Clicks 5, Impressions 1000. CTR 0.5%
        Total – Clicks 55, Impressions 1100. CTR 5%

        Ad B:
        Google Main – Clicks 50, Impressions 200. CTR 25%
        Search Partners – Clicks 1, Impressions 500. CTR 0.2%
        Total – Clicks 51, Impressions 700. CTR 7.3%

        Using these extreme examples, we halved or more than halved the CTR in both segments but ended up with a higher “overall” result. This is due to the differing volumes of the low-performing and high-performing segments.

        Now the real numbers will look more like the ones in Marty’s slides. But if you play around with the numbers yourself, you can see how easily the figures flip when you aggregate two segments with vastly different performance stats.
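
A quick sketch, using the hypothetical per-network numbers from the example above, makes that arithmetic easy to verify:

```python
# Hypothetical per-network stats from the example above: (clicks, impressions)
ads = {
    "Ad A": {"Google Main": (50, 100), "Search Partners": (5, 1000)},
    "Ad B": {"Google Main": (50, 200), "Search Partners": (1, 500)},
}

def ctr(clicks, imps):
    return 100 * clicks / imps

for name, segments in ads.items():
    clicks = sum(c for c, _ in segments.values())
    imps = sum(i for _, i in segments.values())
    by_segment = ", ".join(f"{seg} {ctr(c, i):.1f}%" for seg, (c, i) in segments.items())
    print(f"{name}: {by_segment}, overall {ctr(clicks, imps):.1f}%")

# Ad A: Google Main 50.0%, Search Partners 0.5%, overall 5.0%
# Ad B: Google Main 25.0%, Search Partners 0.2%, overall 7.3%
# Ad B loses in both segments yet wins on the blended number (Simpson's paradox).
```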
