PPC Ad Copy Testing: 2015 Edition

I love PPC ad copy testing. It’s one of my favorite aspects of PPC. Where else can you systematically test different elements of ad text and learn what works the best – all in a relatively short period of time?

Like with many things in PPC, ad copy testing has evolved over the years. I’ve written about it a few times, and every time I look back at those articles, I realize how much has changed – and how much I’ve learned since then.

Here’s my 2015 approach to PPC ad copy testing.

Mind your campaign settings.

Google continues to insist that “optimize for clicks” and “optimize for conversions” are the “ideal settings” for ad rotation:

(Screenshot: Google's ad rotation settings)

They’re still wrong. I always recommend “rotate indefinitely” unless your goal is site traffic (which it shouldn’t be).

Create a bunch of ads for each ad group.

I used to create 2 ads, run with those, and then scramble to think of new copy to test. I started my PPC career in an in-house setting, where winging it with ad testing was usually fine. When I started working at an agency with enterprise-level clients, I realized that winging it wouldn’t cut it. Clients need to review and approve ad copy before it runs.

Now, I create as many variations as I can think of: at least 5 per ad group and 10 or more if I can. Think of everything you might want to test. Here are some ideas:

(Image: testing matrix of ad copy elements to test)
Send the whole shebang to your client or boss for approval. It’s much more efficient this way.
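
If you like to build that list programmatically, here’s a minimal sketch of the idea in Python. The headlines, descriptions, and calls to action below are just placeholders, not real ad copy; the point is simply to combine a few element pools into a full set of variations you can send off in one batch.

    from itertools import product

    # Hypothetical element pools -- swap in your own copy.
    headlines = ["Free PPC Audit", "Cut Your PPC Costs", "{KeyWord:PPC Management}"]
    descriptions = ["Get a custom proposal today.", "See results in 30 days."]
    calls_to_action = ["Request a Quote", "Start Your Free Trial"]

    # Every combination of elements, ready to send for approval.
    variations = [
        {"headline": h, "description": d, "cta": c}
        for h, d, c in product(headlines, descriptions, calls_to_action)
    ]

    for i, ad in enumerate(variations, 1):
        print(f"Ad {i}: {ad['headline']} | {ad['description']} | {ad['cta']}")
    print(f"\n{len(variations)} variations ready for review.")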

Test only 2 ads at once.

At this point, you may be thinking “wait a minute – why did I create 10 ads if I’m only going to run 2?” Simple: You now have new ads at the ready to swap in when a winning ad emerges.

By testing only 2 ads at once, you’ll reach statistical significance much sooner than you would if you test 5-10 ads all at the same time. You’ll also learn what’s working: did DKI perform better? Did one call to action work better than another? Then you can use this knowledge in your upcoming tests. The result? Higher conversion rates, faster.
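
To see why fewer concurrent ads pay off, here’s a rough back-of-the-envelope sketch using the standard two-proportion sample-size approximation. The baseline conversion rate, expected lift, and traffic numbers are purely hypothetical; the takeaway is that splitting the same traffic across more ads stretches out the time it takes to call a winner.

    from statistics import NormalDist

    def clicks_needed_per_ad(base_cr, lift, alpha=0.05, power=0.8):
        # Approximate clicks needed per ad to detect a relative lift in
        # conversion rate (two-sided two-proportion z-test approximation).
        p1 = base_cr
        p2 = base_cr * (1 + lift)
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
        z_beta = NormalDist().inv_cdf(power)
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2

    per_ad = clicks_needed_per_ad(base_cr=0.03, lift=0.20)  # made-up numbers
    daily_clicks = 200  # hypothetical ad group traffic, split across all live ads

    for num_ads in (2, 5, 10):
        days = per_ad * num_ads / daily_clicks
        print(f"{num_ads} ads live: ~{per_ad:,.0f} clicks each, ~{days:.0f} days to a decision")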

Create an ad testing matrix.

I illustrated how to do this in an article for Search Engine Watch, so I won’t repeat it here. Suffice it to say that systematic, rather than random, ad testing is best.

Review results regularly.

How regularly depends on how much traffic you get. For high-volume ad groups, you might want to review ad test data weekly or even daily (although weekly is usually best to get a full view of performance over days of the week, especially weekends). For lower-volume ad groups, monthly may be too frequent. Find the cadence that works for you.

As for how to review results, my process has evolved dramatically over the 13 years I’ve been doing PPC. I used to use spreadsheets and manual summing, and then I’d pick the ad with the highest conversion rate. That’s problematic for a number of reasons, the biggest being the lack of statistical significance.

Then I started using statistical significance calculators, some of which have come and gone (anyone besides me still lamenting the demise of SuperSplitTester?). My current favorite manual calculator is the one from Visual Website Optimizer. Download it and use it!
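
If you’d rather check the math yourself, a pooled two-proportion z-test on conversion rate is roughly what these calculators are doing. Here’s a minimal sketch with made-up ad stats; treat it as a sanity check, not a replacement for a proper tool.

    from statistics import NormalDist

    def ab_significance(clicks_a, conv_a, clicks_b, conv_b):
        # Pooled two-proportion z-test on conversion rate; returns the
        # two-sided p-value (smaller = more evidence the two ads differ).
        rate_a, rate_b = conv_a / clicks_a, conv_b / clicks_b
        pooled = (conv_a + conv_b) / (clicks_a + clicks_b)
        se = (pooled * (1 - pooled) * (1 / clicks_a + 1 / clicks_b)) ** 0.5
        z = (rate_a - rate_b) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))
        return rate_a, rate_b, p_value

    # Made-up results for two ads in the same ad group.
    rate_a, rate_b, p = ab_significance(clicks_a=1200, conv_a=48, clicks_b=1150, conv_b=69)
    print(f"Ad A: {rate_a:.2%}  Ad B: {rate_b:.2%}  p-value: {p:.3f}")
    print("Winner at 95% confidence" if p < 0.05 else "Keep the test running")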

My very favorite ad testing tool, though, has to be AdAlysis. It’s a paid tool, but you’ll recoup its modest cost many times over in the hours you’ll save calculating significance. Check out this geeky goodness:

(Screenshot: AdAlysis ad test results)

At a glance, you can see which of your ads has reached significance on several metrics: CTR, conversion rate, conversions per impression, cost per converted click, conversion value/cost, and conversion value/impression. Whew! It also shows the impact on campaign performance if you pause the losing ad. Projections: done.
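
If you ever need to reproduce these metrics by hand (say, for the Bing and social analysis mentioned below), here’s a rough sketch. The stats are hypothetical, and I’m using conversions as a stand-in for converted clicks.

    def ad_metrics(impressions, clicks, conversions, cost, conv_value):
        # The per-ad metrics listed above; conversions stand in for
        # converted clicks here, which is close enough for a sketch.
        return {
            "CTR": clicks / impressions,
            "Conversion rate": conversions / clicks if clicks else 0.0,
            "Conversions per impression": conversions / impressions,
            "Cost per converted click": cost / conversions if conversions else float("inf"),
            "Conversion value / cost": conv_value / cost if cost else 0.0,
            "Conversion value / impression": conv_value / impressions,
        }

    # Hypothetical stats for a single ad.
    for name, value in ad_metrics(impressions=25000, clicks=750, conversions=30,
                                  cost=900.0, conv_value=4500.0).items():
        print(f"{name}: {value:.4f}")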

You can also do multi-ad group testing with AdAlysis. So if you have multiple ads with the same headline or call to action across ad groups, you can lump them together to gauge the overall impact of each element. I’ve done account-wide call-to-action testing with AdAlysis, for instance.
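
AdAlysis does that rollup for you, but if you want to approximate it manually, something like this pandas sketch (with made-up data) sums each shared element across ad groups and compares them on conversions per impression.

    import pandas as pd

    # Hypothetical export: one row per ad, tagged with the element being tested.
    ads = pd.DataFrame([
        {"ad_group": "Widgets", "cta": "Request a Quote",       "impressions": 9000,  "clicks": 270, "conversions": 11},
        {"ad_group": "Widgets", "cta": "Start Your Free Trial", "impressions": 8800,  "clicks": 310, "conversions": 16},
        {"ad_group": "Gadgets", "cta": "Request a Quote",       "impressions": 12000, "clicks": 300, "conversions": 9},
        {"ad_group": "Gadgets", "cta": "Start Your Free Trial", "impressions": 11500, "clicks": 340, "conversions": 17},
    ])

    # Roll each call to action up across ad groups to gauge its overall impact.
    by_cta = ads.groupby("cta")[["impressions", "clicks", "conversions"]].sum()
    by_cta["conv_per_impression"] = by_cta["conversions"] / by_cta["impressions"]
    print(by_cta.sort_values("conv_per_impression", ascending=False))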

Its one drawback is that it only works for AdWords at the moment. So you’ll still need to analyze Bing and social ads manually.

(I’m not paid for this endorsement, by the way – I just love the product!)

Rinse and repeat.

This is my final tip in all the PPC ad copy testing articles I’ve written, and even in 2015 it still needs to be said. So many ad tests are set up and never analyzed. Don’t let that happen to you! If you’re never going to analyze the results, you’re better off not testing at all – at least then you won’t see how much money you wasted on an underperforming ad.

Always review your test results and, at a minimum, pause the losers.

What are your favorite PPC ad copy testing techniques? Share in the comments!

Comments

  1. Mel – I love it! Insightful and practical advice.

    One area I’m challenged with and also see many other PPC managers struggle with is determining which part of the campaign to fix or test first.

    Adgroup – Keywords, Ads, Landing page, Landing page assets.
    Often I see marketers testing variations on two or more of these elements at a time, making the results impossible to decipher. Which end of the string do you start straightening out first and why?

    Ad testing:
    When you are creating ad variations what is your primary driver of each iteration?
    Keyword intent? Keyword volume? Overall ad intent? Dynamic or Mobile only ads?

    What KPI do you hit before you decide to turn your focus onto the next part of the chain?

    Jerry

    • Melissa Mackey says

      Thanks Jerry! As with nearly everything PPC, it depends. But in general, if CTR is low, I start looking at keywords & ad copy first. Usually either keywords aren’t relevant or have another meaning besides what you intended, or there’s a disconnect between your ad copy and keywords.

      If conversion rates are low, you have to start with your landing page. Fix any barriers to conversion.

      As for what to put in ad copy, I look at keyword intent first. That drives the initial creative. Then I think about context, and that’s where mobile comes in. We focus on conversions per impression as a measure of success, and we use AdAlysis to measure that.

      Actually, you’ve inspired a follow-up post! I’ll be working on that for publication soon!

  2. Cool – I think even the best Marketers start getting lost when they don’t have a well thought out catalyst for change.

    Ad hack – Sometimes when all my thinkery has not produced the results I desire I use the simple George Costanza ad hack. I do the exact opposite of everything I think should work. Essentially this hack tactic is a reset button.

  3. I don’t know about your ad rotation setting. While it’s great for just A/B testing, what about the other competing goals? Ultimately traffic IS needed for conversions, so I don’t see why that isn’t a goal. And also, by not letting the higher-converting ad copy show more often, you are affecting the average CTR. This in turn affects your Quality Score. And QS has an impact on everything from average CPC to position.

    • Melissa Mackey says

      I see your point, and I agree to an extent. But what should we really be optimizing for: quality score, or conversions? My vote is for conversions every time. Quality score shouldn’t be ignored, and we need to be mindful of it, but if I have to choose between conversions and quality score, it’s no contest.

      You’re right that you need traffic to get conversions. That’s why I focus on conversions per impression. You can have a 100% conversion rate with 1 click and 1 conversion – and that won’t move the needle for your business. Conversions per impression takes both conversion rate and, indirectly, CTR into account.

      Also, Google picks winners way too quickly in the “optimize” settings. I’ve noticed one ad getting 80-90% of impressions after only a day or 2. That’s not good testing – it’s not giving the other ad(s) a chance. I’d rather control that myself.

      Thanks for your comment!
