Last week, I wrote about the new AdWords option to optimize for conversions, and I talked a bit about testing principles. In my experience, PPC testing is one of the most misunderstood aspects of PPC management. I'm always surprised by how many advertisers don't test at all – which is, to be blunt, a huge missed opportunity. Many advertisers just put one ad variation into each ad group and let it run, never knowing whether it's really the best variation. Wouldn't you rather know which ad message generates the most traffic and conversions for your money?
I've also frequently seen inexperienced advertisers overreact to normal daily variations in performance. Even the best-managed campaign will have ups and downs on any given day; traffic, conversions, CTR, and every other metric can vary, sometimes wildly, on a day-to-day basis.
You don't need to be a statistician to realize that day-to-day fluctuations don't represent statistical significance in any way. Yet I regularly hear worried questions from clients, and even from PPC managers in training, when results go up or down in a single day. These fluctuations, especially at the keyword level, should almost never be cause for alarm.
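To see why a single day's swing means so little, it helps to simulate one. The sketch below uses hypothetical numbers (a keyword averaging about 20 clicks a day with a 5% conversion rate – none of these figures come from a real campaign) and shows how much the daily counts bounce around even though nothing about the campaign changes all week:

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

# Hypothetical keyword: ~20 clicks/day on average, 5% conversion rate.
# Nothing about the campaign changes across the week; only random noise does.
AVG_CLICKS = 20
CONV_RATE = 0.05

daily = []
for day in range(1, 8):
    # Approximate a Poisson(20) day as 1000 tiny independent chances to click.
    clicks = sum(1 for _ in range(1000) if random.random() < AVG_CLICKS / 1000)
    conversions = sum(1 for _ in range(clicks) if random.random() < CONV_RATE)
    daily.append((clicks, conversions))
    print(f"Day {day}: {clicks} clicks, {conversions} conversions")
```

Run it a few times with different seeds and you'll see days that look "up 50%" or "down 40%" purely by chance – exactly the kind of movement that panics people but means nothing on its own.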
So how do you know when you have meaningful data? Here are some rules of thumb, based on best practices and years of experience.
Look at a large enough set of data.
If an ad group or keyword got 2 clicks yesterday and 10 clicks today, I can tell you right now that you don't need to worry about it. Not only are the totals too small to be significant, but at a minimum you should be looking at week-over-week data. I've written about dayparting recently, and the whole premise behind dayparting is that performance varies from day to day. So don't make any judgments until you have at least a full week of data, if not more.
Another good rule of thumb is to make sure your data set includes at least 100 clicks. You may need more than a week to amass that many. Be patient – it's worth the wait to know you're looking at statistically meaningful data.
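Here's a quick way to see what that 100-click rule of thumb buys you. The sketch below computes a simple normal-approximation confidence interval for a conversion rate (the counts are made up for illustration): the same observed 5% rate is far more trustworthy on 100 clicks than on 20.

```python
import math

def conversion_rate_ci(conversions, clicks, z=1.96):
    """95% normal-approximation confidence interval for a conversion rate."""
    p = conversions / clicks
    margin = z * math.sqrt(p * (1 - p) / clicks)
    return p - margin, p + margin

# Same observed 5% conversion rate, very different certainty:
print(conversion_rate_ci(1, 20))    # 20 clicks: margin of error ~9.6 points
print(conversion_rate_ci(5, 100))   # 100 clicks: margin of error ~4.3 points
```

Notice the 20-click interval even dips below zero – a sign the sample is so small the approximation itself breaks down. At 100 clicks the interval is still wide, which is why 100 is a floor, not a finish line.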
Don’t guess – use statistical tools.
At SMX Advanced last year, I spoke about evaluating PPC tests with SuperSplitTester, my favorite easy-to-use statistical tool. But you don't have to use that one – any tool will do, as long as you actually run the numbers instead of guessing. I've used SuperSplitTester enough that I can guess the winning ad correctly much of the time, but not all the time. Don't guess – your clients and/or your employer will thank you.
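If you're curious what a split-test calculator is doing under the hood, one standard approach (I'm not claiming it's what SuperSplitTester implements) is a two-proportion z-test. The numbers below are hypothetical, and they make the point nicely: an 8% vs. 5% conversion rate over 500 clicks each *looks* like a clear winner, yet it falls just short of 95% confidence.

```python
import math

def two_proportion_z(clicks_a, conv_a, clicks_b, conv_b):
    """Two-proportion z-test: is ad B's conversion rate really different from ad A's?"""
    p_a = conv_a / clicks_a
    p_b = conv_b / clicks_b
    p_pool = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / clicks_a + 1 / clicks_b))
    return (p_b - p_a) / se

# Hypothetical ad test: A converts 25/500 (5%), B converts 40/500 (8%).
z = two_proportion_z(500, 25, 500, 40)
# |z| > 1.96 corresponds to ~95% confidence that the difference is real.
print(f"z = {z:.2f}, significant at 95%: {abs(z) > 1.96}")
```

Here z comes out around 1.92 – just under the 1.96 threshold – which is exactly why eyeballing the numbers and "guessing the winner" gets people into trouble.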
Evaluate test data systematically.
Yeah, that sounds like a stats professor talking, but what I really mean is: set a schedule for reviewing test data, and stick to it. We've found that a monthly review is enough for most advertisers when it comes to ad tests – even our high-volume clients often don't reach statistical significance before a full month has passed. Having a set schedule not only ensures the work gets done, but also keeps the test periods relatively consistent from month to month.
And if you’re really freaked out, only change 1-2 things at once.
One of our clients recently shifted their business goals and strategy, which required a pretty big shift in their PPC campaigns as well. We launched 3 new campaigns all at once. (I don't always advise this, but in this case it made the most sense.) When I checked in the day after launch, spend had gone through the roof. Like a good PPC tester, I didn't panic – but I did lower the campaigns' daily budgets a bit, just to improve my comfort level. What I didn't do was start pausing keywords and ad variations or making a bunch of bid changes – it was too early for that. The point is, if you're freaked out, go ahead and make a couple of changes, but then give it at least a couple of days to gauge the effect.
Using systematic, smart testing processes will really pay off in PPC campaign success, I promise!