Clinical trials would never give a patient the drug AND the placebo.
So why are you serving the same person both versions of your ad?
Digital advertising makes real-time creative testing and optimization possible at a speed and precision previously unimaginable. While most digital marketers want optimized creative that respects their customers and drives response rates, confusion reigns around the proper way to test and optimize creatives. As a result, only a few marketers are engaged in proper creative testing and optimization, while the majority rely on the outdated testing capabilities of ad servers and Dynamic Creative Optimization (DCO) platforms, whose fatally flawed tests result in the arbitrary selection of winning creatives.
What insights separate the savvy digital marketers who are leveraging real-time creative testing and optimization? Some of the most critical come from biomedical research.
The Challenge of Confounding Effects
Most creative tests are impression-level, automated tests. This means that, throughout a test, the volume of impressions allocated to each creative shifts to creatives that appear to be winning based on early, inconclusive test results.
This is the case with ‘rotation optimization’ and ‘automated testing’ capabilities of most ad servers and DCO platforms. While these platforms offer the tantalizing prospect of dynamically optimizing towards winning creatives on a daily or even an hourly basis, the reality is that these tests produce largely random results that waste valuable media dollars and time.
The critical flaw in these tests is that they don't control for the day-to-day factors that affect campaign performance. As a result, these factors are confounded with the creative test results, making it impossible to distinguish conversions driven by the ads from conversions driven by an unaccounted-for confounding effect.
For example, if two ads are being A/B tested, and Ad A appears to perform better on Friday, an automated test will assign more impressions to Ad A than to Ad B the next day, on Saturday. But if the advertiser always sells more on weekends, then Ad A will unduly benefit from this confounding temporal effect, resulting in even more impressions assigned to Ad A on Sunday and then Monday.
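This dynamic can be made concrete with a small simulation. The sketch below uses made-up numbers and assumes a simplified greedy rule that shifts impression share toward whichever ad produced more conversions the previous day; real platform logic varies. Both ads have exactly the same true conversion rate, yet a sliver of Friday luck, amplified by the weekend lift, makes Ad A look like a decisive winner:

```python
# Both ads convert at exactly the same true rate; any "winner" is an artifact.
TRUE_CVR = 0.01
WEEKEND_LIFT = 2.0          # the confounder: everyone converts more on weekends
IMPRESSIONS_PER_DAY = 100_000

days = ["Fri", "Sat", "Sun", "Mon"]
weekend = {"Sat", "Sun"}

share_a = 0.5               # fraction of impressions served to Ad A
totals = {"A": 0.0, "B": 0.0}

for day in days:
    lift = WEEKEND_LIFT if day in weekend else 1.0
    imps_a = IMPRESSIONS_PER_DAY * share_a
    imps_b = IMPRESSIONS_PER_DAY - imps_a
    # Ad A gets a tiny chance head start on Friday only: pure noise, not a real edge.
    noise = 1.05 if day == "Fri" else 1.0
    conv_a = imps_a * TRUE_CVR * lift * noise
    conv_b = imps_b * TRUE_CVR * lift
    totals["A"] += conv_a
    totals["B"] += conv_b
    # Greedy reallocation: shift 15 points of share to the day's "winner".
    share_a += 0.15 if conv_a > conv_b else -0.15
    share_a = min(max(share_a, 0.05), 0.95)

print(totals)  # Ad A's total far exceeds Ad B's, driven by noise plus the weekend lift
```

Had the split been held at a fixed 50/50 through the weekend, both ads would have shown identical conversion rates and the test would have correctly declared no winner.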
Some DCO platforms claim to be able to account for time-varying confounding effects, but these claims are also a mirage: most historical time-varying effects in digital advertising are not stable enough to build temporal models around. In particular, changes in media planning, made monthly and, with programmatic bidding, even daily and hourly, have dramatic effects on the quality of media being sent to an ad server.
The best evidence that automated creative tests produce random output comes from conducting A/A tests on these platforms. When comparing the same ad to itself, there should be no winner. And yet, Adacus studies have found that rotation optimization platforms do pick winners from A/A tests and, worse, the winner differs from test to test.
The Critical Insight – Randomized Controlled Assignment of Households
The gold standard for testing is an approach used most prominently in biomedical research.
Trials of new drugs, medical devices, and therapies are high-stakes experiments, and it's critical that their results can be reliably replicated in a larger population.
Biomedical trials randomly assign treatments to patients in order to control for confounding effects such as gender, age, and pre-existing conditions. This is known as a Randomized Controlled Trial.
To control for confounding effects in digital advertising, as in biomedical research, it is critical to randomly assign creatives to households under controlled conditions, and to hold the proportion of households assigned to each creative constant throughout the duration of the test.
When creatives are randomly assigned impression by impression, with no regard for users or households, users see both creatives being tested. When those users convert, it is impossible to know which creative should be credited with the conversion.
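As an illustration, household-stable assignment is commonly implemented by hashing a household identifier together with a test-specific salt. This is a minimal sketch, not any particular vendor's API; the identifier format, salt, and function name here are hypothetical:

```python
import hashlib

def assigned_creative(household_id: str, creatives: list[str],
                      salt: str = "creative-test-01") -> str:
    """Deterministically map a household to one creative for an entire test.

    Hashing the household ID with a per-test salt yields a stable,
    pseudo-random assignment: every device in the household lands in the
    same arm, and the split between arms stays fixed for the test's duration.
    """
    digest = hashlib.sha256(f"{salt}:{household_id}".encode()).hexdigest()
    return creatives[int(digest, 16) % len(creatives)]

# The same household always sees the same creative, on every device.
print(assigned_creative("household-123", ["Ad A", "Ad B"]))
```

Because the mapping is a pure function of the household ID and the salt, no per-household state needs to be stored, and changing the salt for a new test reshuffles the assignment independently of prior tests.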
Cookies Aren't Enough
Some ad servers enable random assignment of creatives by cookie. This approach is also flawed: most people use multiple devices, and more and more devices do not support third-party cookies. Furthermore, the ad servers that support this type of random assignment don't report raw conversion results on their dashboards. Because conversions are first deduplicated by media channel, test results take months or years to reach any degree of statistical significance.
Lastly, creative test results are often included in reports from multi-touch attribution (MTA). These comparisons of creative performance are not based on randomized controlled tests; instead, they attempt to control for confounding variables like placement and daypart through regression models and related approaches. This is highly unreliable: variation in performance by placement, daypart, or any other dimension simply can't be teased apart from variation in performance by creative. As discussed above, there is far too much noise from these confounding factors.
Take Note, Digital Advertisers
Randomized controlled assignment of creatives to households is the only way to test and optimize creatives in digital advertising. This approach leverages householding technology to ensure that all devices associated with a household will see the same creative throughout the duration of a test.
Only randomized controlled assignment of households has the benefit of producing replicable, actionable insights, AND is a transparent testing methodology whose results can be displayed and analyzed with an A/B Test Dashboard.
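For example, with households randomly assigned and the split held fixed, arm-level results can be compared with a standard two-proportion z-test. This is a generic statistical sketch with illustrative numbers, not a description of any particular dashboard's methodology:

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test comparing the conversion rates of two test arms."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # p-value from the normal approximation to the z statistic
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 10,000 households per arm; Ad A converts 120 of them, Ad B converts 100.
z, p = two_proportion_z_test(120, 10_000, 100, 10_000)
print(round(z, 2), round(p, 3))  # not significant at the usual 0.05 threshold
```

Because each household sees only one creative, the two samples are independent and the test's assumptions actually hold, which is precisely what impression-level rotation cannot guarantee.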
Download our free eBook for more A/B Testing Insights