Incrementality might be the best measure of ad success, but executing a meaningful incrementality test can be both expensive and complicated. This is where testing methodologies like PSAs and ghost ads come in: they randomize user selection and normalize data to maximize the value of your incremental lift testing. Here’s a rundown of each methodology to help you execute a meaningful incremental lift analysis.

A Review of Incremental Lift Analysis

Incremental lift analysis measures the effectiveness of individual advertising campaigns. An incrementality test starts with the random selection of a test group and a control group. Generally, the control group includes 10% of users and the test group the remaining 90%. The test group receives an ad and the control group does not. The difference in conversion rate between the two populations measures incremental conversions. This allows for accurate measurement of a marketing campaign’s cause and effect, and for efficient allocation of ad spend.
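As a rough sketch of that arithmetic (all figures below are invented for illustration), the lift calculation boils down to:

```python
# Illustrative incremental lift calculation (all numbers are made up).
test_users = 90_000       # 90% of the pool, served the ad
control_users = 10_000    # 10% holdout, sees no ad

test_conversions = 2_700
control_conversions = 200

test_rate = test_conversions / test_users            # 0.03
control_rate = control_conversions / control_users   # 0.02

# Incremental conversions: conversions the ad caused beyond the organic baseline.
incremental_conversions = test_conversions - test_users * control_rate  # 900.0

# Relative lift in conversion rate attributable to the ad.
lift = (test_rate - control_rate) / control_rate     # 0.5, i.e. a 50% lift
```

In this invented scenario, 900 of the test group’s 2,700 conversions would not have happened without the ad.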

Incrementality Test Methodologies

A key to undertaking an incrementality test is making sure that both groups draw from the same user pool and that users are assigned to each group at random. This ensures statistical significance and a meaningful comparison between paid users and organic users.

Different methodologies achieve an unbiased incrementality test environment. Here’s a rundown of the main types:

Intent-to-Treat (ITT) Testing

This is the most basic methodology and essentially follows the main principles of lift analysis. A treatment group receives an ad while a control group does not. The differing response of each group provides the lift analysis. This method compares the behavior of all users in both groups. This includes both the exposed and unexposed users in the test group and the users in the control group. 

Most advertisers use this approach since it’s low cost and easy to implement. However, because many users in the test group are never actually exposed to the ad, the measured effect is diluted. This creates “noisy” data, which can decrease the quality of your analysis.


Overall, the ITT methodology is easy to implement and inexpensive. However, it creates noisy data and an apples to oranges comparison, since unexposed users in the treatment group are lumped in with the exposed ones.

Quick Sidebar: Noisy Data

Noisy data is meaningless information that stems from the unexposed portion of the test group. Fluctuations in the unexposed group’s behavior can overshadow the uplift of the smaller exposed group, which in turn can lead to failed uplift tests with no statistical significance.
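A hypothetical example of this dilution: suppose only 30% of the test group is ever actually served the ad (the exposure share and rates below are assumptions for illustration). Measured over the whole group, ITT shrinks a real 50% lift among exposed users to a much weaker signal:

```python
# Sketch of ITT dilution (hypothetical numbers): only 30% of the test
# group is ever actually exposed to the ad.
test_users = 90_000
exposed = int(test_users * 0.30)     # 27,000 users saw the ad
unexposed = test_users - exposed     # 63,000 never did

organic_rate = 0.02   # baseline conversion rate (also the control group's rate)
exposed_rate = 0.03   # rate among users who actually saw the ad

# ITT lumps exposed and unexposed users together:
test_conversions = exposed * exposed_rate + unexposed * organic_rate
itt_rate = test_conversions / test_users

true_lift = (exposed_rate - organic_rate) / organic_rate   # 0.50 among the exposed
itt_lift = (itt_rate - organic_rate) / organic_rate        # 0.15 measured by ITT
```

The genuine 50% lift shows up as only 15%, and random fluctuation in the 63,000 unexposed users can easily drown out that residual signal.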

Public Service Announcements (PSAs)

While ITT testing serves an ad to the test group but not the control group, the PSA methodology serves real ads to both groups. That said, the ads differ. A random selection of the test group receives a brand-related ad while a random selection of the control group receives a PSA. While some players posit the PSA approach is unbiased (more on that later), it’s not cost-efficient since marketers pay to serve a non-branded ad to users.

In terms of driving value, PSAs do raise social awareness around important public service issues. And because real ads are served to both groups, advertisers learn which users in the control group would have been exposed. This allows unexposed users to be excluded from the measurement, cancelling out “noisy” data.
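A sketch of that exposure filtering (the user records and helper function below are hypothetical, not any platform’s API):

```python
# Hypothetical user log: (group, exposed, converted). "Exposed" means the
# user saw an ad -- the brand ad for the test group, the PSA for control.
users = [
    ("test", True, True), ("test", True, True), ("test", True, False),
    ("test", False, False),
    ("control", True, True), ("control", True, False), ("control", True, False),
    ("control", False, True),
]

def exposed_rate(group):
    # Only users who actually saw an ad enter the measurement, cancelling
    # out the noisy, unexposed population on both sides.
    seen = [converted for g, exposed, converted in users
            if g == group and exposed]
    return sum(seen) / len(seen)

lift = (exposed_rate("test") - exposed_rate("control")) / exposed_rate("control")
```

Because both groups were filtered the same way, the comparison is between exposed users only, which is exactly the property ITT lacks.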

The greatest downside to PSAs, other than spending ad budget on unrelated ads, is that they do not provide an “apples to apples” comparison. A user might react to a blood donation ad but not to an ad teasing a mobile game. PSA testing assumes that the control group’s behavior when shown a PSA is completely comparable to the test group’s when served a branded ad. Critics argue that assumption doesn’t hold, and it’s an argument we tend to agree with at YouAppi.

Ghost Ads

Where ITT falls short because of noisy data and PSA testing falls short because it doesn’t provide an apples to apples comparison, ghost ads solve both challenges. Ghost ads provide a low noise and low selection bias environment. They are also cost effective. 

Ghost ads monitor a control group and flag when a brand’s ad would have been served to a user in that group. Control group users instead receive an ad from another advertiser on the platform, which removes the cost of those clicks and impressions. The control group user’s record is then marked with a “ghost impression,” providing information on which control group users would have been exposed to the ad.

The ghost ad itself is invisible to users, maintaining a quality user experience: they simply see the ad that actually wins the auction. However, the ghost ad and its bid data are visible to advertising partners, who can see how users who should have seen an ad behave even though they never actually saw it.
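A toy model of how such flagging might work (the auction function, bids, and log below are our own illustration, not any ad platform’s actual implementation):

```python
# Sketch of ghost-impression logging in a simplified first-price auction.
# For control users, the brand's ad is evaluated but never served; we only
# record that it *would* have won, so both groups can be filtered identically.

def run_auction(user_id, group, brand_bid, competing_bids, ghost_log):
    brand_wins = brand_bid > max(competing_bids)
    if group == "test":
        # Test group: serve the brand's ad if it wins the auction.
        return "brand_ad" if brand_wins else "other_ad"
    # Control group: the user always sees the best competing ad,
    # but a winning brand bid is flagged as a ghost impression.
    if brand_wins:
        ghost_log.add(user_id)
    return "other_ad"
```

Downstream, conversions for users in `ghost_log` can be compared against conversions for test-group users who saw the brand ad, giving the exposed-versus-would-have-been-exposed comparison described above.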

Again, ghost ads deliver the same A/B comparison as the PSA methodology, but more precisely and affordably. They provide an apples to apples comparison of users exposed to ads versus users who would have been exposed to ads.

The last benefit of ghost ad testing is that it can always be “on”. In other words, marketers can run a ghost ad campaign concurrently with a live campaign. This provides consistent uplift information on users who would have been exposed to ads.

Takeaways

While incrementality is no doubt the best measure for ad success, it can be expensive and hard to operationalize. Testing methodologies such as intent-to-treat (ITT), public service ads (PSAs) and ghost ads are techniques marketers use to create an unbiased testing environment for more meaningful lift analysis.

  • ITT: This method compares the behavior of all users in both groups. This includes both the exposed and unexposed users in the test group, and the users in the control group. Although low cost and easy to implement, this approach can result in selection bias and “noisy” data.
  • PSAs: This method serves real ads to both groups. Although it gets rid of noisy data, it’s costly and does not provide a clear apples to apples comparison, since public service ads are not comparable to brand-related ads.
  • Ghost Ads (our recommendation): Ghost ads eliminate the noisy data and apples to oranges comparison issues of ITT and PSAs by serving “ghost” ads to the control group. A ghost ad is invisible to the user who simply sees the ad that actually wins the auction. However, the ghost ad and its bid data are visible to advertising partners. This gives information on how users who should have seen an ad behave. Ghost ads benefit from an apples to apples comparison of users exposed to ads versus users who would have been exposed to ads.