Incrementality: The Best Measure for Ad Success

Angela Humphrey | May 22, 2020
Director, Business Development

Gone are the days when return on ad spend (ROAS) was the be-all, end-all metric for measuring ad performance. ROAS is calculated by dividing ad revenue by ad spend. While it’s an easy-to-understand formula, it overlooks how individual ad campaigns affect individual user groups.

Incrementality takes measurement one step further, providing valuable information where ROAS falls short. Incremental lift analysis evaluates each advertising campaign individually for its effectiveness, allowing advertisers to better allocate their spending.

What is Incrementality?

Incrementality refers to the incremental lift that advertising spend provides to the overall conversion rate. ROAS calculates the total amount of revenue that offsets the total cost of advertising campaigns. In contrast, incrementality provides the percentage of conversions received as a direct result of an advertising campaign. Incremental lift analysis looks specifically at whether an individual advertising campaign was effective and how effective, by determining what sales revenue would have been without the ad campaign.

You can measure lift in engagement, in-app spend, or conversion frequency. 

As noted by our VP of Growth and Re-engagement, Hilit Mioduser, “At YouAppi, we focus on performance KPIs like Cost Per Engagement (CPE), Cost Per Acquisition (CPA), Return on Investment (ROI), Return On Ad Spend (ROAS) and use incremental lift to measure the increase in Average Order Value (AOV), conversions and total transactions.”

Understanding incrementality allows advertisers to optimize their ad budgets by allocating retargeting ad spend in the most impactful way possible.

How to do Incrementality Testing

Incrementality testing starts with the random selection of a test group and a control group. At YouAppi, we hold out 10% of users for the control group, leaving the remaining 90% in the test group. The test group receives an ad and the control group does not. The difference in the conversion rate of both populations is measured for incremental conversions. This allows for the accurate measurement of the cause and effect of marketing efforts, answering the question: “Did showing the ads change users’ behavior relative to not showing them an ad?” 
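The random split described above can be sketched in a few lines of Python. This is a minimal illustration, not YouAppi’s actual implementation; the user IDs and function name are hypothetical, and the 10% holdout mirrors the share mentioned above.

```python
import random

def assign_groups(user_ids, holdout=0.10, seed=42):
    """Randomly hold out a share of users as the control group;
    the rest form the test group that will be served the ad."""
    rng = random.Random(seed)  # seeded so the split is reproducible
    shuffled = list(user_ids)
    rng.shuffle(shuffled)
    cutoff = int(len(shuffled) * holdout)
    control = set(shuffled[:cutoff])  # never shown the ad
    test = set(shuffled[cutoff:])     # shown the ad
    return test, control

# Hypothetical user IDs for illustration
users = [f"user_{i}" for i in range(1000)]
test, control = assign_groups(users)
print(len(test), len(control))  # 900 100
```

Because assignment is random, any difference in conversion rate between the two groups can be attributed to the ad rather than to pre-existing differences between users.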

Here’s an example to give you an idea of this in practice:

Suppose you have an eCommerce app and run a dynamic creative campaign offering users 15% off. After randomly assigning users to test and control groups and running the campaign for 4 weeks, the results show that users who received no promotional ad converted at a rate of 10.47%. This becomes your baseline for measuring the impact of your campaign. The test group that did receive the promotion shows a 13.79% conversion rate, a lift of roughly 31.7% over the control group. This makes it clear that the campaign had a positive impact on conversion.

When we put it all together, this is what an incremental lift analysis looks like:

Lift vs. Incrementality (they’re different)

There’s historically been some confusion between the different metric terms used in incrementality testing. While lift is the relative increase in the likelihood a consumer will convert after seeing your ad, incrementality is the percentage of conversions you received because of your ad.

Here’s the calculation for lift:

Lift = (Test Conversion Rate − Control Conversion Rate) ÷ Control Conversion Rate

Versus the calculation for incrementality:

Incrementality = (Test Conversion Rate − Control Conversion Rate) ÷ Test Conversion Rate

In our example above the lift is 31.7%, indicating that a consumer shown the ad is over 30% more likely to convert. On the other hand, the test resulted in 24% incrementality, meaning that 24% of the conversions we received happened because we showed the promotional ad. Another way to think about this is that we would have lost 24% of our conversions if we had not shown the ad.
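The two calculations can be checked directly against the example numbers. This is a small sketch using the conversion rates quoted above; the function names are illustrative.

```python
def lift(test_cr, control_cr):
    # Relative increase in conversion rate attributable to the ad
    return (test_cr - control_cr) / control_cr

def incrementality(test_cr, control_cr):
    # Share of test-group conversions that would not have
    # happened without the ad
    return (test_cr - control_cr) / test_cr

# Conversion rates from the eCommerce example above
control_cr = 0.1047  # control group: no promotional ad
test_cr = 0.1379     # test group: shown the 15%-off ad

print(f"Lift: {lift(test_cr, control_cr):.1%}")                      # Lift: 31.7%
print(f"Incrementality: {incrementality(test_cr, control_cr):.1%}")  # Incrementality: 24.1%
```

Note that the two metrics share a numerator (the difference in conversion rates) but divide by different baselines: lift divides by the control rate, incrementality by the test rate.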

A Note on Seasonality

Before we turn you loose on all the incrementality testing under the sun, be mindful of seasonality like the Appi Camper you are! Seasonality can skew data drastically, since control group response rates vary across buying seasons. For example, if we ran this test during Black Friday, the control group response rate might be much higher than at other times of the year, since users are more likely to purchase during this period. That skew can lead to misallocation of the advertising budget. Try to test on neutral buying days to get a more accurate understanding of your advertising campaign’s true impact on users.

Takeaways

ROAS is no longer the last word in measuring ad campaign success. Incremental lift analysis takes your testing one step further by examining each advertising campaign’s individual impact.

  • Incrementality identifies the conversions received as a direct result of an advertising campaign, allowing advertisers to better allocate their spending.
  • In incrementality testing, users are randomly split into a test group that sees your ad and a control group that doesn’t; the difference in conversion rates between them reveals your incremental conversions.
  • Lift gives you the likelihood a consumer will convert. Incrementality measures the percentage of conversions received because of your ad. 
  • Try to test on neutral buying days to get a more accurate understanding of your advertising campaign’s impact on consumers.