The Value in Incrementality
Why does McDonald's spend almost half a billion dollars on advertising if it can't attribute the ads customers saw to the Happy Meal they bought?
How does a company like Coca-Cola know that its $4 billion marketing spend is the reason you bought a Sprite at a gas station?
How does Unilever measure its $8 billion ad budget against your shopping habits at the local grocery store?
How are these brands sure that their ad budget is not cannibalizing their “organic” customer base?
How do brands measure incrementality and sales lift?
Advertising measurement was not invented with digital marketing; advertisers have been using measurement methods for the past 100 years.
Digital advertisers enjoyed a certain degree of control and access to data across customer journeys, giving them the ability to track the actions of their own customers.
Tracking went beyond consumers' actions: digital advertisers started using unique URLs for campaigns to know where users were when seeing or clicking an ad before the conversion point.
This process is called attribution.
Attribution is great for tracking, but it’s a terrible method of measurement.
How Attribution Works
Attribution has been the greatest gift digital advertising had to offer digital marketers.
Assigning dedicated links to campaigns and ad vendors, with parameters for granular information such as location, creative, and demographics, gave marketers a data-driven picture of their marketing results.
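The mechanics of tagged links and last-touch credit can be sketched in a few lines. This is a minimal illustration, not any vendor's actual schema: the parameter names (campaign, creative, geo) and URLs are invented for the example.

```python
# Minimal last-touch attribution sketch. Parameter names and URLs are
# illustrative, not a real attribution vendor's schema.
from urllib.parse import urlparse, parse_qs

def parse_touch(url):
    """Extract campaign metadata from a tagged ad link."""
    params = parse_qs(urlparse(url).query)
    return {k: v[0] for k, v in params.items()}

def last_touch(touches):
    """Credit the conversion to the most recent ad touch."""
    return max(touches, key=lambda t: t["timestamp"])

# Two ad touches preceding one conversion:
touches = [
    {"timestamp": 1, **parse_touch("https://example.com/?campaign=summer&creative=red&geo=DE")},
    {"timestamp": 2, **parse_touch("https://example.com/?campaign=summer&creative=blue&geo=DE")},
]
winner = last_touch(touches)
print(winner["creative"])  # prints "blue": the latest touch wins all the credit
```

Note that the earlier touch gets zero credit, which is exactly the property that makes last-touch a tracking tool rather than a measurement tool.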
Attribution technologies use either semi-persistent user identifiers (such as cookies or device IDs) or device-level data to create a device “fingerprint”.
Attribution using semi-persistent identifiers (IDFA, GAID):
Attribution using device-level information to “fingerprint” a device:
Each of these use cases works in reverse as well: if you as a marketer lowered a bid and ROI increased, or paused a campaign and organics went up, the INCRMNTAL platform will provide you with these strategic outputs so that you are always unlocking the value in your marketing spend.
INCRMNTAL is an incrementality measurement platform providing advertisers with incrementality and cannibalization insights over their campaigns, ad networks and any marketing activity. The platform calculates incremental ROI and cost per incremental conversion.
Unlock the full value of your marketing budget.
Return on ad spend (ROAS) ignores total ROI, leading marketers to wasted spend and non-incremental results. Uber and Airbnb found that over 80%(!) of their performance advertising spend was redundant. And this is just the tip of the iceberg.
Measuring Value, Not Traffic
Incremental return on investment takes a more holistic approach to measurement, continuously monitoring results across the board while taking into account the results reported by last-touch attribution and media cost data.
This approach looks at the conversion changes caused by paid marketing, to come up with an understanding of the real incremental sales lift and incremental ROI.
Measuring incrementality requires a continuous layer of predictive analytics overlaid on every comparable combination. Creating a synthetic cohort allows our platform to monitor the difference-in-differences and produce digestible insights for marketers.
E.g. “The ROAS for ‘New Vendor’ is cannibalizing Organic traffic. Stopping the activity with ‘New Vendor’ will lead to positive incrementality”
Being king of the obvious - “New Vendor” is winning attribution for customers who would have converted organically were it not for the additional advertising budget spent on “New Vendor”.
This means that while the new vendor’s ROAS looks positive - total ROI decreased.
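With hypothetical numbers, the gap between a vendor's attributed ROAS and the true incremental ROI looks like this. Every figure below is invented for illustration:

```python
# Illustrative numbers only: a vendor whose attributed ROAS looks healthy
# while incremental ROI is negative because the vendor cannibalizes organics.
spend = 10_000                    # budget given to "New Vendor"
attributed_revenue = 15_000       # revenue last-touch attribution credits to it
organic_revenue_before = 50_000   # organic revenue before the campaign
organic_revenue_during = 38_000   # organic revenue while the campaign runs

roas = attributed_revenue / spend                               # 1.5: looks great
organic_drop = organic_revenue_before - organic_revenue_during  # 12,000 cannibalized
incremental_revenue = attributed_revenue - organic_drop         # only 3,000 truly new
incremental_roi = (incremental_revenue - spend) / spend         # -0.7: money lost

print(f"ROAS: {roas:.2f}, incremental ROI: {incremental_roi:.2f}")
# prints: ROAS: 1.50, incremental ROI: -0.70
```

The vendor's dashboard shows 1.5x return; accounting for the organic drop shows the same spend destroyed value.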
When zooming out to see overall results, the user acquisition graph looks as follows:
When a user clicks an ad, advertisers display an App Store product screen with signed parameters that identify the ad campaign. If a user installs and opens an app, the device sends an install validation postback to the ad network. The Apple-signed notification includes the campaign ID but doesn’t include user- or device-specific data. The postback may include a conversion value and the source app’s ID if Apple determines that providing the values meets Apple’s privacy threshold.
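The postback described above looks roughly like the dictionary below. The key names follow Apple's documented SKAdNetwork 2.0 postback format as far as we can paraphrase it; all values are invented, and the signature is truncated.

```python
# Approximate shape of a SKAdNetwork 2.0 install-validation postback.
# Key names paraphrase Apple's documented format; values are invented.
postback = {
    "version": "2.0",
    "ad-network-id": "example123.skadnetwork",
    "campaign-id": 42,             # the only campaign signal the network gets back
    "transaction-id": "6aafb7a5-0170-41b5-bbe4-fe71dedf1e28",
    "app-id": 525463029,           # the advertised app
    "attribution-signature": "MEQCIEQlmZ...",  # Apple-signed, truncated here
    "redownload": False,
    # Present only if Apple's privacy threshold is met:
    "source-app-id": 1234567891,
    "conversion-value": 20,        # set by the advertised app
}

# The privacy guarantee in one line: no user or device identifier anywhere.
assert all(k not in postback for k in ("idfa", "device-id", "user-id"))
```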
Attribution as we know it - no longer happens on a single user level.
Last-touch attribution works in real time, providing marketers with a “proxy” for understanding correlation with their customer and sales activities.
Attribution is awesome for telling creative performance apart, as seen in this example:
A marketer with this data would be able to divert budget towards the blue banner, which performs 60% better than the red banner.
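The comparison behind that decision is simple arithmetic on attributed conversions. The impression and conversion counts below are invented to reproduce the 60% gap in the example:

```python
# Hypothetical creative report: attribution data makes relative creative
# performance easy to compare. All counts are invented for illustration.
creatives = {
    "red_banner":  {"impressions": 100_000, "conversions": 500},
    "blue_banner": {"impressions": 100_000, "conversions": 800},
}

# Conversion rate per creative:
cvr = {name: c["conversions"] / c["impressions"] for name, c in creatives.items()}

# Relative uplift of blue over red:
uplift = cvr["blue_banner"] / cvr["red_banner"] - 1
print(f"Blue outperforms red by {uplift:.0%}")  # prints: Blue outperforms red by 60%
```

This is attribution at its best: a clean, within-channel, like-for-like comparison. It says nothing, however, about whether either banner added conversions that would not have happened anyway.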
Attribution & Aggregated Reporting
During the Apple WWDC event on June 22nd, Apple announced the deprecation of the persistent device identifier (IDFA) as well as a ban on fingerprinting.
Apple also announced an attribution solution for developers: SKAdNetwork 2.0 (“SKAd”).
SKAd offers an elegant approach to Attribution without invading users’ privacy.
The API involves three participants:
Ad networks that sign ads and receive install notifications when ads result in conversions
Source apps that display ads provided by the ad networks
Advertised apps that appear in the signed ads
Ad networks must register with Apple, and developers must configure their apps to work with ad networks.
The following diagram describes the path of an install validation. App A is the source app that displays an ad. App B is the advertised app that the user installs.
Attribution Is Not Measurement
Regardless of user-level tracking and attribution, using attribution as the measurement of advertising results has led many great advertisers to spend budgets on redundant “results”.
The graph and table below show the reported ad spend and revenues from new customers attributed to “new vendor”.
Based on last touch attribution data - this new vendor generates positive results.
Until recently, incrementality measurement focused on segmenting audiences into a control group, showing that group PSA or ghost ads, and comparing the results of the campaign shown to the control group vs. the results of the general campaign.
This approach usually produced biased or inconclusive results, as there was no way to know whether the control group was “clean” and unaffected by other campaigns running.
Various other attempts to test incrementality blacked out advertising altogether for a period of time, but this approach carried such high opportunity costs, and only provided conclusive results for the period in which the test ran, that most advertisers abandoned the idea of performing such tests.
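The classic holdout approach boils down to comparing conversion rates between an exposed group and a held-out control group. All the numbers below are invented:

```python
# Classic holdout lift test: the exposed group sees the campaign, the
# control group sees a PSA or ghost ad. Numbers invented for illustration.
exposed = {"users": 100_000, "conversions": 1_300}
control = {"users": 100_000, "conversions": 1_000}

cvr_exposed = exposed["conversions"] / exposed["users"]  # 1.3%
cvr_control = control["conversions"] / control["users"]  # 1.0%
lift = (cvr_exposed - cvr_control) / cvr_control         # 30% measured lift

print(f"Incremental lift: {lift:.0%}")
# The estimate is only as good as the control group: if other live campaigns
# also reached the "control" users, the baseline is contaminated and the
# lift number is biased - the exact problem described above.
```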
Our challenge at INCRMNTAL was: How would we know if a user was going to perform an action, even if they were not advertised to?
The answer: we don’t.
True Attribution Focuses on Incrementality
Our initial idea was to build “better attribution”: an attribution solution based on first-party data, applying machine learning to understand the multiple touchpoints a user has with ads.
But this was a moot point - multi-touch is practically impossible in the mobile app ecosystem, as user-level data is becoming obsolete.
We also figured that offering developers yet another measurement SDK would not help them. No one wants to integrate another SDK.
Our research made us understand that developers are not in need of “better attribution” - attribution as it is, is fine. But attribution does not provide measurement.
Making budget decisions based on attribution data has led to some major losses. Just ask Uber or Airbnb, who reported that over 80% of their ad spend was redundant.
Once we established a few ground rules, we had our direction:
We do not challenge attribution data
We are not offering to replace attribution
Incrementality testing is done in retrospect
Incrementality measurement does not happen for a single user
Causal Inference, Difference in Differences
Once we established our ground rules, the answer was found in data science and statistics: causal inference and difference-in-differences.
Causal inference is the process of determining the independent, actual effect of a particular phenomenon that is a component of a larger system. The main difference between causal inference and inference of association is that causal inference analyzes the response of an effect variable when a cause of the effect variable is changed. The science of why things occur is called etiology. Causal inference is said to provide the evidence of causality theorized by causal reasoning.
Difference in differences is a statistical technique used in econometrics and quantitative research in the social sciences that attempts to mimic an experimental research design using observational study data, by studying the differential effect of a treatment on a 'treatment group' versus a 'control group' in a natural experiment. It calculates the effect of a treatment (i.e., an explanatory variable or an independent variable) on an outcome (i.e., a response variable or dependent variable) by comparing the average change over time in the outcome variable for the treatment group with the average change over time for the control group. Although it is intended to mitigate the effects of extraneous factors and selection bias, depending on how the treatment group is chosen, this method may still be subject to certain biases.
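The estimator itself is one line of arithmetic. In the advertising setting, "treatment" might be a bid change on one campaign and "control" a comparable campaign left untouched; the install counts below are invented:

```python
# Difference-in-differences: compare the change in the treated series
# against the change in a comparable untreated series. Numbers invented.
treated_before, treated_after = 1_000, 1_400  # daily installs; bid was raised
control_before, control_after = 2_000, 2_200  # comparable campaign, no change

delta_treated = treated_after - treated_before  # +400 installs
delta_control = control_after - control_before  # +200 installs (market-wide drift)

# Subtracting the control's change strips out whatever moved both series:
did = delta_treated - delta_control
print(did)  # prints: 200 - installs attributable to the bid change itself
```

The control series absorbs seasonality, market shocks, and other shared factors, so the remaining 200 installs are the estimate of the treatment's causal effect, subject to the bias caveats above.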
Incrementality Testing in Advertising
Applying causal inference to advertising was the real challenge. Advertising, and specifically a multi-platform, high-throughput, high-scale, global, competitive and highly volatile environment with no constants, makes approaching causal inference an extremely challenging task.
You may say that an apple fell on our heads when we found our “how”: a simple, yet obvious, constant in every market research call we had with advertisers across the globe and across various verticals.
From here on, it was an “easy” task: we spent the next year running data experiments, developing anomaly detection, statistical models and algorithms, and an AI brain that interprets the algorithmic outputs into simple statements: “New Vendor has no incrementality to your activity. We recommend that you stop the campaigns launched with New Vendor”
The Use Cases That Matter
We spoke with many growth teams from various industries, with various needs, to come up with the six use cases our platform provides answers for: