Back in 2015, I worked with one of the world’s fastest growing entertainment companies. This company was scaling fast, spending over $1M a day on performance marketing, working with almost every ad platform out there, and increasing its ad spend month over month. It became a phenomenon.
If you worked in advertising – these were the “party times”.
That was until this company hired a very bright Chief Marketing Officer, who came from the world of Brand marketing, and knew a thing or two about measuring “lift”. He did something no one expected him to do. He stopped marketing.
This CMO made the blunt decision to pause all ad spend for a full month, allowing the company to learn what its true organic baseline was, before gradually restarting channels one at a time: A, then B, then C, then D, and so on.
The result? He learned that many channels generated little to no incremental value. These insights enabled him to allocate ad spend based on true value. He proved that he could spend half the budget and get the same results.
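The arithmetic behind that kind of pause-and-restart experiment is simple to sketch. Below is an illustrative Python snippet (not INCRMNTAL's method, and with entirely made-up numbers): each channel's incremental conversions are the lift over the level observed before that channel was restarted, and dividing spend by that lift gives a cost per incremental conversion.

```python
def incremental_report(baseline, restarts):
    """Estimate per-channel incrementality from a sequential restart.

    baseline: daily conversions observed while ALL channels were paused.
    restarts: ordered list of (channel, daily_conversions, daily_spend),
      where daily_conversions is the level observed after that channel
      (and every earlier one) was restarted.
    """
    report = {}
    prev_level = baseline
    for name, conversions, spend in restarts:
        lift = conversions - prev_level  # conversions attributable to this restart
        report[name] = {
            "incremental_conversions": lift,
            "cost_per_incremental": spend / lift if lift > 0 else float("inf"),
        }
        prev_level = conversions
    return report

# Hypothetical numbers for illustration only.
restarts = [
    ("A", 13_000, 50_000),  # big lift over the 10k baseline
    ("B", 13_400, 40_000),  # modest lift
    ("C", 13_450, 60_000),  # barely any lift despite heavy spend
]
print(incremental_report(10_000, restarts))
```

With these toy figures, channel A delivers 3,000 incremental conversions at roughly $16.67 each, while channel C delivers only 50 at $1,200 each, which is exactly the kind of finding that lets a CMO cut budget without losing results. Of course, this naive arithmetic assumes the baseline and all other conditions stay frozen during the test, which is the very assumption the rest of this post challenges.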
This was a real planned experiment, and one of the inspirations for me starting INCRMNTAL.
After my backstory, you would think that I would be a big fan of experiments for incrementality testing. But, 10 years have passed since then and the marketing world has changed drastically.
For these reasons, and more, you are extremely limited in the number of times you can perform such a test.
But even if you could somehow control all of these conditions – if you could place the world in a laboratory and account for weather, seasonality, competition, market dynamics, product changes, and product promotions – experiments would still fail in principle.
In 2015, stopping your Google, Facebook, Twitter, etc. campaigns was not a big deal. You could just restart those later, and the campaigns would pick up where you left them.
User acquisition managers in 2015 (“Media Buyers”) had control over keyword targeting, ad creatives, placements, and many levers and settings in the advertising platforms.
Fast forward to today, and Google Ads bears barely any resemblance to legacy AdWords. Setting up ACI, UAC, or PMax campaigns gives the UA manager scarcely any control beyond strategy and a few settings, such as exclusions or adding creatives. The new advertising products rely heavily on artificial intelligence (AI) to learn and self-calibrate.
Once stopped, you cannot simply restart without having the platform reset the knowledge it gained.
Planned experiments are disruptive. Stopping marketing for the sake of marketing measurement is completely absurd when there are better, more innovative ways to measure incrementality.
In a recent interview I gave on the MobileDevMemo podcast with Eric Seufert, we spoke about the octagon diagram I borrowed from the IPA and titled The Perfect Measurement. In the interview, Eric and I both addressed why planned experiments, or GeoLift experiments, are typically referred to as the Gold Standard of measurement.
I researched the history of this term and found that the common usage of "Gold Standard" comes from rigorous experimental procedures in blind medical trials.
Creating strictly controlled test and control groups is crucial in medical trials; a trial is often stopped if the conditions of the control or test group change beyond a margin of error.
Medical trials require participants to adhere to strict practices and to report on their diet and on any activities or changes to their lifestyle.
The world of marketing is far from the controlled world of pharmaceuticals. What marketers did was borrow the term and brand the practice of experiments, while leaving the necessary strictness in controlling the variables as a footnote – or as something to be swept under the carpet.
Planned experiments feel tangible, and that is exactly our biggest beef with them – they give the sensation of tangibility:
“I stopped and restarted marketing, therefore, I know what to expect”
But these are as tangible as the placebo effect in pharmaceutical trials.
In marketing, given the inability to control the variables, both marketers and experiment designers will convince themselves that a planned experiment gave them the outcome they expected to see.
And if not, that would just mean that the test is inconclusive, therefore, another test is required until the expected result is reached. 🤔
This is not insightful. And it’s definitely not an effective means of measurement any more. The Gold Standard of planned experiments has well and truly lost its sheen.