Three Tactics to Understand The Incremental Impact of Media (With a Bonus)

The little girl from the Old El Paso commercial might have been onto something. Why do we need to choose between nice things? Sure, she was talking about taco shells, but we have the much cooler topic of *media incrementality testing tactics*.

If you’re starting to evaluate ways to better understand the incremental impact of your media, chances are you’ve come across many different tactics, each championed by someone as the ‘gold standard’ of measurement.

Allow me to blank stare for a moment.

There is no single "gold standard" for measuring media incrementality. Each method has tradeoffs. The right choice depends on your budget, analytical resources, channels, and what decisions the results will inform. The best teams are usually choosing a combination of strategies and using them to triangulate the impact of media. Here are the three most commonly used tactics:

1.) Platform-Specific Lift Studies

Platform studies, like Google Ads Conversion Lift Studies or Meta Conversion Lift, measure the incremental impact of campaigns within a single platform. Users are randomly split into control and exposed groups, then conversions are compared between the two groups to isolate the platform's incremental contribution.

These studies require conversion data flowing into the platform and are best suited for understanding the impact of a specific channel or combination of channels within that ecosystem. The platform typically handles the analysis automatically, making this accessible even with limited internal analytics resources.
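To make the mechanics concrete, here's a minimal sketch of the control-vs-exposed comparison these studies run under the hood. All the numbers are made up for illustration:

```python
def lift_results(exposed_users, exposed_convs, control_users, control_convs):
    """Compare conversion rates between exposed and control groups."""
    cr_exposed = exposed_convs / exposed_users
    cr_control = control_convs / control_users
    abs_lift = cr_exposed - cr_control
    rel_lift = abs_lift / cr_control
    # Incremental conversions: conversions that would not have happened
    # without the ads, scaled to the size of the exposed group.
    incremental = abs_lift * exposed_users
    return abs_lift, rel_lift, incremental

# Hypothetical test: 100k users per group, 2.2% vs 2.0% conversion rate
abs_lift, rel_lift, inc = lift_results(100_000, 2_200, 100_000, 2_000)
print(f"{rel_lift:.0%} relative lift, {inc:.0f} incremental conversions")
# → 10% relative lift, 200 incremental conversions
```

The platforms layer statistical significance testing on top of this, but the core output is the same: how many conversions the ads actually caused.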

Cons

  • Siloed → can't measure cross-platform effects

  • Platforms are grading their own homework. You are relying on the platform to say how effective it is without access to much of the data.

  • Long conversion cycles are hard to capture (more to come on this in a future article)

  • There is the possibility for control group contamination (more to come on this in a future article)

When to avoid

  • You need cross-platform or full-funnel incrementality

  • Channels lack robust conversion tracking (e.g. CTV, linear TV)

  • You have long conversion cycles (e.g., it takes users a long time to complete the conversion after an ad click)

Pros

  • Lower analytical lift as platforms handle the analysis

  • Faster results at lower budget thresholds

  • Causal measurement (randomized control)

  • Easy to set up within existing campaign workflows

When to use

  • Evaluating a single channel's incremental contribution

  • You have clean conversion tracking in-platform

  • Limited analytics resources or timeline

  • Early-stage measurement before investing in larger tests


2.) Geographic Experiments

Geographic experiments typically split markets by city, zip code, or DMA. Markets are assigned to control and exposed groups, then you use your own sales or business data (not platform data) to measure results. This makes them well-suited for channels that lack pixel-level tracking, such as Connected TV, linear TV, Out of Home (OOH), or direct mail. You can also measure the cumulative impact of your full media mix simultaneously by turning off all media in the control markets.

However, these tests are much more cumbersome than the platform-specific tests above. Market selection requires statistical expertise. You can’t just compare New York vs. LA and call it a day. You’ll need a package of markets in both the control and exposed groups that are representative of each other. This can be complex and needs a power analysis to determine the spend level required to detect a statistically significant lift. Even after that, external events like a natural disaster, a competitor opening in a test market, or a major local news story can invalidate a test after it's already running. Some of this risk can be mitigated through outlier-market removal, but valid results are never guaranteed.
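The power-analysis step can be sketched in a few lines. This is a simplified two-arm normal approximation with hypothetical numbers, not a full market-matching design:

```python
import math

def minimum_detectable_lift(mean_sales, sales_std, markets_per_arm):
    """Smallest lift in average market sales a two-arm geo test can
    reliably detect (two-sided alpha = 0.05, power = 0.80)."""
    z_alpha, z_power = 1.96, 0.84  # standard normal critical values
    se = sales_std * math.sqrt(2 / markets_per_arm)
    return (z_alpha + z_power) * se / mean_sales

# Hypothetical: $50k mean weekly sales per market, $8k std, 15 markets/arm
print(f"{minimum_detectable_lift(50_000, 8_000, 15):.1%}")
# → 16.4%
```

In this made-up scenario, the test can only detect a lift of roughly 16% or more, which is exactly why under-powered geo tests with too few markets (or too little spend) come back "inconclusive."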

Cons

  • Requires experienced statisticians for valid design

  • Higher minimum spend to achieve statistical significance

  • External market events can invalidate results mid-test

  • Longer time-to-result than platform studies

  • Need to go dark in entire markets (opportunity cost).

When to avoid

  • You need fast, tactical insights (these tests can take 3+ months)

  • You don’t have analytics support to help with the experiment design & analysis

  • Your business can’t afford to have no media in select markets for an extended period of time

Pros

  • Works for channels without digital tracking (CTV, TV, OOH)

  • Can measure cumulative, cross-channel impact

  • Uses your own first-party sales data (no platform dependency)

  • Clean causal design when executed correctly

When to use

  • Measuring channels without conversion pixels

  • Evaluating total impact across the full media portfolio

  • You have designated analytical resources


3.) Marketing Mix Models (MMMs)

Unlike the previous two methods, MMMs are not randomized experiments. They use regression analysis to model the historical relationship between input variables (e.g. channel spend) and an output variable (e.g. sales or leads). The underlying logic is that if sales have consistently moved with a given input over time, there's a reasonable basis for inferring a causal relationship.

Because they're built on correlation rather than controlled experimentation, MMMs carry limitations. The results are only as good as the quality and completeness of the historical data fed in, and they can struggle to isolate effects during periods of rapid media mix change. That said, they're uniquely suited for long-horizon, portfolio-level planning. They let multiple teams align on a single view of channel efficiency, inform annual budget allocation, and evaluate the blended ROI of the entire media program.
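At its core, an MMM is a regression of an outcome on channel spend. Here's a toy sketch with simulated data; real MMMs add adstock, saturation curves, and seasonality controls, all of which this ignores:

```python
import numpy as np

# Simulate ~2 years of weekly data with known "true" channel effects.
rng = np.random.default_rng(0)
weeks = 104
search = rng.uniform(10, 50, weeks)   # weekly spend, $k
social = rng.uniform(5, 30, weeks)
sales = 200 + 3.0 * search + 1.5 * social + rng.normal(0, 10, weeks)

# Regress sales on spend (plus an intercept) to recover each channel's
# estimated marginal contribution per $k spent.
X = np.column_stack([np.ones(weeks), search, social])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
for name, c in zip(["baseline", "search", "social"], coef):
    print(f"{name}: {c:.2f}")
```

With simulated data the model recovers the true effects almost exactly; with real data, collinear spend patterns (every channel ramping up before the holidays) are what make the estimates fragile.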

Cons

  • Correlation-based → not causal without validation

  • Requires significant historical data (typically 2+ years)

  • Models can be slow to update with changing media dynamics

  • Results can be subjective based on modeling choices and inputs

When to avoid

  • Small budgets → insufficient spend to detect true lift

  • No analytics resources available for design and analysis

  • You’re looking to determine a true causal relationship between a channel and a business outcome

Pros

  • No experiment required. Runs on historical data

  • Covers all channels simultaneously, including unmeasurable ones

  • Excellent for annual budget allocation and scenario planning

  • Creates a shared, standardized view of channel ROI across teams

When to use

  • Measuring channels without pixel tracking

  • Analyzing your full media mix

  • You have access to first-party sales data

  • You have (or can hire) statistical expertise


A BONUS!!

Alright, I know I said ‘three tactics’ in the title, but I’m feeling crazy. This fourth tactic doesn’t measure ‘incrementality’ per se, but it does help fill some of the gaps the other tactics leave behind.

4.) Post-Purchase Surveys (aka ‘How Did You Hear About Us?’)

There are plenty of scenarios that fit neatly into the blind spots of the above tools. The most obvious one is influencers. You can’t run a cookie-based control vs exposed experiment on an influencer. You can’t hold out certain geographic regions from seeing an influencer's posts. And you probably can’t measure daily spend with an influencer to plug into an MMM.

What do you do?

Allow me to reintroduce post-purchase surveys. It’s one of the oldest measurement tools in marketing, but it has modern-day capabilities. Modern surveys don’t need to be the static box-checkers you’ve seen in the past. You can add conditional drop-downs that let people select individual influencers. This gives marketers much more precise data about their influencer spending (and can surface additional influencers worth partnering with). This is going to be an increasingly important tool as influencers become a larger part of media mixes.
