Statistical Model

Synthetic Control Models

Answer “what would have happened otherwise” when real experiments aren’t possible.

Synthetic Control Models are the closest thing to a controlled experiment when you can’t actually run one. They answer a question most post-campaign analyses get wrong: “What is the true counterfactual for this campaign?”

The Problem This Model Solves

Classic before/after comparisons fail because markets don’t stand still, and A/B tests often aren’t feasible (you can’t turn off marketing everywhere).

The result: false confidence. We need a way to measure impact without a perfect randomized controlled trial.

Questions This Model Answers

"What would revenue have been in this market without the campaign?"

"Did this geo launch actually outperform similar regions?"

"Was this uplift causal or coincidental?"

"How much impact did offline or brand activity create?"

"Are we mistaking market growth for campaign success?"

How the Model Thinks (Without the Math)

Synthetic Control builds a “synthetic twin”.

It combines multiple unaffected markets and weights them so the blend tracks the treated market before the intervention, then projects that blend forward to estimate what would have happened without it.

The comparison is simple: the actual treated market versus its synthetic counterfactual. Think of it as building the parallel universe you wish you could observe.
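Under the hood, the fitting step typically amounts to choosing non-negative donor weights that sum to one and minimize the pre-intervention gap. The sketch below illustrates that idea on toy data with numpy and scipy; the market count, launch week, and injected campaign effect are all made up for the demo and are not SpendSignal's implementation.

```python
# Minimal synthetic-control sketch on toy data (illustrative only).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy data: 52 weeks of revenue for 4 untouched donor markets and 1 treated market.
weeks, launch_week = 52, 40
donors = rng.normal(100, 5, size=(weeks, 4)).cumsum(axis=0)
treated = donors @ np.array([0.5, 0.3, 0.1, 0.1]) + rng.normal(0, 2, weeks)
treated[launch_week:] += 30  # injected "campaign effect" so the demo has something to find

# Fit non-negative weights that sum to 1 so the weighted donor blend
# tracks the treated market over the PRE-intervention window only.
pre_donors, pre_treated = donors[:launch_week], treated[:launch_week]

def pre_period_error(w):
    return np.sum((pre_treated - pre_donors @ w) ** 2)

n_donors = donors.shape[1]
fit = minimize(
    pre_period_error,
    x0=np.full(n_donors, 1.0 / n_donors),
    bounds=[(0.0, 1.0)] * n_donors,
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},
)
weights = fit.x

# Project the synthetic twin through the post-period and read off the gap.
synthetic = donors @ weights
post_lift = treated[launch_week:] - synthetic[launch_week:]
print("Donor weights:", np.round(weights, 2))
print(f"Average weekly lift after launch: {post_lift.mean():.1f}")
```

The gap between the treated series and the projected synthetic series after the launch week is the estimated causal lift; a good pre-period fit is what makes that gap credible.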

Core Business Use Cases

1. Geo-Based Experiments

The Problem

You can’t cleanly A/B test regions.

What It Reveals

What this region would have done otherwise.

Decision Enabled

Scale or stop geo strategies confidently.

2. Market Entry & Expansion Analysis

The Problem

New market growth looks promising—but why?

What It Reveals

Incremental lift vs baseline market momentum.

Decision Enabled

Double down or exit early.

3. Offline & Brand Campaign Measurement

The Problem

TV, OOH, sponsorships resist attribution.

What It Reveals

Causal lift at market level.

Decision Enabled

Justify offline spend rigorously.

4. Pricing or Policy Changes

The Problem

Price changes coincide with demand shifts.

What It Reveals

Isolated impact of the change.

Decision Enabled

Avoid misreading elasticity.

5. Leadership & Investor Validation

The Problem

“Trust us, it worked” isn’t credible.

What It Reveals

Transparent causal comparison.

Decision Enabled

Make defensible claims externally.

Powered by SpendSignal

How We Use This Model

SpendSignal uses Synthetic Control as a gold-standard validation layer.

Specifically:

  • Geo-level incrementality validation
  • Cross-checking MMM and BSTS outputs
  • Offline channel measurement
  • Executive-grade causal reporting

This model is invoked when the stakes are high and shortcuts are dangerous.

Example Output

Instead of “Revenue increased 18% after launch”, you see:

  • Treated market: +18%
  • Synthetic control: +6%
  • Net causal lift: +12%

The decision insight: "Two-thirds of this growth was truly incremental."
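For clarity, here is the arithmetic behind that insight as a tiny Python snippet; the percentages are the illustrative figures above, not client data.

```python
# Decompose the headline numbers from the example output (illustrative arithmetic).
treated_growth, synthetic_growth = 0.18, 0.06
net_causal_lift = treated_growth - synthetic_growth    # 12 percentage points
incremental_share = net_causal_lift / treated_growth   # ~two-thirds of observed growth
print(f"Net lift: {net_causal_lift:.0%}, incremental share: {incremental_share:.0%}")
```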

Works Best When

  • You have comparable markets
  • Interventions are localized
  • You need strong causal claims

Be Cautious When

  • The treated market is highly idiosyncratic, with no close peers
  • Data history is short
  • Spillover between regions is severe

Frequently Asked Questions

Is this better than pre/post analysis?

Yes. Unlike a simple pre/post comparison, it accounts for market trends and shared shocks by benchmarking against comparable markets that never received the intervention.

Does it require many markets?

More helps, but even a handful can work.

Is this understandable to executives?

Yes. It’s visually and conceptually intuitive.

Stop Guessing. Start Knowing.

See how Synthetic Control Models change your budget decisions with a live incrementality audit.
