Answer “what would have happened otherwise” when real experiments aren’t possible.
Synthetic Control Models are the closest thing to a controlled experiment when you can’t actually run one. They answer a question most post-campaign analyses get wrong: “What is the true counterfactual for this campaign?”
Classic before/after comparisons fail because markets don’t stand still, and A/B tests often aren’t feasible (you can’t turn off marketing everywhere).
The result is false confidence. We need a way to measure impact without a perfect randomized controlled trial.
"What would revenue have been in this market without the campaign?"
"Did this geo launch actually outperform similar regions?"
"Was this uplift causal or coincidental?"
"How much impact did offline or brand activity create?"
"Are we mistaking market growth for campaign success?"
Synthetic Control builds a “synthetic twin”: it combines multiple unaffected markets, weighting them so the blend behaves like the treated market before the intervention, then projects that blend forward to show what would have happened without it.
It then compares the actual treated market against the synthetic counterfactual; the gap between the two is the estimated effect. Think of it as building the parallel universe you wish you could observe.
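The weighting step above can be sketched in a few lines. This is a minimal illustration with made-up data, not SpendSignal’s actual pipeline: the canonical method solves a constrained quadratic program for donor weights, and a simple exponentiated-gradient loop stands in for that solver here because it keeps the weights non-negative and summing to one.

```python
# Minimal synthetic-control sketch (all names and numbers are illustrative).
import numpy as np

rng = np.random.default_rng(42)

# Weekly revenue for 4 untreated "donor" markets over 20 pre-campaign weeks.
donors_pre = rng.normal(100, 5, size=(20, 4))

# The treated market secretly behaves like a 50/30/20 blend of donors 1-3.
true_w = np.array([0.5, 0.3, 0.2, 0.0])
treated_pre = donors_pre @ true_w + rng.normal(0, 1, size=20)

# Find non-negative weights summing to 1 that minimize pre-period error.
# Exponentiated gradient descent keeps the weights on the simplex.
w = np.full(4, 0.25)
for _ in range(5000):
    grad = -2 * donors_pre.T @ (treated_pre - donors_pre @ w)
    w = w * np.exp(-1e-4 * grad)
    w = w / w.sum()

# Project the counterfactual: what the treated market "would have done".
donors_post = rng.normal(102, 5, size=(10, 4))   # 10 post-campaign weeks
synthetic_post = donors_post @ w                 # revenue without the campaign
```

The weights are fit only on pre-intervention data; everything after the intervention is a pure out-of-sample projection, which is what makes the post-period gap interpretable as an effect.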
| The Problem | What It Reveals | Decision Enabled |
| --- | --- | --- |
| You can’t cleanly A/B test regions. | What this region would have done otherwise. | Scale or stop geo strategies confidently. |
| New market growth looks promising, but why? | Incremental lift vs. baseline market momentum. | Double down or exit early. |
| TV, OOH, and sponsorships resist attribution. | Causal lift at market level. | Justify offline spend rigorously. |
| Price changes coincide with demand shifts. | Isolated impact of the change. | Avoid misreading elasticity. |
| “Trust us, it worked” isn’t credible. | Transparent causal comparison. | Make defensible claims externally. |
SpendSignal uses Synthetic Control as a gold-standard validation layer, invoked when the stakes are high and shortcuts are dangerous.
Instead of “Revenue increased 18% after launch”, you see the decision insight: “Two-thirds of this growth was truly incremental.”
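The arithmetic behind that insight is simple. Using the 18% figure from the example above (the synthetic twin’s value here is hypothetical, chosen to make the decomposition concrete):

```python
# Decompose observed growth into incremental vs. baseline movement.
baseline = 100.0       # indexed pre-launch revenue
actual = 118.0         # observed after launch (+18%)
synthetic = 106.0      # hypothetical: the twin's projected revenue

total_growth = actual - baseline          # 18 points of observed growth
incremental = actual - synthetic          # 12 points caused by the campaign
incremental_share = incremental / total_growth
print(f"{incremental_share:.0%} of the growth was truly incremental")
```

The campaign gets credit only for the gap above the synthetic twin; the rest was market momentum the region would have captured anyway.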
Is the comparison truly causal? Yes, provided the synthetic twin tracks the treated market closely before the intervention: it controls for shared market trends, so the remaining gap can be attributed to the campaign.
How many control markets are needed? More helps, but even a handful can work.
Can non-specialists follow the results? Yes. It’s visually and conceptually intuitive: one line for what happened, one for what would have happened, and the gap between them.