
Attribution Is Astrology for People Who Like Spreadsheets

  • Belinda Anderton
  • Jul 8
  • 4 min read

You ran five marketing campaigns last month. You got 100 sales. Which campaign caused which sale? You don't know. You can't know. The mathematics of causation doesn't support knowing. But you have an attribution model that gives you numbers adding up to exactly 100%, so you believe it.


This is the most expensive lie in ecommerce, and everyone's doubling down on it.


The Attribution Delusion

Let me show you how attribution actually works in practice:

A customer sees your Facebook ad on Monday. Ignores it. Sees your Instagram ad on Wednesday. Ignores it. Gets your email on Friday. Clicks through, browses, leaves. Searches your brand name on Google the following Tuesday. Clicks the paid search ad. Buys.


Your attribution model says: Google paid search gets 100% credit (last-click attribution). Or Facebook gets 100% credit (first-click attribution). Or each of the four touches gets 25% credit (linear attribution). Or the recent touches get more credit because time-decay weighting says so.
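
To make the arbitrariness concrete, here's a minimal Python sketch of those four rule-based models applied to that exact journey. The touchpoint names, day counts, and the seven-day half-life are illustrative assumptions, not any vendor's actual implementation:

```python
# The same four-touch journey, credited four different ways.
# Names and the decay half-life are made up for illustration.

touchpoints = ["facebook_ad", "instagram_ad", "email", "google_paid_search"]
days_before_purchase = [8, 6, 4, 0]  # Monday ... the following Tuesday

def last_click(touches):
    return {t: (1.0 if i == len(touches) - 1 else 0.0) for i, t in enumerate(touches)}

def first_click(touches):
    return {t: (1.0 if i == 0 else 0.0) for i, t in enumerate(touches)}

def linear(touches):
    return {t: 1.0 / len(touches) for t in touches}

def time_decay(touches, ages, half_life_days=7.0):
    # Weight each touch by 2^(-age / half_life), then normalize to 100%.
    raw = [2 ** (-age / half_life_days) for age in ages]
    total = sum(raw)
    return {t: w / total for t, w in zip(touches, raw)}

for name, credit in [("last-click", last_click(touchpoints)),
                     ("first-click", first_click(touchpoints)),
                     ("linear", linear(touchpoints)),
                     ("time-decay", time_decay(touchpoints, days_before_purchase))]:
    print(name, {t: round(w, 2) for t, w in credit.items()})
```

Same journey, four contradictory sets of percentages, and the only thing that changed is which rule you picked.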


Which one is correct?


None of them. All of them. The question itself is malformed.


You're trying to isolate causation in a system with multiple touchpoints, unknown external variables (maybe they saw a billboard, maybe their friend recommended you, maybe they were always going to buy eventually), and exactly zero ability to run the counterfactual. You cannot know what would have happened if you hadn't run that Facebook ad, because you ran the Facebook ad. The alternative universe where you didn't run it is inaccessible.


But your attribution dashboard gives you confident percentages down to two decimal places, so it must be true.


The Math That Doesn't Work

Every attribution model is fundamentally weighted correlation dressed up as causation. You're measuring sequence and calling it influence. Multi-touch attribution models use algorithms (sometimes "machine-learned," which sounds sophisticated) to assign weights to different touchpoints. These weights are arbitrary. The algorithm isn't discovering causal relationships; it's finding correlations in your historical data and assuming they represent causation.


This is the post hoc ergo propter hoc fallacy with better branding. The rooster crows, then the sun rises. By attribution logic, the rooster causes sunrise. Give the rooster 40% credit for daylight.


The validation problem is insurmountable: you cannot test whether your attribution model is correct because you cannot observe the counterfactual. If your model says Facebook drove 30% of sales, you'd need to run a parallel universe where you didn't run Facebook ads and see if sales dropped 30%. You can't. So you trust the model because it gives you numbers and humans prefer confident wrong answers to admitting uncertainty.


The Coordination Catastrophe

Here's where it gets spectacular: attribution doesn't work, so companies don't fix the attribution problem. They add more systems. Your attribution tool doesn't talk to your ad platforms. Your ad platforms don't talk to each other. Facebook says you got 150 conversions. Google says you got 120 conversions. Your analytics platform says you got 100 conversions. Your actual sales data says you got 100 sales.


Each system has different definitions of "conversion," "session," "user," and "click." Facebook counts view-through conversions. Google doesn't. Your analytics platform uses a 30-minute session timeout. Your attribution tool uses 60 minutes. Nobody agrees on basic ontology, but everyone's confident about causation.
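
The session-timeout mismatch alone is enough to make the numbers disagree. A minimal sketch, with made-up timestamps:

```python
# The same clickstream produces different session counts depending
# on the timeout setting. Timestamps (minutes) are illustrative.

hits = [0, 10, 45, 130, 150, 220]  # one visitor's page hits

def count_sessions(hit_times, timeout_minutes):
    # A new session starts whenever the gap since the previous hit
    # exceeds the timeout.
    sessions = 1
    for prev, cur in zip(hit_times, hit_times[1:]):
        if cur - prev > timeout_minutes:
            sessions += 1
    return sessions

print(count_sessions(hits, 30))  # 30-minute timeout: 4 sessions
print(count_sessions(hits, 60))  # 60-minute timeout: 3 sessions
```

Same visitor, same clicks, and the two tools can't even agree on how many visits happened, let alone what caused the sale.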


So you hire someone to build dashboards that reconcile these systems. You buy an integration platform. You implement a CDP (Customer Data Platform) because someone told you that's what you need. The CDP doesn't solve it because the CDP is just another disconnected system with its own definitions sitting on top of your other disconnected systems with their own definitions.


You're now paying six figures annually for attribution that's still wrong, but consistently wrong across dashboards, which feels like progress.


Doubling Down on Failure

The pattern is predictable: Attribution doesn't work, buy better attribution tool. Tools don't connect, buy integration platform. Data doesn't match, hire data engineer. Still doesn't work, "we need a CDP." CDP doesn't solve it, "we need a data warehouse." Warehouse doesn't solve it, "we need better data governance."


Each "solution" adds complexity. None solve the fundamental coordination problem, which is that you're trying to measure something that cannot be measured, using tools that don't agree on definitions, to answer a question that doesn't have an answer. You're not building a measurement system. You're building a coordination nightmare and calling it attribution.


What You're Actually Measuring

Your multi-touch attribution model isn't measuring causation. It's measuring correlation (things that happened together), sequence (things that happened in order), and association (things that appear related). None of these are causation. Your model can tell you that people who see Facebook ads are more likely to buy. It cannot tell you that Facebook ads cause purchases. The difference is everything.


Maybe people who see Facebook ads were already in-market. Maybe Facebook's targeting is just good at finding people who were going to buy anyway. Maybe the ad does cause some purchases but not others and you can't tell which is which.
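
You can watch that confounding happen in a toy simulation. The probabilities below are invented for illustration: in-market intent drives both ad exposure (targeting finds likely buyers) and purchase, while the ad itself does exactly nothing:

```python
# Intent causes both exposure and purchase; the ad has zero causal
# effect. All probabilities are made up for illustration.

import random

random.seed(0)

def simulate(n=100_000):
    saw_ad = saw_ad_and_bought = no_ad = no_ad_and_bought = 0
    for _ in range(n):
        in_market = random.random() < 0.10                     # 10% would buy anyway
        saw = random.random() < (0.60 if in_market else 0.05)  # targeting skews exposure
        bought = in_market and random.random() < 0.50          # ad does nothing
        if saw:
            saw_ad += 1
            saw_ad_and_bought += bought
        else:
            no_ad += 1
            no_ad_and_bought += bought
    return saw_ad_and_bought / saw_ad, no_ad_and_bought / no_ad

exposed_rate, unexposed_rate = simulate()
print(f"bought | saw ad: {exposed_rate:.3f}")   # ~0.29
print(f"bought | no ad:  {unexposed_rate:.3f}")  # ~0.02
```

The exposed group buys at more than ten times the rate of the unexposed group, and the ad caused none of it. An attribution model fed this data would shower the ad with credit.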


You don't know. Your attribution model doesn't know. But it gives you percentages, so you make budget decisions based on those percentages, and then you're surprised when cutting the "low-performing" channel tanks overall sales.


What I Actually Do

I've stopped pretending I can measure attribution at the individual level. Instead, I run incrementality tests. Turn a channel completely off for a period. Measure total sales. Turn it back on. Measure again. The difference is the incremental value. It's not precise, but it's actually testing causation instead of assuming it.
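
The arithmetic is deliberately crude. A minimal sketch with placeholder sales figures (a real test would also control for seasonality, for example with matched geo holdouts, and run longer than a week per leg):

```python
# On/off incrementality test: compare aggregate sales with the
# channel live versus dark. Figures are placeholders.

daily_sales_channel_on = [105, 98, 110, 102, 95, 108, 101]  # channel live
daily_sales_channel_off = [92, 88, 95, 90, 85, 94, 89]      # channel dark

avg_on = sum(daily_sales_channel_on) / len(daily_sales_channel_on)
avg_off = sum(daily_sales_channel_off) / len(daily_sales_channel_off)

incremental_lift = (avg_on - avg_off) / avg_off
print(f"estimated incremental lift: {incremental_lift:.1%}")  # ~13.6%
```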


I look at aggregate metrics rather than pretending I can trace individual customer paths. I focus on overall revenue, overall traffic, overall conversion rate. I stop asking "which touchpoint caused this sale" and start asking "what happens to total sales when I change total spend."


I've also stopped adding disconnected systems. Every new tool promises to solve the coordination problem. Every new tool becomes part of the coordination problem. The solution isn't better integration. The solution is accepting that attribution is unknowable and stopping the attempt to know it.


Your attribution model isn't telling you what caused sales. It's telling you a story about what happened, weighted by arbitrary rules you don't understand, reconciled across systems that don't agree on basic definitions. The coordination problem isn't that your systems don't integrate. The coordination problem is that you keep buying systems that claim to solve a problem that mathematics says is unsolvable.


Stop measuring. Start testing. The only way to know if something works is to turn it off and see what happens.

