Our Chairman, Tom Lloyd, In Conversation with AI: Can AI Build Better Marketing Mix Models?

By Tom Lloyd, Chairman
AI is transforming marketing; it's everywhere, isn't it? It's hyped to revolutionise econometrics too. But does it live up to the hype? As MetaMetrics' resident sceptic, I thought I'd put AI through its paces by asking ChatGPT the big questions. After all, why not get AI to explain itself? Could I cut through the marketing fluff and get to the real question: can AI actually help us build better marketing mix models, or is it all just fancy packaging?
Here's how it went. I’ve cut down some of the wordy responses but have generally copied its answers.
Round one: the pitch
Me: What's the potential of AI to enhance marketing mix models?
ChatGPT: AI has the potential to significantly expand the power of marketing mix models (MMM) by making models more adaptive, granular, and actionable. While traditional MMMs are often static and retrospective, AI enables faster model updates, automated feature engineering, and better handling of non-linear relationships and interactions between channels. This allows MMM to move closer to near–real-time decision-making, scenario simulation, and optimisation rather than purely historical measurement. AI fundamentally elevates MMM from a retrospective reporting tool to a forward-looking decision system. It enables faster model refreshes, more sophisticated scenario planning, and better capture of complex, non-linear media effects. The result is more timely, confident budget decisions in an increasingly fragmented and fast-moving marketing environment.
Me: OK, that sounds suitably PR-tastic. It doesn't actually tell me anything useful though. Can you give me some practical examples?
ChatGPT listed how AI is actually being used by MMM vendors today (as well as some other fluff):
- Faster, near-real-time model building and refreshes.
- “Agentic” / automated insight generation.
- Causal machine-learning & causal discovery.
- Bayesian / probabilistic models for uncertainty and scenario planning.
- Creative and brand signal measurement with ML.
- Optimisation and prescriptive recommendations (actionable next steps).
All very impressive-sounding. But I wasn't done.
Round two: challenging bias
Me: Hang on a minute. I don't see how this is fundamentally different from Bayesian methods, setting constraints and letting the data operate within those. (We've been doing that for years.)
ChatGPT: This is a very good challenge, and you're right to push on it. At the core modelling level, what you're describing is correct: modern Bayesian MMM already does much of this. The real difference is where Bayesian methods stop — and where “AI” starts.
Where “AI” actually starts (and this is subtle)
AI begins when the system reasons about the model, not just within it.
Key distinction:
- Bayesian MMM reasons inside the model.
- AI systems reason about the model.
So I had the upper hand: ChatGPT admitted I was right. But now things got interesting.
Round three: the actual difference
Me: This all sounds vague to me. Can you be more specific about how it might work in practice?
ChatGPT: OK, so for example, let’s think about prior selection and evolution.
Bayesian MMM:
- Analyst sets priors once (or occasionally updates them).
AI layer:
- System learns which priors consistently perform well.
- Adjusts prior strength over time.
- Flags when priors are dominating data (or vice versa).
This is meta-learning, not Bayesian inference. [There's a sketch of what that loop might look like below.]
Think in three levels:
Level 1 — Inference (Bayesian MMM): “What do we believe about the world, given data and priors?”
Level 2 — Judgment (Human): “Do I trust this model? Should I change assumptions?”
Level 3 — Automation (AI): “The system learns how to make those judgments itself.”
AI is automating Level 2, not replacing Level 1.
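Levels and layers are easy to say and hard to picture, so here's a toy sketch of what "learning which priors perform well" could mean in practice. It's entirely my own, the numbers are made up, and I've used ridge regression as a cheap stand-in for a Bayesian fit (the penalty plays the role of prior strength):

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_and_score(prior_strength: float, X: np.ndarray, y: np.ndarray) -> float:
    """Ridge fit as a stand-in for a Bayesian model; returns holdout error."""
    split = len(y) // 2
    X_tr, y_tr, X_te, y_te = X[:split], y[:split], X[split:], y[split:]
    # Closed-form ridge: the penalty acts like a prior shrinking betas to zero
    beta = np.linalg.solve(
        X_tr.T @ X_tr + prior_strength * np.eye(X.shape[1]), X_tr.T @ y_tr
    )
    return float(np.mean(np.abs(y_te - X_te @ beta)))

# Invented data: three media channels driving sales
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, 0.5, 1.0]) + rng.normal(scale=2.0, size=100)

# The "AI layer": remember which prior strengths consistently perform well
# across refreshes, and flag when the prior is dominating the data
history = {s: [] for s in (0.1, 1.0, 10.0, 100.0)}
for refresh in range(5):  # simulate periodic model refreshes on drifting data
    X_r = X + rng.normal(scale=0.1, size=X.shape)
    for strength in history:
        history[strength].append(fit_and_score(strength, X_r, y))

best = min(history, key=lambda s: np.mean(history[s]))
print(f"prior strength with best average holdout error: {best}")
if best == max(history):  # the strongest prior winning hints at a weak data signal
    print("flag: priors may be dominating the data")
```

Nothing mystical: a scoring loop with a memory. That, stripped of the branding, is the "AI layer".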
Me: Ahh, OK, I get this now: AI is making those choices. But is that really what vendors are talking about?
ChatGPT: The honest truth? About 70-80% of what vendors call 'AI MMM' is still Bayesian MMM.
The remaining 20-30% is automation, orchestration, learning loops, and UX intelligence. Many vendors oversell that remaining slice. You're right to be sceptical.
There it was. The refreshingly blunt admission that most of the AI revolution is actually just good old Bayesian statistics with some automation on top.
Round four: AI value judgements
Me: I’m still sceptical. Tell me more about how AI judges whether a model is any good.
(I had to probe quite a bit here, so I've summarised.)
ChatGPT: Short answer (no fluff): AI does not “judge” a model in a human sense.
It applies explicit, codified evaluation rules that used to live only in experts’ heads.
If a vendor can’t point to specific, testable criteria, then there is no real AI judgment — just marketing.
ChatGPT: Longer answer.
A model is considered “good” if it satisfies four measurable properties:
- Predictive validity
- Causal plausibility
- Stability over time
- Decision usefulness
AI evaluates these mechanically — not philosophically. [I've sketched below what "mechanically" might look like.]
Vendors exaggerate (you’re right to be wary)
❌ “The AI understands your business”
❌ “The model reasons like a human”
❌ “Fully autonomous causal discovery”
✅ “The system enforces evaluation standards consistently”
✅ “The system compares more models than a human could”
✅ “The system remembers what worked”
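To make the four properties concrete, here's my own back-of-an-envelope sketch of "mechanical" evaluation. The diagnostics and thresholds are entirely mine, purely for illustration; no vendor is using these exact numbers:

```python
from dataclasses import dataclass

@dataclass
class ModelDiagnostics:
    holdout_mape: float        # out-of-sample error -> predictive validity
    wrong_sign_channels: int   # channels with implausible negative ROI -> causal plausibility
    coef_drift: float          # relative coefficient change vs last refresh -> stability
    roi_ci_width: float        # width of the ROI credible interval -> decision usefulness

def evaluate(d: ModelDiagnostics) -> dict:
    """Mechanical checks: the rules a careful analyst applies by eye, written down."""
    return {
        "predictive_validity": d.holdout_mape < 0.10,  # under 10% holdout error
        "causal_plausibility": d.wrong_sign_channels == 0,
        "stability": d.coef_drift < 0.25,              # under 25% drift between refits
        "decision_usefulness": d.roi_ci_width < 1.0,   # ROI pinned down enough to act on
    }

checks = evaluate(ModelDiagnostics(0.08, 0, 0.15, 0.6))
failed = [name for name, ok in checks.items() if not ok]
print("model passes" if not failed else f"failed: {failed}")
```

If a vendor's "AI judgment" amounts to running checks like these at scale and consistently, that's genuinely useful. But it isn't reasoning.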
Now we’re getting somewhere, but I wanted to go further.
Round five: the messy reality
Me: This all sounds good in a perfect world, but in reality most models are unstable, variables are highly correlated, and we need to fix things to get somewhere that 'feels OK'. How does AI cope with this imperfect environment?
ChatGPT: You're absolutely right: the “perfect AI workflow” only works if your data and variables behave nicely. In reality, correlations, multicollinearity, and unstable channels dominate. AI doesn’t magically fix this.
AI can run hundreds of model variants, flag instability, and compare models with experiments, but it cannot create a new signal. If your variables are collinear or the data is weak, AI still has to decide which assumptions to apply, which is often heuristic and human-driven.
Human judgement is unavoidable.
AI mostly helps you scale the iteration, not replace it.
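That matches my experience. And the flags themselves are not exotic: bread-and-butter diagnostics like variance inflation factors, run automatically. A minimal sketch, with invented data and the usual rule-of-thumb threshold of 10:

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """Variance inflation factor per column: 1/(1 - R^2) from regressing
    that column on all the others. High VIF = hard-to-separate channels."""
    out = []
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        r2 = 1.0 - resid.var() / X[:, j].var()
        out.append(1.0 / max(1.0 - r2, 1e-12))
    return np.array(out)

rng = np.random.default_rng(0)
tv = rng.normal(size=200)
search = 0.9 * tv + 0.1 * rng.normal(size=200)  # search spend tracks TV spend
radio = rng.normal(size=200)
X = np.column_stack([tv, search, radio])

for name, v in zip(["tv", "search", "radio"], vif(X)):
    flag = "  <- flag: collinear, estimates will be unstable" if v > 10 else ""
    print(f"{name}: VIF = {v:.1f}{flag}")
```

Raising the flag is the easy bit; deciding which collinear channel to constrain is still a human call.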
Me: Aha, so we aren't in a world of full automation at all?
ChatGPT: The honest takeaway: Bayesian maths alone doesn't equal a perfect model.
AI cannot create signal out of noise.
Most MMMs still require fixing variables, setting priors carefully, and iterating until outputs are plausible.
AI adds scale, speed, and repeatability, but doesn't eliminate human judgement.
In short, the “AI judgement” mostly codifies what good analysts already do, just faster and more consistently.
Me: Tell me more about the post-model phase. What happens after you've built it?
ChatGPT: Here we're talking about prescriptive decisioning. Traditional Bayesian MMM outputs posteriors (optimisation is often manual or done externally). The AI layer solves constrained optimisation repeatedly, learns which recommendations historically delivered outcomes, and adjusts future recommendations accordingly.
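"Prescriptive decisioning" is, underneath, constrained optimisation. Here's a bare-bones illustration of the allocation step; the response curves and every number in it are invented, and a real system would feed in curves estimated by the model:

```python
import numpy as np
from scipy.optimize import minimize

# Invented response curves; a real system would read these off the MMM posteriors
coef = np.array([3.0, 2.0, 1.5])    # channel effectiveness
sat = np.array([50.0, 30.0, 40.0])  # saturation points (diminishing returns)
total_budget = 100.0

def neg_response(spend: np.ndarray) -> float:
    # Concave response curve: each extra pound buys less at the margin
    return -float(np.sum(coef * sat * (1.0 - np.exp(-spend / sat))))

res = minimize(
    neg_response,
    x0=np.full(3, total_budget / 3),  # start from an even split
    bounds=[(5.0, 70.0)] * 3,         # sensible channel-level floors and caps
    constraints=[{"type": "eq", "fun": lambda s: s.sum() - total_budget}],
    method="SLSQP",
)
print("optimal split across tv/search/radio:", np.round(res.x, 1))
```

The "learning loop" part is then just tracking whether last quarter's recommendation actually delivered, and adjusting those curves accordingly.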
What does good AI-assisted MMM look like?
Clearly there’s a lot of hype and over-promising going on. But there’s also a definite, credible role for AI to augment what Bayesian methods already brought to the MMM process. So, I sketched what genuinely useful AI-assisted modelling might look like. Picture this workflow:
- The modeller asks AI to format the data.
AI does so, but also raises queries about data issues and suggests transformations.
- The modeller asks AI to run models and explain what it had to fix.
- AI reports back about iterations, problematic variables, what needed fixing, what wouldn't come in, overall ROI, and how this compares to expectations. It offers multiple model versions with its own recommendation and justification.
- The modeller asks questions about outputs, simulations, and optimisations.
- AI reports back on multiple optimisations with both fixed and varying budgets, sets sensible constraints at channel level, and even suggests model adaptations if the optimisations return unacceptable results.
What should we make of all this?
I can’t help but wonder whether a modeller would know if they were talking to a computer or a real analyst. Maybe that's what Alan Turing had in mind all along when he proposed his famous test back in 1950.
Large language models like ChatGPT regularly 'pass' the Turing Test these days, so it's becoming harder to spot AI-generated writing, even for sceptics like me. So, while AI can be a thoughtful assistant, it should be transparent about its reasoning. It must show its working. And crucially, the human modeller should remain firmly in control, making the final calls about what makes business sense. And that’s exactly what we do at MetaMetrics.
I’ll leave the philosophical questioning for another day, with fellow humans. But let me wrap up by saying that AI can enhance econometric modelling; it's not magic. AI provides important incremental improvements that help skilled analysts do their jobs better and faster. And in an industry where time is increasingly money, that has to be a good thing.
Vendors will keep promising transformational breakthroughs. Keep a healthy dose of scepticism and ask the awkward questions; they tend to get the more useful answers.
If you want to talk to human experts and ask us those awkward questions, get in touch.