Published in partnership with Mutinex
The following interview was conducted by email
Henry Innis, in the words of my favorite interviewer, “who are you”?
I'm Henry, the co-founder of Mutinex. I've basically been obsessed with modeling growth for around 12 years, and with solving the problem of "what decisions drove growth". More recently at Mutinex I've been working across product and engineering to build the world's best Growth Co-pilot to replace legacy MMM (marketing mix modeling) systems.
What sparked your interest in this space?
If you look at many of the largest advertisers in the world, their marketing budgets have vastly outstripped their growth rates. That, to me, shows a structural problem in advertising. If our systems and products are orientated not towards measuring growth but towards claiming credit for sales, then it's very easy for an enterprise to get trapped in a low-growth environment.
I think the future of all products, broadly, is to be incredibly user-centric and to let users act faster and with more convenience, and MMM is a space that is most definitely orientated towards providers selling head hours and complex managed-services solutions. That also sparked an intense interest in what a true product would look like in the space, versus a customised MMM plugged into an RStudio dashboard or similar. Products make lives easier and allow us to do things better, rather than just drowning us in more data, and it was clear growth lacked a compelling MarTech solution (most of the category is built around CDPs, tracking technologies and ad servers).
Many of the solutions in the space are built around MMMs. But MMMs are slow, not granular and lack data. MMMs are observational. If I were to tell you to use a brand tracker with 4 samples a month, you'd be laughed out of the room, yet this is the underlying assumption most MMMs are built on. It's also why they are slow: they need analysts to check these fragile models and ensure they 'make sense' to customers. That, at its core, indicates to me where we are failing to model growth effectively (whether you're a Bayesian or a Frequentist).
The end product is a model that's slow to insight, lacks granularity of insight (across geography, creative, publishers) and ultimately does not forecast well. And it's not really a workflow product, just a bunch of business intelligence dashboards. We wanted to change that space and turn MMMs into a growth co-pilot: something where insight is super granular, where users can pull out insights in seconds and connect their unstructured data super fast. Or, to put it plainly, effectively building a SaaS product to manage, forecast and gain insight into growth, versus a digitised MMM.
But did you enter the workforce aware of this issue, or was there a particular time in your career where you noticed this problem for the first time?
I came in via performance marketing, which to some degree was a fallacy. But the idea that you can and should try to link marketing inputs to an outcome really gained traction from around 2012 onwards, which is when I entered the workforce. It felt very clear then that this was where all marketing would go — using data to plan outcomes versus inputs — but somewhere along the way, it fell down. I spent time at WPP AUNZ and it became clear that the holding companies did not have an answer to the problem, other than building fragile models for customers. I worked with third-party vendors then too, and the same issue kept rearing its head again and again. Everyone was building models that were fragile, bespoke to one customer, with little to no product around the dashboards. I felt passionately that we needed to do something about that.
Got it! So your models aren’t fragile, aren’t customized and are more productized. Is that right? To play devil’s advocate, why wouldn’t you want models that are sensitive to changes in inputs and bespoke?
Correct. We're working from a theory that most marketing dynamics are generalised and identifiable. In some regards, this was inspired by the work of Byron Sharp, who, together with the Ehrenberg-Bass Institute, has shown that many dynamics in marketing are generalised. Marketers intuitively know this as well — we all talk about the 4Ps of marketing — yet the modeling methodologies we use mostly ignore these dynamics, and instead try to learn everything about marketing within one specific business. We don't think that's scientific, predictive or a useful way to find out what's really happening.
A simple way to think about this is that the market dictates the model more so than the model dictates the market. We want the model to explain the market, rather than make it up off one very small dataset. If you go bespoke you run into a really nasty problem: you literally don't have enough data to understand what is going on. So you have to make a choice, and your data scientist 'chooses' the right story, which in turn is what makes the models fragile in the long run and unrepresentative of actual marketing dynamics in the short term. It also makes them highly sensitive and unstable. All of these are bad outcomes. To give you an example, a 'saturation' curve for a channel is drawn using many weeks of data. But we have a much better general idea that saturation should be much higher in, say, Super Bowl week than in any ordinary NFL week. This is the sort of thing you want to model, but if you are just trying to build a model on one dataset with a data scientist's opinion, you might find it much harder to pick up.
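To make the saturation point concrete: channel response is often modeled with a Hill-style curve, where returns flatten as spend approaches a ceiling. The sketch below is illustrative only, not Mutinex's actual model; the function name, parameter values and the idea of widening the curve for Super Bowl week are all hypothetical, chosen to show why a single fixed curve fitted on averaged weeks misses the dynamic described above.

```python
def hill_response(spend, ceiling, half_saturation, shape=1.5):
    """Hill-style saturation curve: response grows with spend but
    flattens as it approaches `ceiling`."""
    return ceiling * spend**shape / (half_saturation**shape + spend**shape)

# Hypothetical weekly parameters: in Super Bowl week audience attention
# is far higher, so the same spend sits much further from saturation.
ordinary_week = hill_response(spend=100.0, ceiling=1.0, half_saturation=80.0)
superbowl_week = hill_response(spend=100.0, ceiling=2.5, half_saturation=200.0)

print(round(ordinary_week, 3), round(superbowl_week, 3))  # → 0.583 0.653
```

A model that fits one static curve across all weeks would average these two regimes together, which is exactly the fragility being described.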
How we do that is by testing assumptions across all of our customers — we don't pool data, but instead simulate thousands of assumptions to see how well the generalised principle is holding up. In contrast, most neural network approaches would simply pool data or create synthetic datasets off customers, which are awful for enterprise privacy and data security.
That all makes sense. The limited-data problem is real, especially when marketers are reluctant to do proper A/B testing. But how does your product account for the idea that there will be unique drivers of market-share changes inside individual categories? For example, say there are two cola brands everywhere, and past experience tells us that with constant reach of campaigns, colas win market share when they have stronger creative messages over multiple years, and that price differences don’t matter much within a modest band. But in the toilet paper category, by contrast, maybe with constant reach, branding doesn’t work as well and it turns out pricing is everything?
I think this comes down to a question of models versus pure studies/findings, and why models are better, particularly in more modern contexts. Within those contexts, you can basically tell the models about assumptions yet still let the data push back on them. What we give the model is a general understanding of how a given area works (e.g. a channel, or pricing in the current economic climate) and then ask it to generate a more specific understanding.
To take your example: I don't think either category would be unaffected by pricing, and I'd challenge that assumption. But what we might find is that pricing is getting more important as the economy worsens, for example. That allows us to unpack and unpick this more precisely, rather than, say, relying on an assumption to hold over 3 years of data because we only have 150 samples on that one specific brand.
You still get nuances, dictated by the data, with toilet paper and cola having really different effects. But you don't get these fixed assumptions that have to hold for long, long periods of time over very volatile and sparse datasets. I think that's a huge unlock for brands, and it solves much of the uncertainty problem that exists within MMM — namely, that we don't have enough data to understand this complex ecosystem, so we simply model averages across many, many years or calibrate with lots of fixed outside knowledge in 'priors'.
The 'how' on this is that you have one central model constantly getting feedback about the assumptions it feeds into each customer's model, to ensure those assumptions and priors are generalising well. This stops you intervening on models for comfort, and instead keeps you focused on the science, without ever pooling the data.
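One simple way to picture this feedback loop is a coverage check: each customer's model is fitted privately, and only the fitted parameter estimates (never raw data) are compared against the shared prior to see whether it is generalising. This is a minimal sketch under that assumption; the function, the Gaussian prior and the elasticity numbers are all hypothetical, not Mutinex's implementation.

```python
def prior_coverage(prior_mean, prior_sd, customer_estimates, z=1.96):
    """Fraction of per-customer parameter estimates that fall inside the
    shared prior's ~95% interval. Only fitted estimates cross the
    customer boundary; raw customer data is never pooled."""
    lo, hi = prior_mean - z * prior_sd, prior_mean + z * prior_sd
    inside = [lo <= est <= hi for est in customer_estimates]
    return sum(inside) / len(inside)

# Hypothetical per-customer estimates of one channel's elasticity,
# each fitted privately on that customer's own data.
estimates = [0.11, 0.14, 0.09, 0.13, 0.31, 0.12, 0.10]

coverage = prior_coverage(prior_mean=0.12, prior_sd=0.03,
                          customer_estimates=estimates)
print(f"{coverage:.0%} of customers consistent with the shared prior")
```

In a loop like this, low coverage is a signal to revisit the generalised assumption itself, rather than to hand-tune any single customer's model for comfort.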
I guess so long as the market data informing the model is robust enough, then the model works, yes? If so I think we’re on the same page.
Yeah, I think that's the challenge with it. It's why we've been so focused on a) collecting strong market data (we have around 16m external records) and b) generalised models. We think it delivers a more robust outcome and better longer-term predictive analytics for marketers.
How do you and Mutinex tend to work with agencies and marketers? Is your commercial model 100% SaaS to agencies or marketers? Do you provide any service to either party, or do you focus on training users to become power users themselves?
We work with customers directly mostly and agencies as partners.
Our commercial model is 100% SaaS and licence focused. We offer enterprise support to embed, drive adoption and support crucial moments of platform usage, much like any great enterprise business. As such, we see ourselves as primarily building capability within an org, although the BAU is typically managed by power users across customers and their agencies. Broadly, I think traditional deep-dive MMM isn't actionable enough to drive frequent usage. With our platform, for example, customers get down to format, creative asset, specific seasonal event and geographic ROIs, and that granularity means the platform gets used far more frequently. We also have an AI consultant that sits within the platform, which produces all the reports and does the 'admin insights' we're so often asked for.
Most of our business is done direct with customers. I think customers love the independence and ease of use of the platform. For example, DataOS makes data ingestion for enterprise a complete breeze compared to the usual painful construct of organising and wrangling data and Hendren means you can literally write a free-text query to get insights from your market mix model in seconds.
With agencies I think we have an interesting relationship. Landing in the US, we've worked with three quite large agency groups that effectively resell us and build services around us. It's a model that works: hiring quality data scientists and engineers is typically expensive for an agency. My gut tells me MMM is a little like CRM in the early 2000s. It's bespoke now, but increasingly it'll be licensed models with agencies providing support services around them. That will end up being a much higher-margin, more reliable and more profitable business for agencies, which is a really exciting pathway for most agencies to embrace and something we are passionate about enabling.
I don't think we intend to be in services. It's the wrong model, and it confuses our role with that of an agency. We'd much prefer to be an amazing SaaS provider working with an amazing agency ecosystem to generate value for customers and agencies through tech, rather than services.
---
To find out more about Mutinex, go to Mutinex.co