Would you A/B test your cushions and beanbags?

The flaw in many automated marketing models


Recently, I have been reacquainting myself with the excellent Jeremy Bullmore, the great advertising strategist, who was once described as “Adland’s greatest philosopher”.  Unfortunately, not in person – it is many years since I met Jeremy, back when I worked as part of the WPP network.  His writing is always full of intelligence; he manages to understand the details of the contemporary challenges of the digital marketplace, whilst conferring upon it a wisdom that seems to come from another age.

One of the challenges that feels uniquely current is multichannel attribution modelling: how do we understand how customers behave across different channels, and what they do in different places on their way to purchase?  This feels brand new because the range of digital channels simply didn’t exist for previous generations of marketers.  The multiplicity of apps, websites and other digital channels is new, and with it come new challenges about how you ensure that you are treating the same customer in the right way in all these different locations.  And that doesn’t even begin to factor in how you then understand the impact of offline, traditional advertising channels on what the customer does.

It is absolutely true to say it’s a more complex environment now.  There are many more channels to contend with, and this makes the equations and algorithms more complicated too.  But increasingly, I think many of these algorithms are wrong, because they are based on a crucial, incorrect assumption.

The myth of waste

As Jeremy wisely points out, the challenge of proving specific returns on investment on the back of media and advertising spend is not new.  In fact, it’s as old as advertising itself.

The assumption comes back to the myth encapsulated in the variously attributed phrase “I know half of my advertising is wasted, I just don’t know which half”.  This has been attributed to just about every business mogul of the late nineteenth or early twentieth century.  Although probably none of them ever said it, it is clearly a challenge of ROI measurement that has been around since the infancy of modern advertising in the mid-1800s.

The unshakeable part of this is the idea of waste, and waste that is specific to advertising and marketing.  This is a curious distinction, as in my experience the same calculations do not exist for other types of investment.  Take, for instance, that office refurbishment you did to make the working environment more open-plan, more up-to-date, a nicer place to work.  I am willing to bet that at no point during the budgeting process did anyone say “I’m not sure which of the new cushions on the trendy new sofas will actually make people work harder.  What if we spend some of the money on beanbags instead; would that be better?”

Knowledge as capital

Indeed, some business best practice now encourages accepting the reality of some “wasted” investment.  Your digital change programme will be running a “test and learn” programme.  This might be called an optimisation programme (it’s that concept again – something must be optimised, as there must be unnecessary waste in there somewhere), but whatever you call it, it’s about experimentation: the generation of new ideas that are worth testing on consumers and customers.  What’s different about a “test and learn” programme is that it intrinsically incorporates the idea of ideas and assets being created and then, very quickly, never used again.  But these have definitively not been wasted, because they have provided insight into customer behaviour, or proposition development, or process management.

The crucial difference here is knowledge as capital.  A “test and learn” programme, and the culture that sits around it, recognises that gaining insight is valuable in itself; knowing that customers prefer a particular execution, or process, or whatever, means that the investment in the other alternatives was not wasted expenditure.
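To put a number on that idea, here is a minimal, hypothetical sketch in Python: the visitor and conversion figures are invented, and the simple two-proportion comparison stands in for whatever evaluation a real “test and learn” programme would use.  The point is only that the losing execution still buys a measured insight.

```python
# Hypothetical illustration of "knowledge as capital": two executions are tested,
# one loses and is retired, but the spend on it has produced a quantified insight.
# The counts below are invented purely for the sketch.
from math import sqrt

def conversion_rate_and_se(conversions, visitors):
    """Return the conversion rate and its standard error."""
    rate = conversions / visitors
    return rate, sqrt(rate * (1 - rate) / visitors)

# Variant A is the existing execution; variant B is the new idea being tested.
rate_a, se_a = conversion_rate_and_se(conversions=550, visitors=10_000)
rate_b, se_b = conversion_rate_and_se(conversions=470, visitors=10_000)

# Approximate z-score for the difference between the two conversion rates.
z = (rate_b - rate_a) / sqrt(se_a ** 2 + se_b ** 2)
print(f"A: {rate_a:.2%}, B: {rate_b:.2%}, z = {z:.2f}")
# B underperforms and is retired, yet the test has measured how customers
# respond to it - the investment in the "losing" asset was not wasted.
```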

And yet the same marketing or ecommerce departments, and often the very same people, will use the adage above to challenge the investment in media and advertising.  At Station10, we are regularly asked to help analyse marketing spend and attribution of investment.  That is often because clients have very good reasons to challenge their existing measurement methodologies – despite the multichannel nature of customer behaviour, many organisations still haven’t got beyond looking at single channels or last-touch or first-touch models.  Clearly, in those scenarios there is a lot of value for clients in updating their attribution models to reflect the reality of modern customer behaviour.
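To make the contrast concrete, here is a minimal, hypothetical sketch: the customer journey and channel names are invented, and the simple linear split stands in for any multi-touch approach, but it shows why a last-touch view makes everything except the final click look like waste.

```python
# Hypothetical sketch comparing last-touch attribution with a linear multi-touch
# split.  The journey data is invented, not taken from any real client model.
from collections import defaultdict

# One customer's path to a single converting purchase worth 100.
journey = ["display", "social", "email", "paid_search"]
conversion_value = 100.0

def last_touch(journey, value):
    """Give all credit to the final touchpoint before conversion."""
    return {journey[-1]: value}

def linear_multi_touch(journey, value):
    """Split credit evenly across every touchpoint in the journey."""
    credit = defaultdict(float)
    for channel in journey:
        credit[channel] += value / len(journey)
    return dict(credit)

print(last_touch(journey, conversion_value))
# {'paid_search': 100.0}  -> display, social and email look "wasted"
print(linear_multi_touch(journey, conversion_value))
# {'display': 25.0, 'social': 25.0, 'email': 25.0, 'paid_search': 25.0}
```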

But sometimes we do speak with a client who is expecting a definitive, scientific answer to their media “inefficiencies” from multichannel attribution modelling.  The logic seems to be that if you have targets to hit, it must be possible to provide a level of certainty around them by applying a scientific approach, with predictive techniques that will prove an ROI before the campaign has even started.

Weighting your model correctly

And this is where the logic starts to fall apart, because it assumes that all advertising and media spend plays the same role of driving a specific, often near-term, sale.  By the same logic, any marketing asset that doesn’t affect a sale is worthless, and that spend is therefore available to be reallocated.

But what about the people who are not in market at that particular moment?  The job of driving brand awareness is an important one for advertising, but it can be overlooked in models that upweight sale-driving jobs in this way.  These models should include nuances that recognise the different roles that advertising plays, and factor them in accordingly.  The challenge is that, as machine learning and automated algorithms become more popular for attribution and programmatic buying, these role-based considerations will not be included unless you tell the algorithm to look for them.  Which means that, by default, you will start optimising for near-term sales rather than longer-term, customer-lifetime advertising benefit.
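To illustrate what “telling the algorithm” might mean, here is a minimal, hypothetical sketch: the role weights are invented numbers standing in for a judgement (or a separate brand study) about how much of each channel’s job is brand-building rather than closing the sale.  It is a sketch of the idea, not anyone’s production algorithm.

```python
# Hypothetical sketch of role-based weighting.  BRAND_ROLE holds assumed fractions
# of each channel's contribution that is long-term brand-building; the numbers
# are illustrative only.
BRAND_ROLE = {"tv": 0.7, "display": 0.5, "email": 0.1, "paid_search": 0.05}

def role_adjusted_credit(touch_credit, horizon="short_term"):
    """Re-weight per-channel credit depending on which objective is measured.

    touch_credit: dict of channel -> credit from any base attribution model.
    horizon: 'short_term' keeps only the sales-driving share;
             'long_term' keeps only the brand-building share.
    """
    adjusted = {}
    for channel, credit in touch_credit.items():
        brand_share = BRAND_ROLE.get(channel, 0.0)
        share = brand_share if horizon == "long_term" else 1.0 - brand_share
        adjusted[channel] = credit * share
    return adjusted

base_credit = {"tv": 40.0, "display": 25.0, "email": 20.0, "paid_search": 15.0}
print(role_adjusted_credit(base_credit, "short_term"))  # what a sales-only model sees
print(role_adjusted_credit(base_credit, "long_term"))   # the value such a model ignores
```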

To return to Jeremy Bullmore, he continues on this topic – “I was once given a lift by a 50-year-old friend who’d recently sold his share in an advertising agency and had celebrated by buying himself an extremely expensive car. ‘I bought this car because I saw an advertisement,’ he told me. ‘Nothing very special about that, I grant you – except that I saw that ad when I was 14.’”

The point, therefore, is that no attribution model will ever take such a purchase, and the reasons that drove it, into consideration.  However, whilst marketing directors will happily talk about customer lifetime value metrics, none would ever brief an insight team to build a model over that sort of horizon, because they have no intention of still being marketing director of the same brand in 36 years, or indeed (very often) in 3.6 years.

And whilst technological advances are driving much of what can be done, I think this last point is one of the main factors behind sales-focused attribution models.  Marketing and brand directors are often forced to concentrate on relatively short-term metrics and impacts, and have to hit their numbers.  Ironically, this is partly due to digital channels’ measurability; in a digital age, with rapid development, innovation labs and insight sprints, the focus is on delivering products and services, and by extension their success, quickly.  Personalisation teams will talk about needing “real-time” data accessibility, down to the second or even sub-second, because the experience needs to be seamless and to reflect what the brand knows about the customer right now.  In our current world, we prize immediacy of results.

It’s hard enough building and maintaining a single view of the customer, and the attribution and loyalty models that sit on top of it.  But if the business requirements and assumptions start to warp the logic towards too short-term a metric or payback period, then the entire programme is put at risk.

So, if you are about to build an attribution model, or if you are reviewing how your current one works, come and speak with Station10.  We can help you take a step back and make sure that your marketing models reflect the reality of your customer behaviour for long-term benefit, not just short-term gain.  I’d like to think that Jeremy Bullmore would approve.

