
Can Marginal Gains In Sport Be Applied To Data Optimisation?


In an excellent article on the mental health challenges of cricket (a subject that has sadly been in the news a lot recently; the article is worth reading even if you have no interest in cricket), Mike Atherton recently wrote:

“What has been called the marginal gains revolution, which has been so successful with sports where improvements can be measured every step of the way, has been transported elsewhere. I remain convinced that for less predictive and more complex team sports, such as cricket and rugby, the inevitable micro-managing needed to establish such marginal gains has had an entirely negative impact”.

This got me thinking about similar philosophies in digital businesses. Conversion rate optimisation is a prime example: the principle that if you improve the small details of the environment or layout of a task, you will increase the number of people who complete it. This is the business equivalent of marginal gains in sport.

 

Optimisation Done Right

To be clear, there are highly successful optimisation programmes (some of which Station10 have been involved with). But these programmes have often been focused on very specific tasks – the checkout funnel on an ecommerce site, for example.

This is comparable to the environment Atherton describes, in which improvements can be “measured every step of the way”. Some of the principles that have been applied in sports like cycling (most famously by Team Sky) can therefore be applied here:

  • Removing unnecessary weight (either as body fat in cycling, or as page loads in digital optimisation)
  • Making the journey smoother (slicker tyres, or clearer digital signposting)
  • Streamlining your entire machine (achieving the most aerodynamic position, or removing extraneous steps from the digital journey)

These are all excellent ways of finding the extra “1 per cents”.

In each scenario, the best way of testing the performance is by ensuring the environment remains exactly the same, apart from the individual element or component being tested. The results are immediately available and relate to performance there and then.

This is A/B testing (or multivariate testing, if several different components are being tested at the same time), and it can be highly effective in driving improvements.
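To make that concrete, here is a minimal sketch of how a single-element test is typically judged – a two-proportion z-test on conversion rate. All the figures are invented and the helper function is purely illustrative, not the output of any particular testing tool.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, visits_a, conv_b, visits_b):
    """Judge a simple A/B test: uplift of B over A and a two-sided p-value."""
    p_a, p_b = conv_a / visits_a, conv_b / visits_b
    # Pooled conversion rate under the null hypothesis of "no difference"
    pooled = (conv_a + conv_b) / (visits_a + visits_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, p_value

# Invented checkout-funnel numbers: variant B removes one step from the journey
uplift, p = two_proportion_z_test(conv_a=480, visits_a=10_000,
                                  conv_b=545, visits_b=10_000)
print(f"Uplift: {uplift:.2%}, p-value: {p:.3f}")
```

The unit of measurement – did this visit convert or not – is immediate and unambiguous, which is exactly the sort of environment that can be “measured every step of the way”.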

 

 

Does ‘every little’ really help?

But, the point about marginal gains is that they are, well, marginal. By definition, they are never going to account for more than a single-figure percentage of the overall performance gains.

Once you have found the extra 1 per cents, and made them add up to, say, 5%, what do you do next?

In a business context, what typically happens is that the optimisation team are asked to turn their attention to other areas of digital innovation.  But, outside the specific confines of a conversion funnel, it can be much harder to achieve these marginal gains.

Say you are asked to improve a ‘Product Details’ page on a website. You would be battling with questions like the following:

Did someone leave the site at that page because they were not interested?

Or…

Because they were just investigating and are intending to buy the product later?

The former would represent a lost sale, but the latter reveals a fairly loyal behaviour that should be encouraged.  In other words, this is a more complex “sport” where not every step is as easily measured in the moment.

Nonetheless, the “marginal gains team” are being asked to play this more difficult game – normally by the same rules and with the same tools as before. But, as Atherton suggests for more complex sports, the rules for winning this game are different from those we saw earlier.

This game requires a much more long-term strategy and more detailed assessment criteria.

This is where the results can become less clear-cut and the optimisation team start to struggle if they persist with this ‘marginal gains’ approach.

 

Know your customer

For example, if it takes several months to buy a product (common in the financial services or automotive industries), it’s important to understand what stage of that journey the customer has reached and how he or she regularly behaves.

If you start encouraging (or perhaps forcing) customers to ‘buy now’ on a complex purchase that they need time to consider, you will only end up with angry customers.

Frustratingly, optimisation programmes (and the software that enables them) are often based simply on immediate impacts: did I improve performance in this individual visit?

This is too one-dimensional and can leave you with a scenario in which a single test increases performance in that one instance, i.e.:

  • Conversion rate goes up

But negatively impacts the longer term picture, i.e.:

  • There are more returns or cancellations after purchase because the product wasn’t right for them; OR
  • The transaction was simply moved forward in time and shouldn’t be read as an additional sale

Clues that point to this kind of problem can often be found in how long each individual test takes to complete, or in the statistical significance assigned to any uplift (generally, because there are so many other factors at play, the specific changes are not as relevant to the customer as assumed).
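To see how a test can look like a win in the moment and a loss over time, here is a back-of-the-envelope sketch. The figures and the net_value helper are entirely hypothetical; the point is only the shape of the calculation.

```python
def net_value(visitors, conversion_rate, avg_order_value, return_rate):
    """Value a test group keeps once returns and cancellations are removed."""
    orders = visitors * conversion_rate
    kept_orders = orders * (1 - return_rate)
    return kept_orders * avg_order_value

# Entirely hypothetical figures, purely to illustrate the short-term/long-term gap
control = net_value(visitors=50_000, conversion_rate=0.048,
                    avg_order_value=60.0, return_rate=0.08)
# The variant "wins" on conversion rate, but pushier messaging drives more returns
variant = net_value(visitors=50_000, conversion_rate=0.052,
                    avg_order_value=60.0, return_rate=0.16)

print(f"Control net value: £{control:,.0f}")   # £132,480
print(f"Variant net value: £{variant:,.0f}")   # £131,040: lower, despite the uplift
```

Seen through the test tool alone, the variant shows a 0.4-point uplift in conversion rate; seen at the customer level, it is worth slightly less than the control.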

 

New game. New rules.

This is not to say that “complex sport” performance cannot be optimised. Indeed, this is where large improvements can be made (as opposed to the marginal gains mentioned earlier). But, it needs to be managed and tracked more appropriately. The optimisation team need to choose the correct methodology (or “playbook”, to continue the sporting metaphor) for each test.

Just like in sport, a team that continues to focus purely on “marginal gains” will not provide any value outside the simple conversion funnel.

The main challenge in this more complex setting is the need to understand long-term customer value and to show how it increases incrementally as a result of an individual change. Doing this right requires more intricate analysis and a broader set of skills.

 

Full customer view

Most optimisation software tools do not look at the longer-term customer performance – some can look at visitor (or cookie) level, but not at the full customer view. The data can, however, be matched to reveal the bigger picture.

For example, we have in the past uncovered the impact of a new feature (shown only to a particular group of customers in an A/B testing format) on long-term customer value amounting to many millions of pounds. So, it is very important to conduct this kind of analysis. But it does require more precise skills and extra time to do the work.
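The matching itself is usually a join between the test tool’s cookie-level group assignments and the customer database, followed by a comparison of value per customer by test group. As an illustration only – the table layouts, IDs and values below are all invented – here is a pandas sketch of the idea:

```python
import pandas as pd

# Invented extracts: the test tool exports cookie-level group assignments,
# while the customer database holds months of transactions keyed by customer_id
assignments = pd.DataFrame({
    "cookie_id":   ["c1", "c2", "c3", "c4"],
    "customer_id": [101, 102, 103, 104],          # via a login / CRM match table
    "test_group":  ["control", "variant", "control", "variant"],
})
transactions = pd.DataFrame({
    "customer_id": [101, 102, 102, 103, 104, 104],
    "value":       [40.0, 55.0, 30.0, 20.0, 60.0, 45.0],   # 12 months of orders
})

# Join the test exposure onto the full customer history
matched = assignments.merge(transactions, on="customer_id", how="left")

# Total value per customer, then summarise it within each test group
per_customer = (matched.groupby(["test_group", "customer_id"])["value"]
                       .sum()
                       .reset_index())
ltv = per_customer.groupby("test_group")["value"].agg(["mean", "sum"])
print(ltv)
```

The comparison that matters is then long-term value per customer in each group, rather than whether the individual visit converted.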

You need to make sure that any “complex sport” programme is given the extra hours and energy that it needs.

 

Trust your team

The second challenge in the wider optimisation environment is the temptation to micro-manage an optimisation programme. Tests take a certain amount of time (normally much longer than a five-day cricket match) and can’t be rushed. No matter how many times you ask.

The best managers and product owners understand this. They leave their teams alone to run the tests, and to explore any additional ideas they have along the way.

Importantly, they empower them to follow up on these additional ideas without having to amend the testing plan or sign off small changes. As Mike Atherton says in relation to cricket: “The last thing a player needs is added scrutiny and pressure”.

The most successful optimisation programmes are those in which the product owner (whether an individual or team) is fully empowered to make customer-facing experience decisions “on the fly”. They have the best understanding of what the customer wants, and this should afford them the ability to use the tools at their fingertips to help optimise for what the customers really need.

This should be, as much as possible, free from day-to-day justification or upward management.

That’s important. Because, if you fast-forward a couple of years, there will be mainstream “virtual” optimisation managers. Advanced AI engines will start to optimise performance, based on the criteria, frameworks and parameters that you provide.

This will mean that these departments really will behave autonomously, and you will need to make sure you know when to deploy them, and in which type of “sport mode” they should be set.

 

Game. Set. Match.

It’s a good idea to get your organisation set up to be ready for these types of automated capabilities in the future – otherwise you risk being stumped or caught out.

P.S. Congratulations to Jonny Bairstow, whose father’s tragic death is mentioned in Atherton’s article, for his maiden Test century in South Africa in early January.

