

Data-driven conversion rate optimisation

 

How much are you using data to drive your Conversion Rate Optimisation programme? Using data in each step of your testing cycle is key to generating strong hypotheses, prioritising ideas and understanding test outcomes.

Stick close to the numbers and you’ll make better decisions about what’s going to have the best impact, refine your strategy and drive strong results.

Step 1 - Explore your data

Looking for a really great test idea? Use your data to show you the best places to test. Web analytics is a good place to start - you may find a high-traffic landing page with a high bounce rate, or a lot of visitors dropping out at a particular step of your funnel. Once you’ve identified a key page to test, you can explore further with in-page tracking or heatmaps, or by digging into session recordings, call tracking data, call centre feedback, survey responses, previous test results or anywhere else you have relevant data.
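If your analytics tool can export page-level data, you can script this exploration. Here’s a minimal sketch in Python, assuming a hypothetical CSV export with page, sessions and bounce_rate columns, that ranks landing pages by the sessions lost to bounces:

```python
import pandas as pd

# Hypothetical export from your web analytics tool:
# columns: page, sessions, bounce_rate (as a fraction, 0-1)
df = pd.read_csv("landing_pages.csv")

# Weight bounce rate by traffic so high-volume problem pages rank first
df["lost_sessions"] = df["sessions"] * df["bounce_rate"]

candidates = df.sort_values("lost_sessions", ascending=False).head(10)
print(candidates[["page", "sessions", "bounce_rate", "lost_sessions"]])
```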

Already got an idea for a test? That’s fine too! There are loads of other places inspiration can come from: maybe you’ve witnessed an issue in user testing, you think there’s an opportunity to improve the user journey, or you’re negotiating with a particularly opinionated stakeholder. Wherever the idea came from, you can validate it with data. Dig out all the relevant data you have and explore it for anything which supports your idea.

Step 2 - Write a great hypothesis

Once you’ve decided what you want to test, it’s important to write it down in the form of a hypothesis. A hypothesis is a structured statement that is testable and quantifiable. It should include a variable, an outcome and a rationale.

[Image: hypothesis template - variable, outcome, rationale]

Here are some top tips for perfecting your hypothesis, followed by a simple template sketch:

  1. Have only one hypothesis per test

  2. Avoid including multiple variables - this makes it difficult to identify what caused the outcome of the test

  3. Focus the outcome on the main KPI of the test

  4. Base your rationale on data
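To keep hypotheses consistent across your roadmap, it can help to capture them in a fixed structure. Here’s a minimal sketch of one possible template as a Python dataclass - the example values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One testable statement: a variable, an outcome and a rationale."""
    variable: str   # the single thing you will change
    outcome: str    # the measurable effect you expect on the main KPI
    rationale: str  # the data that supports the idea

    def statement(self) -> str:
        return (f"If we {self.variable}, then {self.outcome}, "
                f"because {self.rationale}.")

# Hypothetical example
h = Hypothesis(
    variable="shorten the checkout form from 12 fields to 6",
    outcome="checkout completion rate will increase",
    rationale="form analytics show most drop-offs happen on optional fields",
)
print(h.statement())
```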

Step 3 - Design your experiences

Use your research into the data to find ways of changing the experience to fix UX issues, be more persuasive or direct users’ attention. Make sure you focus on the hypothesis, only changing what’s relevant to the test. Strictly speaking, if you’re testing a web page then you’d change only one element on the page for a given hypothesis, but occasionally your hypothesis may need to be a bit broader.

Bear in mind that subtle changes are unlikely to have a measurable impact, so aim to make obvious changes without throwing your brand guidelines out of the window!  If you have designers and developers you can collaborate with then they can help keep things on brand and make sure your tests aren’t too complex to build.

Step 4 - Prioritise your tests

Now that you’ve got some data-driven test ideas with clear hypotheses, it’s important to prioritise these to get the best value from your optimisation programme.  A straightforward way of doing this is by considering the effort required versus the likely impact and reward. For example, a test that is easy to implement and is likely to have high return on investment should be prioritised over complex tests with low ROI.  You can map your tests onto a chart like this:

[Image: prioritisation chart - likely impact vs. effort to implement]

As well as weighing the complexity of a test against its likely return, other prioritisation models bring in further questions that could affect which tests you go ahead with. Is there support and interest within your business to run this test? Is there a strong foundation of data behind it?
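If you’d rather rank ideas with a score than eyeball a chart, a simple impact-over-effort ratio does the job. A minimal sketch, assuming hypothetical 1-10 scores you’ve assigned to each idea:

```python
# Hypothetical test ideas, each scored 1-10 for likely impact and effort
ideas = [
    {"name": "Simplify checkout form", "impact": 8, "effort": 3},
    {"name": "Redesign homepage hero", "impact": 6, "effort": 8},
    {"name": "Add trust badges to basket", "impact": 5, "effort": 2},
]

# Rank by impact relative to effort: quick wins float to the top
for idea in sorted(ideas, key=lambda i: i["impact"] / i["effort"], reverse=True):
    print(f"{idea['name']}: score {idea['impact'] / idea['effort']:.1f}")
```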

Step 5 - Run your tests

When building and checking the test, remember to focus on your metrics and integrations as well as the experiences themselves. In your optimisation platform, make sure you’re tracking the main KPI for the test - the one outlined in your hypothesis - plus any others which may be affected by the changes in your test experiences.

Check all tracking thoroughly so that you can be confident your KPIs are recording correctly, and that you have integrations set up with analytics and any other tools which could help you understand performance, such as heatmap, survey and call tracking tools. This will give you rich data to compare between your control and alternative experiences, so if you need to dig deeper to understand the impact of the test, you’ll have plenty to work with.
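One tracking check worth automating - a common QA step, though not specific to any one platform - is a sample ratio check: if your test is meant to split traffic 50/50 but your analytics show a lopsided split, something in the setup is likely broken. A minimal sketch using a chi-square test, with hypothetical visitor counts:

```python
from scipy.stats import chisquare

# Hypothetical visitor counts recorded in analytics for a 50/50 split
control_visitors, variant_visitors = 10_250, 9_610

total = control_visitors + variant_visitors
result = chisquare([control_visitors, variant_visitors],
                   f_exp=[total / 2, total / 2])

if result.pvalue < 0.01:
    print(f"Possible sample ratio mismatch (p={result.pvalue:.4f}) - check tracking")
else:
    print("Split looks consistent with 50/50")
```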

Step 6 - Analyse your results

So your test has been running for a while - how do you know when to stop it? Start with the reports in your testing platform. Is your main KPI showing an uplift in one of your experiences vs another? Is there a sufficiently high confidence or statistical significance value against it? In most platforms you’ll be looking for a significance/confidence value of 90 or 95%. If you don’t have a reliable significance or confidence measure, then work with an analyst (or the specialist team at Station10) who can run statistical analysis in tools like R to calculate these measures.
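If Python is closer to hand than R, the core calculation for a conversion-rate test is short. A minimal sketch of a two-proportion z-test, with hypothetical conversion and visitor counts:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: conversions and visitors per experience
conversions = [310, 356]        # control, variant
visitors = [10_000, 10_000]

z_stat, p_value = proportions_ztest(conversions, visitors)

# Many platforms report (1 - p) as a percentage 'confidence' value
print(f"z = {z_stat:.2f}, p = {p_value:.4f}, confidence ~ {(1 - p_value) * 100:.1f}%")
```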

If you’ve achieved statistical significance against your main KPI then you have the answer to your test question and can finish the test.  

If not, it may be that you need to run the test for longer, or that your changes are having very little impact. Give it a bit more time and keep monitoring your results; some tests take longer to run than others. If the uplift stays low and the results keep fluctuating, then your alternative experience probably isn’t having much of an impact.
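You can also estimate “long enough” up front with a power calculation. A minimal sketch, assuming a hypothetical 3% baseline conversion rate, a 10% relative uplift worth detecting, and hypothetical daily traffic:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.03        # hypothetical current conversion rate
uplift = 0.10          # smallest relative uplift worth detecting
effect = proportion_effectsize(baseline * (1 + uplift), baseline)

# Visitors needed per experience for 80% power at 5% significance
n = NormalIndPower().solve_power(effect_size=effect, alpha=0.05, power=0.8)
print(f"~{n:,.0f} visitors per experience")

daily_visitors = 2_000  # hypothetical traffic to the tested page
print(f"~{2 * n / daily_visitors:.0f} days to reach that sample size")
```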

Once you’ve finished the test, you can use your analytics, heatmaps, surveys and other tools to dig deeper into the data and get a complete understanding of the impact of the test.

Conclusion

Using data to underpin your testing strategy and process will help you build a quality roadmap and generate reliable results. It’ll also make your tests far more likely to succeed, because your hypotheses will be based on fact rather than assumption.

Often, your tests will themselves spark new test ideas, which you can feed back into your roadmap and testing cycle.