
Making the most of the data you didn’t even know you had


One of the most enjoyable parts of my role at Station10 is that it often gives me the opportunity to talk to groups of people about why Data and Analytics is such an interesting field, and one that is well worth the time and effort to make the most of.


I had one of these opportunities recently; the subject the group initially wanted to hear about was getting the most out of their data. After a few conversations about what they wanted the session to achieve, it became apparent that what they really wanted to discuss was the data that they didn’t even know they had!


Now, the idea of ‘making the most of your data using data you didn’t even know you had’ seems quite a strange one, but it stems from a few misconceptions that people who deal with ‘data’ hold without realising it.


Being the pseudo-academic that I am becoming in later life, I often like to refer back to the dictionary at the start of all conversations around things like this. In this case, the Oxford English Dictionary gives us a wonderfully concise view of data as:


“Facts and statistics collected together for reference or analysis”


When most people in the MI and BI spaces start thinking ‘data’, the first place they go wrong is focusing on the ‘statistics’ part of the definition. We all intrinsically get the idea of things that are counted, directly measured, or captured as a finite set of transactional records. All of this is neatly gathered and served up to us in some form of database, cube or even ‘Data Lake’ (depending on who you talk to).


This quantitative data is not the only type of data that businesses and organisations have available to them through their many digital channels. Often overlooked (and not necessarily understood by these statty folks) are the vast amounts of qualitative data that organisations generate on a daily basis. Think of this as the ‘facts’ part of the definition. Facebook channels, Twitter feeds, user surveys and even blog post comments are all places where vast amounts of opinions and issues are shared with a business, and these can be used to derive more insight from the data that organisations already knew about.


As a strategic piece for one of our partners recently, we looked at how to use this qualitative data, alongside an organisation’s transactional and clickstream data, to better understand their customers and drive more insights into their innovation and development process.


Without writing a sermon on data-driven innovation (which I’m sure I will do in a later blog post), there are two schools of thought when it comes to innovation: “inside-out” and “outside-in”. Most large organisations that are failing to move with the times employ an “inside-out” approach, in which the people in the business believe they know best and that their ideas are the ones their customers will want and will buy or use. Now, as the saying goes, “Even a blind squirrel will sometimes find a nut”, so I’m sure this methodology has driven value for businesses in the past and will again in the future, but it is rapidly becoming an ineffective and inefficient way to innovate.


The “outside-in” approach makes understanding the customer the key activity for innovation: when you understand your customers, they will ‘tell’ you what they want and need, which leads you to the right areas in which to focus your innovation. It was this method of innovation that our partner was moving towards.


To enable this, we proposed two routes to innovation. The first involved an initial analysis of one of their many sources of qualitative data, using text mining techniques to understand sentiment towards the products, as well as the issues and ‘wants’ for their main property. As the qualitative data was a much smaller dataset than their clickstream data (think hundreds of statements against hundreds of thousands of visits), we then used the quantitative data to gauge how much weight these items carried within the broader context of their property. This enabled us to understand the scale of the opportunity available. The second route was the reverse: a review of the quantitative data, with the qualitative data contextualising the issues and pain points that we found.
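To make the first route concrete, here is a minimal sketch of the idea in Python. The statements, themes, keyword lists and visit counts are all hypothetical stand-ins, and a real project would use proper text mining and sentiment analysis tooling rather than keyword matching; the point is simply how qualitative mentions can be weighted by quantitative traffic to size an opportunity:

```python
import re
from collections import Counter

# Hypothetical qualitative data: opinions gathered from social channels,
# surveys and blog comments (hundreds of statements in practice).
statements = [
    "Love the new checkout, so quick",
    "Checkout keeps failing on mobile",
    "Search results are irrelevant",
    "Search is slow and irrelevant",
    "Great delivery options",
]

# Naive keyword-to-theme mapping and negative-word list; a real project
# would derive these with text mining rather than hand-picked keywords.
themes = {"checkout": "checkout", "search": "search", "delivery": "delivery"}
negative_words = {"failing", "slow", "irrelevant"}

# Hypothetical monthly visits per site area, from the clickstream data.
visits = {"checkout": 120_000, "search": 450_000, "delivery": 30_000}

# Step one: mine the statements for theme mentions and negative sentiment.
mentions, negatives = Counter(), Counter()
for statement in statements:
    words = set(re.findall(r"[a-z]+", statement.lower()))
    for keyword, theme in themes.items():
        if keyword in words:
            mentions[theme] += 1
            if words & negative_words:
                negatives[theme] += 1

# Step two: scale each theme's share of negative sentiment by the traffic
# it touches, to size the opportunity it represents.
opportunity = {
    theme: negatives[theme] / mentions[theme] * visits[theme]
    for theme in mentions
}

for theme, score in sorted(opportunity.items(), key=lambda kv: -kv[1]):
    print(f"{theme}: {score:,.0f}")
```

With this toy data, ‘search’ surfaces as the biggest opportunity: all of its mentions are negative and it carries by far the most traffic, even though it has no more complaints than ‘checkout’ in absolute terms.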


Both of these methods highlight the main point about data: one source on its own is fine, and will paint part of the picture, but you need a second data source to contextualise it effectively and bring insight to the surface more quickly. If some of the challenges above sound familiar, I’d be very happy to come and talk to your group about ways in which you can solve them.
