A Sneak Preview of Adobe’s Data Cloud Platform

 
[Image: Adobe's data platform, as presented at Adobe Summit 2018]

Over the past 50 years, marketing has undergone a gradual shift in its approach, from mass marketing of products and services to segmented marketing based on user requirements. Increasingly, we are seeing yet another (more rapid) shift towards customer-centric marketing, which seeks to fulfil the needs and wants of each individual customer. Achieving this requires a system that can recognise a customer across multiple devices and channels, view their past interactions, infer their future needs, and thus serve them the right content at any moment.

This starts with data collection. In order to identify a single customer across multiple touchpoints, organisations need a data architecture that ingests data from multiple sources and combines it to create a unified view of a customer. Bear in mind we are talking about high volume, high velocity, and highly varied data.  
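To make the idea of stitching touchpoints together concrete, here is a minimal Python sketch of identity resolution. It is not Adobe's implementation; the event fields, identifiers and identity map are illustrative assumptions, and in practice the mapping would be built by deterministic or probabilistic matching at much larger scale.

```python
from collections import defaultdict

# Illustrative events from different touchpoints; field names are assumptions.
events = [
    {"source": "web",    "device_id": "cookie-123", "action": "viewed_product"},
    {"source": "mobile", "device_id": "idfa-789",   "action": "added_to_basket"},
    {"source": "email",  "email": "jo@example.com", "action": "opened_campaign"},
]

# A toy identity map linking device and email identifiers to one customer ID.
identity_map = {
    "cookie-123": "customer-42",
    "idfa-789": "customer-42",
    "jo@example.com": "customer-42",
}

# Group every event under the resolved customer ID to form a unified view.
unified_view = defaultdict(list)
for event in events:
    identifier = event.get("device_id") or event.get("email")
    customer_id = identity_map.get(identifier, f"unknown:{identifier}")
    unified_view[customer_id].append(event)

print(unified_view["customer-42"])  # all three touchpoints resolve to one customer
```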

Organisations with data engineering and data science teams have been able to build such systems from scratch using platforms such as Amazon Web Services and Hadoop/Spark clusters. However, most companies lack such expertise, so this step becomes one of the biggest challenges in the transition to a customer-centric approach: breaking down the information silos that result from collecting data across non-integrated systems with different schemas, query languages and APIs.

At the Adobe Summit, Adobe positioned its new “Data Cloud Platform” as the data-lake solution that would allow its customers to build data pipelines that collect and centralise all incoming data in one place. This could include CRM data, POS transactions, site behavioural data, display ad impressions and email campaign behaviour. The platform is set to come with an extensive list of schemas that can be used to connect all the disparate data under a uniform schema. This creates a unified profile that captures the totality of a user’s interactions with your company, from marketing to sales, product and customer support.
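The sketch below shows the general idea of normalising records from disparate systems onto one uniform schema. The source formats, field names and target fields are illustrative assumptions, not the platform's actual schemas.

```python
# Map records from different systems onto one common record shape.
def from_crm(record):
    return {"customer_id": record["AccountId"],
            "channel": "crm",
            "event": record["Activity"],
            "timestamp": record["Date"]}

def from_pos(record):
    return {"customer_id": record["loyalty_card"],
            "channel": "pos",
            "event": f"purchase:{record['sku']}",
            "timestamp": record["ts"]}

normalisers = {"crm": from_crm, "pos": from_pos}

raw = [
    ("crm", {"AccountId": "customer-42", "Activity": "support_call", "Date": "2018-03-01"}),
    ("pos", {"loyalty_card": "customer-42", "sku": "WALK-SHOE-10", "ts": "2018-03-05"}),
]

# Every record ends up with the same four fields, ready to merge into one profile.
unified = [normalisers[source](record) for source, record in raw]
print(unified)
```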

With GDPR around the corner, a discussion on data usage is not complete without mention of data governance. The platform is set to have a data governance section that allows users to label sensitive and private data, limiting its usage across the organisation.
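As a rough illustration of what field-level labelling can enable, here is a small Python sketch. The label names, purposes and enforcement rule are assumptions for illustration, not the platform's API or a statement of GDPR compliance.

```python
# Governance labels attached to individual fields of the unified profile.
FIELD_LABELS = {
    "email":        {"PII"},
    "postcode":     {"PII"},
    "page_views":   set(),
    "health_notes": {"PII", "SENSITIVE"},
}

def usable_fields(purpose):
    """Return the fields a given purpose may use under the labelling policy."""
    blocked = {"advertising": {"PII", "SENSITIVE"},  # no personal data in ad targeting
               "analytics":   {"SENSITIVE"}}         # internal analysis may use PII here
    return [field for field, labels in FIELD_LABELS.items()
            if not labels & blocked.get(purpose, set())]

print(usable_fields("advertising"))  # ['page_views']
print(usable_fields("analytics"))    # ['email', 'postcode', 'page_views']
```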

In order to infer a customer’s needs and intent, organisations need to analyse and contextualise the data with speed and efficiency, a job primed for machine learning. The Jupyter Notebook integration significantly increases the platform’s data science and machine learning capabilities: it can be used to run attribution analysis, customer segmentation, audience scoring and journey prediction at speed, informing in real time which content is delivered based on previous interactions.
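As one example of the kind of analysis that could run in such a notebook, here is a minimal customer segmentation sketch using k-means in Python. The behavioural features, segment count and data are invented for illustration; attribution, scoring and journey prediction would follow a similar pattern with different models.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Toy behavioural features: [sessions last 30 days, basket value, days since last purchase]
customers = np.array([
    [25, 120.0,   2],
    [ 3,  15.0,  90],
    [30, 200.0,   1],
    [ 2,  10.0, 120],
    [18,  80.0,   7],
])

# Standardise so no single feature dominates the distance metric.
features = StandardScaler().fit_transform(customers)

# Three segments, e.g. "engaged", "lapsing", "dormant".
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(segments)  # one segment label per customer, ready to push to a channel
```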

The audience segments are then pushed to marketing channels for reach or personalisation. This enables more efficient targeting, such as product-specific suppression lists (e.g. for customers who purchased walking shoes, show no more walking-shoe ads for 6 months), improving advertising spend and keeping communication relevant. Given that the notebook supports both Python and R (the most widely used languages for ML/AI), each with extensive statistical packages, it can be used to prototype algorithmic models customised to each organisation.
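Taking the walking-shoes example, a product-specific suppression list can be as simple as the following Python sketch. The purchase records, field names and the 6-month window are illustrative assumptions.

```python
from datetime import date, timedelta

# Illustrative purchase history; in practice this comes from the unified profile.
purchases = [
    {"customer_id": "customer-42", "category": "walking_shoes", "date": date(2018, 3, 5)},
    {"customer_id": "customer-17", "category": "walking_shoes", "date": date(2017, 6, 1)},
    {"customer_id": "customer-99", "category": "running_shoes", "date": date(2018, 2, 20)},
]

def suppression_list(category, today, window_days=182):
    """Customers who bought `category` within the window; exclude them from that category's ads."""
    cutoff = today - timedelta(days=window_days)
    return {p["customer_id"] for p in purchases
            if p["category"] == category and p["date"] >= cutoff}

print(suppression_list("walking_shoes", today=date(2018, 4, 1)))
# {'customer-42'} -- customer-17's purchase is older than 6 months, so they stay targetable
```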

This is a very interesting development, which has been a long time in the making, and we can already envisage several applications for our clients. It is a tool that can help those at the forefront of data analytics advance further, but it may also help those further back to catch up. If you want to find out more about how this (or other tools) could help you, get in contact and we’d be happy to discuss.