Data Security & Governance – The Unsexy, But Big Trend of 2018
I realise it’s a bit early to start doing a big “Trends of the Year” blog entry, although it’s probably only a couple of weeks before you start seeing these hitting your news feed with the frequency of discussions about this year’s John Lewis Christmas ad.
However, I think there is one trend which has unquestionably emerged this year – data security and tag management governance. I appreciate this doesn’t have the sexy ring of the latest data science technology or app development feature, but it’s probably the thing that keeps digital directors and heads of insight up at night the most at the moment. And, if it isn’t, then it should be, as this month’s hack of the Vision Direct web site highlights.
It was inevitable that, after GDPR, data security would become important, with many large companies desperately hoping that they had done enough to secure their customers’ data.
Some companies, like Dixons Carphone and Yahoo, got their dirty washing out first, by finally disclosing very old breaches which had happened before the GDPR deadline. This was interesting, as it meant that companies were having to step up because of the new legislation; it’s a pretty safe bet that the Yahoo breach in particular – given it was so old – would not have come to light yet if it wasn’t for GDPR. So, it was obvious even before May that the new law was having the desired effect as companies started to grow up and fess up about how they treated customer data.
GDPR led to a vast amount of planning, replanning and general wailing and gnashing of teeth, as organisations tried to prepare for scenarios that seemed unlikely but where the scale of the potential downside meant that all kinds of contingencies had to be made.
The Different Scenarios
In general terms, there are four high-level scenarios that might lead to a hack. Organisations tend to focus on some of them, and not others, which ironically makes them more vulnerable to the neglected ones – mainly because it beggars organisational belief that those scenarios might be plausible.
The four scenarios are, in rough descending order of organisational focus:
Malicious external hack of internal, or “owned” code infrastructure (in other words, a criminal hacks into an organisation’s systems, through firewalls and so forth, and steals data or inserts malicious code)
Malicious hack of external, 3rd party or cloud software infrastructure (someone hacks into a third party system which the organisation uses to gather or host data, and steals or inserts that way)
Hack of a legitimate login to one of the 3rd party systems (someone gains access to a legitimate user’s password, and data is downloaded, redirected or stolen that way)
A disgruntled, or perhaps naïve, employee either deliberately or accidentally inserts malicious code
The first two of these involve what IT professionals like to think is their real “enemy” – the brilliant, talented computer scientist, who could be just like them, but went “bad” and can use their expertise against the “good” people trying to keep IT running in companies; think Boris Grishenko (Alan Cumming) in GoldenEye, or just about any geek character in any heist movie ever.
The first scenario is where the “good” people are the direct target, and so is the biggest threat. The second is where a third party tool is the one that’s hacked; this is less bad – it’s the other company’s fault – but they still recognise the evil genius in the background. And as a result, both these scenarios attract a considerable, and even disproportionate, amount of attention in IT Security and Legal teams everywhere.
The third scenario involves people giving away their passwords, or using obvious ones. This is important and can be mitigated against, but mitigation involves processes and their ongoing maintenance. The problem is that this isn’t as sexy as fighting evil supervillains whose weapon of choice is the very latest computer language. It involves governance, and it’s messy and time-consuming and people-focused. And so, IT Security tend to think it’s not as interesting or important as the other two.
Finally, the disgruntled employee is by far the least likely, goes the logic. Because this is about the company’s people. And we all work here, and why would anyone want to sabotage our great company from the inside? I mean, I understand people get annoyed sometimes, but no-one here would do that, would they? And as for naïve or stupid people – well, we just don’t employ those people. We hire great people – people who are clever enough to take on the Boris Grishenkos, and win. So, people won’t do stupid things here. And besides, that’s a lot of effort to police and train everyone on how to behave and not do silly things. And is that where the company’s money is best spent?
And as a result, the focus for IT Security is massively skewed towards the first two scenarios, and not the last two. And yet, we are all human, and it seems that the last two scenarios are more likely to have affected some of the more high-profile breaches since May.
The World Since GDPR
Because everything was so unknown, including how big the fines would actually be, the real battle was to not be the first organisation to be hit by a digital data breach after May. In the UK, that dubious and unfortunate honour was conferred on British Airways in August, as 22 lines of malicious code found their way onto the site and started sending customer credit card details out of it.
I should emphasise at this point, I don’t know exactly what happened. However, by all accounts in the press, the lines of code were specifically customised for the BA environment. This suggests a very targeted intent to attack that individual company. Whilst this is clearly terrible, it perhaps suggested that such attacks would only focus on very large organisations, where the criminal payback would be large enough to justify the risk and tailored effort to create such code. The suggestion, again, as reported, was that the malicious code was inserted directly onto the site – that is, not via a tag management system, and was seemingly specifically written to target the BA web site and infrastructure.
But it also means that it’s much more likely that this involved an individual who put code onto the site, either accidentally or deliberately. In other words, this was probably the fourth scenario at work. Yes, the one that most organisations think is the least important or deserving of their focus.
The Latest Hack
At the end of August, the collective sigh of relief from every other major corporate everywhere was audible. However, very few very large companies went in for the “I told you so” type of corporate smugness that can follow such an event, mainly because every major organisation that had been taking this seriously knew that, there but for the grace of God, it could have been them.
But, of course, the examples have continued, with some organisations perhaps lulled into a false sense of security. Earlier this month, Vision Direct, the online seller of contact lenses, was hacked; whilst Vision Direct is a major player in the vision and eyecare sector, it’s perhaps not the Tier 1 type of organisation that the BA model would suggest was the more likely target.
It would appear that this hack involved a fake Google Analytics script being inserted, which then harvested credit card details for a week. This is particularly interesting, as it appears to be the first time that analytics tracking has been the explicit “disguise” by which data is stolen – which has two particular implications for anyone using any form of analytics tracking and/or a tag manager (in other words, just about everyone).
Firstly, Google Analytics is the most common and standard analytics tool in the marketplace, and the entry level version is free. So, it’s pretty much ubiquitous. It is often implemented using Google’s tag management tool, Google Tag Manager, which is also free at entry level. The Vision Direct hack highlights that anyone can be hacked, through elements that are likely to be part of almost any site; the hack does not require particular coding or high levels of effort – this same code could work alongside GA code on pretty much any site.
Secondly (and again, I don’t know the specific details, so this is supposition), it strongly suggests that the weak link in this case was the tag management tool, or specifically the processes around it. From reading the technicalities as reported in the press, the fake script was deployed alongside the normal GA script rather than replacing it.
Of the possible routes in – the tag manager or the content management system – GTM is the more likely culprit, because it would be directly web accessible and likely to have a wider group of existing users, while content management tools tend to be more locked down, with a smaller group of users. But most importantly in this case, GTM would be the usual place to insert Google Analytics code, so alarm bells would ring sooner if an analytics tag suddenly turned up somewhere else.
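To illustrate the kind of governance check this implies – and this is purely a hypothetical sketch, with illustrative names and hosts of my own invention, not anything Vision Direct or BA actually ran – a periodic audit could compare every external script a page loads against an allowlist of expected hosts, so that a lookalike analytics script stands out:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical allowlist: the script hosts this site expects to load.
ALLOWED_HOSTS = {
    "www.google-analytics.com",
    "www.googletagmanager.com",
}

class ScriptAuditor(HTMLParser):
    """Collects the src of every external <script> tag on a page."""
    def __init__(self):
        super().__init__()
        self.script_srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.script_srcs.append(src)

def unexpected_scripts(html):
    """Return script URLs whose host is not on the allowlist."""
    auditor = ScriptAuditor()
    auditor.feed(html)
    return [s for s in auditor.script_srcs
            if urlparse(s).netloc not in ALLOWED_HOSTS]

# Example: a page carrying a legitimate GA tag plus a lookalike skimmer
# on an invented domain.
page = """
<html><head>
<script src="https://www.google-analytics.com/analytics.js"></script>
<script src="https://g-analytics.example.com/analytics.js"></script>
</head></html>
"""
print(unexpected_scripts(page))
# → ['https://g-analytics.example.com/analytics.js']
```

The point is not the code itself but the process around it: someone has to own the allowlist, run the check regularly, and treat an unexpected entry as an incident rather than noise – which is exactly the unglamorous governance work the third and fourth scenarios demand.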
So, it’s possible that this was a Boris Grishenko style hack, but it’s probably not his MO; if you are going to hack into Google’s Tag Manager, you might as well go for someone really big, or perhaps everyone at the same time. Again, it’s much more likely that this was someone either guessing or hacking an existing login, or perhaps the disgruntled employee.
Which means that once again, we are looking at the third and fourth scenarios. The least interesting ones. And the ones most organisations think are least likely, and so are least prepared for.
What to do
So, don’t assume that, just because you are not a global organisation, you are not a potential target for a hack; code doesn’t have to be tailored to you. And don’t assume that governance is just something that can be considered as an afterthought for your implementation, or something that your agency should automatically cover; it’s probably the most important part of your entire analytics environment, and it’s vital that you take ownership of it and understand its value.
If you would like some tips on where to start on governance, please look at Katie Lockett’s article on the top 5 tips for getting value from Google Analytics.