
Don’t Judge a Model by its (Initial) Output


Public sector organisations have been armed with “big data” and there are several examples of how this has changed the way public services are delivered.

Big Data Goes Public

There are Californian cops, for instance, who use predictive analytics to identify the areas or blocks where crimes are most likely to be committed that day. They can then engineer being in ‘the right place at the right time’ by deploying their teams to those specific areas. That’s game-changing for the police and could have a huge impact on how crime is fought in the future.

Then there are entire cities that have had their traffic systems made more effective through the analysis of vehicle movement. Activity at traffic lights and roundabout systems is closely monitored so that the optimal phasing of traffic lights can be deployed, and drivers and pedestrians are able to flow through the city as efficiently as possible.

At the same time, city authorities have been able to use this information to vary the pricing of car parks in specific areas depending on the time of day. People parking out of hours can then save money. Who wouldn’t want that?
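In practice, this kind of variable pricing can be a very simple rule. Here is a minimal, hypothetical sketch; the rates and the peak windows are invented for illustration, not taken from any real city scheme:

```python
# Hypothetical time-of-day car park tariff.
# All rates and peak windows are illustrative assumptions.

def hourly_rate(hour: int) -> float:
    """Return the assumed parking rate per hour for a given hour of day (0-23)."""
    PEAK_RATE = 4.50      # assumed rush-hour rate
    OFF_PEAK_RATE = 1.50  # assumed out-of-hours rate
    # Morning and evening rush: 07:00-09:59 and 16:00-18:59
    peak_hours = set(range(7, 10)) | set(range(16, 19))
    return PEAK_RATE if hour in peak_hours else OFF_PEAK_RATE

# A commuter parking at 08:00 pays three times the late-evening rate:
print(hourly_rate(8))   # peak
print(hourly_rate(22))  # off-peak
```

The point of the sketch is that the “analytics” here amount to little more than a lookup table, which is exactly the criticism explored below.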

Except it works both ways.

One of the criticisms levelled against the variable pricing of car parks is that it simply monetises supply and demand. Those people parking during rush hours just end up paying more – they effectively subsidise the cheaper prices for those who park outside of peak times.

This has negative social implications, since city authorities are essentially charging working people more – they are, after all, the ones most likely to use such services during those busy hours.

The Human Touch

From a data analysis point of view, this also opens the approach up to criticism, because the outcome is entirely obvious and predictable. It doesn’t take a genius – or a series of geniuses with expensive algorithmic programming capabilities – to work out that demand for transport and related public facilities will be greater during rush hour.

Likewise, does it take a sophisticated analysis tool to establish the areas or neighbourhoods in which crime is most likely to be committed? How is this different to a policeman knowing his beat very well? Is this better than first hand knowledge of the streets and the community?

Sure, examples of big data usage within the public sector can be made to sound great. And certainly, if this is where we are now, just think how these kinds of public services will be delivered or managed in 10-20 years’ time.

But the capabilities that we have currently don’t seem to measure up against good old human competencies.

Naked Streets

A case in point: a story I read in today’s Metro on the Tube in London (or, more accurately, over someone’s shoulder as they read the article, because, well, that’s the reality of how you read the Metro!).

The story explained how 42 traffic lights at a single junction, in the centre of Beverley, East Yorkshire, had failed entirely the other day. This meant that drivers had to navigate the complicated one-way system – roads crossing roads through roundabouts, and all sorts – on their own, without the assistance of the helpful traffic light system to guide them on their way.

This outage was obviously fairly significant – it affected all 42 lights and it continued for several days. And the result? Anarchy? No: smoother traffic flow. The Metro even reported that “locals” had recounted that the traffic flowed better than normal.

Now, as far as I am aware, Beverley, East Yorkshire, does not have a big data system feeding its traffic light management system. So this is not an example of a big data failure. But it does suggest that, left to our own devices, and when we are familiar with our local environment, we humans can actually navigate pretty well. Sometimes better, even, than the town planners and police commissioners give us credit for.

The Big (Data) Opportunity

While there are certainly challenges with some of the programmatic, interventionist (even paternalistic) approaches that some of these public service applications are following, this is where the opportunity for big data lies.

Leaving aside the philosophical challenges about whether you can predict a criminal before they become one – issues which hark back to the phrenology approaches of the nineteenth century – these kinds of advancements in data and analytics do address a real problem for modern police forces.

We’re all painfully aware that the public sector is out of cash – and that, consequently, there are fewer police patrols and “bobbies on the beat” walking the streets. There are fewer policemen who therefore know the area, and the likely trouble spots, very well – with less time spent on patrol, it’s harder to get that local knowledge. This is where data steps in.

What predictive analysis tools enable police forces to do is regain a version of that local knowledge at an organisational level. They can then co-ordinate responses either to prevent crime in the first place, or to apprehend suspects rapidly. It’s a short-cut to gaining that long-term experience.

Embrace The ‘Bleeding Obvious’

This gives us a glimpse into the potential for public service data in the future – and it relates to something that all those who have worked with predictive models will recognise.

Very often the first iteration of the model will identify what Basil Fawlty would call the “bleeding obvious”.  Any model will always find the largest correlation first.

That might be something like: Loyal customers are more likely to visit more often, or indeed, to spend more. 
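This “largest correlation first” effect is easy to demonstrate. Below is a hedged sketch on purely synthetic data (all the coefficients, customer counts and the `views_help_page` behaviour are invented for illustration): a simple correlation ranking surfaces the obvious driver of spend first, while the subtler behavioural signal only stands out once you look past it.

```python
import random

random.seed(0)

# Synthetic customers: 'visits' is the blatantly obvious driver of spend;
# 'views_help_page' is a subtler behavioural signal with a much smaller effect.
rows = []
for _ in range(1000):
    visits = random.randint(1, 20)
    views_help_page = random.random() < 0.3
    spend = 10 * visits + (15 if views_help_page else 0) + random.gauss(0, 5)
    rows.append((visits, 1.0 if views_help_page else 0.0, spend))

def corr(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

visits, help_flag, spend = (list(col) for col in zip(*rows))
print(corr(visits, spend))     # large: the "bleeding obvious" finding
print(corr(help_flag, spend))  # small but real: the interesting second-pass signal
```

The first correlation dwarfs the second, so a first-pass model will report little more than “people who visit more, spend more” – the weaker signal is the one worth digging for in later iterations.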

This is where the car park model currently sits: it is simply being used to show something that any casual observer could have noticed anyway.

But it’s once you get beyond these first impressions that the models become genuinely interesting, novel and informative. For example, spotting that loyal customers always visit certain web pages, or perform particular actions in your store. Once you move beyond the blatantly evident, you can identify real behavioural segments.

What if you spotted that certain groups of people in the car park travel a certain distance, or a particular route?

This could indicate a lack of public services in those commuters’ more immediate local area, which could lead to the development of more car parks, or train stations, or shops, nearer to them. Or it could result in councils incentivising particular groups to encourage the use of electric cars on their journey to the car park.
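A minimal sketch of that kind of second-pass segmentation, using invented journey distances and hypothetical band thresholds (none of these figures come from any real car park data):

```python
from collections import Counter

# Hypothetical distances (miles) each driver travelled to reach the car park;
# all figures are invented for illustration.
distances = [1.2, 0.8, 14.5, 15.1, 2.0, 13.8, 1.5, 16.2, 0.5, 14.9]

def band(d: float) -> str:
    """Bucket a journey into a coarse, assumed distance band."""
    if d < 5:
        return "local"
    if d < 10:
        return "mid-range"
    return "long-distance"

segments = Counter(band(d) for d in distances)
print(segments)
```

If the long-distance band turns out to be unexpectedly large, that is the kind of non-obvious finding that could justify new provision – car parks, stations or shops – closer to where those commuters actually live.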

That’s where predictive models can get really valuable. But many in the public sector, as yet, have not got beyond first base. That’s often because they haven’t had time to mature. But it’s also an important reminder to managers who might be new to the data applications that are being created in their teams:

Don’t judge a model on its initial output, because that’s likely to be its “bleeding obvious” version.

Instead, ask your teams to keep evolving the application, so that they can identify more sophisticated iterations, which could identify opportunities that would be very hard to spot in any other way. This is where the long-term value of your predictive models will kick in.

It might not take a group of number-crunching masterminds to ascertain ‘the bleeding obvious’, but it’s still an important part in a model’s evolution. It sets the stage for more complex iterations to follow, which could transform the way we work.

For Big Data to be effective, it needs to take small steps…
