An Ethical Dilemma - Mental Health & Data Science

 

Recently, I have been considering suicide.  Not in a literal sense, I hasten to add, but generally I have been considering the impact that suicide might have on communities.

Sadly, the topic seems a constant theme in the news at the moment. In the last few weeks, the front pages have carried the tragic stories of Keith Flint, lead singer of The Prodigy, former Love Island contestant Mike Thalassitis, and Aaron Armstrong, the boyfriend of Sophie Gradon, another Love Island contestant.  Even more sadly, I could have gone on with a longer list.

Last year, BBC2 broadcast a truly excellent programme about how suicide is the biggest killer of men under 50, exploring the factors that drive this and the best approaches to addressing it.

The most obvious point is to get men to talk about their issues, but sometimes this needs to be in a quite explicit way.  It’s no good, for instance, asking someone who is depressed or suicidal if they are all right; particularly in Britain, where the standard response to anything, no matter what your state of mind, is “Yes, I’m fine”, that’s simply not an arresting enough question.  In order to break through the defensive mindset of depression, in which logic is inverted to convince the individual that no-one cares, and that perversely they are quite right to think everyone would be better off without them, the question has to be much more, even alarmingly, penetrating and direct – something like “are you going to kill yourself?”

However, before you go round all of your male friends asking whether they are planning to commit suicide, remember this is something of a blunt instrument.  Whilst there may well be benefits in getting society to be more direct, if you ask everyone this question, almost everyone will say “no”. And mean it. So, you may have to ask many, many people this question before you find someone who needs that level of help.  Unless there is a wholesale change in how we as a society address this issue, which seems unlikely, this makes the approach realistically the preserve of the professionals (doctors, police, paramedics, dedicated charities and so forth). And in the same way that data science is helping blue light organisations to optimise their activities and improve outcomes, such as deploying police in areas which are statistically more likely to see crime committed on a particular day, this would seem to be an opportunity to do the same for suicide rates.

And indeed, as the television programme went on to discover, an academic in the US has analysed a range of data points to assess what might be the factors or triggers that lead people to consider and then attempt suicide.  From this, he has built an algorithm that can predict who is likely to attempt suicide up to two years before they do so. And it seems that this algorithm operates at a high level of accuracy. In other words, it is an algorithm that pulls in different data sets and enables you to target the most relevant people, and potentially to alter your message to those individuals.  If we ignore the particular context or “business” it works in, it is, therefore, exactly the sort of data-driven tool that commercial organisations are building everywhere to help tailor their communication to the right people. And its level of accuracy means that any other business would, once the odd tweak had been done, happily put this into production.
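
As an aside for the data-minded reader: stripped of its context, the tool described above follows a very familiar pattern – combine whatever data points are available, fit a model that scores each individual's risk, and rank people by that score so outreach can be targeted. The sketch below is purely illustrative of that pattern; it uses synthetic data, made-up features and a simple logistic regression, and is emphatically not the academic's actual model, data or method.

```python
# Illustrative sketch only: a toy risk-scoring pipeline on synthetic data.
# Feature names, model choice and numbers are all assumptions for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Synthetic stand-ins for whatever real signals were combined
# (clinical history, demographics, prior contacts, etc. – all made up here).
n = 5000
X = rng.normal(size=(n, 4))
true_weights = np.array([1.5, -0.8, 0.6, 0.0])
p = 1 / (1 + np.exp(-(X @ true_weights - 2.0)))
y = rng.binomial(1, p)  # 1 = later attempt (purely synthetic label)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# A deliberately simple classifier standing in for the real model.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Score everyone, then rank by predicted risk so that outreach could,
# in principle, be directed at the highest-risk individuals first.
risk = model.predict_proba(X_test)[:, 1]
highest_risk = np.argsort(risk)[::-1][:50]

print("Held-out AUC:", round(roc_auc_score(y_test, risk), 3))
print("Ten of the highest-risk indices:", highest_risk[:10])
```

Even in this toy form, the shape of the ethical problem is visible: the output is not an anonymous aggregate but a ranked list of specific individuals to act upon.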

So far, so ordinary.  And yet, the academic went on, no-one can use it.  Because we are talking here not about targeting customers with the latest offer to buy groceries, or clothes, or holidays, which they may, or may not, choose to take up or ignore.  We are talking about preventing suicide. And whilst that outcome sounds admirable, we are also effectively talking about doing so by preventing someone from using their own free will.  And that is a challenge that simply doesn’t occur with the latest grocery offers.

And so, we hit an ethical dilemma, caused by data science.  Many of the traditional moral debates about artificial intelligence focus on the idea of robots becoming smarter than humans, and are informed by science fiction concepts – think of the replicants in Blade Runner or the malfunctioning police droid in RoboCop.  But even the most ambitious AI enthusiast knows that these are at least 15 years away from their first tentative steps into reality, and it’s always easier to counter moral arguments when their impact is decades away. However, here is an ethical challenge in the here and now, which could have an effect on people today.  The question is: what is more important – saving lives, or allowing people to exercise free will?

I have discussed this with various people, from different angles of the ethical debate.  And the answer seems to be fairly consistent – free will wins. The idea of algorithmically identifying who might be at risk of suicide, and then using that intelligence to stop them doing so, is too Big Brother.  Getting the British to be more pushy and direct – the very antithesis of what many would regard as being British – is an easier sell. In other words, the algorithmic approach is never going to happen.

This is not to say that there are not important services being developed in this area.  The Zero Suicide Initiative, a collaboration amongst NHS trusts working to reduce suicide rates across their surgeries and hospitals, is to be applauded for attempting to change the way we think about mental health, but, as it stands, doctors and nurses can only ever treat those who present themselves.  If you are suicidal, you are less likely to formally seek help from a doctor before attempting anything; in many scenarios, the doctors, and particularly paramedics, are generally going to be “downstream” of any attempt, by which time it’s too late.

However, by encouraging people to talk about mental health issues, and creating a culture of opening up, the Zero Suicide Initiative could well have transformative effects on outcomes.  But, as it stands, the ethical position appears to be that it can’t use data to inform this, and to make any of these activities potentially more efficient.

And I think this is a major issue for advocates of data science and artificial intelligence.  Because if it’s not morally acceptable for major public service initiatives to use data because it’s too Big Brother, then there will be a much greater limit on the take-up of machine learning in general.  One of the guiding principles of AI ethics has been that it should be used for the good of humanity, which neatly encapsulates how we should focus on people, not on the technology. And yet, it seems that when AI can help people right now, it’s already losing the moral argument.

Which is fine, if that’s what we decide.  But it doesn’t feel like a conscious decision at the moment.  There is instead a real sense of inevitability in the discourse around AI and machine learning: the assumption that the only thing preventing it from greatness is that the technology hasn’t quite matured, and even that is just a matter of time.  But this example shows that’s not the case. In the rush to build interesting tools, no-one has been talking about the ethics of AI and where the limits should be, and that could be the biggest challenge. We need to start having a greater, wider public debate now about where this line should be drawn.

Oh, and we do need to help men, in particular, to talk about mental health issues more.  The training on the Zero Suicide Initiative website is a good place to start.