Are we fighting with algorithms on social media?

 

Earlier this month, cosmetics retailer Lush UK announced that it was bidding farewell to social media. In a move that shocked many, Lush explained why it had decided to leave:

“Increasingly, social media is making it harder and harder for us to talk to each other directly. We are tired of fighting with algorithms, and we do not want to pay to appear in your newsfeed.”

That’s an interesting point. Are we too, as individuals, fighting with algorithms? How can we make sure that, as artificial intelligence becomes more widespread, we are helped rather than hindered by algorithms?

How do they work?

When you scroll through Instagram or browse Twitter, you’re being guided by an algorithm that understands (and is constantly learning about) your content consumption habits. Facebook prioritises posts that spark meaningful interactions. Twitter focuses on relevance rather than timeliness. LinkedIn does the same. Whilst each social network has its own unique algorithm that decides which content a user should see and when, every one of them shapes how we use social media.
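
To make that a little more concrete, here is a deliberately simplified sketch of how a feed-ranking algorithm could score and order posts. Everything in it is an illustrative assumption: the Post fields, the weights and the recency decay are invented for this example, and real platforms combine thousands of machine-learned signals rather than a handful of fixed coefficients.

```python
from dataclasses import dataclass


@dataclass
class Post:
    author_affinity: float      # how often this user interacts with the author (0-1)
    predicted_comments: float   # model's estimate that the user will comment (0-1)
    predicted_likes: float      # model's estimate that the user will like/react (0-1)
    hours_old: float            # time since the post was published


def score(post: Post) -> float:
    """Toy feed-ranking score: weight engagement signals, then decay by age.

    The coefficients below are purely illustrative, not any platform's formula.
    Comments are weighted highest to mimic the idea of "meaningful interaction".
    """
    engagement = (
        3.0 * post.predicted_comments
        + 1.0 * post.predicted_likes
        + 2.0 * post.author_affinity
    )
    recency_decay = 1.0 / (1.0 + post.hours_old / 6.0)  # older posts sink in the feed
    return engagement * recency_decay


if __name__ == "__main__":
    feed = [
        Post(author_affinity=0.9, predicted_comments=0.4, predicted_likes=0.7, hours_old=2),
        Post(author_affinity=0.1, predicted_comments=0.1, predicted_likes=0.9, hours_old=1),
        Post(author_affinity=0.5, predicted_comments=0.8, predicted_likes=0.3, hours_old=12),
    ]
    # Order the feed highest score first, the way a ranked timeline would
    for post in sorted(feed, key=score, reverse=True):
        print(round(score(post), 3), post)
```

Even in this toy version, the design choice is visible: whoever sets the weights decides what the feed rewards, which is exactly why the platforms' choices matter so much.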

But, are they making it harder for us to interact with our favourite brands, and ultimately, each other? How can we make sure that algorithms serve us, rather than the other way round?

They’re constantly evolving

At first, social media sites were a place where we could connect with our friends and colleagues. Then, they transformed into a place where we consume news and other information. This change means that social media algorithms can now influence, and arguably even control, the information that we see. That’s powerful, and scary.

As social media sites became prominent as news distribution outlets, numerous problems followed: trolls, fake news and political manipulation, to name just a few. Following the 2016 US presidential election and the Brexit vote here in the UK, Facebook pledged to become a platform for “meaningful interaction” between users rather than a platform for connecting with brands and news outlets. So, as social media sites evolve once again, so do their algorithms. They aren’t static. They’re constantly changing.

Whilst social media sites claim to be (and rightly should be) focused on safeguarding their users, their algorithms are also centred on maximising site usage and, ultimately, revenue. As users, we need to keep our guard up and understand, as best we can, how these algorithms work and how they’re influencing our decisions.

The social impact of social

Social media’s impact on mental health has been making headlines over the last few months, particularly in the case of teenager Molly Russell. Following her tragic suicide, Molly’s father said that he believes Instagram is partly responsible for her death.

Social media algorithms are designed to show us the content they think we want to see. In Molly’s case, based on her usage of social media, it is reported that she was recommended further disturbing content and was even sent an email with “personalised recommendations” containing similar images.

A number of government ministers have warned social media companies, calling for them to do more to protect vulnerable people or face stricter regulations and punishments for failings. It’s time for social networks to revisit their algorithms and find a way of using them to solve problems rather than create them. If it’s possible to highlight disturbing content and recommend it to users, then it’s possible to highlight it and remove it, and, more importantly, provide support for people who may be at risk.

Algorithms for social good

Whether through regulation or self-regulation, social media needs to adapt. However, as users, as parents, and as friends, so do we. We need to be more clued up to ensure that we’re not manipulated by algorithms. But, with algorithms changing how we live in so many different ways, far beyond how we interact with social media sites, can they do good?

Humans aren’t good at being impartial. We get tired. We’re not always precise. This isn’t a problem for algorithms. They’re not perfect, and they can contain imperfections instilled in them by their programmers, but they’ve made our lives a lot easier.

We shouldn’t blindly trust them. But we shouldn’t dismiss them as being completely evil either. As technology companies are pushed to become more and more socially responsible, we will see a shift towards fairer, bias-free algorithms that are there to help us.

Make (Good) Trouble, a Community Interest Company that exists to understand and improve the mental health of teenagers, is campaigning for this. We at Station10 are working closely with Make (Good) Trouble to understand how algorithms and data science can support this pursuit. It’s definitely worth having a listen to their new radio show, which explores teenage mental health as well as the impact of social media on the teenage brain.