Social media regulation can't keep up with the latest trends in harmful content


The emergence of social media has shaped much of the first two decades of the 21st century. For anyone who spent their childhood in this era, social media is both a poison and a pleasure. More tellingly, the time we have spent on these platforms means we are highly aware of the dangers of these sites, usually through first-hand or close second-hand experience. We’ve all had a friend who has been sent inappropriate content or received hateful messages.


Regardless of personal experience, the dangers of social media use have become evident even to those who were not brought up as digital natives. The statistics tell a clear story: around one in five children aged 10 to 15 in England and Wales has experienced at least one type of online bullying behaviour.


Algorithms have only compounded the issue by broadening the range of content users see beyond their real-life group of friends. The result is that users face an unprecedented stream of decontextualised clips of strangers’ lives. Beyond the understandable self-esteem issues that constantly comparing yourself to others can create, this style of content gives bullying and harmful material more opportunities to spread, as people can leave comments or messages without facing repercussions outside of social media. Parliament’s DCMS Select Committee found that 62% of adult internet users and over 80% of children have had potentially harmful experiences online.


This level of harm makes the urgent need for greater online safety measures clear. Yet online regulation faces two key challenges.


Firstly, politics has not caught up with social media developments. Despite knowing the dangers of social media, Members of Parliament have so much on their plates that their approach to legislation has typically been reactive rather than proactive. This is particularly problematic given how long it takes to pass legislation on online harms, with the Online Safety Bill itself six years in the making. The result is that social media content regulation will always lag behind online harms unless politicians change their approach.


The other key issue is that users consistently find new ways to bypass the moderation measures social media companies put in place. On TikTok, for example, users have taken to using code words like ‘mascara’ (sex toy) and ‘unalived’ (suicide) to get around AI content moderation. While many of these code words are not used with malicious intent, their spread shows that social media companies need to be smarter with their content moderation if they wish to reduce harm on their sites.
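To see why code words are so effective, consider the simplest form of automated moderation: matching posts against a blocklist of banned terms. The sketch below is purely illustrative (the blocklist, function, and examples are invented for this post, and real platforms use far more sophisticated systems), but it captures the underlying weakness: a static list cannot anticipate euphemisms it has never seen.

```python
# Illustrative sketch of naive keyword-based moderation -- not any
# platform's actual system. Blocklist and examples are hypothetical.

BLOCKLIST = {"suicide", "sex toy"}

def flag_post(text: str) -> bool:
    """Flag a post if it contains any blocklisted term."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

print(flag_post("We need to talk about suicide prevention"))  # True: flags even a supportive post
print(flag_post("He unalived himself last year"))             # False: the code word slips through
print(flag_post("Selling my mascara, barely used"))           # False: the euphemism looks innocent
```

Note that the naive filter fails in both directions: it flags legitimate discussion of difficult topics while missing the code words entirely, which is exactly the cat-and-mouse game platforms are currently losing.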
