Why banning under-16s from social media isn’t the answer to big tech’s big problem

by Jess Davies

Social media is a lot of things. It’s fun, addictive, informative, curious, connective - and, in far too many ways, harmful to its users. Over the past twenty-five years, it has transformed from a basic Myspace profile accessed via the shared family PC and a screeching dial-up connection into a collection of apps that sit quietly in our pockets, feeding us a never-ending scroll. A bowl of tomato soup designed never to run out.

In 2025, Australia banned social media for under-16s following years of concern about internet safety for children, and this week the House of Lords backed a move that would see the UK follow suit.

On the surface, a blanket ban might sound decisive. But is it really the answer? Or just a lazy attempt to plaster over a complex, generational problem while big tech continues to operate with near total impunity?

Growing up alongside the digital revolution, I’m a member of a unique cohort: a generation that can remember life before the internet - physically calling for your mate by turning up at their door, stretching a tenner’s phone credit across a couple of weeks - while also having the digital literacy of the permanently plugged-in generations that followed. We shared too much of ourselves online before the concept of a “digital footprint” even existed. We fell headfirst into harmful beauty trends pushed by #ads long before advertising rules were enforced. We swapped intimate selfies with strangers on early dating apps, naively assuming our consent and privacy would be respected. When it comes to the good, the bad and the ugly of life online, I’ve been there - and bought the whole gift shop.

But while millennials laid the groundwork for what life online might entail, there’s a new, younger generation (often referred to as Gen Z or Gen Alpha) whose digital lives carry just as much weight as their offline ones. From ordering food and selling second-hand clothes, to playing video games and keeping up with their favourite streamers and influencers, young people’s experiences are shaped each day by algorithms designed to keep them online. A 2025 Ofcom report found that almost four in ten 3–5-year-olds have their own social media account, while 82% of 10–12-year-olds own a mobile phone.

With the internet woven into almost every aspect of young people’s lives, there’s no question that stronger protections are urgently needed. The harms unfolding in online spaces - from body image pressures and misinformation to grooming and abuse - are no longer abstract risks but daily realities. Fresh off the back of an AI abuse scandal that saw X’s in-built chatbot, Grok, generate non-consensual intimate images of women and child abuse material, the UK government is set to debate legislation that would ban under-16s from social media altogether, with more than 60 Labour MPs calling on the Prime Minister to impose a ban.

But why should young people born into the digital age be punished by having their right to digital citizenship restricted, simply because grown adults don’t know how - or don’t care enough - to keep them safe online?

While there are steps we can all take to reduce risk (turning on two-factor authentication, avoiding sharing live locations, locking accounts where possible), online safety is not an individual problem to solve. It is a systemic one. It is big tech’s problem - and, crucially, they have both the money and the technology to fix it. What’s needed is a genuine safety-by-design approach, rather than continuing to place the burden on users to protect themselves from algorithms explicitly engineered to outsmart them.

Just as films are age-rated, social media platforms could - and should - offer age-appropriate experiences. A thirteen-year-old should not be navigating the same online environment as a sixteen-year-old, let alone an adult. The fact that everyone, from children to pensioners, is lumped into the same digital spaces is a lazy, one-size-fits-all approach that simply wouldn’t be tolerated in other areas of daily life.

Removing young people from social media altogether also risks leaving them behind in a world that runs on digital literacy. For many under-16s, social platforms are their primary source of news - an access point that matters more than ever as independent journalists and outlets are squeezed out of legacy media, and unreliable or ideologically driven content fills the gaps. Cutting young people off from these spaces doesn’t protect them from influence; it reshapes it, often in ways that are far harder to challenge.

A ban would also limit young people’s ability to form relationships and find online communities that positively support their mental health. While social media’s harms are real, we can often be guilty of dismissing its benefits. For those without strong support systems at home or who face cultural factors that may prevent them from sharing their true selves, online spaces can provide connection, validation and belonging. They can be places to learn, explore different cultures, engage creatively, and momentarily escape the pressures of the offline world.

Speaking to the BBC’s Newscast, Ian Russell - the father of teenager Molly Russell, who tragically died after engaging with suicide and self-harm content online - said banning under-16s from social media would be “wrong” and would “cause problems”. He explained:

“At the heart of it are companies that put profit over safety.”

Russell stressed that governments should force platforms to comply with protections already enshrined in law, such as the Online Safety Act, rather than resorting to “sledgehammer techniques like bans that will have unintended consequences and cause more problems”.

Young people themselves have echoed these concerns. Speaking to the BBC, one girl said she would be “devastated” by a ban: “I love going through my phone and having my online identity. I find it so much more freeing. Online, you can be yourself, whereas at school you’re having to fit in constantly.”

She also highlighted how building digital literacy from a young age helps people spot deepfakes and scams - something older generations often struggle with. Another teenager acknowledged social media’s impact on her sleep and attention span, but suggested “more restrictions and time limits on apps” would be far more useful than an outright ban.

On the surface, a blanket ban may look like the best option, but it absolves platforms of responsibility. It becomes a convenient get-out-of-jail-free card for companies that have spent years designing addictive, unsafe systems. And it raises an obvious question: what happens when those users turn sixteen and are suddenly allowed online? The same harmful technology exists. The same perpetrators remain. The same algorithms continue to operate - only now, young people are less equipped to understand and navigate them.

In her book Logging Off, online safety campaigner Adele Walton argues for a comprehensive digital curriculum, so children can grow up with the tools to critically engage with online spaces rather than being shut out of them. This is something we’re beginning to deliver through workshops at Our Streets Now, creating space for young people to ask difficult questions - without judgement - about online misogyny, digital abuse and the algorithms that shape us.

Rather than supporting a ban, the Molly Rose Foundation proposes a one-off harm-reduction windfall tax on big tech, which would force companies to fund safety improvements. It’s an approach I’ve long supported. Just as gambling companies contribute to addiction support and public health messaging, social media giants should pay towards specialist services for victims of online harm, the rollout of digital education, and properly resourced policing for technology-facilitated abuse. Let’s be for real: the billionaires who own these platforms are among the richest people on the planet - it’s time we started speaking their language and made them pay to protect the people lining their pockets.

Ultimately, this is not young people’s problem to solve, nor should it be their punishment to bear. Holding big tech accountable means enforcing safety-by-design, funding education and support, and using the guardrails we already have, because banning children from social media doesn’t fix a broken system; it just allows it to keep breaking them.

Resources:

Ofcom, Children’s Media Literacy Report 2025: https://www.ofcom.org.uk/siteassets/resources/documents/research-and-data/media-literacy-research/children/childrens-media-use-and-attitudes-report-2025/childrens-media-literacy-report-2025.pdf?v=396621

BBC, “Molly Russell’s dad says under-16 social media ban would be wrong”: https://www.bbc.co.uk/news/articles/cpwn1vjy0y5o

BBC, “Under-16 social media ban? I’d be devastated”
