Can we really ‘reset the internet’ to make it safer for children?
“A major reset of the internet to make it much safer” is how Ofcom’s Gill Whitehead described the communications watchdog’s child safety announcements to me.
But can it really deliver that kind of sea change in the protection of children online?
Turning faulty tech off then on again is a tried and trusted fix, but “resetting the net” is considerably more challenging.
First of all, consider the scale of the task: while the focus is on the largest and riskiest social media firms, over 150,000 services fall under the Online Safety Act, the new law Ofcom must enforce.
According to Ms Whitehead, the big tech firms are already taking action.
She pointed to measures by Facebook and Instagram owner Meta to combat grooming, and steps taken by streaming site Twitch, owned by Amazon, to stop underage users seeing “mature” content.
But the problem goes much wider than that.
Internet Matters, which provides advice on online safety, has just published research suggesting one in seven teenagers aged 16 and under has experienced a form of image-based sexual abuse, with more than half saying that a young person known to them was to blame.
And it will be the second half of 2025 before the new rules come into force – child safety campaigners say that’s not fast enough, and the measures don’t go far enough.
And remember, this announcement is of a consultation, which will likely be an exchange between the regulator, tech firms, experts, parents and a range of tenacious activist groups.
Age checks
Among the 40 practical measures in the draft Children’s Safety Codes of Practice, some will be particularly controversial.
One contentious area is how tech firms check whether their users are children – and, if they are, whether they are old enough to use the service.
The regulator calls this “age assurance”. It doesn’t specify exactly how this must be done – but it is clear that simply ticking a box or entering a date of birth won’t do.
It has previously suggested tech which scans a user’s face and uses artificial intelligence to estimate age could be acceptable, if used in conjunction with a demand for further proof of age.
But age checks could mean tens of millions of UK social media users, mostly adults, providing information to the tech firms – or to third-party age-check businesses.
Privacy campaigners have already pushed back on this point. Jim Killock, of digital rights campaigners the Open Rights Group, wrote:
“Adults will be faced with a choice: either limit their freedom of expression by not accessing content, or expose themselves to increased security risks that will arise from data breaches and phishing sites”.
But the many third-party age-check companies argue that they can implement these systems while preserving privacy.
New systems also attempt to guard against obvious workarounds, such as using a photo of an older person, by checking for “liveness” – evidence that the face on camera belongs to a real, present person rather than an image.
But some argue age checks will be counter-productive.
“The worst thing is saying to a youngster ‘you can’t look at this’,” Surrey University professor Alan Woodward told the BBC.
“They’ll find ways around it, whether it’s using VPNs (virtual private networks) to go via routes where it doesn’t require that or where they can sign on with somebody else’s details.”
While he supports stopping children viewing some content, he worries some may respond by seeking out darker corners of the internet where age checks are not enforced.
And Ofcom’s own data suggests a significant minority of parents can be willing collaborators in letting children below the minimum age use social media sites.
An account opened by a parent or an older sibling, and then used by a child, for example, will be harder to guard against.
Meta boss Mark Zuckerberg has previously argued he favours making app stores, like those operated by Apple and Google, check ages instead – but these important gatekeepers aren’t covered by the consultation.
Ofcom told me it will consult on the role of app stores in the protection of children, and that the government would have the power to introduce new duties for app stores if that work suggested it was necessary.
But that won’t happen until 2026.
Extra encryption
Another big problem in resetting the net: the growing use of end-to-end encryption.
The technology means only the sender and the receiver can read messages, see media or hear phone calls – even the app-makers cannot access the content.
Campaigners argue this makes it hard for the big firms to spot child abuse on their platforms, because the security that protects those messages, by design, cannot be broken even by the platforms themselves.
Ofcom has the power to compel companies to scan for child sexual abuse material in encrypted environments, but it will be the end of the year before it sets out how it expects to use these powers.
Some encrypted services, such as Signal and WhatsApp, have said they won’t comply with any measures that weaken the security and privacy of their systems.
WhatsApp’s owner, Meta, has even said it will expand the use of end-to-end encryption on its platforms – a decision many children’s charities and the government are unhappy about.
How operators of encrypted services will prevent teenage users from seeing other types of very harmful content, while keeping the security and privacy of their services intact, remains unanswered.
But if tech firms fail, the message from the watchdog is clear – the consequences will be serious, both in eye-watering fines for the firms and in the cost to the mental health and wellbeing of children.