
Britain’s media and privacy watchdogs have issued a stark warning to the world’s largest social media companies, urging them to do far more to prevent children from accessing platforms meant for older users.
In a joint push announced Thursday, the UK’s communications regulator Ofcom and its data protection authority, the Information Commissioner’s Office (ICO), said major platforms are failing to properly enforce their own minimum-age policies. The regulators warned that tougher enforcement could follow if companies do not act quickly.
The intervention comes as the government considers stricter rules on children’s social media use, including a potential ban for users under 16 — an approach similar to recent measures adopted in Australia.
Regulators alarmed by algorithms and harmful content
Officials say they are increasingly concerned about algorithm-driven feeds that can expose young users to harmful, addictive or inappropriate material.
“These online services are household names, but they’re failing to put children’s safety at the heart of their products,” said Melanie Dawes, chief executive of Ofcom.
She warned that companies must move quickly to strengthen safeguards or face regulatory action.
Platforms ordered to improve age checks
As part of the latest rollout of the UK’s Online Safety Act, Ofcom has instructed major platforms to demonstrate by April 30 how they will better protect minors.
The companies targeted include platforms operated by Meta, such as Facebook and Instagram, as well as Roblox, Snapchat, ByteDance-owned TikTok and Alphabet’s YouTube.
Regulators want the companies to introduce stronger age-verification systems, prevent strangers from contacting children, create safer content feeds and avoid testing new features on underage users.
Push for “modern” age-verification technology
Separately, the ICO issued an open letter urging the same platforms to adopt advanced age-assurance technologies to stop children under 13 from accessing services not designed for them.
“There’s now modern technology at your fingertips, so there is no excuse,” said Paul Arnold of the ICO.
Industry pushback
A spokesperson for Meta said the company already uses artificial-intelligence tools to detect users’ ages and automatically places teenagers in accounts with additional protections. The company argued that age verification should happen at the app-store level, so families don’t have to repeatedly provide personal data across multiple services.
Meanwhile, a spokesperson for YouTube said the platform already offers age-appropriate experiences and expressed surprise that Ofcom was shifting away from a “risk-based” regulatory approach.
Companies including TikTok, Snapchat and Roblox did not immediately comment.
Heavy fines possible
Under the Online Safety Act, Ofcom can fine companies up to 10% of their global revenue for failing to comply. The ICO can impose penalties of up to 4% of worldwide annual turnover.
The privacy watchdog has already shown it is willing to act. Last month it fined Reddit nearly £14.5 million for failing to introduce meaningful age checks and unlawfully processing children’s data.
The regulators say their message to tech companies is clear: protect children online — or face serious consequences.

Photo by Nokia621, Wikimedia Commons.