WELLINGTON, New Zealand — Social media companies have removed or restricted about 4.7 million accounts identified as belonging to children in Australia since the country began enforcing a nationwide ban on platform use by those under 16, government officials said, marking the first major assessment of the law’s impact.
The figures were submitted to Australian regulators by 10 major platforms after the law took effect in December, amid concerns over the effects of harmful online environments on young people. The ban has triggered intense national debate over technology use, privacy, child safety, and mental health, while also drawing interest from governments abroad considering similar measures.
“We stared down everybody who said it couldn’t be done, some of the most powerful and rich companies in the world and their supporters,” Communications Minister Anika Wells told reporters on Friday. “Now Australian parents can be confident that their kids can have their childhoods back.”
Under the law, Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, X, YouTube, and Twitch can face fines of up to 49.5 million Australian dollars ($33.2 million) if they fail to take reasonable steps to remove accounts held by children under 16. Messaging services such as WhatsApp and Facebook Messenger are exempt.
Platforms are allowed to verify users’ ages through several methods, including requesting identification documents, using third-party age-estimation technology based on facial analysis, or inferring age from existing account data, such as how long an account has been active.
Australia’s eSafety Commissioner Julie Inman Grant said about 2.5 million Australians are between the ages of 8 and 15, and previous estimates suggested 84% of children aged 8 to 12 had social media accounts. While the total number of accounts across the 10 platforms was not disclosed, she described the reported figure of 4.7 million accounts “deactivated or restricted” as encouraging.
“We’re preventing predatory social media companies from accessing our children,” Inman Grant said.
The commissioner added that all 10 major companies covered by the ban had complied with reporting requirements and submitted their figures on time. She said enforcement efforts are now expected to shift toward preventing children from creating new accounts or bypassing the restrictions.
Officials did not provide a platform-by-platform breakdown. However, Meta, which owns Facebook, Instagram, and Threads, said in a blog post this week that it had removed nearly 550,000 underage accounts by the day after the ban took effect.
In the same post, Meta criticized the legislation, warning that smaller platforms not covered by the ban might not prioritize child safety. The company also noted that children can still be exposed to algorithm-driven content while browsing platforms, a concern that helped drive the law’s passage.
The ban has been widely supported by parents and child safety advocates. However, online privacy groups and some youth organizations have opposed it, arguing that vulnerable or geographically isolated teenagers often rely on online communities for support. Some young users have also said they were able to bypass age checks or received help from parents or older siblings to retain access.
Since Australia began debating the policy in 2024, other countries have explored similar approaches. Denmark’s government said in November it plans to introduce a social media ban for children under 15.
“The fact that in spite of some skepticism out there, it’s working and being replicated now around the world, is something that is a source of Australian pride,” Prime Minister Anthony Albanese said.
Opposition lawmakers have questioned the effectiveness of the measures, suggesting young people may be migrating to less regulated apps. Inman Grant said her office observed a spike in downloads of alternative platforms when the ban was introduced, but not a sustained increase in usage.
“There are no real long-term trends yet that we can point to, but we’re engaging,” she said.
The eSafety Commissioner also said her office plans to introduce what she described as “world-leading AI companion and chatbot restrictions” in March, though further details were not disclosed.