Australia made global headlines when it became the first country to enforce a hard minimum age for social media access. As of December 10, 2025, platforms operating in the country were legally required to take “reasonable steps” to prevent children under 16 from holding accounts. Three months later, the Australian eSafety Commissioner has released its first compliance report — and the findings should concern regulators, parents, and cybersecurity professionals everywhere.
The short version: platforms removed over 4.7 million age-restricted accounts in the initial sweep. But a substantial number of Australian children under 16 still have accounts, can still create new ones, and in some cases have been actively helped by the platforms themselves to circumvent the very restrictions those platforms are supposed to enforce.
What the Law Actually Requires
The Online Safety Amendment (Social Media Minimum Age) Act 2024 — colloquially called the SMMA — doesn’t ban children outright. It places legal obligations on covered platforms to take reasonable steps to prevent under-16s from holding accounts. What’s “reasonable” isn’t prescriptively defined, which was intentional: eSafety wanted flexibility for platforms to implement layered, privacy-preserving approaches suited to their specific architecture.
The 10 platforms eSafety identified as age-restricted: Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, Twitch, X (formerly Twitter), and YouTube. Penalties for systemic non-compliance can reach AUD $49.5 million per platform. Neither children nor parents face penalties for circumvention.
On March 25, 2026 — just days before this compliance report dropped — the Minister for Communications registered a new legislative rule tying the definition of “age-restricted social media platform” more tightly to harmful design features. Specifically, platforms now also need to have either a recommender feature (algorithmic content selection) or logged-in features like endless feeds, feedback mechanisms, or time-limited content to qualify. All 10 currently identified platforms meet one or both of these new conditions.
The Numbers: Progress, But Not Enough
eSafety’s pulse survey of 898 parents conducted in late January 2026 found that account ownership among 8–15 year olds dropped from 49.7% to 31.3% after the law took effect. That’s meaningful movement. But dig into the retention numbers and the picture darkens considerably.
Of parents whose child had an account before December 10, roughly two in three reported their child still had an account on Facebook (63.6%), and close to seven in ten on Instagram (69.1%), Snapchat (69.4%), and TikTok (69.3%). YouTube fared better: about half (48.5%) of children retained accounts there.
The most common reason children lost their accounts? Platform-led deactivation (43.6% of cases). But the most common reason they kept their accounts? The platform never asked them to verify their age — cited by 66.8% of parents whose children still had accounts.
That’s the core indictment buried in the data: the platforms weren’t actively enforcing anything. They were waiting to be caught.
Four Compliance Failures Worth Understanding
eSafety’s report identifies four key observations about how platforms have failed — not just failures of omission, but in some cases active design choices that undermine the law’s intent.
1. Encouraging Kids to Game the Age Check
At least one platform sent notifications to users who had already self-declared as under 16, inviting them to undergo age assurance in case they had “entered their age incorrectly.” The framing was plausibly deniable — maybe some users really did enter a wrong birthday — but the practical effect was an open invitation for every 14- and 15-year-old on the platform to try their luck.
The check offered? Facial age estimation — a technology the government’s own Age Assurance Technology Trial found has elevated error rates near the 16-year threshold. Platforms that sent these invites to self-declared minors and offered facial estimation as the verification method knew exactly what the false-positive rate would look like. This wasn’t an oversight. It was a policy decision.
2. Letting Kids Retry Until They Pass
Some platforms allowed users to attempt the same age assurance method repeatedly. Best practice from age assurance vendors recommends no more than five attempts. Some platforms were permitting attempts in the double digits.
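To see why a retry cap matters, here is a minimal back-of-the-envelope sketch. The per-attempt false-accept rate is an illustrative assumption, not a figure from the report or the Age Assurance Technology Trial; the point is simply that even a modest per-attempt error rate compounds quickly when a user can keep trying.

```python
# Minimal sketch: how unlimited retries erode an age check.
# Assumes each facial-age-estimation attempt has a fixed, independent chance
# of wrongly passing an underage user (the per-attempt false-accept rate).

def pass_probability(per_attempt_false_accept: float, attempts: int) -> float:
    """Probability an underage user passes at least once across `attempts` tries."""
    return 1 - (1 - per_attempt_false_accept) ** attempts

# Illustrative numbers only; real error rates near the 16-year threshold
# vary by vendor and were not published per platform.
for n in (1, 5, 12):
    print(f"{n:>2} attempts -> {pass_probability(0.10, n):.0%} chance of slipping through")
# Output:
#  1 attempts -> 10% chance of slipping through
#  5 attempts -> 41% chance of slipping through
# 12 attempts -> 72% chance of slipping through
```

Under these assumptions, even the vendor-recommended cap of five attempts roughly quadruples the single-attempt error rate; double-digit retries make the check close to a formality.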
Worse, even after a parent reported an account as belonging to an under-16 user, some platforms responded by triggering another facial age check — the same check the child had already gamed. No escalation to a more robust method. No ID verification. Just another spin of the same wheel.
The regulatory guidance is explicit: once a user has already used one form of age assurance and there’s reason to suspect they’re underage, platforms should escalate to a more robust method — bank-based checks, government ID verification, or similar. Some platforms appear to be ignoring this entirely.
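For concreteness, here is a minimal sketch of what that escalation ladder could look like in code. The three rungs and their names are hypothetical, chosen only to mirror the guidance's logic (self-declaration, then facial estimation, then a document- or bank-based check); the Act and the guidance do not mandate these specific methods or this structure.

```python
# Sketch of an escalation policy: never re-run a check the user has already
# passed once there is credible reason to suspect they are underage.
from enum import IntEnum
from typing import Optional

class AgeAssurance(IntEnum):
    SELF_DECLARATION = 0        # birth date typed at sign-up
    FACIAL_ESTIMATION = 1       # ML age estimate from a selfie
    DOCUMENT_OR_BANK_CHECK = 2  # government ID or bank-based verification

def next_check(already_passed: AgeAssurance, underage_report: bool) -> Optional[AgeAssurance]:
    """On a credible underage report, escalate past the method already passed."""
    if not underage_report:
        return None  # no new signal, no new check required
    if already_passed >= AgeAssurance.DOCUMENT_OR_BANK_CHECK:
        return None  # strongest method already applied; route to human review
    return AgeAssurance(already_passed + 1)

# The failure mode the report describes is the opposite: after a parent report,
# some platforms simply re-ran FACIAL_ESTIMATION, the check already gamed.
assert next_check(AgeAssurance.FACIAL_ESTIMATION, underage_report=True) == AgeAssurance.DOCUMENT_OR_BANK_CHECK
```

The point is not the implementation but the invariant: a report of suspected underage use should never be resolved by re-running the same check that was already defeated.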
3. Reporting Pathways That Don’t Work
Parents trying to report an underage account face what eSafety describes as significant friction. Some platforms require reporters to have an account themselves before they can file a report. Some require documentation proving parental relationship — including, in at least one documented case, a letter from a lawyer.
There’s a bitter irony here: these same platforms don’t require the same level of proof when a parent vouches that their child is 16 or older and should be allowed access. The asymmetry is hard to explain as anything other than a choice.
Even when reports are filed, some platforms respond by reviewing the account for surface-level evidence of underage behavior — and if they can’t find anything obvious, they leave the account active rather than triggering an age check.
4. Self-Declaration Still Rules at Sign-Up
Perhaps most damaging: most platforms have not deployed meaningful age assurance at the point of account creation. A new user can enter a false birth date, declare themselves 16, and proceed without any additional check. The age inference models that would eventually flag suspicious account behavior — where they exist at all — can take months to produce high-confidence signals.
Some platforms haven’t opted into Apple’s Age Range API, which provides age range data associated with the device being used. This is a low-friction, privacy-preserving signal that multiple age assurance vendors and eSafety’s own guidance recommend. Some platforms advocated for years for app stores to take a stronger role in age verification — and then, when the API became available, declined to use it.
The Enforcement Shift
Alongside the report, eSafety Commissioner Julie Inman Grant announced a formal shift from compliance monitoring to an enforcement posture against five platforms: Facebook, Instagram, Snapchat, TikTok, and YouTube. Investigations are underway, and eSafety aims to finalize decisions on enforcement action by mid-2026.
The Commissioner was direct: “These platforms have the capability to comply today and we certainly expect companies operating in Australia to comply with our safety laws. They can choose to do so or face escalating consequences, including profound reputational erosion with governments and consumers globally.”
Enforcement options available to eSafety include platform provider notifications, enforceable undertakings, infringement notices, court-ordered injunctions, and civil penalties of up to AUD $49.5 million.
What This Means Beyond Australia
The cybersecurity community should be paying close attention to this case, not because Australia's approach is perfect, but because the compliance failures documented here expose a more systemic problem.
Age assurance as currently deployed by major platforms is security theater. The same companies that have built extraordinarily sophisticated behavioral analytics engines for advertising purposes claim they cannot reliably infer whether an account holder might be 14 years old. The same companies that detect unusual login patterns across continents within seconds struggle, even over a period of weeks, to flag an account whose activity patterns match a minor's.
The SMMA compliance failures reveal a familiar pattern: minimum viable compliance until enforcement forces more. Platforms removed accounts in bulk when the law took effect — 4.7 million is a real number — but the architecture of their ongoing systems was designed to pass regulatory scrutiny, not to actually keep children out.
For cybersecurity professionals involved in vendor risk assessments, privacy compliance, or platform security audits, this report is a useful case study in distinguishing between compliance documentation and operational reality. The gap between what platforms reported to eSafety and what eSafety’s independent testing found is precisely the gap that good security assessments are designed to surface.
For policy professionals and compliance officers watching from other jurisdictions: the UK, Canada, and the EU are all developing or refining similar frameworks. Australia’s experience is already shaping those conversations. The key lesson so far is that principles-based compliance requirements without sharp enforcement timelines tend to produce reactive, minimum-effort responses.
The Honest Accounting
Australia’s social media minimum age law is a genuine regulatory experiment at scale, and eSafety deserves credit for publishing a transparent, technically detailed compliance report rather than a PR document.
The preliminary data from parents suggests the law has had some real effect on account ownership. Anecdotal reports from educators include something genuinely surprising: some students appear relieved to be off social media, which suggests the framing of these platforms as overwhelmingly desirable to children is at least partially a construct.
But the systemic picture is one of platforms treating compliance as a negotiation rather than an obligation. Three months in, the enforcement phase is just beginning. The mid-2026 decisions on civil penalties will be the real test of whether this law has teeth — or whether it becomes another example of a well-intentioned regulation that major platforms outlasted through delay, minimum compliance, and legal challenge.
Sources: eSafety Commissioner Social Media Minimum Age Compliance Update, March 2026; eSafety Commissioner media release, March 31, 2026; Online Safety Amendment (Social Media Minimum Age) Act 2024; eSafety Commissioner parent pulse survey.



