21 January, 2025

Viable Online Age Verification Technologies and the Implementation of Age-Restricted Social Media Legislation

Australia’s Parliament amended its online safety legislation[1] on 10 Dec 2024, banning children under 16 from using social media (the “Amendment”). The bill’s objective is to help parents and children stay safe online and to protect young people from the social harms of social media.[2] The regulations will come into effect “no later than 12 months from the passage of the bill”, giving the affected social media platforms a year to put in place the systems needed to comply.

The Amendment covers social media platforms that allow end-users to connect and interact with one another. The legislation does not name specific platforms; however, Communications Minister Michelle Rowland has identified Snapchat, TikTok, Facebook, Instagram, and X (formerly Twitter) as falling within its remit.[3]

Global Regulations on Age-Restricted Social Media/Internet Use

Australia is not alone in setting up age restrictions for social media. Other countries around the world have similar regulations put in place over the years (please note this list is indicative, not exhaustive):

  1. Under the European Union’s General Data Protection Regulation (GDPR), Article 8 requires parental consent before the personal data of children under 16 may be processed,[4] so the working assumption is that any child under 16 in the EU needs parental consent to sign up for and use social media. Several member states, including Belgium, Germany, and Italy, have enacted or amended national data protection and privacy laws setting their own age thresholds for creating and using social media accounts: Belgium requires a social media user to be at least 13 years old, Germany allows 13-15-year-olds to use social media with their parents’ consent, and Italy requires parental consent for children under 14.[5]
  2. To curb cyberbullying and the harmful effects of social media on children, France required social media platforms to ensure parental consent was obtained for users under 15. Status: Enacted 29 Jun 2023.[6]
  3. Norway has a proposal to amend its Personal Data Act to increase the minimum age for social media use to 15 years old. Status: At proposal stage as of 24 Oct 2024.[7]
  4. Vietnam also has several age-based restrictions on internet use, such as a 2011 ban on gaming between 10pm and 8am to protect young people under 18 (“Cinderella Law”).[8] A 2017 law requires users to obtain permission from the parents of children under 16 before sharing the children’s information or photographs online.[9] Vietnam has also passed legislation limiting players under 18 to no more than 60 minutes per game.[10]
  5. South Korea likewise enacted a shutdown law in 2011 (its own “Cinderella Law”) to stop children under 16 from gaming between midnight and 6am; the law was abolished in 2021.[11]

Keen Interest from Asia Pacific Neighbours

  1. Following the announcement of the new legislation on 10 Dec 2024, Singapore stated a month later (10 Jan 2025) that it shared the same objectives as Australia in age-restricting social media access for young users, and that it was engaging its Australian counterparts to better understand the developments.[12]

If Singapore does put age restrictions on social media in place, they would follow close behind a similar age-restriction guideline released on 15 Jan 2025, the Code of Practice for Online Safety for App Distribution Services,[13] which requires App Distribution Services (ADS) to block children under 12 from downloading apps unsuitable for their age group.

  2. Indonesia has also expressed an interest in introducing similar minimum-age regulations for social media users, Minister of Communication and Digital Affairs Meutya Hafid shared on 14 Jan 2025.[14]

Implementation Challenges, Technical Solutions, and Options

One of the biggest questions for regulators, social media platforms, users – children and parents alike – academics, system integrators, and anyone with an interest in technology public policy is this: how exactly will the age restriction be implemented, particularly when requiring age verification might contravene personal data and privacy protection obligations?

This is where things get slightly complicated, because the Amendment’s section 63DB, “Use of certain identification material and services”, stipulates that the relevant platforms cannot:

  • Collect government-issued identification material.
  • Use an accredited service such as a Digital ID (which is itself governed by the Australian Digital ID Act 2024).[15]

With these privacy-driven prohibitions in place, what methods and mechanisms are available to platform providers to comply in a way that keeps age verification safe and secure?

Types of (Viable) Online Age Verifications

While the legislation does not specify the mechanics of the law, the Australian eSafety Commissioner has been tapped to implement and enforce it. No documentation specific to the law has been released to date, but eSafety Commissioner Julie Inman Grant has identified three main ways to verify age – hard identifiers (IDs), behavioural signals, and biometrics.[16]

In addition, the Commissioner has pointed in a public statement to existing research her team has done on the topic,[17] a key set of documents being the Roadmap for age verification and complementary measures to prevent and mitigate harms to children from online pornography.[18] The accompanying Age Verification Background Report in particular offers a strong review of the different age assurance technology interventions available, including an assessment of the technologies in an Australian context.[19]

The report reviews the main families of age assurance technology – among them hard identifiers (such as ID document checks), biometric age estimation, and age inference from behavioural signals – and weighs each one’s likely suitability for the Australian regulatory context.[20] Newer entrants continue to emerge alongside these, such as hand-gesture-based age assurance that promises fully anonymous age checks.[21]
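
As an illustration only (not drawn from the report or the legislation), the Python sketch below models these families as pluggable checks on a platform, screening out the methods that section 63DB bars platforms from relying on. All names, the confidence scale, and the decision rule are hypothetical.

from dataclasses import dataclass
from enum import Enum, auto

class Method(Enum):
    GOVERNMENT_ID = auto()          # hard identifier; s 63DB bars collecting this
    ACCREDITED_DIGITAL_ID = auto()  # accredited service; also barred by s 63DB
    FACIAL_AGE_ESTIMATION = auto()  # biometric age estimation
    BEHAVIOURAL_SIGNALS = auto()    # age inference from usage patterns

# Methods the Amendment prohibits platforms from relying on (s 63DB).
PROHIBITED = {Method.GOVERNMENT_ID, Method.ACCREDITED_DIGITAL_ID}

@dataclass
class AssuranceResult:
    method: Method
    over_16: bool      # the method's verdict
    confidence: float  # 0.0-1.0 on a hypothetical, estimator-specific scale

def allow_access(results: list[AssuranceResult], threshold: float = 0.9) -> bool:
    """Toy decision rule: grant access only if at least one permitted
    check asserts the user is over 16 with high confidence."""
    return any(
        r.method not in PROHIBITED and r.over_16 and r.confidence >= threshold
        for r in results
    )

# Example: facial age estimation returns "over 16" at 0.93 confidence.
print(allow_access([AssuranceResult(Method.FACIAL_AGE_ESTIMATION, True, 0.93)]))  # True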


Standards for Age Assurance Technologies

Another approach to implementation would be to let social media companies establish the mechanisms themselves, provided they adhere to recognised age assurance standards. These could include (this list is not exhaustive):

  1. ISO/IEC DIS 27566 Age Assurance Systems (currently at the enquiry stage with ISO/IEC members)[22]
  2. British Standards Institution (BSI) and Digital Policy Alliance Publicly Available Specification (PAS) 1296:2018, a code of practice for online age checking and the provision and use of online age check services.[23]
  3. Institute of Electrical and Electronics Engineers (IEEE) 2089.1-2024 Standard for Online Age Verification, a framework for the design, specification, evaluation, and deployment of online age verification systems.

No Single Solution

The implementation path ahead likely involves several approaches and methods being worked on in tandem, as no single solution will clearly deliver compliance with the regulation. Alongside the choice among age assurance technologies, for example, sits the question of how to store and share the resulting age assurances. This points to further technical implementation choices: storing a credential on the device or in a digital wallet, issuing a reusable and interoperable age token, and so on. One such token scheme is sketched below.
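
For instance, a reusable age token could be as small as a signed claim carrying only an “over 16” attribute and an expiry – no name, no birthdate – that a device or wallet presents to any platform able to check the issuer’s signature. The sketch below is a minimal, hypothetical illustration using Python’s standard library and a shared HMAC key; a production scheme would more likely use public-key signatures and a standardised, interoperable credential format.

import base64, hashlib, hmac, json, time

SECRET = b"issuer-demo-key"  # placeholder only; a real issuer would use asymmetric keys

def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _unb64(text: str) -> bytes:
    return base64.urlsafe_b64decode(text + "=" * (-len(text) % 4))

def issue_age_token(over_16: bool, ttl_seconds: int = 86400) -> str:
    """Issue a token asserting only an age attribute and an expiry."""
    claim = {"over_16": over_16, "exp": int(time.time()) + ttl_seconds}
    payload = json.dumps(claim, separators=(",", ":")).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return f"{_b64(payload)}.{_b64(sig)}"

def verify_age_token(token: str) -> bool:
    """Accept only an unexpired, correctly signed over-16 claim."""
    payload_b64, sig_b64 = token.split(".")
    payload = _unb64(payload_b64)
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(_unb64(sig_b64), expected):
        return False
    claim = json.loads(payload)
    return bool(claim["over_16"]) and claim["exp"] > time.time()

token = issue_age_token(over_16=True)
print(verify_age_token(token))  # True

Because the token discloses nothing beyond a single age attribute, it could in principle be stored once and reused across platforms without repeating the underlying verification.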

With these technologies in hand – and many more emerging – Australia will have 12 months to explore, examine, and implement its approach to what has been described as one of the world’s strictest internet crackdowns.[24]

The Security, Trust, and Data Policy Team are keeping a watching brief on digital trust and safety issues as they emerge. To find out more about the other trends we are observing in the cybersecurity and digital trust space, please contact Lim May-Ann at [email protected].

[1] https://www.aph.gov.au/Parliamentary_Business/Bills_Legislation/Bills_Search_Results/Result?bId=r7284
[2] https://www.pm.gov.au/media/social-media-reforms-protect-our-kids-online-pass-parliament
[3] https://www.bbc.com/news/articles/c89vjj0lxx9o
[4] https://gdpr-info.eu/art-8-gdpr/
[5] https://www.reuters.com/technology/what-countries-do-regulate-childrens-social-media-access-2024-11-28/
[6] https://www.lemonde.fr/en/france/article/2023/06/29/france-requires-parental-consent-for-under-15s-on-social-media_6039514_7.html
[7] https://www.biometricupdate.com/202410/norway-proposes-raising-social-media-age-limit-to-15-backed-by-age-verification-system
[8] https://olganon.org/comment/218396
[9] https://e.vnexpress.net/news/news/vietnam-steps-up-child-protection-with-new-internet-law-3585717.html
[10] https://e.vnexpress.net/news/business/companies/players-under-18-cannot-exceed-60-minute-playtime-per-game-4815249.html
[11] https://www.engadget.com/south-korea-gaming-shutdown-law-end-163212494.html
[12] https://www.straitstimes.com/singapore/politics/spore-in-talks-with-australia-over-social-media-ban-for-young-users
[13] https://www.sgpc.gov.sg/detail?url=/media_releases/imda/press_release/P-20250115-1&page=/detail&HomePage=home
[14] https://www.thejakartapost.com/indonesia/2025/01/14/indonesia-planning-minimum-age-limit-for-social-media-users-minister-says.html
[15] https://www.legislation.gov.au/C2024A00025/latest/text
[16] https://www.npr.org/2024/12/19/nx-s1-5231020/australia-top-regulator-kids-social-media-ban
[17] https://www.esafety.gov.au/newsroom/media-releases/esafety-statement-on-the-online-safety-amendment-social-media-minimum-age-act-2024
[18] https://www.esafety.gov.au/about-us/consultation-cooperation/age-verification
[19] https://www.esafety.gov.au/sites/default/files/2023-08/Age-verification-background-report.pdf?v=1737119681359
[20] https://www.esafety.gov.au/sites/default/files/2023-08/Age-verification-background-report.pdf?v=1737119681359
[21] https://www.biometricupdate.com/202501/borderage-promises-100-anonymous-age-assurance-with-hand-gesture-modality
[22] https://www.iso.org/standard/88143.html
[23] https://knowledge.bsigroup.com/products/online-age-checking-provision-and-use-of-online-age-check-services-code-of-practice?version=standard&tab=preview
[24] https://www.npr.org/2024/12/19/nx-s1-5231020/australia-top-regulator-kids-social-media-ban