Australia’s Parliament amended its Online Safety regulations[1] and on 10 Dec 2024 banned children under 16 from using social media (“Amendment”). The objective of the bill is to help parents and children stay safe online and to protect young people from social harms on social media.[2] The regulations will come into effect “no later than 12 months from the passage of the bill”, giving the affected social media platforms a year to put in place the systems needed to comply with the new rules.
The social media platforms that fall under this Amendment are those that allow end-users to connect and interact with one another. The legislation does not name specific platforms; however, Minister Michelle Rowland has identified Snapchat, TikTok, Facebook, Instagram, and X (formerly Twitter) as falling within its remit.[3]
Global Regulations on Age-Restricted Social Media/Internet Use
Australia is not alone in setting up age restrictions for social media. Other countries around the world have put similar regulations in place over the years (please note this list is indicative, not exhaustive):
- Under the European Union’s General Data Protection Regulation (GDPR), parental consent is required to process the personal data of children under 16 (Article 8),[4] so the working assumption is that any child under 16 in an EU country needs parental consent to sign up for and use social media. National data protection and privacy laws were enacted or amended in Belgium, Germany, Italy, and other EU member states, stipulating country-specific age requirements for creating and using social media accounts: Belgium requires a social media user to be at least 13 years old, Germany allows 13-15-year-olds to use social media with their parents’ consent, and Italy requires parental consent for children under 14.[5]
- To curb cyberbullying and the harmful effects of social media on children, France requires social media platforms to obtain parental consent for users under 15. Status: enacted 29 Jun 2023.[6]
- Norway has proposed amending its Personal Data Act to raise the minimum age for social media use to 15. Status: at proposal stage as of 24 Oct 2024.[7]
- Vietnam has several age-restriction laws for internet usage, such as a 2011 ban on gaming from 10pm to 8am to protect young people under 18 (a “Cinderella law”).[8] A 2017 law requires users to obtain permission from the parents of children under 16 before sharing the children’s information or photographs online.[9] Vietnam has also passed legislation limiting under-18s to 60 minutes of gaming per day.[10]
- South Korea also enacted a shutdown law in 2011 (likewise dubbed a “Cinderella law”) to stop children under 16 from gaming between midnight and 6am; the law was repealed in 2021.[11]
Keen Interest from Asia Pacific Neighbours
- Following the announcement of the new legislation on 10 Dec 2024, Singapore said a month later (10 Jan 2025) that it shared the same objectives as Australia in age-restricting social media access for young users, and that it was engaging its Australian counterparts to better understand the developments.[12]
If Singapore puts age restrictions on social media in place, they would follow close behind a similar age-restriction guideline released on 15 Jan 2025, the Code of Practice for Online Safety for App Distribution Services,[13] which requires App Distribution Services (ADS) to block children under 12 from downloading apps not suitable for their age group.
- Indonesia has also expressed interest in issuing similar minimum-age regulations for social media users, Minister of Communication and Digital Affairs Meutya Hafid shared on 14 Jan 2025.[14]
Implementation Challenges, Technical Solutions, and Options
One of the biggest questions for regulators, social media platforms, users (both children and parents), academics, system integrators, and anyone with an interest in technology public policy is this: how exactly will the age restriction be implemented, particularly when requiring age verification might contravene personal data and privacy protection obligations?
This is where things get complicated, because the Amendment’s section 63DB, “Use of certain identification material and services”, stipulates that the relevant platforms cannot:
- Collect government-issued identification material.
- Use an accredited service such as a Digital ID (which is itself governed by Australia’s Digital ID Act 2024).[15]
With these privacy-driven prohibitions in place, what methods and mechanisms are available to platform providers to comply while still verifying age in a safe and secure way?
Types of (Viable) Online Age Verifications
While the legislation does not specify the mechanics of the law, the Australian eSafety Commissioner has been tapped to implement and enforce it. No documentation specific to the law has been released to date, but eSafety Commissioner Julie Inman Grant has identified three main ways to verify age: hard identifiers (IDs), behavioural signals, and biometrics.[16]
In addition, the Commissioner has pointed in a public statement to existing research her team has done on the topic,[17] a key set of documents being the Roadmap for age verification and complementary measures to prevent and mitigate harms to children from online pornography.[18] The Age Verification Background Report in particular offers a strong review of the different age assurance technologies available, including an assessment of each from an Australian context.
The technologies assessed in the report are listed below, with the likelihood of each being suitable for the Australian regulatory context noted alongside:
| Age Verification Method/Technology | Likely Acceptable Implementation for Australia’s Age-Restricted Social Media Regulation |
| --- | --- |
| 1. Age-gating based on age self-declaration | Unlikely to be an acceptable implementation method, as it is easily circumvented by misrepresenting one’s age or birthday when signing up or making a declaration on a social media site. |
| 2. Account-based assurance (e.g., cross-authentication with another account) | A possible method, with limitations: there will likely be data privacy and protection concerns, as well as questions about whether it contravenes the second prohibition in section 63DB(2) on accredited services. |
| 3. Vouching for another person’s age | A possible method, with the same limitations: likely data privacy and protection concerns, and questions about whether it contravenes the second prohibition in section 63DB(2) on accredited services. It also raises the question of who vouches for the voucher, and how they were age-verified. |
| 4. Requiring ‘hard identifiers’ such as government-issued identity documents (ID) or digital versions of these documents, or drawing on new developments in the field of digital identity | Pending further information on implementation mechanisms, government-issued ID and accredited services are explicitly prohibited in the Amendment. However, alternative approaches using newer technological developments might be put in place; AgeChecked (UK) and Mastercard ID (Australia) are two hard-identifier systems reviewed and tested in the Background Report (p. 186).[19] |
| 5. Using biometrics and capacity testing to estimate age based on characteristics or aptitude | A possible method, with likely data privacy and protection concerns. Two biometric systems were tested in the Background Report (p. 176)[20]: Yoti and Privately. Yoti (a UK company) has been integrated into the social media app Yubo; it claims to delete all biometric data as soon as the age check is completed. The Background Report notes that the independent test did not deem Yoti’s age estimation software to be overly sensitive. Privately (a Swiss company) uses multiple forms of biometrics in its age estimations, including images of a user’s face, voice patterns, and writing; its deep-learning age estimation models run on the user’s device, meaning the biometric data never leaves the device. |
| 6. Artificial Intelligence (AI) profiling or inference models that estimate age from behavioural signals | A possible method, with likely data privacy and protection concerns. BorderAge, for example, uses hand gestures to determine whether a user is above a specific age threshold; its system is built on medical research indicating that limb movements differ as muscles and bones develop with age.[21] |
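The on-device pattern described for option 5 above can be sketched in a few lines. This is an illustrative assumption, not Privately's actual implementation: the key property is that raw biometric features are processed locally and only a boolean over/under-threshold signal crosses the trust boundary to the platform. The model here is a trivial placeholder so the sketch stays self-contained.

```python
# Hypothetical sketch of on-device age estimation: biometrics stay local,
# and the platform only ever receives a boolean over/under-16 signal.
from dataclasses import dataclass

AGE_THRESHOLD = 16  # minimum age under the Australian Amendment


@dataclass
class LocalEstimate:
    """Output of the on-device model; raw features never leave the device."""
    estimated_age: float


def run_local_model(features: list[float]) -> LocalEstimate:
    # Placeholder for a deep-learning age-estimation model running on the
    # user's device. Averaging the feature vector keeps the sketch runnable.
    return LocalEstimate(estimated_age=sum(features) / len(features))


def age_assurance_signal(features: list[float]) -> bool:
    """The only value transmitted to the platform: over threshold or not."""
    return run_local_model(features).estimated_age >= AGE_THRESHOLD


print(age_assurance_signal([14.0, 15.0, 13.0]))  # under threshold -> False
print(age_assurance_signal([21.0, 19.0, 23.0]))  # over threshold  -> True
```

The design choice worth noting is the narrow interface: because `age_assurance_signal` returns only a boolean, neither the estimated age nor any biometric data is available to the platform, which is what makes this class of approach attractive under the section 63DB prohibitions.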
Standards for Age Assurance Technologies
Another approach to implementation would be to let social media companies establish the mechanisms themselves, provided they adhere to certain age assurance standards. These could include (this list is not exhaustive):
- ISO/IEC DIS 27566 Age Assurance Systems (currently at the enquiry stage with all ISO/IEC members)[22]
- British Standards Institute (BSI) and Digital Policy Alliance Publicly Available Specification (PAS) 1296: 2018, a Code of Practice for online age checking, provision and use of online age check services.[23]
- Institute of Electrical and Electronics Engineers (IEEE) 2089.1-2024 Standard for Online Age Verification, a framework for the design, specification, evaluation, and deployment of online age verification systems.
No Single Solution
The implementation path ahead likely involves several approaches and methods being worked on in tandem, as there is no single solution that will clearly provide compliance with the regulation. For example, alongside the choice among these age assurance technologies sits the question of how to store and share the resulting age assurances. This points to further technical implementation options: storage on a device or in a digital wallet, a reusable and interoperable age token, and so on.
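The reusable age-token idea mentioned above can be sketched as a signed claim: an age assurance provider attests once that the holder is over 16, and any platform that trusts the provider can verify the token without ever seeing an ID document or date of birth. Everything here is an illustrative assumption, not a specification; a production system would use asymmetric signatures and revocation rather than the shared-secret HMAC used to keep this sketch self-contained.

```python
# Minimal sketch of a reusable, interoperable "age token" (hypothetical
# scheme): a provider signs an over-16 claim, platforms verify it.
import base64
import hashlib
import hmac
import json

PROVIDER_KEY = b"demo-shared-secret"  # illustrative; real systems use PKI


def issue_age_token(over_16: bool, issued_to: str) -> str:
    """Provider side: sign the claim once, after whatever age check it runs."""
    claim = json.dumps({"sub": issued_to, "over_16": over_16}, sort_keys=True)
    sig = hmac.new(PROVIDER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    envelope = json.dumps({"claim": claim, "sig": sig})
    return base64.b64encode(envelope.encode()).decode()


def verify_age_token(token: str) -> bool:
    """Platform side: check the signature, then read the over-16 claim."""
    payload = json.loads(base64.b64decode(token))
    expected = hmac.new(
        PROVIDER_KEY, payload["claim"].encode(), hashlib.sha256
    ).hexdigest()
    if not hmac.compare_digest(expected, payload["sig"]):
        raise ValueError("invalid age token")
    return json.loads(payload["claim"])["over_16"]


token = issue_age_token(over_16=True, issued_to="device-1234")
print(verify_age_token(token))  # True
```

The appeal of this pattern in the Australian context is data minimisation: the platform learns a single boolean and the provider's attestation, not the user's identity documents, so the age check is reusable across platforms without repeated ID collection.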
With these technologies in hand, and many more emerging, Australia will have 12 months to explore, examine, and implement its approach to what has been termed “one of the world’s strictest internet crackdowns”.[24]
The Security, Trust, and Data Policy Team is keeping a watching brief on digital trust and safety issues as they emerge. To find out more about the other trends we are observing in the cybersecurity and digital trust space, please contact Lim May-Ann at [email protected].