The rise of deepfakes – AI-generated content that manipulates audio, video, or images to create realistic but false representations – has introduced new challenges in the areas of privacy, security, and misinformation. Deepfakes can be used for legitimate purposes (e.g., entertainment, art, and digital marketing) as well as for malicious activities, such as spreading fake news, impersonating individuals for fraud, or defaming victims.
The rapid proliferation of deepfake technology has exposed critical vulnerabilities, particularly on social media and other online platforms, where misinformation spreads quickly and at scale.
Regulatory approaches in APAC
Governments are increasingly recognising the serious threats posed by deepfakes, particularly in areas such as disinformation, election interference, and reputational harm. As a result, many countries in APAC have introduced laws that criminalise the creation and distribution of deepfakes, particularly those intended to deceive, defame, or harm individuals.
South Korea
South Korea has focused on criminalising the creation and distribution of malicious deepfake content, with particular emphasis on protecting individuals from deepfake pornography and disinformation. In late 2023, South Korea’s National Assembly amended the Public Official Election Act to ban the use of deepfakes and manipulated media in the 90 days before an election. Violations of this law can result in penalties of up to seven years in prison and fines of up to KRW 50 million. Moreover, the National Election Commission mandates that political campaigns disclose any use of AI-generated content.
Separately, under the Act on Special Cases Concerning the Punishment of Sexual Crimes, individuals who edit or process images, videos, or audio of a person’s face, body, or voice in a manner that may cause sexual desire or shame, without that person’s consent, can face severe penalties. This provision, introduced in 2020, also covers the publication of such edited or processed content, thereby capturing deepfakes.
In September 2024, South Korea amended the Act to increase the maximum sentence for this offence to seven years, regardless of intent to distribute, and to prohibit the purchase, storage, or viewing of such material, with penalties of up to three years’ imprisonment or a fine of up to KRW 30 million.
The government is also considering expanding enforcement powers to authorise undercover online investigations even when the victims are adults, allowing authorities to confiscate profits from deepfake pornography businesses, and imposing stricter fines on social media platforms that fail to prevent the spread of deepfakes and other illegal content.
Australia
Similarly, the Australian government has passed the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024, which introduces new criminal offences banning the sharing of non-consensual, sexually explicit deepfake material. The legislation imposes severe penalties, including up to six years in prison for sharing such material, and up to seven years for creating and sharing it.
The bill targets the harmful abuse of digitally created content, which is often used to degrade and humiliate victims, predominantly women and girls. The new law complements other government actions aimed at tackling gender-based violence and enhancing online safety.
Singapore
Singapore has taken a proactive regulatory stance with its newly passed law to combat deepfakes and other digitally manipulated content during elections.[1] The Elections (Integrity of Online Advertising) (Amendment) Bill prohibits the publication of digitally generated or manipulated online election advertising that realistically depicts a candidate doing or saying something they did not; the prohibition applies from the issuance of the writ of election until polls close.
The country’s approach emphasises transparency in online political advertising and the regulation of AI-generated content to prevent manipulation during sensitive periods like elections.
The need for clarity
The global landscape for deepfake regulation is rapidly evolving, with countries adopting diverse approaches to address the risks posed by this technology. As legislation continues to develop at a swift pace, ongoing vigilance and adaptability are essential.
Both digital platforms and policymakers have critical roles to play in navigating this complex terrain. Platforms must take proactive measures, such as implementing content verification and user consent protocols, to reduce the risks associated with deepfakes. Policymakers, on the other hand, need to craft clear, flexible legislation that can address both current and emerging challenges in this fast-moving sector. These combined efforts are vital for creating a safer and more accountable digital environment.
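To make the "content verification and user consent protocols" point concrete, the sketch below shows one way a platform might gate publication on declared provenance and consent signals. It is purely illustrative: the metadata fields (an "ai_generated" flag, an uploader consent declaration) and the decision labels are assumptions for the example, not drawn from any specific platform's systems or from the legislation discussed above.

```python
# Illustrative sketch only: a hypothetical pre-publication check a platform might
# run on uploaded media. Field names such as "ai_generated" and "subject_consent"
# are assumptions, not any real platform's or law's schema.

from dataclasses import dataclass, field


@dataclass
class MediaUpload:
    uploader_id: str
    provenance: dict = field(default_factory=dict)  # e.g. provenance metadata, if present
    subject_consent: bool = False                   # uploader-declared consent of the person depicted


def review_upload(upload: MediaUpload) -> str:
    """Return a hypothetical moderation decision for an uploaded media item."""
    ai_generated = upload.provenance.get("ai_generated", False)
    if ai_generated and not upload.subject_consent:
        # AI-generated content with no consent declaration is held for human review.
        return "hold_for_review"
    if ai_generated:
        # AI-generated content that is published carries a visible disclosure label.
        return "publish_with_ai_label"
    return "publish"


# Example: an AI-generated clip uploaded without a consent declaration is held back.
print(review_upload(MediaUpload(uploader_id="u123", provenance={"ai_generated": True})))
```

In practice such checks would sit alongside detection tooling and human moderation; the point of the sketch is simply that verification and consent can be encoded as explicit conditions rather than left to ad hoc review.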
A proactive response
Governments in APAC are introducing stringent regulations to address the privacy, security, and misinformation challenges posed by deepfakes. These laws are reshaping the digital landscape and setting new standards for platforms and businesses to follow. Contact Dr. Gayathri Haridas at [email protected] to understand how these regulations will impact your operations and take the necessary steps to minimise risks, avoid legal penalties, and protect your reputation.