In celebration of the 20th edition of Safer Internet Day on 7 February 2023, the Fair Tech Institute has reviewed recent developments in online content moderation globally.
What is content moderation and why is it so important?
Content moderation is the management of content published on a particular platform. The term is most often used in connection with social media and other online platforms that host user-generated content (UGC), such as social sharing sites and discussion forums.
Social media is one of the easiest and fastest ways for people to share their personal views and stay informed of current events. Because users are free to create and share their own opinions, it is critical that platforms carry accurate information and that UGC is overseen and moderated to reduce offensive content and undesirable behaviour online, creating a safe environment for all – especially children and other vulnerable groups – to engage freely in information exchange.
Content moderation is also important for organisations. It ensures that the information found on their platforms aligns with the purposes for which those platforms were created, rather than being manipulated and exploited by adversarial actors in ways that may harm both the organisations’ and their clients’ reputations.
Recent developments in content moderation
Observation 1: Governments and private companies have been stepping up efforts to create content moderation guidelines and regulations that manage harmful behaviour and toxic content.
This is a positive signal that platforms and regulators are taking online harms seriously, attempting to ensure there are rules protecting more vulnerable populations online and preventing the spread of misinformation.
- Oct 2022 – India amended its 2021 Information Technology Rules, introducing new regulations for OTT platforms and establishing Grievance Appellate Committees that allow users to appeal against content moderation decisions made by social media platforms’ grievance officers, after many such grievances had been left unresolved by intermediaries.
- Nov 2022 – the EU Digital Services Act (DSA) entered into force, introducing content moderation requirements for online platforms and transparency obligations for intermediary services. Article 21 also establishes an out-of-court (OOC) dispute settlement mechanism through which users can contest content moderation decisions. The first deadline of 17 February 2023 requires online platforms to publish their average monthly active user numbers in the EU, ahead of fuller compliance obligations to follow.
- Nov 2022 – the UK announced major changes to its Online Safety Bill, imposing responsibilities on digital service providers to ensure that users in the UK are not exposed to illegal content and making child safety duties more explicit. The Bill is expected to receive royal assent and take effect sometime in 2024.
- Feb 2023 – TikTok introduced a ‘strike’ content moderation approach for its platform. Users gain a ‘strike’ when they violate existing rules, such as the Terms of Service (ToS), and strikes accumulate on their profile. Once a user exceeds a set number, their account may be suspended or placed on ‘probation’. Strikes expire after 90 days. The company has not announced further details on more severe strikes or on penalties for repeat offenders.
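To illustrate how a strike-based moderation policy of the kind TikTok describes might work mechanically, the following is a minimal Python sketch of a hypothetical strike tracker. The suspension threshold, the 90-day expiry window applied here, and all class and function names are assumptions for illustration only and do not reflect TikTok’s actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Assumed parameters for illustration only; TikTok has not published
# its actual thresholds or internal logic.
STRIKE_EXPIRY = timedelta(days=90)   # strikes roll off after 90 days
SUSPENSION_THRESHOLD = 3             # hypothetical strike limit

@dataclass
class Account:
    user_id: str
    strikes: list = field(default_factory=list)  # timestamps of strikes

    def add_strike(self, when: datetime) -> None:
        """Record a rule violation against this account."""
        self.strikes.append(when)

    def active_strikes(self, now: datetime) -> int:
        """Count only strikes issued within the expiry window."""
        return sum(1 for s in self.strikes if now - s < STRIKE_EXPIRY)

    def status(self, now: datetime) -> str:
        """Map the active strike count to an account state."""
        count = self.active_strikes(now)
        if count >= SUSPENSION_THRESHOLD:
            return "suspended"
        if count > 0:
            return "probation"
        return "in good standing"

# Usage: two strikes put the account on probation; once both expire,
# the account returns to good standing.
account = Account("user123")
account.add_strike(datetime(2023, 2, 10))
account.add_strike(datetime(2023, 2, 20))
print(account.status(datetime(2023, 3, 1)))   # probation
print(account.status(datetime(2023, 6, 1)))   # in good standing
```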
Observation 2: Content moderation is challenging to administer consistently across a platform, particularly where it conflicts with other existing legislation, regulations, or norms such as freedom of speech. Economic headwinds are also forcing many tech platform companies to downsize, with trust and safety teams among those being cut.
- 2019 and beyond – human content moderators on platforms such as Reddit and TikTok have reported post-traumatic stress disorder (PTSD) and a lack of adequate psychological support after having to manually review harmful content.
- Jan 2023 – In its effort to downsize – part of a broader organisational overhaul – Twitter counter-intuitively cut 50% of staff overseeing content moderation across its global offices.
- 31 Jan 2023 – the US state of Kansas is considering Senate Bill 50, which would assert government control over online speech by prohibiting companies from removing dangerous but legal content, such as extremist views. The Computer & Communications Industry Association (CCIA) has written to oppose the bill, arguing that it would violate federal law and the First Amendment.
Established in 2021, the Fair Tech Institute (FTI) is Access Partnership’s think tank that develops and provides substantive research on the myriad ways in which technology, business, government, and good governance intersect. Our focus is on providing evidence-based research and insight into questions around technology and governance. Our mission is to provide thought leadership, new ideas, and well-considered approaches to the digital opportunities and challenges that our world currently faces. We closely monitor tech, regulation, and policy developments globally.
If you’d like to learn more about content moderation risk management issues, please contact Cheryl Low at [email protected] or Lim May-Ann at [email protected].