Content moderation in geographically, culturally, and religiously diverse environments: how do we balance free speech and user protection?
19 September, 09:00 to 18:00, County Hall, Belvedere Rd, London SE1 7PB
About the session: Social media and streaming platforms operate as a “democratic” town square, a central public arena for dialogue and debate amongst citizens, organisations, and governments. But they are also global platforms, making it difficult for individual jurisdictions to manage harmful content and false information.
Regulators are responding to these challenges: in the EU with the Digital Services Act, in Australia with the Online Safety Act 2021, and, somewhat less successfully, in the US, where the “Journalism Competition and Preservation Act” was dropped. In the Middle East, where 79% of Arab nationals aged 18 to 24 get their news from social media, governments are acutely aware of social media’s impact on political stability, fuelling attempts by political leaders to exert greater control over both online content and cross-border data flows.
If we accept that, broadly speaking, there is a consensus on how to deal with harmful content, is a similar outcome possible for false information, misinformation, and fake news? What examples of differing cultural norms are we seeing represented on the internet in the form of digital content? How should we handle “out of bounds” (OB) markers around content that some cultures find taboo and others celebrate?
Digital content also reflects, represents, and promotes markets for culture. Are we realising the value of wide and diverse global cultural content creation and aggregation, and what are we doing to advance it? Or is there a risk of a monoculture emerging, set by some countries but not others? Can content moderation provided solely by AI achieve the right outcome, and how do we account for algorithmic bias?
Panel: