Twenty years of events at SXSW have focused on the next "big idea": unicorn startups, hit products, new music for films. But the festival also puts the issues raised by today's ideas under a microscope. This year, it was clear that privacy is at the forefront of technology debates, appearing in keynote speeches and televised town halls with US presidential candidates.
Leading the charge, Access Partnership's Head of Data Policy and Trust, Laura Sallstrom, was invited to speak at several high-level sessions on global privacy, starting with the EU House's annual privacy breakfast. Next, Sallstrom moderated an official panel, Privacy Police: Polarised Approaches to Global Data, on diverging privacy regimes around the world and where they are heading. Among the examples she pointed to were India's and China's shifts towards restricting the movement and use of data, and the debate in the US about introducing a comprehensive privacy framework.
In response, former US trade representative Ambassador Robert Holleyman provided insights on other approaches in Asia: Japan's role in exporting balanced privacy norms and standards, as well as the role of the Asia-Pacific Economic Cooperation Cross-border Privacy Rules (APEC CBPRs) system in facilitating trust amongst industry and member governments. Peter Fatelnig, Minister-Counsellor for Digital Economy Policy for the EU delegation to the US, represented the EU's privacy approach, describing the central tenets of the General Data Protection Regulation (GDPR) and the EU's intent to spread its values and principles across the globe.
For the companies that have to work inside these differing approaches, Riccardo Masucci, Global Director of Privacy Policy at Intel, stressed the importance of being able to flexibly access and move data to develop new technologies like artificial intelligence while protecting individuals and their rights. In his view, data sovereignty-led approaches in China and India risk undermining that flexibility.
Masucci notably presented Intel's proposal for US federal privacy legislation, the first industry-led attempt to build a US privacy regime aimed at innovative and ethical use of data.
As more countries develop, promote, and export their own privacy principles and regimes, one thing that all four panellists agreed on was that future privacy policies — in order to promote competition, consumer fairness, innovation and economic growth — must be interoperable and avoid fragmentation. But, with different approaches on display throughout the panel, that’s easy to say and hard to achieve.
Bias in Austin – SXSW’s Artificial Intelligence Debate
Blockchain was the flavour of the month during the last SXSW, but 2019’s buzzword was artificial intelligence (AI).
Dozens of panels focused on AI, from how small- and medium-sized businesses can harness its potential, to the use of AI in HR processes, or in ending them altogether through the potential displacement of much of the world's workforce. One concern that contributors kept coming back to, though, was bias.
How much do the life experiences, beliefs and values of the engineers and developers behind algorithms affect the build, data-set use and operation of the systems they create? Sasha Moss, Technology Policy Manager at R Street Institute, a US think tank, moderated an all-female panel to dive into these questions. She argued that we need to rethink both the way algorithms are crafted and the influences and inputs of the individuals who write them, citing examples such as racist facial recognition and the proliferation of sexist language.
Another panel presented outlooks on AI bias from a religious perspective and discussed how different faiths are responding, and should respond, to societal shifts brought on by the adoption of AI. The discussion also touched on how faith may shape the construction of AI biases, as well as the need to prevent AI systems from discriminating on the basis of faith.
The AI debate throughout SXSW was timely indeed. Numerous governments around the globe have begun to develop national AI strategies, as well as policies and guidance on the ethical use of the data that feeds into machine learning and AI solutions, such as Hong Kong's recent Ethical Accountability Framework. Deeper consideration of the ethical implications, and new approaches to addressing them, are needed if the public is to trust AI-based technologies and the services they enable.
Author: Filip Pacyna, Policy Analyst, Access Partnership
Listen to the full recording of the Privacy Police: Polarised Approaches to Global Data panel here.