At Access Partnership, we recognize the importance of balancing the interests of consumers, citizens, and governments with enabling sustainable market access for technology leaders. We call it “Fair Tech.”
In that regard, we respectfully present the first in what we hope will be a series of primers examining technologies and their impact on society. Inspired by the non-partisan work of the Congressional Research Service, this edition focuses on facial recognition and its impact on race, social justice, diversity, and inclusion, noting the challenges of deploying technologies whose use can be problematic. Topics planned for future reports in the series include “AI and Predictive Policing” and “Social Media and Hate Speech.”
The purpose of this series is to inform and equip policy professionals who may be drawn into top-level conversations or newly established committees dedicated to issues such as race, diversity, and inclusion. Government affairs teams have a crucial role to play as an indispensable liaison between the state and corporate leadership.
Access Partnership is positioned to assist companies grappling with these very real and very important issues. Thank you for reading.
- Police use of facial recognition technology has scaled significantly in recent years. The technology has long drawn on an array of public-sector facial image data sets, while off-the-shelf private solutions built on publicly gathered datasets are increasingly available and in use.
- While law enforcement asserts that facial recognition is a key tool that eases its work, critics point to a lack of transparency in the technology’s use, systematic inaccuracies, due process questions, and invasions of privacy. Targeting individuals in specific communities, or those exercising certain speech rights, risks a dangerous unbalancing of the state/citizen relationship.
- Studies of facial recognition performance point to systematic errors in identification, which are especially prevalent for persons of color and reinforce the disproportionate policing of individuals and communities of color.
- Policy-makers and stakeholders have put forward many different solutions to mitigate racial inequities. At a technical level, these may target sources of bias in the operation of the technology. However, many also prohibit or restrict its use by law enforcement, or create new legal safeguards governing when and how the technology can be used.
- While many in law enforcement seek to facilitate continued deployment of facial recognition, an array of stakeholders from diverse perspectives – including both left and right voices in Congress and civil society, rights advocates, and tech companies – have coalesced around implementing temporary or permanent bans on uses of facial recognition technology.
- Several tech companies have voluntarily sacrificed revenue to contain blowback from harmful deployments of facial recognition technology. However, unless a more permanent regulatory solution is developed, this voluntary restraint may evaporate.