Impact Assessments: Supporting AI Accountability

Artificial intelligence’s applications span everything from healthcare and education to manufacturing and energy.

Automation is reshaping strategies across industries, but the rapid pace of deployment raises as many issues as these technologies claim to solve.

Law enforcement, mortgage lending, and video-based hiring are just some of the areas where the limits of current systems have been exposed, with significant consequences. Tools like deepfake software threaten to accelerate the spread of disinformation even further, inflaming geopolitical tensions and jeopardising national security.

The world stands at a regulatory crossroads. For all that algorithms have promised to augment our decision-making, judgements on how to mitigate the negative impacts of AI while preserving its value rest in human hands alone. Where do we go from here?

Access Partnership’s latest report, ‘Impact Assessments: Supporting AI Accountability & Trust’, helps to guide this debate by providing an overview of existing processes, as well as recommendations on how to proceed. Developed in partnership with Workday, the report outlines three main accountability frameworks – algorithmic impact assessments (AIAs), third-party auditing, and conformity assessments – to explain why AIAs should become the global standard.

To date, a patchwork of regulatory responses has emerged. New York and California have proposed mandatory third-party auditing of AI systems. Companies in Virginia, Colorado, and Connecticut must conduct ‘data protection assessments’ for processing activities that pose heightened risks to consumers. The EU’s landmark Artificial Intelligence Act will require conformity assessments for ‘high-risk’ systems.

Our report analyses existing and emerging frameworks in detail, drawing on Access Partnership’s industry-leading expertise to identify the limits of each in terms of technical standards and adaptability.

Policymakers hold the key to harmonising AI’s future. Balancing transparency with the need to harness automated technologies for economic development is essential to societal progress. To learn how to do both, download our report.
