Artificial intelligence is poised to surpass all previous technological achievements in its potential impact. That potential, however, demands an approach to AI governance that is not only responsible and effective but also accounts for all perspectives.
Following the UK Government’s recent AI Safety Summit and the Biden Administration’s Executive Order, both of which focus on multinational technology companies, Access Partnership’s latest report, ‘AI Risk Regulation: Cost of excluding the SME engine’, explores the dangers of excluding the perspectives and priorities of small and medium-sized enterprises (SMEs) from the development of AI regulation. Featuring interviews with four leading AI SMEs across healthcare, data analytics, behavioural insights, and visual interpretation, the report identifies several risks of abandoning a genuinely multi-stakeholder approach to AI governance: excluding SMEs sidelines a significant portion of the economy, as well as an important centre of innovation for new applications and use cases.
The report sets out an AI framework that factors in every layer of the ecosystem, highlighting three areas in need of regulation: 1) incorrect data; 2) incorrect output; and 3) incorrect usage. For each, the paper offers potential regulatory approaches, ranging from penetration testing to extensive third-party testing.
Access Partnership closely monitors AI developments across the globe, with an AI Policy Lab dedicated to cultivating in-depth and frank conversations surrounding AI.