First Principles and Regulating Artificial Intelligence

Reframing the Narrative Around Artificial Intelligence

As human beings, we are drawn to pessimistic stories because of our inherent negativity bias. The same can be said of the media, which tends to portray artificial intelligence (AI) with a fatalistic focus on job displacement and ethical violations. This negative framing often distorts how we think about AI and prevents us from engaging in more meaningful discussions about how to capture its benefits.

Economic challenges create AI opportunities

As the global economy slows and labour supply continues to fall, AI offers efficiency and productivity gains. Automating routine tasks can free workers to move into higher-value work, while improved forecasting can reduce waste and enable personalised services. Chinese clothing retailer Shein, for example, uses AI not only to generate new clothing designs but also to manufacture in near real time based on demand forecasts.1

The International Monetary Fund has forecast a global growth slowdown from 3.4% in 2022 to 2.9% in 2023,2 yet the UAE government aims to use AI to boost its GDP by 35% (USD 96 billion), forming the UAE Council for AI to integrate its use within government departments and the education sector.3 Elsewhere, the World Health Organization (WHO) and the International Telecommunication Union (ITU) are developing a framework for the standardisation of AI in healthcare. AI has the potential to shape virtually every sector and market in the near future. Whether that is a good thing rests with those guiding its implementation.

Governments worldwide are already contemplating how to manage AI risks, with many establishing voluntary principles and guidelines ahead of stricter frameworks. The European Union (EU) has proposed its AI Act, which aims to introduce a common regulatory and legal framework focused on trust and safety. Singapore, meanwhile, has developed a Model AI Governance Framework that it is promoting internationally. 

Developing solutions

Access Partnership is actively engaged in shaping and analysing these debates, helping organisations to understand and influence policy on various use cases. Be it as a member of the EU AI Alliance, on the ground at ITU conferences, or in advising clients on the growing appetite for governance frameworks in Asia Pacific, we closely monitor all relevant policy developments. 

One strategy worth considering for regulating AI is a first principles approach. This entails breaking the risks and challenges of AI down to their most basic elements and building a regulatory framework around them. Establishing these principles first makes it easier to identify specific risks and implement the safeguards needed to minimise harmful outcomes.

Different governments will have different interpretations of what principles are important. These could include ensuring human safety, privacy, accountability, rule of law, fairness and non-discrimination, transparency, promoting innovation and growth, and upholding ethical and social values, among others. 

The roadmap to responsible regulation

The first step for governments will be to identify what these principles are, why they matter, and how to define them. A general baseline can then protect them; for example, requiring that all AI applications adhere to personal data protection laws and cannot be used to endanger human lives.

The next step is to identify specific risks based on how AI is applied, such as generating biased results that lead to discriminatory outcomes. Such outcomes have already been observed in law enforcement, mortgage lending, and video-based hiring. By identifying these risks, we can then examine the cause (e.g., the use of biased data sets), understand why such data was used, and take steps to address it (e.g., drawing on a wider pool of data or removing biased markers).
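To make this concrete, the short sketch below illustrates the kind of check described above: comparing approval rates across groups in a hypothetical lending dataset and flagging any group that falls below a chosen disparity threshold. The data, field names, and 0.8 threshold are illustrative assumptions, not a prescribed methodology.

```python
# A minimal sketch, assuming a hypothetical lending dataset: compare approval
# rates across groups and flag any group whose rate falls below a chosen
# fraction of the best-performing group's rate.
from collections import defaultdict


def approval_rates(records, group_key="group", outcome_key="approved"):
    """Return the approval rate observed for each group in the dataset."""
    totals, approved = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record[group_key]] += 1
        approved[record[group_key]] += int(record[outcome_key])
    return {g: approved[g] / totals[g] for g in totals}


def flag_disparities(rates, threshold=0.8):
    """Flag groups whose rate is below `threshold` times the highest rate."""
    benchmark = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * benchmark}


# Hypothetical outcomes from a lending model
sample = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]
rates = approval_rates(sample)
print(rates)                    # roughly {'A': 0.67, 'B': 0.33}
print(flag_disparities(rates))  # {'B': 0.33...} -> flagged for review
```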

Following this process, a tailored regulatory approach can be designed to address specific risks and challenges. This may include specific technical standards, stricter testing requirements and thresholds, disclosure obligations, or monitoring and oversight mechanisms to identify unexpected risks as they emerge. Access Partnership’s recent report on algorithmic impact assessments detailed the broad regulatory variance that currently exists in the field of AI accountability, highlighting the complications that a lack of cross-regional standardisation can cause for business.4
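As an illustration of the monitoring and oversight mechanisms mentioned above, the sketch below checks a deployed model’s observed error rate against a threshold of the kind an impact assessment might specify, and logs an alert when it is breached. The metric, threshold, and batch figures are illustrative assumptions.

```python
# A minimal sketch of an ongoing oversight check: compare a deployed model's
# observed error rate against an agreed threshold and log an alert on breach.
# The 5% threshold and batch figures below are illustrative assumptions.
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_oversight")


def check_error_rate(errors: int, predictions: int, max_error_rate: float = 0.05) -> bool:
    """Return True if the observed error rate stays within the agreed threshold."""
    observed = errors / predictions if predictions else 0.0
    if observed > max_error_rate:
        logger.warning(
            "Error rate %.3f exceeds agreed threshold %.3f - escalate for review",
            observed, max_error_rate,
        )
        return False
    logger.info("Error rate %.3f within agreed threshold %.3f", observed, max_error_rate)
    return True


# Example: a weekly batch of 2,000 predictions with 130 flagged errors
check_error_rate(errors=130, predictions=2000)  # 0.065 > 0.05 -> escalated
```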

Facilitating engagement

Of critical importance is the need to encourage responsible innovation by creating the right culture (or, where necessary, incentives) for organisations to invest in ethical and responsible AI development. Regular coordination and communication across industry, government, academia, and civil society are also crucial to uncovering and addressing new risks and challenges as they arise.

To advance these debates, Access Partnership will hold a series of dialogues over the coming months, bringing together stakeholders from across these domains to explore the key issues. These events range from a discursive conversation on how to ensure the ethical development of generative AI to regional deep dives into the policymaking approaches being adopted in the US and the EU.

There are contrasting viewpoints regarding how AI should be regulated, and these perspectives will continue to evolve with the technology and its applications. While no one party is likely to have all the solutions, encouraging a collaborative and open-minded approach to AI regulation will help maximise its potential while minimising the risks. AI may not yet be able to come up with the right regulatory solutions, but we will probably be consulting with it soon. In the meantime, Access Partnership is perfectly placed to offer the insight that companies and governments need to navigate the complexities of this dynamic, rapidly growing sector.
