Tech Policy Trends 2022 | AI Regulation: Is this GDPR Again or will the world ignore Europe this time?

The past decade has witnessed the development of artificial intelligence (AI) and its spread into nearly every aspect of our lives. Our relationships with organisations providing banking, healthcare, law enforcement, social interaction, and access to information, to name a few, are now all shaped by AI. In recent years, however, concerns about the implications of AI for fundamental rights and society at large have led policymakers around the world to consider the role of regulation in addressing its impact on our lives.

2022 will be the year when European Union (EU) policymakers attempt to become the first to legislate on AI. But will the EU be able to leverage its global economic and political authority to influence the regulation of AI across the globe? That was the intention of EU policymakers when the European Commission released the AI Act proposal in April 2021. Having seen the GDPR set global standards in privacy legislation after it entered into force in 2018, the EU is without doubt once again attempting to take a leadership role in regulating digital technologies. Whether one calls it the Brussels effect or simply first-mover advantage, the outcome is the same.

“By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way.”

Margrethe Vestager, Executive Vice President of the European Commission

Aware that the EU is lagging in the global race to develop and deploy AI, EU policymakers have entrusted the new AI Act with a dual role: ensuring the development of “trustworthy” AI in Europe while enhancing the EU’s competitiveness. Alongside plans to invest in AI through the Digital Europe Programme and the AI White Paper, the EU is attempting not only to lead the way in setting high regulatory standards but also to shape the development of AI globally and thereby strengthen its competitive position. While guidelines on AI and regulation of specific AI applications or sectors exist around the world, the EU AI Act, once enacted, will be the first horizontal legal framework of its kind.

Similarities can be drawn between the GDPR and the AI Act proposal, not only in the attempt to set a global standard but also in content, most notably the proposal’s extraterritorial effect. While protecting residents of the EU from AI systems created or deployed anywhere in the world, this will also have a significant impact on businesses operating in, or seeking access to, the EU market. Those intending to benefit commercially from the EU market and its roughly 450 million residents, whether based in the EU or not, will be required to play by the new rules. Further parallels with the GDPR can be drawn in scope, sanctions, surveillance, and enforcement. Taking a risk-based approach to the regulation of AI, the proposal imposes hefty compliance obligations on a wide range of stakeholders involved in the production, provision, and use of high-risk AI systems. It also outright prohibits AI systems considered to pose an “unacceptable risk”, such as certain forms of biometric surveillance.

Can transatlantic allies align this time?

What is guaranteed is that the regulation will introduce overarching harmonised rules for EU member states, giving businesses operating in the EU greater legal clarity and sparing them the burden of complying with 27 different national regimes. A catalyst effect, with the regulation driving AI rules globally, is also likely if the experience of the GDPR is repeated. What remains to be seen, however, is whether countries such as the US will align federal legislation with the high standards expected from the AI Act or choose to place a lighter regulatory burden on companies. While the US has seen AI regulation develop at the state level, initiatives at the federal level remain primarily voluntary: standards and programmes continue to focus on bolstering competition, innovation, and research on AI, such as the US Innovation and Competition Act passed by the Senate in June 2021.

The new EU-US Trade and Technology Council (TTC) represents the latest step in the transatlantic joint efforts for technological prosperity. It remains to be seen whether this initiative will be sufficient to align AI regulation on both sides of the Atlantic.

Learning from the past

The regulation will be subject to heavy lobbying by industry, organisations representing scientists, academia, and civil society, and consumer protection groups. Negotiations between EU policymakers are likely to be lengthy and challenging. With the development of AI only accelerating and calls for its regulation growing louder, EU policymakers will face the challenge of navigating competing interests while seeking both to protect fundamental rights and to promote innovation and competition.

It has been a little over three years since the GDPR entered into force, and policymakers are already considering amendments in response to technological advancements, constraints on international business operations, and enforcement pressures. Policymakers should learn from this experience and remember that the pursuit of global standard-setting should not come at the expense of erecting barriers to AI uptake and innovation. Global interoperability of AI regulations will ensure legal certainty, avoid excessive regulatory burdens on SMEs and industry, advance the EU’s AI ambitions by creating an environment attractive to international investors, start-ups, and businesses, and provide citizens with the protections they need.

Predictions:

  • AI regulation will proliferate in jurisdictions across the world.
  • Industries will unite on a global level to encourage EU policymakers to consider industry best practices when delimiting the categories of high-risk AI and fundamental definitions in line with global standards.
  • Industries will call for transatlantic regulatory alignment on AI standards, while the US, at a federal level, is likely to continue the current approach in recommending frameworks and voluntary standards set by national institutions. Leveraging the EU-US Trade and Technology Council by engaging on both the EU and US fronts could prove a viable route to encourage policy alignment.
  • Businesses were caught out by the GDPR, but that need not happen this time. While the AI Act will affect companies operating, or planning to operate, in the EU market, companies should be better prepared to comply with the strict new rules on AI coming from Brussels.

