From East to West: Regional Approaches to AI Governance (Sep 2023)

Artificial intelligence (AI) is rapidly reshaping the technological landscape, promising revolutionary advancements while raising concerns about its ethical, privacy, and security implications. Governments across the world are grappling with how to regulate this transformative technology.

In this article, we explore the diverse AI regulatory approaches in major global markets, focusing on tier-one influencers such as the European Union (EU), the United States (US), and China, and tier-two influencers such as Japan, the United Kingdom (UK), and India.

European Union: Risk-Based Approach – Balancing Innovation and Regulation

The EU has emerged as a leading voice in AI regulation with its ambitious plan to create a harmonised framework through the “AI Act”,[1] currently under negotiation. The Act aims to protect individual rights and safety while promoting domestic alternatives to leading US tech companies. Its risk-based framework places varying compliance burdens on AI producers and users depending on the risk an application poses. However, industry leaders worry that the Act’s overly prescriptive nature could stifle growth; Google, for example, delayed Bard’s release in EU markets citing regulatory concerns.[2] The Act’s slow adaptation to rapid progress in generative AI could also limit its overall effectiveness.

United States: Market-Driven Approach with Varied Oversight

As a leader in AI innovation, the US has pursued a market-driven approach to AI regulation.[3] In the absence of federal legislation, various executive branch activities and state-level efforts have emerged. In 2022, the White House put forward its AI Bill of Rights, and in July 2023 it announced voluntary commitments from leading companies to manage the risks posed by AI. In Congress, Senate Majority Leader Chuck Schumer outlined his preferred approach to regulation in a June speech, promising to hold “AI Insight Forums” in September.

At the state level, 43 bills have been introduced across 21 states that would regulate a business’s development or deployment of AI solutions. These bills seek to balance stronger consumer protections with enabling innovation and the commercial use of AI. The flurry of policy activity in the US signals a shift towards proactive engagement with industry and civil society. However, the absence of a comprehensive federal law results in a fragmented and uncertain regulatory environment.

Furthermore, increasing geopolitical tensions and competition with China may impact the future regulatory landscape in the US. Nevertheless, some governments around the world may still adopt industry-led guidance, aligning more closely with the US approach.

China: Domestic Emphasis and Growing Ambitions

China has been steadily establishing its AI governance framework, introducing targeted regulations for Internet-based services and generative AI[4] ahead of national AI legislation, the first draft of which is expected later this year.[5] The government has also sought to strengthen domestic AI capabilities: its latest crackdown on big tech appears to have wound down in an effort to outpace the US and other competitors in AI development.

The country’s approach emphasises consolidating computing power and data centres through initiatives like the “China Computing Net” (C2NET) programme. However, its attempts to dominate the AI space may face resistance globally, especially in applications that align with the CCP ideology. The potential for heightened US government oversight due to the US-China tech war could affect China’s AI projects.

Japan: Collaborative Governance for Holistic Growth

Japan, a legacy leader in technology, has developed a detailed set of AI principles that prioritise human-centric AI and innovation. The country’s regulatory approach involves active participation from government, industry, academia, and community groups in shaping AI guidelines. While non-binding, guidelines such as the “METI Governance Guidelines” steer stakeholders towards responsible AI development. Japan’s proactive private sector engagement and governmental working groups contribute to a comprehensive and holistic regulatory approach.

United Kingdom: Pro-Innovation for Harmonisation

The UK pursues a “pro-innovation approach” to AI regulation, delegating responsibilities to sectoral regulators and emphasising AI liability frameworks. The Information Commissioner’s Office plays a key role in setting industry standards on AI “explainability”, defining fairness, bias, and data protection.[6] The UK’s AI Safety Summit aims to influence international AI approaches and could contribute to regional harmonisation. If the UK can bridge the gap between EU and US regulatory philosophies, it may find more traction than expected.

India: Soft Regulation

India’s regulatory activities lag behind those of other major players, but the country has released policy documents outlining its National Strategy for AI. With a focus on soft regulatory guidelines, India promotes responsible AI adoption without, for now, pursuing specific AI-focused legislation. The absence of concrete regulations creates an environment conducive to AI adoption and growth. While ethical concerns remain, India’s emphasis on responsible AI and best practices is intended to encourage gradual adoption rather than impose aggressive regulation.

Comparative Analysis

Across these diverse approaches, some common themes and contrasts emerge:

  1. Compliance and Innovation: The EU’s stringent regulations may ensure compliance but potentially stifle innovation. In contrast, the US’s market-driven approach prioritises innovation but could lead to regulatory uncertainty.
  2. Domestic Emphasis: The EU and China emphasise domestic AI capabilities. However, China’s ambition to dominate AI could potentially raise geopolitical concerns.
  3. Collaboration and Harmonisation: The UK and Japan focus on collaboration and harmonisation, which could shape a global AI regulatory landscape that balances innovation and ethical concerns.
  4. Growth Focus: India’s soft regulation encourages innovation and could lead to substantial growth but raises ethical concerns and could increase stakeholders’ susceptibility to potential risks.


The diverse global approaches to AI regulation reflect the varying priorities of governments and industries, and the delicate balance between innovation and responsibility. While the EU seeks to harmonise, the US aims to avoid hindering innovation, China races for dominance, Japan collaborates for comprehensive solutions, the UK balances ethics with progress, and India focuses on growth. Balancing these priorities — compliance, innovation, domestic capabilities, collaboration, and growth — will be key to shaping the technological progress, adoption, and usage of AI on a global scale.

If you would like to keep updated with AI developments and AI thought leadership, please subscribe to our channels and our AI Policy Lab Newsletter.

[1] EU, 2023, “Commission proposal”.
[2] CDP Institute, 2023, “Google delays Bard Launch in EU”.
[3] Foreign Affairs, 2023, “The Race to Regulate Artificial Intelligence”.
[4] Provisions on the Management of Algorithmic Recommendations in Internet Information Services, Provisions on the Administration of Deep Synthesis of Internet-based Information Services, and the Interim Measures for the Management of Generative Artificial Intelligence Services.
[5] According to the 2023 Legislation Plan of the State Council.
[6] See a compendium here:
