Tech Policy Trends 2024: The EU AI Act and the future of AI regulation

After AI took the world by storm this year, 2024 will be a pivotal moment for its regulatory future. Countries around the world are enacting legislation in response, but how much global cooperation can be achieved?

Global policy debate will address allocation of responsibilities within the AI ecosystem

Managing risk

As policymakers consult domestic AI players and become more familiar with the AI value chain, they increasingly realise that the “developer-deployer” and “build vs buy” dichotomies do not accurately represent the complex realities of the AI industry. Most organisations will not create AI solutions from scratch but will build them on top of tools or services provided by others.

This complex supply chain requires careful analysis by policymakers to identify which entity should be responsible for mitigating each specific risk, ensuring safety and effectiveness. For example, risks related to data subject rights are better mitigated by entities with a direct relationship with the data subjects. Conversely, risks of embedded biases are best mitigated by the entities that oversee and manage the datasets and the training of the AI model.

Concern over competition issues

Policymakers recognise that only a few entities have the data, computing power, financial resources, and AI expertise to build their own Large Language Models (LLMs) or foundation models.

Such concerns have manifested in the G7 Communique on Digital Competition, in which G7 antitrust authorities highlight the risk of anti-competitive behaviour. The communique also notes that massive resources are needed to develop large-scale AI and warns that “incumbent tech firms” could control access to those resources to reduce competition, or adopt anticompetitive practices such as bundling, tying, exclusive dealing, or self-preferencing to harm rivals. The way we understand and define the AI ecosystem will shape competition policy.

Categorising the AI ecosystem

Policymakers have already started developing more nuanced ways of categorising entities in the ecosystem. In July, Singapore released advisory guidelines on the use of personal data in AI systems that went beyond the developer-deployer dichotomy to differentiate between custom AI procurements (where the customer participates in the design) and Commercial Off-the-Shelf (COTS) solutions.

In September, Japan released draft AI business operator guidelines that set out five different categories. The first two (AI algorithm developers and AI training implementers) differentiate between algorithm development and training. The next two (AI system/service implementers and service providers that utilise AI) are similar to Singapore’s distinction between custom and COTS systems, with safeguards for the latter focusing on data protection and input/output management rather than the AI system itself. The final category (entities that use AI in their business operations) distinguishes the use of AI in external-facing service functions from use in internal operations.

Most recently, 18 countries, including the US and UK, released Guidelines for Secure AI System Development that promote an AI lifecycle design approach, recognising the distinct roles of different companies in the AI value chain and the corresponding variations in their risks and responsibilities. Building on this work, 2024 will see more definitions of where responsibility lies and the first court cases connecting levels of liability to different roles and responsibilities in the value chain.

Safeguards for public sector AI use will inform early AI regulations

Public sector progress

Many jurisdictions, including Australia and South Korea, have found it difficult to move ahead with their draft AI regulations due to strong opposition from stakeholders and political opponents. Even the EU AI Act at one point appeared to be in jeopardy due to the positions of Germany, Italy, and France on foundation models.

By contrast, there has been far less resistance against the development of safeguards for public sector AI use. Australia’s AI taskforce is scheduled to release safeguards for government AI use by early 2024. The recent US Executive Order directs federal agencies to use AI safely and promote such practices in the private sector. Japan promulgated a draft agreement for the use of generative AI in government agencies, while New Zealand issued its Generative AI Guidance for the Public Sector (2023).

Informing future regulations

Public sector safeguards will set key precedents. As governments gain experience using AI tools and implementing safeguards, their risk mitigation approaches will evolve and improve. Over time, the public’s risk appetite may gradually decrease, while regulators gain the confidence and credibility to push regulations or best practices out to the wider industry. By the end of 2024, public sector safeguards will find their way into regulation of the broader economy, with the same requirements extended to the private sector.

The EU AI Act will not set the global standard

Larger risk appetite

Many countries’ draft AI regulations are less conservative than the EU AI Act and lack a category of “banned” AI applications. South Korea’s AI Bill takes a more flexible approach towards facilitating high-risk AI solutions. Australia’s responsible AI paper proposed less rigid risk tiering. Others, like Thailand’s draft AI Bill, have an entirely different focus, prioritising the growth of their domestic AI ecosystems. Equally, the recent US Executive Order centres on standards development, funding and innovation, and promoting US leadership on AI policy.

Evidence from Asia

Policymakers in Asia responded poorly[1] to the EU’s lobbying efforts concerning the AI Act back in July. Singapore and the Philippines shared their view that AI regulations at this stage would be premature and could stifle innovation. South Korea said that while the EU AI Act was an important reference, it would be following the G7 Hiroshima AI Process closely. None of the other seven countries that the EU engaged (including Japan and India) have taken any significant steps toward greater alignment with the EU AI Act.

Circling towards consensus

More countries are actively seeking to lead the global discourse on AI policy. Japan has strongly pushed the Hiroshima AI Process and secured G7 agreement on 11 new guiding principles. In November, 28 countries, led by the UK and including the US and China, together with the EU, signed the Bletchley Declaration, committing to develop AI safety principles. This was swiftly followed by the new Guidelines for Secure AI System Development from the US, UK, and 16 other countries.

The US has also stepped up its harmonisation efforts with Asia, establishing a “crosswalk” between the NIST (National Institute of Standards and Technology) framework and Singapore’s AI Verify framework, as well as starting discussions with Japan on a similar “crosswalk”. Furthermore, the upcoming ASEAN (Association of Southeast Asian Nations) AI guide (driven by Singapore) is reportedly closely aligned with the NIST framework. Rather than a single framework setting the worldwide standard, we will see various harmonisation workstreams gradually circle toward a global consensus.

Every year, Access Partnership’s Tech Policy Trends report leverages our global expertise and relationships with leading stakeholders across the public and private sectors to bring you the defining issues of the next 12 months.

From the future of internet governance and consumer protection to the implications of the G20 in Brazil and US-China relations, stay one step ahead of the headlines in 2024 by downloading the full report.


[1] https://www.reuters.com/technology/eus-ai-lobbying-blitz-gets-lukewarm-response-asia-officials-2023-07-17/
