Tech Policy Trends 2020 | AI Regulation Takes Shape: A Look at the EU and US

2020 will be an important year for Europe and the US in forging their own approaches to AI regulation, with implications for the way transatlantic companies will conduct business on both sides of the Atlantic.
Simona Lipstaite
Public Policy Manager
Michael Clauser
Head of Data Policy & Trust

The clock is ticking on the European Union to legislate AI. The new European Commission President, Ursula von der Leyen, has promised AI regulation within the first 100 days of taking office, which means we can expect a clearer picture to emerge by late February. This year will lay the foundations for AI regulation in Europe for decades to come. The European Union has published several communications laying out a vision of “human-centric” AI based on defined ethical notions of security, privacy, and dignity.

At the same time, the US – an AI powerhouse – does not face the same time constraints and pressures but is moving forward with proposed AI legislative initiatives all the same. 2020 will be an important year for Europe and the US in forging their own approaches to AI regulation, with implications for the way companies will conduct business on both sides of the Atlantic.

Uncertainty in Europe

Von der Leyen’s promise has not been without controversy. The Commission has still not decided whether to regulate AI “horizontally”, with baseline requirements for a wide range of AI applications, or “vertically”, focusing on specific sectors in the first instance. Should autonomous cars face the same scrutiny as an add-on helping to curate invoicing for companies’ finance departments? Allowing just 100 days to develop such regulation will make for a pressure-filled January and February, not least for the Commission staffers working on the proposal.

The new European Commissioner for the Internal Market, Thierry Breton, has said that technologies such as AI would enable the EU to become a “key industrial player,” and defended “ambitious industrial policies” which – according to Breton – would have to be socially responsible in order “not to leave anyone behind.” At the same time, he clarified that the Commission may not produce AI regulation “in the first 100 days” and that he will not be “the voice of regulation on AI.”

European regulators must contend with issues which some other global AI powers do not. Europe continues to trail behind, dwarfed by countries such as the US and China, in terms of investment in AI and the ability to retain AI talent. These considerations will influence the way the EU chooses to regulate, with the intention of not stifling innovation, but with a strong understanding that it cannot afford to take a back seat anymore and must take proactive steps to rectify the situation.

US Regulations on AI

While the possibility of the EU advancing regulation on aspects of AI continues to turn heads, lawmakers in the US are increasingly moving in the same direction and the beginning of 2019 saw the Trump Administration release its own “Artificial Intelligence for the American People” policy. Speaking at an event at Stanford in November, White House Chief Technology Officer, Michael Kratsios, declared “we fundamentally believe America must lead the world in critical emerging technologies”— shorthand for AI and quantum computing.

There are a few major areas to watch in the US in 2020 for AI regulation trends. The first will be the final export control rules issued by the US Bureau of Industry and Security. Prompted by national security concerns that foreign adversaries, especially China, will leverage US-developed know-how, BIS proposed new restrictions in November 2018 on the export of vaguely defined “emerging” technologies – with over a dozen categories named, including AI. With BIS finalizing rules as well as proposing new ones for “foundational” technologies, it is possible to foresee specific, discrete AI technologies being export controlled by individual rules in 2020.

The next area to watch is action addressing AI-generated content. With growing concerns over how to control fake news and new forms of state-backed disinformation campaigns, there are a dozen bills sitting in Congress to ban or restrict deep fakes, which are often produced using Generative Adversarial Networks. Another front related to content is the reform of copyright as it relates to AI, raising questions such as: if an AI algorithm creates new art or publishes an “original” piece from underlying content, who owns the copyright?

Perhaps more consequentially to the broader economy and ways that AI is already used today, the American judiciary appears set to challenge artificial intelligence in court cases — most controversially in employment law, where cases will soon enter the legal system regarding how uses of AI may discriminate against certain groups during the hiring process. Since the Anglo-American regulatory system is shaped as much by common law precedent as it is by regulatory agency dictate, these cases could have significant impacts on the policy environment.

Finally, while state and local measures (such as in San Francisco) against deployments of AI like facial recognition over privacy concerns briefly flashed into the headlines in 2019, we could see much more consequential action on the federal front in 2020. Without garnering much notice, several proposed comprehensive privacy laws being discussed in the US House and Senate contain protections or heightened scrutiny related to algorithmic decision-making. One bill introduced from the left would even enshrine a “right to human review of automated decisions” and a “right to individual autonomy,” requiring affirmative express consent for algorithmic personalization based on behaviour. Another from the right would require mechanisms to access “non-personalised” versions of services. Particularly forward-leaning measures are unlikely to make it through any bipartisan deal on a comprehensive consumer privacy law – and, to be clear, the prospects of such a deal in 2020 are not great – but they are moving the conversation in a direction of regulation not seen before.

The Impact on Companies in Europe

The beginning of the new five-year EU legislative period brings with it the opportunity for Europe to exert its AI leadership globally. AI ethics is one area where Europe wants to lead and, through tools such as the High-Level Expert Group on AI Policy and Investment Recommendations and Ethics Guidelines for Trustworthy AI, Brussels is set on making a global impact. We can expect to see European politicians stepping up efforts in global fora such as the UN, as well as in bilateral negotiations, to stress the importance of ethical AI as a business differentiator.

Companies wishing to operate in Europe using AI technologies will need to be prepared for a potentially hard stance from European policymakers, which risks excluding “non-European” technologies from public procurement markets and “non-European” companies from influencing the way AI legislation is shaped. Commission President von der Leyen has said that key technologies such as AI must be managed and kept in Europe. 2020 will shed some light on what this will mean in practice.

How Should Companies Respond?

Engage, and do so as soon as possible. For European AI companies, this is a crucial time to forge relationships with legislators and ensure their business models are accurately reflected in the upcoming legislation. EU research funds, such as Horizon Europe, should also be a key target for smaller businesses wishing to grow their products.

For non-European companies which increasingly rely on AI for their products and services, engagement over the next year will help to ensure their participation in the European market. It is easier to shape regulation at its inception than to correct draft legislation at a later stage, which might have an inadvertently wide scope or pose significant risks to innovation.

Policymakers also want validation and to hear about opportunities regarding key initiatives. Companies on both sides of the Atlantic should come prepared with concrete solutions; this will help to grow key relationships with policymakers. The time for generic exchanges of opinions is very nearly over – policymakers now want to discuss substantive solutions and concrete recommendations. 2020 is set to be a defining year for European and US AI regulation, and the coming decade will certainly be crucial in shaping the global community’s approach to AI policy.

