The UK government has unveiled its response to the AI White Paper consultation, in which the Department for Science, Innovation and Technology (DSIT) outlined its strategy for governing AI. The document underscores the UK’s determination to lead in AI rule-making and its ambition to attract more talent and investment in AI development. By favouring a sector-by-sector approach, deliberately different from the EU AI Act, the UK is positioning itself as an alternative model for AI regulation, distinct from the ambitions of Europe, the US, and China. Its chances of success are limited.
Regulatory Coordination: Many chefs in an under-funded kitchen
The UK clearly prefers a non-statutory approach to AI built on voluntary measures, with a specific focus on general-purpose AI. Highly capable AI models may face distinct obligations, with the government engaging experts and planning updates throughout the year. The government has allocated GBP 10 million in funding to support ‘preparation and upskilling’ and will require key regulators (including the ICO, CMA, and Ofcom) to publish their 12-month implementation strategies by 30 April 2024.
DSIT argues this will avoid unnecessary and inappropriate horizontal rules that inhibit AI innovation and adoption; industry hopes the approach does not portend a scramble of competing regulations replete with overlaps. To mitigate this risk, the government will form a new “steering committee” and a “government central function” to streamline regulatory coordination and prevent duplication of effort. It will also nominate a Lead AI Minister to coordinate government AI efforts across departments.
Short-term Risks and Priorities: Jobs and Election Returns
DSIT has outlined short-term risks and priorities for regulatory action, including preparing the workforce, intellectual property challenges, bias and discrimination, data protection, trust and safety concerns, competition issues, and AI’s role in the public sector. The response proposes specific action: developing guidance on the use of AI in HR processes and prioritising AI-related risks to the trustworthiness of information. A notable initiative is the focus on electoral interference, with the Defending Democracy Taskforce intensifying engagement with partners to safeguard the democratic process.
Global Collaboration: Hope over experience
The response calls for collaboration with international partners, including multilateral organisations such as the G7, G20, Global Partnership on AI (GPAI), Council of Europe, OECD, UN-associated agencies, and Global Standards Development Organisations (SDOs). The UK also hopes to find like-minded jurisdictions that have also, for now, shied away from the overarching horizontal rules favoured across the Channel.
Challenges and Oversights
Early challenges for the UK’s model include insufficient funding for regulators and a short deadline for readiness strategies. Critics also argue that the government’s response falls short of addressing core international developments in AI, such as the EU AI Act. Moreover, there is little mention of efforts to ensure that SMEs and startups have the support needed to adopt voluntary commitments.
The fundamental challenge: is there enough opportunity for growth to outweigh the ponderous regulatory framework needed to manage it? If the UK’s first big break with EU tech policy can demonstrate that, we may see “Global Britain” spring, finally, into being. Otherwise, we may well see a future UK government validate Brussels’ claims that it, rather than London, is the de facto tech regulator.
As the UK navigates this transformative phase in AI governance, Access Partnership works closely with business and government stakeholders to strike a balance between regulation, innovation, and international cooperation. To understand more about the UK’s approach and any potential impact on your business, please contact Michael Laughton at [email protected] or Jessica Birch at [email protected].