Access Partnership is thrilled to announce the launch of our AI Policy Lab and our AI Newsletter.
The newsletter will provide a concise summary of important advancements in AI policy and feature opinions from industry experts. The AI Policy Lab will foster open, objective, and pragmatic discussions on AI and its responsible deployment. Through its activities, the Lab aims to promote a shared understanding of the societal and ethical implications of AI, as well as provide actionable guidance to policymakers, industry leaders, and other stakeholders.
AI-related policy developments – what do governments and policymakers have to say?
Access Partnership’s sample AI Policy Report: Our AI Policy Report contains policy and legislative updates as well as an analysis of potential risks and opportunities, from January 2023 to March 2023. If a comprehensive overview of global or regional policy developments would be useful to you, please email Jacob Hafey at email@example.com.
Italy bans ChatGPT: Italy’s national data protection authority has temporarily banned ChatGPT over alleged privacy violations. The regulator has blocked OpenAI from processing the data of Italian users, citing the company’s failure to comply with the EU’s General Data Protection Regulation (GDPR). The authority claims that OpenAI does not have legal justification for the ‘mass collection and storage of personal data’ used to train ChatGPT’s algorithms, adding that the platform does not include age verification and exposes minors to unsuitable information. OpenAI has 20 days to clarify how it plans to make ChatGPT compliant with EU privacy rules. Failure to do so will result in fines of up to 4% of its global revenue.
The EU’s AI Act is reaching its final stages in the European Parliament. MEPs are said to have finally agreed on a definition for AI systems, one that is in line with the OECD and international standards. The updated definition describes an AI system as a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments. The main debate remains over how general-purpose AI systems should be included in the Act. As concerns around ChatGPT and other generative AI systems grow, MEPs are considering including additional specific provisions for these systems. We can expect draft versions of the final text to be circulated before the Parliamentary Committees’ vote on 26 April. Leading MEPs are pushing the Commission to issue common specifications on rights requirements for high-risk systems, with these specifications to be repealed once they are incorporated into technical standards.
The UK government has released an AI white paper aimed at guiding the responsible use of artificial intelligence in the country. On 29 March, the UK Department for Science, Innovation and Technology (DSIT) published its policy paper, A pro-innovation approach to AI regulation, designed to drive growth and prosperity, increase public trust in AI, and strengthen the UK’s position as a global leader in AI.
The paper notes the enormous potential of generative AI and states that the UK is prepared to lead globally in the AI sector, guided by the values of transparency, accountability, and innovation. The framework seeks to support this ambition while addressing risks, including those to national security and mental health, and ensuring that AI complements rather than replaces people’s jobs. The paper also highlights the UK’s commitment to engaging internationally and to supporting interoperability across different regulatory regimes.
Rather than proposing new legislation immediately, the UK government will continue to monitor the regulatory landscape, promote education, testbed, and sandbox initiatives, and act as a centralising coordinator and oversight authority. The framework is underpinned by a set of principles intended to support consistency in sectoral regulatory actions while remaining sufficiently flexible to account for technological developments. These five principles are:
- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
To support implementation, the government will:
- Develop and maintain a central monitoring and evaluation (M&E) framework
- Develop and maintain central regulatory guidance to support regulators in implementing the principles
- Develop and maintain a cross-economy, society-wide AI risk register to support regulators’ internal risk assessments
- Develop AI testbeds and regulatory sandboxes
Lobbying set to be transformed by automation: AI has the potential to transform the lobbying industry through the use of ‘microlegislation’ – small pieces of proposed law designed to protect niche industry interests. While teams of humans currently take weeks to design lobbying strategies that shape legislation through proposed amendments, machine-learning systems are now capable of working out the smallest hypothetical modification to a bill or law that would have the widest impact, predicting its detectability by human readers, and guiding lobbying strategies by mapping the likely direction of legal processes.
What’s happening from an industry perspective?
Gaming industry only beginning to tap AI’s potential: The gaming industry is still in the earliest stages of AI adoption, with more good ideas than evidence of actual implementation, according to Andrew Uerkwitz, Managing Director of Jefferies. From efficiencies and cost savings in development to the acceleration of game launches, AI could have a transformative impact on gaming, but industry-wide incorporation of such changes is yet to emerge. Uerkwitz argues that a seismic shift in how games are made, built, and played is possible within the next 5-10 years as the sector evolves to realise AI’s potential.
Spotify to expand AI features: The successful launch of Spotify’s AI-driven ‘DJ’ feature will prompt the platform to expand its use of automated technologies in line with its wider investments in personalisation and machine learning. The DJ feature provides a curated selection of music combined with spoken commentary delivered in an AI-generated, human-sounding voice. The spoken information is derived from Spotify’s in-house experts’ knowledge base, rather than distilling information found across the Internet as more ambitious large language models do. The updated Spotify app will offer the DJ feature at the top of subscribers’ screens, with the company currently experimenting with ways to expand the technology into other areas.
ChatGPT success highlights voice assistant failures: The rise of ChatGPT has underlined how the likes of Apple, Amazon, and Google have squandered their opportunity to dominate the AI sector. While features like Apple’s Siri have been in operation for well over a decade, technological hurdles and miscalculations of how products like Amazon’s Alexa and Google Assistant would be used have resulted in misdirected investment and a rapid cooling of enthusiasm. These command-and-control systems have been usurped by large language model-driven chatbots over the past year, with Apple and Google both now developing generative AI systems as they seek to catch up with their rivals.
Burger King and Coca-Cola embrace AI for innovation: Burger King utilised the AI-based image generator Midjourney to develop new product ideas, resulting in the Cheeseburger Nugget’s addition to its German menu. Meanwhile, Coca-Cola launched an AI-driven design competition, inviting the public to create artwork using iconic images and AI tools like OpenAI’s DALL-E and GPT systems. Winning designs will be showcased in Times Square and Piccadilly Circus, with artists retaining rights to their submissions.
What should you be reading?
The Release of GPT-4, Turing Tests, and the Uncanny Valley: Artificial Intelligence at an Inflexion Point. OpenAI’s latest release demonstrates groundbreaking capabilities such as multimodal inputs, surpassing human-level performance in various tests, and a focus on safety & ethics. The Fair Tech Institute (FTI) promotes an empathetic approach towards AI and ChatGPT/GPT-4, inviting professionals to join their efforts in comprehending the implications of these technological advancements on global data governance, regulations, and socio-economic policies. The Institute recognises that achieving sustainable solutions requires the creation of a collaborative platform where individuals can collectively frame pertinent issues through appropriate cultural and regional lenses.
Open letter calls for pause to AI development: A group of AI experts and industry executives have published an open letter campaigning to halt the development of advanced AI for the next six months. The pause would apply to systems more powerful than OpenAI’s GPT-4, which was released earlier in March. The group issuing the letter, the Future of Life Institute, consists of Elon Musk’s Musk Foundation, the London-based group Founders Pledge, and the Silicon Valley Community Foundation. The letter garnered more than 1,000 signatures but did not include OpenAI CEO Sam Altman, nor the CEOs of Alphabet and Microsoft.
What should you be attending?
Generative AI & the Creative Sector: The EU’s AI Act, Thursday, 27 April at 14:00 – 15:00 BST | 15:00 – 16:00 CET. Access Partnership will be exploring many of the issues discussed above in a webinar on the rapidly evolving world of AI-generated content. With a particular focus on deepfakes and ChatGPT, the event will discuss the EU’s AI Act, exploring the provisions it makes for creative uses of such technologies. The conversation will examine the scope and impact of artificially generated content on various industries, as well as issues around misinformation, intellectual property rights, and text and data mining, debating how best to navigate this complex regulatory landscape.
How should policymakers approach regulating generative AI? Wednesday, 3 May at 15:00 – 18:00 EDT in Washington DC. Access Partnership is hosting a closed-door roundtable at our Washington DC office to discuss the regulatory response to generative AI in the US. The event will bring together government stakeholders, industry practitioners, and academics to discuss the future of generative AI regulation. With a focus on accountability and trust, it will explore the regulatory response necessary to promote innovation while safeguarding citizens’ rights and safety. If you would like to take part in the conversation, please email Meghan Chilappa at firstname.lastname@example.org.
If you have been forwarded this email and would like to subscribe to our news alerts, please click here.
US – Jacob Hafey, Meghan Chilappa
APAC – Jonathan Gonzalez
UK – Jessica Birch
EU – Lydia Dettling
Editorial assistance – Phil Constable, Luca O’Neill