AI Policy Lab Newsletter April 2023

AI-Related Policy Developments – What do governments and policymakers have to say?

 

Increased global government attention and oversight of AI development and use

Globally, there has been a heightened level of government scrutiny of AI. In Europe, Italy has maintained its ban on ChatGPT until OpenAI complies with a set of regulatory requirements, the Spanish Data Protection Authority has launched an investigation into ChatGPT’s compliance with the GDPR, and the European Data Protection Board has set up a task force to coordinate its members’ work on ChatGPT and privacy.

In Asia, China has proposed regulatory measures to address the risks generated by AI technology, including new rules that would require generative AI to reflect the country’s socialist values (a requirement Austria has flagged with concern), while Japan’s ruling party has published a white paper urging the government to adopt a stricter regulatory approach.

In addition to the updates below, several other jurisdictions and multilateral organisations (including the ITU and OECD) have released AI guidelines, bills, and strategies. If you would benefit from a 15-minute briefing session highlighting this month’s developments, please email [email protected].

Finalisation of Europe’s AI Act – VLOPs, generative AI, and open-source foundation models fall within scope

On 27 April, an agreement was reached on the AI Act during the final scheduled political-level meetings of the lead committees (IMCO and LIBE) in the European Parliament. The agreement is expected to be voted through without amendments in the lead committees on 11 May and in plenary in June, establishing the Parliament’s final text. The Parliament will then begin trilogue negotiations with the European Commission and the Council, with the aim of agreeing a final EU AI Act by the end of 2023.

MEPs reached agreement on several key points: a new article on general principles applying to all AI systems, including foundation models; the definition of ‘significant risk’, which will determine whether an AI system falls into the high-risk category; the inclusion of the recommender systems of very large online platforms (VLOPs) within the scope of high-risk systems; new provisions on the sustainability of high-risk AI systems; and obligations for providers of open-source AI. One area still at risk of further amendment is the provision specifying what falls within the scope of prohibited ex-post remote biometric identification. For more information, please contact [email protected].


US Government seeks public comment on AI Accountability Policies

The US National Telecommunications and Information Administration (NTIA) has issued a request for comment (RFC) on developing measures to ensure that products using AI are ‘legal, effective, safe, ethical, and otherwise trustworthy’. Stakeholders have been invited to comment on the present state of AI accountability, including ways that governmental and non-governmental actions can enforce and support AI accountability practices, before 10 June. The RFC comes amid a flurry of AI tools released to the public this year, as well as the Biden Administration’s efforts to develop a uniform standard in the face of regulatory fragmentation: AI-related bills, ranging from those establishing commissions to those requiring companies to conduct impact assessments for each tool they develop, have already been drafted in nearly 20 states. For more information, please contact [email protected].

 

What’s happening from an industry perspective?

 

Amazon introduces new generative AI tools

Amazon has introduced new generative AI tools, including Bedrock, which offers access to foundation models (FMs) through an API. Currently available only to select partners, Bedrock allows customers to choose from a range of FMs or customise them to their needs. Amazon has also launched CodeWhisperer, a free AI coding companion for individual users that generates code for various AWS services. CodeWhisperer supports multiple programming languages and features built-in security scanning, as well as filtering that flags biased code suggestions.
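For readers curious what API access to a hosted foundation model looks like in practice, below is a minimal sketch in Python. Bedrock was in limited preview at the time of writing, so the interface shown is an assumption: the client name, model ID, and request schema mirror the invoke-style JSON pattern AWS later exposed through boto3 and are illustrative only.

```python
import json
import boto3

# Hypothetical sketch: Bedrock was in limited preview when this was written,
# so this assumes the invoke-style JSON API AWS later shipped in the boto3
# "bedrock-runtime" client. Model IDs and request schemas vary by FM
# provider; the Titan model ID below is purely illustrative.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",   # illustrative model ID
    contentType="application/json",
    accept="application/json",
    body=json.dumps({"inputText": "Summarise this month's AI policy news."}),
)

# The response body is a streaming payload containing the model's JSON output.
result = json.loads(response["body"].read())
print(result)
```

The design point to note is that the customer swaps between different providers’ models by changing the model identifier and payload, rather than managing any model infrastructure directly.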


Alibaba launches ChatGPT rival

Alibaba has launched its own rival to ChatGPT. The product, called Tongyi Qianwen, has Chinese and English language capabilities. It will initially be deployed on the company’s workplace communication software, DingTalk. Alibaba plans to integrate the system across its full range of products in the near future, while Alibaba Cloud will offer clients access to the chatbot to help them build customised large language models. The company hopes Tongyi Qianwen will support businesses across all industries in their intelligence transformation and boost productivity.

Humane unveils wearable AI device

AI startup Humane has developed a wearable AI device that can project digital information onto any surface. The projector, which attaches to a shirt or jacket, aims to remove the need for a traditional screen by using ambient and contextual AI to interact with the world as humans do. During a TED Talk demonstrating the technology, co-founder Imran Chaudhri showed how the device can translate languages, provide recommendations based on preferences, and manage dietary requirements. Humane secured USD 100m in March from investors including Microsoft, Volvo, LG, and OpenAI founder Sam Altman.

Stanford and Google use ChatGPT to populate virtual town

Researchers from Stanford and Google have created a virtual town populated by 25 generative agents powered by ChatGPT, each prompted with similar information to play the role of a resident of a fictional town. The experiment aimed to produce ‘believable simulacra of human behaviour’ by using machine learning models to drive generative agents. Although the town is rendered visually, the agents interact through a hidden text layer that synthesises and organises the information pertaining to each agent. Users could also write in events and circumstances, prompting the agents to respond appropriately. The study has not yet been peer-reviewed or accepted for publication, but it has potentially significant implications for simulating human interactions.
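As a rough illustration of the ‘hidden text layer’ described above, the sketch below shows the generative-agent pattern in Python: each agent keeps a plain-text memory stream, and a language model is prompted with a synthesis of recent memories to choose the agent’s next action. The `complete` function and all agent details are hypothetical stand-ins, not the researchers’ actual implementation.

```python
# A minimal sketch of the generative-agent pattern, under the assumption that
# each agent is driven by a chat/completion LLM such as ChatGPT. `complete`
# is a hypothetical placeholder for any real LLM API call.
from dataclasses import dataclass, field


def complete(prompt: str) -> str:
    # Placeholder for a real LLM call (e.g. a chat completions API).
    raise NotImplementedError


@dataclass
class Agent:
    name: str
    persona: str                      # seed description for the role
    memories: list[str] = field(default_factory=list)

    def observe(self, event: str) -> None:
        # New observations, including user-written events, join the memory stream.
        self.memories.append(event)

    def act(self) -> str:
        # Synthesise recent memories into a prompt. The paper's full
        # architecture also scores memories by recency, importance, and
        # relevance; that retrieval step is omitted here for brevity.
        recent = "\n".join(self.memories[-10:])
        prompt = (
            f"You are {self.name}, {self.persona}.\n"
            f"Recent memories:\n{recent}\n"
            "What do you do next? Answer in one sentence."
        )
        action = complete(prompt)
        self.memories.append(f"I decided to: {action}")
        return action


# Usage: write in an event and let the agent respond to it.
isabella = Agent("Isabella", "a cafe owner who loves hosting events")
isabella.observe("A flyer says there is a town election next week.")
# isabella.act()  # would query the LLM for her next move
```

Because every observation and decision is stored back into the memory stream, the agents can accumulate history and react to injected events, which is what makes the simulated behaviour appear coherent over time.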

 

What should you be interacting with?

 


 

Google CEO describes AI as humanity’s most important innovation

Google CEO Sundar Pichai spoke to CBS’ ’60 Minutes’ about the rapid advancement of AI, stating that society isn’t prepared for its consequences and that laws regulating its advancements are ‘not for a company to decide’ alone. Describing AI as ‘more profound than fire or electricity’, Pichai warned that AI will soon impact ‘every product of every company’. During the interview, Google provided a demonstration of Project Starline, which uses breakthrough light field display technology to create a sense of volume and depth without additional glasses or headsets. 

Watch here

What should you be attending?

 


Generative AI & the Creative Sector

23 May at 14:00 – 15:00 BST

As the European Parliament finalises its position on the AI Act, Access Partnership Policy Manager Lydia Dettling will be moderating a virtual roundtable on artificially generated content next month. Framed around the General-Purpose AI approach under the EU AI Act, and with a particular focus on deepfakes and ChatGPT, the webinar will debate whether this approach is the most effective way to regulate AI-generated content or if alternative strategies should be considered. Speakers include stakeholders from across the public and private sectors, trade associations, and think tanks.

Register here


 

Responsible AI: From concept to theory

16 May at 09:00 – 10:00 BST

Autonomous weapons, biased algorithms, disinformation, and deepfakes are just a few examples of the potential harm that can arise if AI is misused or abused. Given these concerns, Senior Policy Manager Jonathan Gonzalez will moderate a virtual event next month on the challenges of defining and operationalising responsible AI, exploring how it can be turned into usable principles, guidelines, or policies. The webinar will bring together a panel of distinguished academics, policymakers, and industry experts who work across different aspects of responsible AI for a thought-provoking discussion.

Register here 

 

If you would like to subscribe to our news alerts, please click here.

Contributors:
US – Jacob Hafey, Meghan Chilappa
APAC – Jonathan Gonzalez, Lim May-Ann
UK – Jessica Birch
EU – Lydia Dettling
Editorial assistance – Phil Constable, Luca O’Neill
Lead – Melissa Govender
