What is the impact of the new “AI Opportunities Action Plan” on UK AI regulation? Our 3 key insights

Written By

Kate Deniston

Professional Support Lawyer
UK

As the knowledge lawyer in our Tech Transactions team, I play a key role in keeping both clients and colleagues at the forefront of tech-related legal and market developments.

To assess the impact of the newly published AI Opportunities Action Plan on UK AI regulation, we should first take a quick look at the current status of UK AI regulation, what the AI Opportunities Action Plan is, and how it has been received by the UK government.

To skip straight to the impact, scroll down to section D below. 

A. What is the status of UK AI regulation?

The previous Conservative government proposed a pro-innovation approach to AI regulation in which existing regulators would regulate AI within their sectors in accordance with five principles. It asked each of the key regulators to publish a strategic approach to AI for their sector, to demonstrate how they are responding to AI risks and opportunities. There were no plans in the short term to introduce any binding legislation.

On coming into power in July 2024, the new Labour government pledged in the King’s Speech to establish narrow binding legislation that would place requirements on developers of the most powerful AI models. It was unclear whether the new government would continue its predecessor’s sector-based regulatory approach.

B. What is the “AI Opportunities Action Plan”?

Shortly after coming to power in July 2024, the UK’s new Labour government commissioned Matt Clifford, tech entrepreneur and Chair of the Advanced Research and Invention Agency, to conduct an independent study on how the UK can harness the opportunities presented by AI.

A report of Clifford’s findings, the “AI Opportunities Action Plan”, was published on Monday 13 January 2025. 

It sets out a plan for the UK to foster a flourishing AI ecosystem across the AI stack from developing AI to using it, with the intention of this leading to economic growth. 

To achieve this, the action plan recommends: 

  1. putting in place the foundations needed for AI (computing and data infrastructure, regulation and facilitating access to talent);
  2. embracing AI adoption in both the public and private sectors (leading to better outcomes for citizens as well as boosting productivity); and
  3. positioning the UK as an “AI maker”, not an “AI taker” (by providing an attractive home to build and scale frontier AI companies). 

The action plan then sets out no fewer than 50 practical recommendations that the government should enact in order to achieve these three objectives.

C. How did the UK government react to the “AI Opportunities Action Plan”? 

On the same day that Clifford’s report was released, the UK government published its “government response” to the AI Opportunities Action Plan. In the foreword, the UK Prime Minister, Sir Keir Starmer, states, “I am happy to endorse it and take the recommendations forward”.  

The government response paper states that the government has agreed to 48 of the 50 practical recommendations and partially agreed to two.  It provides details on how the government will implement each recommendation and the delivery date for each one. 

Keir Starmer also made a speech on the action plan and authored an opinion piece in the Financial Times, publicising his support of the action plan. 

D. What is the impact of the AI Opportunities Action Plan on the UK’s AI regulation strategy? 

Here are our three key insights. 

1. UK regulators

The most noteworthy impact is on the UK’s regulators. The focus has shifted from regulating AI to promoting it.

In the action plan, Matt Clifford recommends that when government departments provide strategic guidance to the regulators they sponsor, they should emphasise the importance of “enabling safe AI innovation”. The aim is that all regulators should support innovation as part of their statutory duty to promote economic growth (known as the “Growth Duty”). They should encourage and enable the safe adoption of AI within their sectors, instead of being blockers. 

Clifford also recommends that “all regulators [should be required to] publish annually how they have enabled innovation and growth driven by AI in their sector”, using “transparent metrics”. The government has agreed to this, so we can expect a report on the progress of “regulators with significant AI activities” by Summer 2025.

Clifford advises that, to accelerate AI adoption in priority sectors, the government should work with the relevant regulators to implement regulatory sandboxes and other pro-innovation initiatives. This accords with the previous Conservative government’s proposals to utilise AI regulatory sandboxes. The Labour government will release a progress report on this by Summer 2025.

Clifford suggests that if the government has evidence that the sector regulators are not promoting AI innovation enough, perhaps due to a lack of incentive, then the government “should consider more radical changes to our regulatory model for AI”. Clifford adds that the government could, for example, “empower a central body” with a mandate and higher risk tolerance to promote innovation. For AI products that do not comply with generally applicable regulations in the relevant sector, this central body could “override” those sector regulations by issuing pilot sandbox licences for the AI products and taking on the liability for the risks involved. 

All this marks a notable shift in the role that sector regulators are expected to play in the UK AI regulatory landscape: the Labour government’s priority is for regulators to actively promote AI innovation within their sectors, rather than to focus on conducting “detailed risk analysis and enforcement activities within their areas of expertise” as the Sunak-led government had envisaged. Regulators often hold enforcement powers to ensure adherence to standards and codes of practice in their sector. If a regulator makes decisions that protect citizens’ rights but do not “promote innovation at the scale of the government’s ambition”, it seems the government could override that regulator by empowering a central body instead. This could weaken a regulator’s effectiveness at protecting rights and regulating its sector.

2. AI legislation

The action plan does not contain much detail on any plans by the UK government for AI legislation. From the limited content included, there are no significant changes. It recommends reforming the UK’s text and data mining regime, but the government had already launched a copyright and AI consultation a month earlier to address this. The pro-innovation approach to regulation, introduced by the Sunak-led government, is supported by Clifford in the action plan. There is nothing to indicate a change to Labour’s pledge in the King’s Speech to introduce narrow legislation which places requirements “on those working to develop the most powerful AI models”. In its response, the government promises to consult on proposed legislation to “protect UK citizens and assets from the critical risks associated with the next generation of the most powerful AI models”. However, Clifford stresses that any regulation should be done “without blocking the path towards AI’s transformative potential”.

Clearly, the government’s main priority is to leverage AI to achieve economic growth. This starkly contrasts with the EU’s legislative approach, where the main objective of the EU AI Act is to protect EU citizens from the risks that AI poses to their safety and fundamental rights.

Regulatory certainty is essential for economic growth. Businesses are less likely to invest in AI development in the UK without greater clarity on what the UK’s AI regulatory framework will look like. AI regulation is advancing rapidly across the world. If the UK government wants to achieve its aims of making Britain a world leader in AI, it must clarify the detail of any regulatory framework sooner rather than later.

3. AI safety

The Sunak-led government is often regarded as having made significant strides in AI safety. It sought to position the UK as a leader in AI safety by setting up the first AI Safety Institute and hosting the first global AI safety summit. 

The action plan recommends continuing to support the UK’s AI Safety Institute, and in his speech on 13 January, Keir Starmer stated that “the last government was right to establish the world’s leading AI Safety Institute… and we will build on that.” However, he then went on to say that “we shouldn’t just focus on safety and leave the rest to the market”.

The overall emphasis in the action plan indicates a shift away from AI safety issues, towards a greater focus on turbocharging the UK’s use and development of AI and joining the global AI arms race. 

E. What’s next for UK AI regulation?

Based on recent comments from Peter Kyle, the UK’s Technology Secretary, at the Financial Times’ Future of AI Summit, we anticipate that a first draft of AI legislation will be presented to the UK’s Parliament within the first half of 2025.  
