To assess the impact of the newly published AI Opportunities Action Plan on UK AI regulation, we should first take a brief look at the current state of UK AI regulation, what the AI Opportunities Action Plan is, and how the UK government has received it.
To skip straight to the impact, jump ahead to our key insights below.
The previous Conservative government proposed a “pro-innovation” approach to AI regulation, under which existing regulators would regulate AI within their sectors in accordance with five principles. It asked each of the key regulators to publish a strategic approach to AI for its sector, demonstrating how it was responding to AI risks and opportunities. There were no short-term plans to introduce binding legislation.
On coming to power in July 2024, the new Labour government pledged in the King’s Speech to introduce narrow, binding legislation placing requirements on developers of the most powerful AI models. It was unclear whether it would continue its predecessor’s sector-based regulatory approach.
Shortly after taking office, the government commissioned Matt Clifford, a tech entrepreneur and Chair of the Advanced Research and Invention Agency, to conduct an independent study on how the UK can harness the opportunities presented by AI.
A report of Clifford’s findings, the “AI Opportunities Action Plan”, was published on Monday 13 January 2025.
It sets out a plan for the UK to foster a flourishing AI ecosystem across the entire AI stack, from developing AI to using it, with the aim of driving economic growth.
To achieve this, the action plan groups its recommendations under three objectives:

1. laying the foundations to enable AI;
2. changing lives by embracing AI; and
3. securing the UK’s future with homegrown AI.
The action plan then sets out no fewer than 50 practical recommendations that the government should enact in order to achieve these three objectives.
On the same day that Clifford’s report was released, the UK government published its “government response” to the AI Opportunities Action Plan. In the foreword, the UK Prime Minister, Sir Keir Starmer, states, “I am happy to endorse it and take the recommendations forward”.
The government response paper states that the government has agreed to 48 of the 50 practical recommendations and partially agreed to the remaining two. It provides details on how the government will implement each recommendation, along with a delivery date for each.
Keir Starmer also made a speech on the action plan and authored an opinion piece in the Financial Times, publicising his support for it.
Here are our three key insights.
The most noteworthy impact is on the UK’s regulators: the focus has shifted from regulating AI to promoting it.
In the action plan, Matt Clifford recommends that when government departments provide strategic guidance to the regulators they sponsor, they should emphasise the importance of “enabling safe AI innovation”. The aim is for all regulators to support innovation as part of their statutory duty to promote economic growth (known as the “Growth Duty”), encouraging and enabling the safe adoption of AI within their sectors rather than acting as blockers.
Clifford also recommends that “all regulators [should be required to] publish annually how they have enabled innovation and growth driven by AI in their sector”, using “transparent metrics”. The government has agreed, and we can expect a report on the progress of “regulators with significant AI activities” by summer 2025.
Clifford further advises that, to accelerate AI adoption in priority sectors, the government should work with the relevant regulators to implement regulatory sandboxes and other pro-innovation initiatives. This accords with the previous Conservative government’s proposals to use AI regulatory sandboxes. The Labour government will release a progress report on this by summer 2025.
Clifford suggests that if the government has evidence that sector regulators are not doing enough to promote AI innovation, perhaps due to a lack of incentive, then it “should consider more radical changes to our regulatory model for AI”. For example, the government could “empower a central body” with a mandate and a higher risk tolerance to promote innovation. Where AI products do not comply with the generally applicable regulations in a given sector, this central body could “override” those regulations by issuing pilot sandbox licences for the products and taking on liability for the risks involved.
All this marks a notable shift in the role that sector regulators are expected to play in the UK AI regulatory landscape. The Labour government’s priority is for regulators to actively promote AI innovation within their sectors, rather than to focus on conducting “detailed risk analysis and enforcement activities within their areas of expertise”, as the Sunak-led government had envisaged. Regulators often hold enforcement powers to ensure adherence to the standards and codes of practice in their sector. If a regulator makes decisions that protect citizens’ rights but do not “promote innovation at the scale of the government’s ambition”, it seems the government could override that regulator by empowering a central body instead. This could weaken a regulator’s effectiveness in protecting rights and regulating its sector.
The action plan contains little detail on the UK government’s plans for AI legislation, and what it does include signals no significant changes. It recommends reforming the UK’s text and data mining regime, but the government had already launched a copyright and AI consultation a month earlier to address this. Clifford endorses the pro-innovation approach to regulation introduced by the Sunak-led government. There is nothing to indicate a change to Labour’s pledge in the King’s Speech to introduce narrow legislation placing requirements “on those working to develop the most powerful AI models”. In its response, the government promises to consult on proposed legislation to “protect UK citizens and assets from the critical risks associated with the next generation of the most powerful AI models”. However, Clifford stresses that any regulation should be introduced “without blocking the path towards AI’s transformative potential”.
Clearly, the government’s main priority is to leverage AI to achieve economic growth. This contrasts starkly with the EU’s legislative approach, where the main objective of the EU AI Act is to protect EU citizens from the risks AI poses to their safety and fundamental rights.
Regulatory certainty is essential for economic growth: businesses are less likely to invest in AI development in the UK without greater clarity on what the UK’s AI regulatory framework will look like. AI regulation is advancing rapidly across the world. If the UK government wants to achieve its aim of making Britain a world leader in AI, it must clarify the detail of any regulatory framework sooner rather than later.
The Sunak-led government is often regarded as having made significant strides in AI safety. It sought to position the UK as a leader in AI safety by setting up the first AI Safety Institute and hosting the first global AI safety summit.
The action plan recommends continuing to support the UK’s AI Safety Institute, and in his speech on 13 January, Keir Starmer stated that “the last government was right to establish the world’s leading AI Safety Institute… and we will build on that.” However, he went on to say that “we shouldn’t just focus on safety and leave the rest to the market”.
The overall emphasis in the action plan indicates a shift away from AI safety issues, towards a greater focus on turbocharging the UK’s use and development of AI and joining the global AI arms race.
Based on recent comments from Peter Kyle, the UK’s Technology Secretary, at the Financial Times’ Future of AI Summit, we anticipate that a first draft of AI legislation will be presented to the UK Parliament within the first half of 2025.