AI regulation in the UK – where are we now?

Introduction – Kate Deniston

On 29 March 2023, the UK Government published a white paper entitled “A pro-innovation approach to AI regulation” (the “White Paper”). The White Paper set out the Government’s vision and proposals for implementing a “proportionate and pro-innovation regulatory framework” for AI in the UK.

At its core, the White Paper outlined five “cross-cutting principles” that would underpin the UK’s AI regulatory approach: 

  1. Safety, security and robustness;
  2. Appropriate transparency and explainability;
  3. Fairness;
  4. Accountability and governance; and
  5. Contestability and redress.

The White Paper proposed that the principles would be issued on a non-statutory basis and implemented by existing regulators, who would interpret and apply them to AI within their remits. The intention was that this approach would draw on regulators’ domain-specific expertise, tailoring the implementation of the principles to the specific contexts in which AI is used. The Government proposed that it would later introduce a statutory duty requiring regulators to have due regard to the principles.

The Government then held a consultation period in which stakeholders could submit responses to the proposed strategy. On 6 February 2024, the Government published its response to the feedback received during the consultation; the approach remained largely the same. As part of its response, the Government asked key regulators to publish an update outlining their strategic approach to AI by 30 April 2024, to demonstrate how they were responding to AI risks and opportunities.

Over a year on from the publication of the White Paper, what is the current state of play for AI regulation in the UK, and how have key regulators reacted to the Government’s proposals? In this note, we look at the progress made by some of the key regulators in regulating AI in the UK.

The UK’s approach to the regulation of AI is now dependent on the outcome of the general election on 4 July 2024. To date, the Labour Party has given little away about its proposed approach to AI regulation, but its strategy is expected to be announced soon.

1. Information Commissioner’s Office (“ICO”) – Katerina Tassi 

On 1 May 2024, the ICO - the UK’s data protection regulator - published its strategic approach to regulating AI. The ICO welcomes the Government’s approach of building on the strengths of existing regulators and takes the view that AI risks do not require new legislation, but rather appropriate resourcing and empowerment of existing regulators.

The ICO’s strategy emphasises the risk-based approach of data protection law: in the ICO’s view, the flexibility of this approach is well suited to AI technology, which is probabilistic in nature. In this vein, the document highlights in particular high-risk AI applications, facial recognition technology and AI affecting children or other vulnerable groups.

The ICO notes that the principles set out in the White Paper mirror, to a large extent, the principles set out in UK data protection law, and explains how its existing work maps onto the Government’s guidance.

The ICO has been active in this area in recent years, publishing a range of AI-related guidance, providing advice and support to organisations, and taking regulatory action. It will continue building on this work over the coming months, as AI - and its application in biometric technologies - is a focus area for the ICO in 2024/25, along with children’s privacy and online tracking.

The strategy also highlights the ICO’s work with other regulators (e.g. through the Digital Regulation Cooperation Forum (“DRCF”) and the Regulators and AI Working Group), the Government, standards bodies and international partners in AI-related areas.

2. Competition and Markets Authority (“CMA”) - Saskia King and Aimee Guzinska-Bowley

The CMA largely endorsed the Government’s proposed principles-based approach to AI regulation in its stakeholder response to the consultation on the White Paper, emphasising its commitment to a pro-innovation approach. The CMA considers this framework complementary to its existing competition and consumer protection functions. Its plan focuses on preventing, and enforcing against, the negative impacts of AI in these areas, such as large incumbents leveraging AI to strengthen their dominance, or AI systems misleading consumers or lacking transparency.

The CMA has been active in this area, publishing its initial report on AI Foundation Models (“AI FMs”) in September 2023, followed by an updated paper on AI FMs and a Technical Update Report (both published in April 2024) identifying three key AI risks and developing AI principles to address them. In April 2024 the CMA also published its AI strategic update, expressing its intent to step up its use of merger control to examine whether AI FM partnerships fall within the current merger rules and, if so, whether they give rise to competition concerns. Recently, the CMA launched its first merger inquiry into an AI FM partnership.

The key elements of the CMA’s plan include:

  1. Building expertise: the CMA is developing its capabilities in AI to effectively assess competition and consumer concerns, for example through its Data, Technology and Analytics (“DaTA”) unit, the Digital Markets Unit (“DMU”), and horizon scanning.
  2. Collaboration: the CMA is working with other regulators, including to provide joint regulatory advice to innovators through the DRCF AI and Digital Hub pilot, and seeking feedback from stakeholders to research and develop best AI practices. 
  3. Enforcement: the CMA has signalled its willingness to use existing competition and consumer law, in conjunction with its strengthened powers under the Digital Markets, Competition and Consumers (“DMCC”) Bill, to investigate and potentially take action against companies whose AI practices raise concerns, including by requiring remedies or imposing significant fines.

Companies most affected by the CMA’s approach will likely be those developing or deploying AI systems that interact with consumers or have the potential to distort competition, particularly tech giants, as we have seen from recent CMA investigations.

3. Financial Conduct Authority (“FCA”) - Gavin Punia and Tom Hepplewhite 

On 22 April 2024, the FCA published its AI update, which welcomes the Government’s pro-innovation strategy on AI and outlines the FCA’s roles, objectives and plans for AI for the next 12 months. 

For each of the Government’s five principles for regulating AI, the FCA highlights the relevance of its existing regulations and guidance. For instance, in relation to fairness, transparency and explainability, the FCA points in particular to the Consumer Duty obligations. For accountability and governance, it points to its Principles for Businesses, Threshold Conditions and its conduct of business sourcebooks.

The FCA sees beneficial innovation as a vital component of effective competition, and is taking a proactive approach to understanding emerging technologies. Over the next 12 months, the FCA will:

  1. Collaborate with the Payment Systems Regulator (“PSR”), the Bank of England, regulated firms, civil society, academia and international peers to further its understanding of AI deployment in UK financial markets and to build consensus on best practice and potential future regulatory work. The update also mentions further research into deepfakes and simulated content as part of the DRCF.
  2. Develop the pilot AI and Digital Hub with DRCF member regulators, and continue operating its current FCA Innovation Hub to help firms launch innovative products/services to consumers and navigate regulatory requirements.
  3. Invest in more AI technologies. The FCA is not just regulating AI but is also using AI to assist its own regulatory work. The FCA currently uses web scraping and social media tools that proactively monitor markets and detect potential scam websites. The FCA’s Advanced Analytics unit is also using AI to develop additional tools to protect consumers and markets.

4. Office of Communications (“Ofcom”) - Matt Buckwell

Ofcom’s White Paper response recognised that the UK is at a “tipping point” for AI and set out its views on the implications of AI in its sectors. On 26 March 2024, Ofcom published its Strategic approach to AI.

Ofcom’s remit covers a wide array of sectors, including communications, online safety, TV and radio broadcasting, and other media such as video sharing platforms. As such, Ofcom has set out three cross-cutting risks on which it is focussing its AI work:

  1. Synthetic media;
  2. Personalisation; and
  3. Security and resilience.

With these risks in mind, Ofcom’s planned AI work for 2024/25 identifies a number of areas of focus, including:

  1. Online Safety – using AI in risk assessments and content moderation as well as research on vulnerabilities in AI models, the merits of synthetic media detection tools and automated content classifiers.
  2. Telecoms - use of AI in fraud and scams, vendor supply chain risks, AI standards and monitoring the impact of AI on the market.
  3. Broadcasting – guidance to broadcasters on AI, the impact of AI in recommender systems on plurality and sustainability, as well as the impact on public service broadcasting.

5. Office of Gas and Electricity Markets (“OFGEM”) - Michael Rudd and Kathryn Parker

Following the Government’s request to outline its strategic approach to regulating AI, OFGEM responded in two ways: it issued a call for input on the safe use of AI in the energy sector, which ran from 4 April to 7 May 2024, and it published a strategic approach to robust AI regulation in the energy sector, based on the White Paper’s five principles.

OFGEM concluded that while existing regulation is sufficient, additional guidance on risk-based AI use in the energy sector would be beneficial. Views on the proposals outlined in the call for input were requested from various stakeholders, including energy organisations, AI developers, consumer groups, charities, academia, and others working on AI policy.

OFGEM aims to develop an outcome-based approach to AI regulation that is proportionate and based on managing risk, rather than setting prescriptive rules. A suggested risk framework was presented to guide ‘dutyholders’, such as energy sector licensees, and other organisations contemplating AI adoption, towards proportionate actions that prevent or lessen the likelihood of failures.

Despite OFGEM’s reassurance about existing regulation, considerable effort is still needed to address the legal and regulatory complexities relating to AI collusion, liability, the AI supply chain, and sustainability.

6. Civil Aviation Authority (“CAA”) - Simon Phippard and Carey Tang

In January 2024, the CAA introduced its Strategy for AI, designed to set a baseline of terminology for AI, automation and autonomy, and to ensure a level and transparent conversation with innovators. In February 2024, Building Trust in AI was published, setting out five principles for AI and autonomy (almost identical to those in the White Paper).

Both documents were branded as part of “The CAA’s Strategy for AI”, but the new strategy itself will be published in Summer 2024. To that end, the CAA has been in the process of gathering information and feedback through a survey, which closed in March 2024.

There has been considerable focus, especially in the ‘novel aerospace’ part of the aviation sector, on the use of AI as a means to higher levels of automation. Consistent with the White Paper, the CAA identifies the autonomy and adaptability of AI systems as a challenge for regulating aviation safety and security, because of the impact on human control, potential conflict with human intent, and the difficulty of explaining why a certain course was taken. In time, as more parts of the aviation industry adopt AI technology, it will not just be drone and eVTOL operators but most designers, manufacturers and operators of aircraft, their service providers, airports and air traffic control agencies that will need to be concerned with the regulation of AI in aerospace.

7. Equality and Human Rights Commission (“EHRC”) – Roger Bickerstaff

The EHRC has commented that significantly more funding for regulators is urgently needed to manage the potential impact of this rapidly advancing technology.

The EHRC has identified AI as one of its strategic priorities. It will prioritise a small number of important and strategic issues, such as the Public Sector Equality Duty. It has also developed strong relationships with other regulators, particularly through the DRCF. 

In 2024–25, the EHRC will focus predominantly on reducing digital exclusion, the use of AI in recruitment practices, developing solutions to address bias and discrimination in AI systems, and police use of facial recognition technology (“FRT”). The EHRC will also partner with the Centre for Data Ethics and Innovation (“CDEI”) on fairness issues.

The EHRC has a wide-ranging remit that intersects with many sectors, so many organisations will be affected by its plans. It has a clear current focus on FRT; organisations developing and using FRT should monitor the EHRC closely.

The EHRC has begun to implement its plans. It is seeking to influence other regulators, particularly around recognition of the fairness principle, and has produced guidance, such as on the Public Sector Equality Duty.

The EHRC is determined to ensure equality and human rights are central to the development and use of AI. It is working with regulators to explore fairness across regulatory remits, and the principles will be incorporated into the EHRC’s compliance and enforcement work.

Next steps - Kate Deniston

Over the past year, the key regulators have begun to develop their strategies for applying the principles and regulating AI, as initially proposed in the White Paper. This has taken longer than expected, probably due to the complexity of dealing with the issue on a sectoral, rather than a central, basis. The UK now lags behind many other countries in its approach to AI regulation.

Looking ahead, over the next year we expect to see further research, collaboration, development and implementation of sectoral AI regulatory guidance, but little real regulation. We may see investigations and inquiries by regulators into specific AI practices and players. 

The Government is collaborating with regulators and intends to release further updates on its cross-sectoral guidance by summer 2024. 

It is worth noting again, as set out in our introductory comments, that the approach to regulating AI as proposed in the White Paper is dependent on the results of the upcoming UK general election on 4 July 2024 and could change significantly.

For further details and information on any topics covered in this article, please contact us. 
