Does the UK Online Safety Act regulate AI?

In the rush to put AI-specific legislation on the statute books, it is easy to overlook the extent to which existing laws are capable of reading on to AI. ChatGPT, to its credit, is more aware of this than many human beings. Asked (before the official publication of the EU AI Act) whether it is regulated by law, it gave me this answer:

“ChatGPT, like other AI technologies, is subject to a variety of legal and regulatory frameworks, though it may not be regulated by specific laws targeting AI directly. Here are key aspects of how ChatGPT is regulated by law.”

It went on to enumerate half a dozen kinds of legal regulation: data protection and privacy, consumer protection, intellectual property, content and speech regulations, AI-specific regulations, and ethical guidelines and best practices (the last of which ChatGPT fairly acknowledged are not actual laws). It concluded, accurately:

“While there isn't a singular global law regulating ChatGPT, it operates within a complex web of existing legal frameworks, guidelines, and emerging regulations that collectively govern its development and use.”

A recent addition to the complex web of existing legal frameworks - one not specifically cited by ChatGPT - is the UK Online Safety Act 2023 (OSA). The OSA is capable of mapping on to AI in a variety of ways. Here are some of them.

User content

The OSA seeks to harness the control that user-to-user (U2U) platforms (social media, discussion forums and so on) can exercise over their users in order to regulate, indirectly, some kinds of user-posted content. The two main kinds are illegal content and content harmful to children.

The OSA does this by placing a variety of duties on a service provider, such as (for illegality) to take proportionate measures relating to the design or operation of the service to prevent individuals encountering ‘priority’ illegal content; and to use proportionate systems and processes designed to swiftly take down a broader range of illegal content upon becoming aware of it. Ofcom, the designated OSA regulator, is currently considering responses to its consultation paper of over 1,700 pages on how to implement the illegality duties.

Ofcom has more recently issued another lengthy consultation paper on the OSA duties in relation to protection of children.

The OSA duties are technology neutral. So if a user makes use of a generative AI tool to create a post, the platform’s duties will apply just as if the user had personally drafted or drawn it. That is put beyond any doubt by S.55(4)(a) of the Act: “the reference to content generated, uploaded or shared by a user includes content generated, uploaded or shared by means of software or an automated tool applied by the user.”

The same will often be the case for an underlying offence to which the platform’s illegality duties relate. Thus Ofcom’s draft Illegal Content Judgements Guidance states:

“A4.40 Where generative AI has been used to create an indecent or a prohibited image of a child this should be considered illegal content... . Discussion of how to use generative AI for this purpose may also be illegal content if it amounts to encouraging or assisting the creation of such an image.”

User bots

What if the user itself is not a human being, but an AI-driven bot? The drafters of the OSA have thought of that, too. S.55(4)(b) says that: “a bot or other automated tool is to be regarded as a user of a service if-

  1. the functions of the bot or tool include interacting with user-generated content, and
  2. the bot or tool is not controlled by or on behalf of the provider of the service.”

The Act does not define “bot”, but in its general definitions section provides that “automated tool” includes “bot”.

Risk mitigation

In addition to the prevention and takedown duties, a third category of illegality duty requires a U2U service provider to take proportionate measures, relating to the service’s design or operation, to effectively mitigate and manage risks of harm to individuals arising in various ways from illegal content and illegal use of the service. Harm, for this purpose, means physical or psychological harm.

In relation to all three categories of illegality duty, the proportionate measures relating to the design, operation and use of the service include, among other things, measures in the area of functionalities, algorithms and other features. This has the potential to impinge on AI systems used by service providers in the operation of their services.

There are broadly corresponding prevention and risk mitigation and management duties in relation to content harmful to children.

Transparency

The OSA imposes duties to include in terms of service (U2U services) or in a publicly available statement (search services) provisions giving information about any proactive technology used by a service for the purpose of compliance with its illegality or protection of children duties. The information must include the kind of technology, when it is used, and how it works. The provisions must be clear and accessible.

There is a long, complicated definition of proactive technology. In summary, and subject to some exclusions, it covers content identification technology, user profiling technology and behaviour identification technology. In the Act’s only express reference to AI, each of those categories includes technologies that utilise artificial intelligence or machine learning.

There is potential overlap here with data protection law on solely automated decision making. The ICO commented in its February 2024 Content Moderation Guidance:

“Complying with [the OSA] duty may help you provide the transparency to users that UK GDPR requires. However, you must provide the necessary transparency for data protection law.”

AI-powered search

Some of the OSA’s duties apply to search engines. Again, the Act is technology-neutral: if a search engine service within the scope of the Act makes use of AI to provide the service, then the AI-driven aspects of the service will be in scope along with the rest.

AI to fulfil OSA duties

The Act contemplates that service providers may use technology to fulfil safety duties imposed by the Act. For instance, Section 192 on provider judgements about content (such as whether the content is illegal) envisages that such judgements may be made “by human moderators, by means of automated systems or processes or by means of automated systems or processes together with human moderators”.
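
By way of illustration only, the sketch below shows the kind of hybrid arrangement that Section 192 contemplates: an automated classifier makes an initial judgement about content, and uncertain cases are escalated to human moderators. Everything in it (the function names, thresholds and keyword heuristic) is hypothetical and is not drawn from the Act or from Ofcom’s codes or guidance.

```python
# Illustrative only: a minimal hybrid content-judgement flow of the kind
# contemplated by Section 192 OSA, combining an automated classifier with
# human review. All names, thresholds and the keyword heuristic are
# hypothetical; they are not taken from the Act or from Ofcom's guidance.
from dataclasses import dataclass


@dataclass
class Judgement:
    decision: str   # "take_down", "human_review" or "leave_up"
    score: float    # classifier's estimated probability that the content is illegal


def classify(text: str) -> float:
    """Hypothetical stand-in for an ML model: a crude keyword heuristic."""
    flagged_terms = ["example illegal term"]  # placeholder, not a real policy list
    return 0.9 if any(term in text.lower() for term in flagged_terms) else 0.1


def judge(text: str, take_down_at: float = 0.95, review_at: float = 0.5) -> Judgement:
    """Automated judgement, with escalation to human moderators for uncertain cases."""
    score = classify(text)
    if score >= take_down_at:      # high confidence: automated take-down
        return Judgement("take_down", score)
    if score >= review_at:         # uncertain: queue for a human moderator
        return Judgement("human_review", score)
    return Judgement("leave_up", score)
```

Any real system would of course be far more elaborate, and the choice of thresholds and escalation routes would itself feed into the question of whether the provider’s systems and processes are proportionate.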

Ofcom has the task of preparing Codes of Practice in relation to the OSA duties, and accompanying Guidance for many of them. Ofcom has considerable discretion as to what systems, processes and measures to recommend in its Codes of Practice, but it has to be satisfied that they are proportionate. Ofcom’s recommendations have significant force, since compliance with a Code of Practice is deemed to fulfil the Act’s safety duties.

The question of whether to recommend AI technology for any purpose has already arisen. Ofcom’s November 2023 Illegal Harms consultation stated:

“We recognise that identifying previously unknown content is an important part of many services’ processes for detecting and removing illegal content. We do not yet have the evidence base to set out clear proposals regarding the deployment of technologies such as machine learning or artificial intelligence to detect previously unknown content at this time. As our knowledge base develops, we will consider whether to include other recommendations on automated content classification in future iterations of our Codes.”

Ofcom announced in April 2024 that there may be a further consultation on proactive technologies, as it gathers more evidence. Ofcom is:

“planning an additional consultation later this year on how automated detection tools, including AI, can be used to mitigate the risk of illegal harms and content most harmful to children - including previously undetected child sexual abuse material and content encouraging suicide and self-harm. These proposals will draw on our growing technical evidence base and build on the existing measures set out in our illegal harms draft Codes of Practice.”

AI also features in Ofcom’s recently published Children’s Harms Online consultation in the context of age assurance, as well as in relation to content harmful to children.

Mandating AI

One of the most controversial features of the OSA is the S.121 power vested in Ofcom to direct a service provider (including a private messaging service) to use technology accredited by Ofcom to scan for child sexual exploitation and abuse (CSEA) material, and to direct public platforms to scan for terrorism content. We are a long way from that power coming into force, let alone being used, but in principle accredited technology could include AI.

Final word

The Online Safety Act touches AI at a number of points. Recommending or requiring the use of AI systems, whether in Codes of Practice or through an accredited technology notice, is an especially sensitive area, since any lack of accuracy impinges directly on the fundamental rights of users, with the concomitant risk that such measures would be held to be non-compliant with the European Convention on Human Rights.
