Procurement of AI Solutions – Legal and commercial developments, and considerations for contracting authorities

Written By

Roger Bickerstaff

Partner
UK

With over 25 years' experience as a leading technology lawyer and now based in both our London and San Francisco offices, I have extensive experience advising on tech infrastructure and outsourcing projects.

Henna Malik

Associate
UK

I am an associate in Bird & Bird's public sector projects team within the firm's Commercial department, supporting clients in the public and private sectors on commercial transactions and relationships.

This article discusses legislative and regulatory developments relating to the procurement of artificial intelligence (“AI”) solutions in the UK public sector, recent commercial developments, risks posed by AI technologies, and practical steps that contracting authorities can take to mitigate these risks.

Legislative Landscape

2020 – International and UK-based guidance

WEF guidance and UK Guidelines published

In June 2020, the World Economic Forum Centre for the Fourth Industrial Revolution Global Network published guidance and a corresponding toolkit, “AI Procurement in a Box”, for the procurement of AI solutions in the public sector. The UK Government, which had co-created this guidance, simultaneously published UK-specific guidelines (the “Guidelines”), which included a section with AI-specific considerations for contracting authorities.

The UK Guidelines and their AI-specific considerations

At the time, the Guidelines were published as part of the UK Government’s strategy to become a global leader in AI and data, and were intended to support contracting authorities to “engage effectively with innovative suppliers or to develop the right AI-specific criteria and terms and conditions that allow effective and ethical deployment of AI technologies”. The Guidelines largely mirrored the principles and considerations outlined in the WEF’s version of the guidance. However, they were a useful supplement, as the UK’s current public procurement legislation is silent on the use of AI technologies.

The Guidelines were split into two sections:

  1. Top 10 considerations; and
  2. AI-specific considerations within the procurement process.

These top 10 considerations were:

“1. Include your procurement within a strategy for AI adoption.
2. Make decisions in a diverse multidisciplinary team.
3. Conduct a data assessment before starting your procurement process.
4. Assess the benefits and risks of AI deployment.
5. Engaging effectively with the market from the outset.
6. Establish the right route to market and focus on the challenge rather than a specific solution.
7. Develop a plan for governance and information assurance.
8. Avoid Black Box algorithms and vendor lock in.
9. Focus on the need to address technical and ethical limitations of AI deployment during your evaluation.
10. Consider the lifecycle management of the AI system.”

The Guidelines subsequently set out AI-specific considerations at each stage of the procurement process. These were:

  1. Preparation and Planning
    This subsection focused on building multidisciplinary procurement teams, putting in place appropriate data assessments and governance processes, conducting AI impact assessments, conducting pre-market engagement, and establishing the most appropriate procurement approach and vehicle for the procurement.

  2. Publication
    Notices and other procurement documents, in particular the invitation to tender (ITT), should include the following information:
    - a clear problem statement (or, what challenge the solution is intended to address);
    - the results of data discovery, to highlight your requirements and the limitations of your data;
    - the requirement to understand any given supplier’s approach, e.g. “drafting evaluation questions that give you information about the algorithms and models”;
    - requirements that focus on highly explainable outputs to avoid vendor lock-in and ‘Black Box’ AI systems; and
    - appropriate contractual and commercial strategies to establish intellectual property ownership, ongoing support requirements such as knowledge transfer, and periodic review of the model.

  3. Selection, Evaluation and Award
    The selection, evaluation and award of AI-related contracts should allow for the evolution of the AI system, and this should be built into the relevant stages of the procurement. Contracting authorities should also have appropriate processes in place to ensure accountability over the outputs of algorithms, “avoiding outputs that could be unfairly discriminatory”, and should consider regular model stress-testing, reproducibility, acceptable performance parameters, and security.

  4. Contract Implementation and Ongoing Management
    This section addressed information gathering across all stages of the project’s lifecycle (modelling, training, testing, verifying, and implementation) and having in place:
    - process-based governance frameworks which support end-to-end auditability;
    - regular model testing;
    - adequate knowledge transfer and training; and
    - appropriate end-of-life processes. 

Commercial and Legislative Updates

2022 – Launch of Generative AI, such as ChatGPT

The Guidelines noted that they “mostly refer(red) to the use of machine learning. Machine learning is a subset of AI, and refers to the development of digital systems that improve their performance on a given task over time through experience. Machine learning is the most widely-used form of AI, and has contributed to innovations like self-driving cars, speech recognition and machine translation.” Perhaps unsurprisingly, the Guidelines did not anticipate, or account for, the rapid, large-scale developments and innovations in the field shortly thereafter, namely the launch of Generative AI solutions, such as the introduction of ChatGPT in November 2022.

Generative AI, according to the Generative AI Framework for HMG guidance published in January 2024 (discussed further below), is “a form of Artificial Intelligence (AI) - a broad field which aims to use computers to emulate the products of human intelligence - or to build capabilities which go beyond human intelligence.

Unlike previous forms of AI, generative AI produces new content, such as images, text or music. It is this capability, particularly the ability to generate language, which has captured the public imagination, and creates potential applications within government.”

This subsequently led to multiple publications and UK Government initiatives to understand, regulate and ensure the safe use of AI more generally, including, but not limited to:

  • Setting up a Frontier AI Taskforce (formerly the Foundational Model Taskforce) in April 2023;
  • Publishing a white paper, “A pro-innovation approach to AI regulation”, in March 2023;
  • Organising an AI Safety Summit in early November 2023; and
  • The Chancellor announcing, in the 2023 Autumn Statement in late November 2023, “a further £500 million in compute for AI over the next two financial years” to deliver an expansion of the AI Research Resource programme.

2023 – Developments in UK Procurement Law and the EU AI Act

The UK’s new Procurement Act 2023 (“the Act”) received Royal Assent on 26 October 2023 and will come into force in October 2024. The key objectives of the Act are to localise, increase the flexibility of, and consolidate the public procurement regime in the UK (other than Scotland, which has decided not to be subject to the new Act). While the Act does not deviate significantly from the current EU-based public procurement regime, it remains silent on the procurement of AI technologies, so in the case of AI procurement it will need to be read alongside current and future guidance. For further details and information on the new Procurement Act, see our recently published article here.

The EU AI Act is also in the process of implementation: political agreement was reached on 9 December 2023 and, following further procedures in the coming months, the Act is anticipated to apply from mid-2026. The EU AI Act (which will not apply in the UK, following the UK’s departure from the EU) introduces into EU member states a tiered, risk-based approach to the classification of different types of AI, ranging from systems used in facial recognition, to generative AI, to “simpler” systems which do not pose a risk of harm or discrimination. These categories (ranging from “prohibited” practices to minimal risk) carry corresponding transparency, reporting and management obligations intended to ensure that AI systems can be deployed safely and securely. The EU AI Act is nonetheless worth consideration for UK-based contracting authorities, as suppliers are likely to adapt their business practices to accommodate their EU customers, their supply chains and other entities within their organisations.

What AI-specific risks do contracting authorities face, and what can they do to mitigate them?

While the use of artificial intelligence in the public sector may carry specific risks, implementing the right safeguards does not necessarily call for novel strategies to be put in place. Indeed, guidance published in 2019 by the Alan Turing Institute, together with the Office for Artificial Intelligence, remains relevant. This guidance, “Managing your artificial intelligence project”, sets out the main risks posed by AI and the relevant mitigations to address them:

Managing risk in your AI project

  • Risk: Project shows signs of bias or discrimination
    Mitigation: Make sure your model is fair, explainable, and you have a process for monitoring unexpected or biased outputs.

  • Risk: Data use is not compliant with legislation, guidance or the government organisation’s public narrative
    Mitigation: Consult guidance on preparing your data for AI (i.e. preparing data for AI to ensure it is secure and unbiased).

  • Risk: Security protocols are not in place to make sure you maintain confidentiality and uphold data integrity
    Mitigation: Build a data catalogue to define the security protocols required.

  • Risk: You cannot access data or it is of poor quality
    Mitigation: Map the datasets you will use at an early stage both within and outside your government organisation. It’s then useful to assess the data against criteria for a combination of accuracy, completeness, uniqueness, relevancy, sufficiency, timeliness, representativeness, validity or consistency.

  • Risk: You cannot integrate the model
    Mitigation: Include engineers early in the building of the AI model to make sure any code developed is production-ready.

  • Risk: There is no accountability framework for the model
    Mitigation: Establish a clear responsibility record to define who has accountability for the different areas of the AI model (i.e. consideration of whether:
      • the models are achieving their purpose and business objectives;
      • there is a clear accountability framework for models in production;
      • there is a clear testing and monitoring framework in place;
      • your team has reviewed and validated the code;
      • the algorithms are robust, unbiased, fair and explainable; and
      • the project fits with how citizens and users expect their data to be used).


The Guidelines mentioned above are also transferable to newer types of AI, such as generative AI. One of the most recent sets of guidance, the “Generative AI Framework for HMG”, published in January 2024, provides a similar narrative. In a similar fashion to the Guidelines, it defines 10 principles to “guide the safe, responsible and effective use of generative AI in government organisations”:

“Principle 1: You know what generative AI is and what its limitations are.
Principle 2: You use generative AI lawfully, ethically and responsibly.
Principle 3: You know how to keep generative AI tools secure.
Principle 4: You have meaningful human control at the right stage.
Principle 5: You understand how to manage the full generative AI lifecycle.
Principle 6: You use the right tool for the job.
Principle 7: You are open and collaborative.
Principle 8: You work with commercial colleagues from the start.
Principle 9: You have the skills and expertise needed to build and use generative AI.
Principle 10: You use these principles alongside your organisation’s policies and have the right assurance in place.”

The framework also includes a section on “Buying generative AI”, in which the Guidelines are cited as providing “a summary of best practice when buying AI technologies in government”. This section largely reiterates principles and considerations from the Guidelines, but nonetheless notes the following practical recommendations:

  • “Engage your commercial colleagues from the outset.
  • Understand and make use of existing guidance.
  • Understand and make use of existing routes to market, including Frameworks, Dynamic Purchasing Systems and Memoranda of Understanding.
  • Specify clear requirements and plan your procurement carefully.
  • Seek support from your commercial colleagues to help navigate the evolving market, regulatory and policy landscape.
  • Ensure that your procurement is aligned to ethical principles.”

In summary, the UK Guidelines outlined at the beginning of this article should remain in the active contemplation of contracting authorities, alongside other risks that are not specific to AI-related procurement.

Concluding remarks

While there is a growing number of resources to help keep pace with developments in AI, alongside imminent legislative implementation across the UK (and internationally), the majority of these resources emphasise that contracting authorities should “treat AI technology like you would any other technological solution, and use it in appropriate situations. Every service is different, and many of the decisions you make about technology will be unique to your service.” In other words, it is crucial for contracting authorities to adopt a principled, risk-based approach in order to remain compliant with current and upcoming legislation, while ensuring that this approach is sufficiently flexible for policies and processes to be updated as AI projects evolve over the course of the contract.
