Artificial intelligence in civil aviation: pre-flight checks under way in the UK

Written By

Simon Phippard

Of Counsel
UK

I am Of Counsel in our Aviation & Aerospace practice in London. I bring more than 30 years' commercial and litigious experience to a diverse array of aerospace issues.

Since our previous legal update (here), the UK Civil Aviation Authority ("CAA") set out in November 2024 its comprehensive approach to regulating artificial intelligence ("AI"), following on from its Strategy for AI paper published in August 2024. This article provides a high-level overview of that strategy.

What’s the vision?

Recognising the opportunities afforded by the application of AI, the CAA’s ultimate vision is to “enhance aerospace efficiency and sustainability while ensuring safety, security, consumer protection, and environmental sustainability through proportionate governance”.

How will this be achieved?

The CAA’s strategy is two-pronged, focussing on Regulating AI in Aerospace (CAP3064A) and Using AI in the CAA (CAP3064B).

The CAA recognises the significance of AI and the opportunity it offers to automate complex tasks, and from an aerospace regulatory perspective it outlines two main challenges:

  1. Assuring the robustness of AI software – the use of complex algorithms and machine learning makes it more challenging to verify an AI system’s decision-making processes and to prove that it will operate safely and securely, particularly where continual machine learning means the system keeps evolving after deployment.
  2. People and AI working together – ensuring that humans and AI systems work together effectively and that adequate trust is established.

The CAA highlights the importance of predictability in safety-critical systems, and welcomes the fact that most contemplated applications are deterministic machine learning systems. There is little indication of how, if at all, non-deterministic systems would be managed.

Under the regulatory part of the strategy, an AI Strategy & Portfolio Hub will be established to “provide centralised oversight and expertise, manage the AI Portfolio, implement the UK's AI regulatory principles, and collaborate across CAA teams to integrate AI into internal operations”. Given the range of potential applications of AI, the CAA will use its AI Portfolio to assess, and keep under ongoing review, the areas which require attention. As an initial assessment, the CAA has identified how AI may affect the eight critical elements of effective safety and security oversight identified by ICAO. These elements include primary aviation and security legislation, specific operating regulations, personnel and equipment certification and licensing, and surveillance.

It is interesting that the CAA does not consider it likely that primary aviation legislation will need to be amended immediately to take account of AI, although the increased autonomy which AI enables might require such change. The strategy paper comments on a number of implications of increased autonomy and promises ongoing engagement with the Law Commission consultation on the subject, on which we have commented: Aviation Autonomy – a New Legal Order?.

The CAA has also indicated that it is unlikely to certify specific AI algorithms or models directly; instead, it envisages that its regulatory role will extend to the software systems that contain AI and their intended functions.

As regards the use of AI within the CAA, the emphasis of this part of the strategy is a ‘human-first’ approach. The aim is to ensure that CAA personnel are given adequate support to use AI tools safely and appropriately, so that human effectiveness is improved rather than humans simply being replaced by technology. This includes updating and developing comprehensive policies for AI governance, with the ultimate objective of using AI to “enable colleagues, customers, and consumers to work, collaborate, learn and operate more effectively”.

When will this be achieved?

This will be subject to on-going consultation, monitoring and implementation. 

The CAA intends, through the AI Portfolio, to deliver the aerospace regulation part of the strategy between 2024 and 2028, based on the following three pillars:

Pillar 1 – Horizon Scanning and Market Insights 

Pillar 2 – Defining the Strategic Directions

Pillar 3 – Building CAA Capability

Under both limbs of the strategy, the CAA’s initial plan for AI entails:

  1. Pre-Flight Checks (to Summer 2025) – here, the CAA will start its AI journey with ‘pre-flight checks’ to deliver the initial strategy and develop business plans; and
  2. Taxi and Take-off (to the end of 2026) – with the CAA intending to establish the AI Strategy & Portfolio Hub and initiate collaborative projects, conduct systematic reviews and develop rulemaking mandates.

Current examples of the CAA’s response to AI

Another key area of the CAA's attention for AI implementation is aviation infrastructure. The driving forces behind this are the desire to enhance airport efficiency and to boost passenger satisfaction. In its Strategy for AI (CAP3019), the CAA identified that AI could be used to increase efficiency in baggage handling and passenger movement. The CAA also noted that AI could help improve passenger safety through advanced technologies such as facial recognition (which would itself require consideration of GDPR compliance). Developments are already underway: the CAA has confirmed early trials of integrating AI into ground operations such as aircraft maintenance, which is also anticipated to enhance sustainability. However, full integration of AI in this field is not expected to be completed until 2027.

According to statistical analysis conducted by the CAA in 2023, the most common application of AI in aviation was airspace management, which accounted for 179 global patents in 2023 alone. The CAA is already making progress in increasing the efficiency of airspace management systems and improving risk monitoring by harnessing algorithmic frameworks to predict weather conditions. It is doing so by engaging with aerodromes to discuss how performance-based navigation can be applied based on the individual performance metrics of each airport. The suggested timeline for implementing AI in this area of aviation is 2030; however, this is not definitive and may change with market conditions.

What next? Watch this space

Given the evolving AI landscape, close collaboration within the sector is important. The CAA has already noted its intention to continue developing its engagement with stakeholders, including national aviation authorities, non-governmental organisations and other UK sector regulators. The CAA also notes that the UK will not necessarily proceed in isolation: given the international nature of both aviation and AI technology, concepts developed in other jurisdictions, such as the EU AI Act, are likely to affect the development of AI applications in the UK. Whilst the CAA has noted that it will not develop standards directly, it will actively participate in and contribute to relevant industry standards. It will also be necessary to keep an eye on emerging risks which the use of AI or automation may introduce, but which are not yet apparent or regulated.

For further information, please don’t hesitate to contact the authors of this article, Simon Phippard, Carey Tang and Charlotte Edmondson.

 
