Impact of the EU's AI Act proposal on automated and autonomous vehicles

Written By

Dr. Nils Lölfing

Counsel
Germany

I am a counsel in our Technology & Communications Sector Group. I provide pragmatic and solution-driven advice to our clients on all issues around data and information technology law, with a strong focus on and experience with AI and machine learning projects.

AI is considered the main enabler for assisted, automated and autonomous driving. Although the EU's AI Act* proposal for high-risk AI systems does not – despite its cross-sectoral approach – directly apply to automated and autonomous vehicles (hereafter "AVs") and their AI components, several of the AI Act's accountability requirements will nevertheless reach AVs: they will be introduced in the future through delegated acts of the Commission under the type-approval framework for motor vehicles.

We will discuss below what this could mean for OEMs as well as traditional automotive and in-vehicle software suppliers. This article will also outline why the AI Act requirements for AVs have the potential to significantly impact the way AVs and their AI-based safety components are developed going forward.

1) Introduction

The EU's AI Act proposal is a game-changer for many industries developing or using so-called "high-risk" AI systems, as it introduces a range of new requirements for developers, providers and users of such AI systems.

Many businesses are therefore concerned about the AI Act, which appears ever more clearly on the horizon as the legislative process in Brussels moves forward. While it is widely recognised that AI requires some form of regulation to address the new risks that arise, in particular, from varying degrees of autonomous algorithmic behaviour, it is feared that the regulatory burden created by the AI Act will significantly stifle innovation. This would also thwart the strong potential associated with AI to deliver societal benefits and economic growth, as well as enhance global competitiveness.

At first sight, at least, this does not seem to hold true for the automotive industry. AVs, the main beneficiaries of AI technology in this sector, are not supposed to fall within the scope of the AI Act, at least not directly. The Commission (and, so far, the Parliament and Council as co-legislators in their discussions) followed the calls of industry associations such as the European Automobile Manufacturers' Association (ACEA), which advocated for a "sectoral and light-touch approach" to AI long before the Commission published its AI Act proposal.

Nevertheless, this does not mean that AI systems in AVs will remain untouched from a regulatory perspective. In fact, the AI Act in its current form requires the Commission to take certain requirements of the AI Act into account when adopting delegated acts under the sector-specific type-approval legislation for vehicles. Given the importance of AI for AVs, this has the potential to significantly impact the automotive industry and the development of AVs.

2) Importance of AI for AVs

AI is currently considered the main enabler for assisted, automated and autonomous driving. AVs are sophisticated "computers on wheels", given the large number of connected and complex AI systems deployed in modern vehicles. This includes not only genuine driving functions but also a broad range of other in-vehicle AI applications, such as vehicle safety functions, comfort functions, advanced driver-assistance systems, connectivity systems and infotainment systems. These interrelated AI systems play a crucial role in taking automated and autonomous driving to the next level.

To measure the level of automation or autonomy that AI enables in vehicles, the industry standard of the Society of Automotive Engineers (SAE) defines six levels of driving automation, scaling from zero (no automation) to five (fully autonomous). Developers worldwide – including OEMs, traditional automotive suppliers and software companies entering this space (such as Google's subsidiary Waymo or Apple) – are currently working towards levels three and four (for which certain use cases have already been approved). At these levels the vehicle can take over the driving task under certain limited conditions, whereas at level two the human driver must remain in charge. While truly autonomous driving at level five is still some way off, vehicle automation is developing swiftly as further advanced safety features are continuously integrated into vehicles.

3) The EU AI Act's indirect impact on AVs

While the EU's AI Act has no direct impact on AVs, specific future delegated acts will become relevant for AVs going forward. Not only OEMs, but also traditional automotive suppliers and the software companies mentioned above – which play a central role in the development of AVs and increasingly supply AI applications for integration into AVs by OEMs – should be wary of these upcoming requirements.

The AI Act is generally drafted as cross-sectoral law applicable to AI systems in all sorts of industries. Nevertheless, certain exemptions are established to avoid frictions with existing sector-specific legislation. This is true for the automotive sector and AVs as well. Regulation (EU) 2018/858 (hereafter “Type-Approval Framework Regulation”) already requires a comprehensive type-approval process (including a certification by an approval authority) demonstrating that vehicles and their components comply with the relevant administrative provisions and technical requirements specifically for vehicles before placing them on the EU market.

Consequently, rather than being subject to the AI Act, certain vehicle-related components (namely AI safety components within the meaning of the AI Act) will instead be caught by the Type-Approval Framework Regulation as sector-specific legislation, even if these products and systems qualify as high-risk AI (Art. 2(2)(f) AI Act). At the same time, however, the Commission must take certain accountability requirements under the AI Act into account when adopting delegated acts under the Type-Approval Framework Regulation, provided these acts concern AI systems which are safety components within the meaning of the AI Act (cf. Art. 80 AI Act amending Art. 5 of the Type-Approval Framework Regulation).

Said accountability requirements result from Title III, Chapter 2 of the AI Act, which sets out the legal requirements for high-risk AI systems in relation to, amongst others, risk management, data and data governance, documentation and record keeping, transparency, human oversight, robustness, accuracy and security. These requirements must therefore be considered in future delegated acts adopted by the Commission, based on the technical and regulatory specificities of the automotive sector and AVs. While delegated acts are of a non-legislative nature, they are nonetheless legally binding.

4) Scope and core requirements for vehicle-related AI systems under future delegated acts for AVs

Some of the most controversial topics under future delegated acts will likely concern the scope of these AI Act requirements for AVs, as well as how the requirements on data governance, risk management and human oversight must be applied to AVs. To avoid hampering the development of AVs, it will be crucial for the Commission to provide clear requirements that allow OEMs, automotive suppliers and software suppliers to reasonably implement high-risk AI systems in AVs. The goal must be to strike an appropriate balance between mitigating risks and allowing innovation in AVs to deliver their societal benefits, such as reduced traffic levels and accidents or lower fuel consumption and emissions.

a) Scope

To be caught by potential future requirements under a delegated act of the Commission, the AI system supplied for or built into AVs must qualify as a safety component within the meaning of the AI Act. Given the inherently high-risk nature of AVs (able to cause severe physical harm or death, as well as property damage), it is often intuitive to qualify AI systems in AVs as safety components. Nevertheless, given that an average passenger car contains more than 30,000 parts, among which a wide variety of AI systems for different functions will be available (with a tendency to increase), it should be clearly defined which AI systems used in which specific context qualify as a safety component. This is all the more important in light of the definition of the term "safety component of a product or system" in Article 3(14) AI Act, which is overly broad:

“‘safety component of a product or system’ means a component of a product or of a system which fulfils a safety function for that product or system or the failure or malfunctioning of which endangers the health and safety of persons or property”.

In vehicles especially, the first part of this definition is far-reaching. Safety components under this definition may include AI systems supporting self-driving functions (e.g. AI systems analysing street signs, identifying objects, etc.). AI systems monitoring the state of the vehicle (e.g. tyre pressure, oil temperature or the remaining range of the battery or tank) could also fall within the scope of a delegated act. One could even go as far as classifying facial recognition software granting access to the vehicle as high-risk (even though this is a standard feature of every modern smartphone and not at all linked to high risks).

Therefore, automotive applications with limited safety relevance – such as comfort functions, infotainment applications, vehicle warnings or other minor driving automations – should not be subjected to future delegated acts that are supposed to govern only high-risk AI built into AVs. The scope should also be continually updated to reflect the progressing automation levels under the SAE industry standard, so that it remains proportionate to the possible risks and leaves room for innovation.

b) Data governance

The need for high-quality data for well-functioning AI systems is obvious in any sector in which AI is deployed. Consequently, the AI Act requires training, validation and testing data sets to be representative. For AVs, this is particularly important because of the extreme variability of driving environments around the world and the fact that safety-critical systems, like self-driving cars, are expected to operate flawlessly irrespective of weather conditions, visibility or road surface quality. In practice, training the algorithms of AVs requires petabytes of training data and can take a long time.

It is therefore important that OEMs as well as traditional automotive and software suppliers are, on the one hand, required to train AVs appropriately on diverse real-world data to avoid overfitting (which results from non-diverse and thus biased training data) and societal harm. On the other hand, this must not hamper the development of AVs where the benefits – for example, a reduction of the environmental impact of road traffic – outweigh reasonable shortfalls in the representativeness of training data. The standard for representative data sets should therefore not be absolute but should allow for balancing. This is certainly true for other sectors as well, but it is particularly crucial for AVs given the enormous benefits tied to their development.

c) Risk management

A risk management system must be established for high-risk AI systems and requires, amongst others, the identification, analysis and mitigation of the possible risks related to the high-risk AI system. Traditional vehicles pose different risks than those that will arise from the adoption of AVs. The risks related to AVs are not only different but much more widespread, and they change as the autonomy levels of AVs progress (see the SAE industry standard mentioned above).

It should therefore be clarified to what extent other risks, which are not necessarily obvious to OEMs or to traditional automotive and software suppliers, must be included in a risk management system for AVs, and how they should be balanced. For example, should the societal risks relating to the significant impact AVs will have on workers in driving occupations, servicing and repair centres and the insurance industry be considered, and how can they be balanced against, for example, a reduction of emissions achieved by improving the way AVs brake and accelerate?

d) Human oversight

Under the AI Act proposal, high-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use. For AVs, it is hardly feasible for occupants to effectively oversee the entire operation of the vehicle, as most decisions are taken and implemented in real time, leaving no opportunity to override them. Moreover, as the technology progresses through the SAE levels mentioned above, AVs will invite their occupants to increasingly use their time in the vehicle for matters unrelated to driving, which further complicates effective oversight mechanisms.

To effectively oversee the actions of AVs, occupants must be informed of the operating status of the vehicle so that they are actually aware of it and know when and how to intervene, if necessary. This requires not only technological innovations enabling occupants to become aware of the current circumstances and allowing them to intervene. Solutions are also needed to educate occupants about the exact level of autonomy of their vehicle and what this means in terms of oversight and intervention opportunities, in order to implement this sensible requirement in a regulation for AVs.

5) Watch this space

It will be important for OEMs as well as traditional automotive and software suppliers to watch this space, as the AI Act requirements to be applied under the Type-Approval Framework Regulation have the potential to significantly impact the way AVs and their AI-based safety components are developed in future. Before delegated acts (in which those requirements will be enshrined) can be adopted, the Commission is required to consult expert groups composed of Member State representatives. Already today, however, it will be important from an industry perspective to emphasise the right balance between risks and innovation, and the importance of context-specific requirements for the automotive sector, in order to achieve legal certainty.

Thanks to Bird & Bird trainee Timon-Johannes Engel for his contribution to this article.

* All references to the EU´s AI Act proposal refer to the linked version of the EU Commission unless otherwise stated.
