How to prepare for the European Artificial Intelligence Act: A 12-commandments guide for Manufacturers of Automated and Autonomous Vehicles and In-Vehicle Software Suppliers

Written By

Lawrence Freeman

Senior Counsel
Belgium

I'm Senior Counsel in our Brussels office with over 30 years' experience handling issues of European competition, regulatory and commercial law, both in private practice and in-house. I have unique experience regarding the regulation of electric, connected and autonomous vehicles.

The European Union’s Artificial Intelligence (“AI”) Act came into force on 1 August 2024 and will become applicable in phases between 6 and 36 months thereafter, with most provisions applying after 24 months. It aims to ensure that AI developed and used in the EU is transparent, secure and accountable, with safeguards to protect people's fundamental rights.

The AI Act is in the form of a Regulation and is thereby directly applicable in the EU’s 27 Member States without the need for any domestic implementing legislation.

Penalties for infringements of the AI Act can reach up to €35 million or 7% of an undertaking’s annual global turnover, whichever is higher, for the most serious violations. Non-compliance could also lead to product recalls or market bans.

The AI Act categorizes AI systems into four risk levels: unacceptable, high, limited and minimal risk, plus an additional category for general-purpose AI. Each category carries different requirements for organizations developing or using AI systems.

Many of the systems used in automated and autonomous vehicles (“AVs”) will fall into the high-risk category due to their potential impact on public safety and fundamental rights. Manufacturers of AVs and suppliers of in-vehicle software will therefore need to prepare for stringent obligations to ensure compliance with requirements regarding transparency and traceability, data governance, safety, and accountability.

Although its direct scope of application to the automotive sector is limited, the AI Act requires the European Commission to take certain of its requirements into account when adopting delegated acts under the sector-specific type-approval legislation for vehicles.

The following 12-commandments guide sets out how manufacturers of AVs and suppliers of in-vehicle software can prepare for compliance with the AI Act:

  1. Classify AI Systems According to Risk: Assess whether the AI systems used in AVs (e.g., perception, decision-making, or control systems) fall into high-risk categories under the AI Act.

    The AI Act places stricter compliance requirements on high-risk AI systems related to transportation safety such as AVs. Ensuring accurate classification is key to avoiding penalties and ensuring compliance.

  2. Establish Robust Risk Management Systems: Implement continuous risk assessment frameworks to evaluate potential AI-related safety risks, such as system failures or data biases that could impact the performance of AV systems.

    Since AVs operate in real-world, high-stakes environments, the AI Act mandates rigorous testing and validation of AI systems for safety and reliability. For AV manufacturers and in-vehicle software suppliers, this entails pre-deployment testing, post-market monitoring and updates, adherence to harmonized standards, robust accountability measures, and clearly defined liability for failures.

  3. Ensure Data Governance and Data Quality: Establish strong data governance practices, ensuring that data used to train AI systems (e.g., sensor inputs, environment maps) is high-quality, unbiased, and diverse.

    Data plays a critical role in the development and operation of autonomous driving systems. These systems collect vast amounts of data from various sensors, cameras, and other sources to navigate roads, interact with other vehicles, and detect potential hazards. To comply with the AI Act, AV manufacturers and suppliers of in-vehicle software must use high-quality datasets, ensure compliance with the EU General Data Protection Regulation, and continually monitor for bias.

  4. Guarantee Human Oversight: Incorporate mechanisms for human oversight into the design of AV systems, ensuring that human operators can intervene if the AI system behaves in an unsafe or unpredicted manner.

    Although AVs aim to minimize human intervention, the AI Act emphasizes that high-risk AI systems must have appropriate human oversight mechanisms. AV manufacturers will need to incorporate fail-safe systems that allow a human to take control of the vehicle in emergencies or if the AI system malfunctions.

    Moreover, suppliers of in-vehicle software, particularly those responsible for advanced driver-assistance systems (“ADAS”), must ensure that their AI solutions can smoothly transition between human and autonomous control through integration of human-machine interfaces (“HMIs”).

  5. Enhance Transparency and Explainability: Make the functioning of AI systems more transparent by documenting system behavior, decision-making processes, and limitations in an understandable way for regulators, users, and operators.

    One of the AI Act’s key requirements is that AI systems—especially those in high-risk categories—must be transparent and explainable. For AV manufacturers, this means being able to explain how their AI systems make driving decisions. If an accident occurs, the manufacturer must be able to demonstrate how the AI reached its conclusions in a given situation.

    To comply with this, manufacturers may need to invest in post-hoc explainability tools and frameworks that allow engineers to interpret complex AI behavior. Furthermore, drivers or vehicle owners may need to be informed when and how AI systems are used, particularly in partially autonomous systems.

  6. Develop and Follow Ethical Guidelines: Implement ethical design and development practices that align with the principles of fairness, accountability, and non-discrimination when creating in-vehicle software.

    Ethical guidelines help manufacturers build trust with consumers and ensure compliance with the AI Act, which mandates fairness and non-discriminatory practices.

  7. Establish Continuous Monitoring and Auditing: Set up systems for the ongoing monitoring of AV performance post-deployment, including AI system updates and changes, to ensure continued compliance with safety and performance standards.

    The AI Act emphasizes the need for continuous oversight and periodic audits of high-risk AI systems to ensure they remain safe and effective after deployment.

  8. Prepare for Conformity Assessments: Ensure that AV systems pass conformity assessments and meet all technical and safety requirements before they are released on the European market.

    The AI Act requires that high-risk AI systems undergo regular third-party auditing to ensure compliance with EU safety and transparency standards. This is especially important for AV manufacturers, as their systems operate in public spaces and directly impact safety.

    AV manufacturers and software suppliers will need to prepare for this by developing systems that are auditable by design. This includes establishing comprehensive internal processes for testing, validating, and certifying AI components, in line with the requirements of the AI Act.

  9. Implement Robust Cybersecurity Measures: Design AI systems with security features to protect against cyber-attacks, system manipulations, or breaches that could compromise the safety of autonomous vehicles.

    The AI Act mandates that AI systems must be secure and resilient to cyber threats, particularly when they are part of critical infrastructure like AVs.

  10. Ensure User Safety and Well-Being: Integrate safety features that prioritize user well-being, such as collision avoidance, emergency braking, and protective measures for pedestrians and other road users.

    The AI Act emphasizes that high-risk AI systems must not endanger the health or safety of individuals.

  11. Maintain Proper Documentation and Reporting: Keep detailed records of system development, testing, deployment, and risk assessments, and ensure procedures for reporting incidents or failures to regulatory authorities.

    Compliance with the AI Act requires a high level of documentation, ensuring that authorities can audit the development and deployment of AI systems to assess conformity with EU regulations.

    To help companies navigate the new regulatory landscape, the AI Act introduces the concept of "regulatory sandboxes," which allow businesses to test innovative AI systems in controlled environments under the supervision of regulatory bodies. AV manufacturers and in-vehicle software providers can benefit by collaborating with regulators in these environments to test novel systems before full market deployment.

  12. Support AI Act’s Human-Centric Approach: Prioritize the user experience by ensuring that the AV's decision-making systems are understandable and intuitive, allowing passengers to feel confident in the vehicle’s AI-based functionalities.

    The AI Act promotes a human-centric approach to AI to increase public trust and acceptability, particularly for technologies like AVs that directly interact with people.

Conclusion

“AI is a rare case where I think we need to be proactive in regulation instead of reactive” – Elon Musk, CEO, Tesla, speaking at the South by Southwest (SXSW) tech conference in 2018 in Austin, Texas.

Proactivity is key. Manufacturers of AVs and suppliers of in-vehicle software should prepare for compliance with the AI Act along these lines to avoid the risk of financial penalties, product recalls and market bans; to build greater trust with consumers and authorities; and to ensure a smooth transition into this new era of AI regulation.

Get in touch

Lawrence Freeman
Senior Counsel, Bird & Bird, Brussels

[email protected]

About the author: Lawrence Freeman is a Senior Counsel based in Bird & Bird's Brussels office with over 30 years of experience handling issues of European regulatory, competition, and commercial law, both in private practice and in-house.

Lawrence joined Bird & Bird in 2018 after having spent 5 years as European Counsel of Tesla, Inc., where he founded the European legal department. At Tesla, Lawrence advised on both day-to-day and highly strategic legal and regulatory issues and developed a unique expertise in regulatory issues regarding electric, connected and autonomous vehicles.

Lawrence is admitted as a Solicitor (England and Wales) and is a member of the Brussels Bar.

EU AI Act Guide – now ready to download! 

To guide you through the EU AI Act, our multi-disciplinary global team of AI experts has launched our EU AI Act Guide which summarises key aspects of the new regulation and highlights the most important actions organisations should take in seeking to comply with it. Serving a similar purpose as our GDPR Guide, our EU AI Act Guide is divided into thematic sections, with a speed-read summary and a list of suggested priority action points. 

To access the guide, click here.
