California’s AI bill vs. the EU AI Act: a cross-continental analysis of AI regulations

Written By

Tobias Bräutigam

Partner
Finland

I am a partner and the head of our Privacy and Data Protection group in Helsinki, where I advise our local and international clients on complex privacy and data issues.

Riku Rauhanen

Senior Associate
Finland

I am a Senior Associate in our Commercial and Privacy & Data Protection groups in Helsinki, where I work with our local and international clients advising them on data protection, other data regulation, and commercial contracts.

The regulation of AI has become a priority for an increasing number of governments around the world, with the EU’s AI Act, passed last May, leading the way. California is the newest addition to this list, having passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (Senate Bill 1047) in August. This article compares the two approaches; for a deeper analysis of the EU AI Act, please refer to the July 'AI edition' of our Connected newsletter.

The EU AI Act: A comprehensive framework for AI regulation

At the core of the EU AI Act lies a risk-based approach built on a “the higher the risk, the stricter the rules” framework. The Act classifies AI systems as prohibited, high-risk, limited-risk or minimal-risk. The majority of the obligations in the AI Act fall on providers of high-risk AI systems, although some obligations are also assigned to users of AI systems. Examples of high-risk AI systems under the Act include AI systems used to manage and operate safety components in critical infrastructure, and AI systems used for recruitment and to evaluate candidates.
Providers of high-risk AI systems are required, among other things, to have their AI system undergo conformity assessments, to register stand-alone AI systems in an EU database, and to sign a declaration of conformity before the system can be placed on the European market or put into service.

California’s AI bill: shaping ethical and transparent AI development

California’s bill is focused on large-scale AI systems that meet certain thresholds, for example where the cost of training the model exceeds 100 million dollars. The bill addresses future AI innovation and development and aims to prevent possible threats to safety and security by requiring certain human controls. Moreover, the bill seeks to mitigate the risks posed by the rapidly increasing use of AI models across most sectors of society and industry. It aims to do so by requiring thorough testing and transparency, for example through a separate written safety and security protocol that states compliance requirements and identifies testing procedures.

Key differences

Both the EU AI Act and California’s AI bill aim to promote the safety of AI systems and minimise their potential risks. The EU AI Act, however, is a more comprehensive and stricter set of rules.

Beyond the difference in geographical scope, a key distinction is that California’s AI bill contains more specific whistleblowing protections for employees of companies developing AI models. Whistleblower protection would be granted to employees who bring forward risks associated with the AI models being developed. The EU AI Act also provides that persons acting as whistleblowers on breaches of the AI Act should be protected under EU law, but whistleblowing is addressed primarily in the EU Whistleblowing Directive.

On the other hand, the EU’s enforcement mechanisms and repercussions for breaches are considerably stricter and more clearly defined than those in California’s AI bill. The EU AI Act calculates fines based on total worldwide annual turnover (up to 7%), while sanctions under California’s AI bill are calculated based on the cost of the computing power used to train the covered model, subject to certain thresholds.

As other jurisdictions, such as Brazil, South Korea and Canada, continue to develop their AI regulation frameworks, businesses should stay up to date on the changes and consider how they affect their operations. Our team will be closely following the implementation of the EU AI Act going forward.

For more information, please contact Tobias Bräutigam or Riku Rauhanen.
