The EU Artificial Intelligence Act (AI Act) is a regulation that aims to structure the development and use of artificial intelligence systems in the European Union. One of the main features of the regulation is its differentiated approach to the various categories of AI systems, depending on the level of risk they may pose.
In this article, I show, using data on the structure of the AI Act, that the primary focus of the regulation is on high-risk AI systems, while other categories of AI systems are regulated to a much lesser extent.
The table below shows the number of articles, recitals and total provisions for each category of AI systems in the AI Act:
| Category | Articles | Recitals | Total |
|---|---|---|---|
| High-risk AI systems | 30 | 61 | 91 |
| Prohibited AI systems | 1 | 18 | 19 |
| General-purpose AI systems and models | 6 | 22 | 28 |
| AI systems with transparency obligations | 1 | 6 | 7 |
| Common rules for all AI systems | 4 | 35 | 39 |
| Administrative provisions | 70 | 36 | 106 |
| **Total** | **112** | **178** | **290** |
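As a quick sanity check, the row and column totals in the table can be reproduced with a short script (a minimal sketch; the counts are taken directly from the table above):

```python
# Article and recital counts per AI Act category, as (articles, recitals).
counts = {
    "High-risk AI systems": (30, 61),
    "Prohibited AI systems": (1, 18),
    "General-purpose AI systems and models": (6, 22),
    "AI systems with transparency obligations": (1, 6),
    "Common rules for all AI systems": (4, 35),
    "Administrative provisions": (70, 36),
}

articles = sum(a for a, _ in counts.values())
recitals = sum(r for _, r in counts.values())
total = articles + recitals

print(articles, recitals, total)  # 112 178 290
```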
Articles are the main content of the regulation, setting out specific obligations and requirements. Recitals explain the purpose of introducing particular regulations. Administrative provisions are articles or recitals that deal with procedures, supervisory authorities, penalties or issues related to the entry into force of the AI Act and therefore do not directly regulate AI systems themselves.
To show that the AI Act mainly regulates high-risk AI systems, let us compare the number of articles, i.e. binding legal provisions, dedicated to each category of AI systems.
It is clear from the chart that by far the largest number of AI Act articles relate to high-risk AI systems - as many as 30 - while:
- prohibited AI systems are covered by just 1 article,
- general-purpose AI systems and models by 6 articles,
- AI systems with transparency obligations by 1 article,
- common rules for all AI systems account for 4 articles.
This disparity makes it clear that the AI Act places particular emphasis on regulating high-risk AI systems.
A pie chart further illustrates the dominance of the high-risk provisions.
The total number of articles for the categories directly regulating AI systems (high-risk, prohibited, general-purpose, with transparency obligations, common rules) is 42. The percentage share of each category is therefore as follows:
- high-risk AI systems: 30/42, i.e. about 71.4%,
- general-purpose AI systems and models: 6/42, i.e. about 14.3%,
- common rules for all AI systems: 4/42, i.e. about 9.5%,
- prohibited AI systems: 1/42, i.e. about 2.4%,
- AI systems with transparency obligations: 1/42, i.e. about 2.4%.
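These shares are straightforward to reproduce; the sketch below computes them from the article counts in the table (category labels abbreviated here for readability):

```python
# Share of articles among the categories that directly regulate
# AI systems (administrative provisions excluded).
articles = {
    "high-risk": 30,
    "prohibited": 1,
    "general-purpose": 6,
    "transparency obligations": 1,
    "common rules": 4,
}

total = sum(articles.values())  # 42 articles in total
shares = {name: round(100 * n / total, 1) for name, n in articles.items()}

print(total)                 # 42
print(shares["high-risk"])   # 71.4
```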
For comparative purposes, it is also worth looking at a chart which not only draws attention to the articles themselves, but also to the number of recitals dedicated to particular types of AI systems. Here again, it can be seen that the AI Act devotes almost half of the provisions to high-risk AI systems.
Why does the AI Act focus so strongly on high-risk systems? These systems, which are used in, for example, health care, recruitment, education, the provision of essential public services or law enforcement, can significantly affect people's lives and the functioning of society. They therefore require specific provisions to ensure their security and compliance with fundamental rights.
Other categories of AI systems are regulated to a much lesser extent:
- prohibited AI systems and AI systems with transparency obligations are each addressed by a single article,
- general-purpose AI systems and models by 6 articles,
- and the common rules applying to all AI systems by 4 articles.
A simple comparison of the number of provisions makes it clear that the AI Act mainly regulates high-risk AI systems, as reflected both in the number of articles dedicated to them (30) and in their percentage share (more than 71%) of the articles directly regulating AI systems. Other categories of AI systems are regulated to a much lesser extent.
Consequently, there is no basis for asserting that the AI Act is a regulation that hampers the development of AI systems in Europe and the competitiveness of EU providers of such systems. The AI Act only impedes the development of those AI systems that entail significant risks to health, safety or fundamental rights. And, in my view, this is a good thing.
Other categories of AI systems - prevalent in practice - are regulated either little or not at all. And that, too, is a good thing.