AI can boost security but leaves the door open to cyberattacks

Artificial intelligence has delivered new technologies that have proven beneficial across various industries, but it has also opened the door to cyberattacks. The integration of machine learning brings both new vulnerabilities and more sophisticated attack techniques. Areas of concern include data poisoning, ransomware, phishing, zero-day attacks, and model stealing.

At the same time, AI and machine learning will help advance cybersecurity, aiding cyber teams in analysing vast datasets, detecting threat patterns and anomalies, and accelerating the learning process through feedback mechanisms. It is important for businesses to protect themselves against cyberattacks and to comply with the latest regulations.

The AI Act aims to ensure a level of cybersecurity appropriate to the risks. Providers of high-risk AI systems and the underlying ICT infrastructure should therefore take suitable measures, such as security controls. Determining whether an AI system qualifies as a high-risk system requires careful consideration.

Cybersecurity by design

As a matter of principle, the AI Act stipulates that high-risk AI systems must be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness and cybersecurity, and that they perform consistently in those respects throughout their lifecycle. For this purpose, the European Commission is encouraging the development of benchmarks and measurement methodologies together with stakeholders. This task may be assigned to the Commission's AI Office, which was launched on 16 June 2024.

AI models, systems and cyber resilience

High-risk AI systems and general-purpose AI models presenting systemic risks need to be resilient against attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities. Models and systems are likely to qualify as "products with digital elements" as defined in the upcoming Cyber Resilience Act (CRA), which is expected to be published in the Official Journal of the European Union after the summer.

High-risk AI systems that fall within the scope of the CRA may demonstrate compliance with the cybersecurity requirements of the AI Act by fulfilling the essential cybersecurity requirements set out in that regulation. High-risk AI systems that qualify as "important and critical products" under the CRA will be subject to its conformity assessment provisions. Non-high-risk systems may also fall within the scope of the CRA as "products with digital elements" and will likewise be subject to conformity assessment provisions, depending on the product classification set out in the Annexes to the CRA.

Further clarification is expected through Commission guidance on the implementation of the AI Act and through implementing acts adopted pursuant to the CRA.

For further information, please contact Feyo Sickinghe
