AI can boost security but leave door open to cyber attacks

Written By

Feyo Sickinghe

Of Counsel
Netherlands

I am a Principal Regulatory Counsel in our Regulatory & Public Affairs practice in the Netherlands and Brussels. I focus on tech, comms and digital markets regulation, drawing on in-depth business knowledge and extensive experience in TMT and public administration.

Artificial intelligence has delivered new technologies that have proven beneficial across various industries, but it has also opened the door to cyber-attacks. The integration of machine learning both introduces new vulnerabilities and makes attacks more sophisticated. Areas of concern include data poisoning, ransomware, phishing, zero-day attacks, and model stealing.

At the same time, AI and machine learning will help to advance cybersecurity, aiding cyber teams in analysing vast datasets, detecting threat patterns and anomalies, and accelerating the learning process through feedback mechanisms. It is important for businesses to protect themselves against cyber-attacks and to comply with the latest regulations.

The AI Act aims to ensure a level of cybersecurity appropriate to the risks. Suitable measures, such as security controls, should therefore be taken by providers of high-risk AI systems and the underlying ICT infrastructure. Determining whether an AI system can be classified as high-risk needs careful consideration.

Cybersecurity by design

As a matter of principle, the AI Act stipulates that high-risk AI systems must be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness and cybersecurity, and that they perform consistently in those respects throughout their lifecycle. For this purpose, the European Commission is encouraging the development of benchmarks and measurement methodologies together with stakeholders. This task may be assigned to the Commission's AI Office, which was launched on 16 June 2024.

AI models, systems and cyber resilience

High-risk AI systems and general-purpose AI models presenting systemic risks need to be resilient against attempts by unauthorised third parties to…

Full article available on Artificial Intelligence Insights.
