When AI systems cause harm: the application of civil and criminal liability

Written By

Russell Williamson

Senior Associate
UK

I'm a senior associate in our Dispute Resolution Group in London. I specialise in advising clients on complex commercial disputes, particularly in the technology, retail and consumer, energy, financial services and automotive sectors.

“Good morning, Dave.” 

It’s fairly safe to say that, in the main, those of us who practise commercial law do not have sufficient expertise in computer science to assess whether any given computing system is based on artificial intelligence (AI) techniques or on more traditional system development techniques. Indeed, AI systems are often described in terms that, to us laypersons, seem better suited to science fiction – as exemplified by HAL, the sentient computer in Stanley Kubrick’s 1968 film 2001: A Space Odyssey – than to real life. Even in debates between computer scientists, AI has been light-heartedly defined as “whatever hasn’t been done yet”, suggesting that it is more akin to magic or wishful thinking than reality.

But, bringing a lawyer’s more blame-focused perspective to bear, we can see that systems based on AI techniques or methods are, possibly more so than traditional systems, developed by combinations of separate designers, developers, software programmers, hardware manufacturers, system integrators and data or network service providers.

Full article available here.
