AI drives an illegitimate future too

Written By

Simon Shooter

Partner
United Arab Emirates

I am the head of the firm's International Commercial Group and established the cyber-security team back in 2010. I am a commercial lawyer providing a full spectrum of legal support to clients for their day-to-day business.

Saarah Badr

Senior Counsel
United Arab Emirates

Through working in-house in the media & entertainment industry for many years, I bring extensive regional knowledge, coupled with a practical and commercial approach.

With my apologies to any industrious and dedicated cyber criminals, I have a long-held hunch that, as a breed, while they may be smarter than your average robber or pickpocket, cyber criminals are inclined to laziness. After all, there is no need to be sitting at a desk in your suit and tie at 08.00 to launch a DDoS (aka Distributed Denial of Service[1]) attack or throw the switch on a man-in-the-middle attack.

How appealing, then, to the indolent hacker are the opportunities presented by AI? Whether for marshalling bot armies, polishing up phishing techniques or taking the hard graft out of vishing[2] and smishing[3], AI opens the door to even more time in the pool or under the duvet.

The plain ugly fact is that the cyber underworld is as excited by the opportunities offered by AI as the rest of the world, and it has taken to the task with gusto.

In my day job I have advised two substantial international clients in connection with AI deep fake attacks. The first attack was about nine months ago. In this case the client's CFO, having just left a board meeting, was called on his mobile phone by his parent company CFO – or so he thought. The accent was absolutely spot on, as were the intonation and phraseology. The core of the call was an instruction to make a significant funds transfer to a special purpose account the parent company CFO had set up to finance a major project discussed in the preceding board meeting. The conversation was natural, with the parent company CFO responding cogently to questions as they were asked. In fact, the correspondent was an AI agent that had been trained by analysing publicly available recordings of the parent company CFO.

This client dodged the bullet by the skin of their teeth – or rather by the prickling of the hairs on the neck of the client's CFO. While he was completely convinced that he had been speaking to his parent company CFO, there was a niggle of doubt that held him back from hitting the button to instruct the transfer. That was the sliver of luck that saved his (beef) bacon.

My other client was not so lucky. The attack was more recent and neatly illustrates the cyber criminals' advance along the learning curve. My client has a European parent company with operating subsidiaries in the Americas, Asia Pacific and Europe. The subsidiaries make significant quarterly transfers of funds to the parent. The American subsidiary's CFO was involved in two days of calls and video conference calls with someone she thought was the parent company CFO. At the end of those meetings, she made the quarterly transfer to the new account details provided by the parent company CFO. Sadly, her correspondent was again an AI agent, trained from the freely available online video footage of the parent company CFO addressing the business. This time the accent, intonation and phraseology were again perfect in the voice and video calls, and the mannerisms and body language of the parent company CFO were perfect too.

As we are in the exotic world of cyber, clearly we need a new term of reference. I am not aware of anyone having tagged the deep fake video scam as yet, and so I am going to plant the flag and tentatively propose "mashing" – a decent sequential familiarity with 'phishing', 'vishing' and 'smishing', and a pleasing proximity to 'mash-up'.

My scammed client is not the only unfortunate to fall for this highly sophisticated scam. In February last year, British engineering company Arup fell for a mashing scam, transferring HK$200m to a fake account. In May, WPP, the world's biggest advertising group, was the target of a vishing attack using a deep fake AI voice clone. "Brad Pitt" (or, more accurately, his AI doppelganger) has also been doing the rounds recently, scamming a French fan out of US$850,000 to pay for his "medical treatment".

The cybercrime opportunities offered by AI are limited only by imagination. In addition to the deep fake vishing and mashing described above, the following are some known AI cybercrime use cases:

  • Supercharging phishing – gone are the days of laughable spelling and grammar tell-tales (in fact, we are probably now in the phase of needing to be on highest alert if an email lands in your inbox in perfect American English, without a trace of a spelling mistake and with peerless grammar – especially if it purports to come from me!). AI can effortlessly make a phishing email targeted and irresistibly compelling.
  • Cyber reconnaissance – social profiling and network scanning, once labour-intensive chores, are a breeze with AI. Add clever social profiling into your supercharged phishing and see your penetration rates soar. Use your AI network scanning capacity to identify more and better vulnerabilities to exploit.
  • Adaptive malware – an endearing feature of generative AI is that it learns; not only does it learn but it can deploy that learning. AI offers the alluring prospect of endlessly adapting and improving malware, helping you stay a few strides ahead of the opposition.
  • Speeding development – AI has dramatically reduced the time between identifying a vulnerability and weaponising malware to exploit it. One recent estimate suggests that time lag can be cut by more than 60%. You have to love AI! 
  • Skipping around biometric devices – AI has CAPTCHA and biometric authentication for breakfast. So, the only person left struggling to find the darn squares with a bicycle in them will be me. 

Having possibly put you in a similar state of fear to the one I felt as a young lad watching the Cybermen on Doctor Who, I will sign off with a mildly tweaked quote from Professor Stephen Hawking: 'We need to move forward on artificial intelligence development but we also need to be mindful of its very real dangers. I fear that AI may replace [cyber criminals] altogether.'

Contact Us

Our expert team at Bird & Bird is fully equipped to assist with any questions. For queries, please contact: Simon Shooter, Nick O’Connell or Saarah Badr.


 

[1] where an attacker floods a server with internet traffic to block users from accessing certain content/sites.

[2] or “voice phishing”, where a voice is mimicked by the attacker on a phone call to defraud the victim.

[3] where fake text messages are sent to the victim to trick them into downloading malware or sharing personal data.
