A recent fraudulent scheme in Hong Kong illustrates the alarming level of sophistication in deepfakes – and the urgent need for established protocols. Scammers deployed highly realistic synthetic duplicates of senior executives to convince a junior employee to transfer HK$200 million (approximately US$25.6 million) out of the multinational company.
Deepfakes utilise powerful AI algorithms to digitally graft a person’s likeness onto video or audio content. The AI learns from the source material provided – a person’s facial expressions, speech patterns and mannerisms – to generate a ‘model’ of an individual, which can then be manipulated to say and do almost anything. Over the past 12 months, deepfake technology has gone from a highly technical, time-consuming operation producing a few seconds of audio to a quirky web-app ‘feature’. End users can now create relatively believable digital clones from just a few images and/or audio snippets of an individual – much of which many of us have already uploaded online.
Criminals actively deploy deepfakes to impersonate executives, politicians, or other public figures in phishing schemes, fraudulent money transfers, and other cybercrime.
One of the first public breaches stemming from this technology occurred in 2019, when an energy firm CEO’s voice was cloned to call a colleague and urgently request a transfer of €220,000 within the hour.
The Hong Kong case represents one of the largest reported losses from a deepfake scam – and underscores the need for AI governance now that staged, coordinated ruses using digital clones are an active threat to businesses.
We consider that a combination of technical tools – digital watermarking, detection algorithms, deep learning, forensic analysis and digital asset registration of content provenance on trusted infrastructure – may all form part of a business’s deepfake defence strategy.
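To make the content-provenance idea concrete, the sketch below shows one simple form it can take: fingerprinting a media file with a cryptographic hash and checking later copies against the registered original. This is an illustration only – the function names and in-memory registry are hypothetical stand-ins, and a production system would register records on trusted, tamper-evident infrastructure rather than a local dictionary.

```python
import hashlib
import time
from pathlib import Path

def register_provenance(media_path: str, registry: dict) -> dict:
    """Compute a SHA-256 fingerprint of a media file and record it in a
    registry (here an in-memory dict standing in for trusted infrastructure)."""
    digest = hashlib.sha256(Path(media_path).read_bytes()).hexdigest()
    record = {"file": media_path, "sha256": digest, "registered_at": time.time()}
    registry[digest] = record
    return record

def verify_provenance(media_path: str, registry: dict) -> bool:
    """Return True only if the file's current hash matches a registered original."""
    digest = hashlib.sha256(Path(media_path).read_bytes()).hexdigest()
    return digest in registry
```

Any alteration to the registered file – including a deepfake substitution – changes the hash and causes verification to fail, which is the property provenance schemes rely on.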
Education has been a foundational part of businesses’ mitigation strategies against deepfake losses, but we consider that this approach is becoming less effective. As AI improves, humans (without technological assistance) are increasingly unable to identify and avoid deepfakes.
The law also has a role to play in the prohibition and prosecution of deepfakes. For this approach to be a true deterrent, prosecuting authorities need to be globally aligned (always difficult), sufficiently resourced and supported by meaningful financial or criminal penalties.
Expensive technology investments, major workflow changes to authorisations and other technology-driven defence protocols will always be fallible.
Stirling & Rose’s recommendation for management is to create unique verbal codewords or physical gestures known only to staff at the company. These should be shared in person (where possible) and never digitally recorded. Video-calling protocols should require the ‘requester’ to provide the established answer before any financial transfers or sensitive actions proceed.
Suggestions for strong codewords include randomly generated passphrase strings, or agreeing on innocuous phrases like ‘How is Aunt Vivian?’ Code gestures could be distinctive hand signals or poses. The cues should be changed periodically to stay secure. Scammers with even the most powerful AI tools can’t circumvent secret phrases and gestures they don’t know exist.
Of course, this solution should complement cybersecurity, multi-factor authentication and fraud-prevention measures – but it provides a reliable last line of defence if internal systems fail or are simply outsmarted.
Stirling & Rose is uniquely positioned to guide clients in this new landscape, with extensive experience in AI alongside corporate compliance and risk management. The cross-pollination of these domains will only continue to grow, and proactive governance is only one of a myriad of considerations.