Police stated ‘…everybody on the video call apart from the victim was a fake representation of a real person.’

A recent fraudulent scheme in Hong Kong illustrates the alarming sophistication of deepfakes – and the urgent need for established protocols. Scammers deployed highly realistic synthetic duplicates of upper management on a video call to convince a junior employee to transfer HK$200 million (approximately US$25.6 million) out of the multinational company.

Deepfakes utilise powerful AI algorithms to digitally graft a person’s likeness onto video or audio content. The AI learns from the source material provided – a person’s facial expressions, speech patterns and mannerisms – to generate a ‘model’ of the individual that can be manipulated to say and do things. Over the past 12 months, deepfake technology has gone from a highly technical, time-consuming operation yielding a few seconds of audio to a quirky web-app-operated ‘feature’. End-users can now create relatively believable digital clones from just a few images and/or audio snippets of an individual – much of which we have already uploaded online.

Deepfake Fraud

Criminals actively deploy deepfakes to impersonate executives, politicians, or other public figures in phishing schemes, fraudulent money transfers, and other cybercrime.

One of the first public breaches stemming from this technology occurred in 2019, when an energy firm CEO’s voice was cloned to call a colleague, urgently requesting the transfer of €220,000 within the hour.

The Hong Kong case not only represents one of the largest reported losses from a deepfake scam – it also underscores the need for AI governance now that staged, coordinated ruses using digital clones are an active threat to businesses.

Multi-prong fix?

We consider that a combination of technical tools – digital watermarking, detection algorithms, deep learning, forensic analysis, and digital asset registration of content provenance on trusted infrastructure – may all form part of a business’s deepfake defence strategy.
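By way of illustration only, the content-provenance idea can be sketched in a few lines of Python: hash outgoing media and record the digest in a trusted registry, so a recipient can check that a file was genuinely published by the organisation. The in-memory dictionary below is a stand-in for real trusted infrastructure (for example, a signed-manifest standard or an append-only ledger), and the function names are our own.

```python
import hashlib
import time
from pathlib import Path

def fingerprint(path: str) -> str:
    """Compute a SHA-256 digest of the media file's bytes."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def register(path: str, registry: dict) -> None:
    """Record the file's digest and a timestamp in the (stand-in) trusted registry."""
    registry[fingerprint(path)] = {"file": Path(path).name, "registered_at": time.time()}

def verify(path: str, registry: dict) -> bool:
    """A file whose digest is absent from the registry is unverified –
    it may have been altered or synthetically generated."""
    return fingerprint(path) in registry
```

A production system would replace the dictionary with cryptographically signed records held on infrastructure the business does not solely control, but the verification logic follows the same pattern.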

Education has been a foundational part of businesses’ mitigation strategies against deepfake losses, but we consider this approach is becoming less helpful. As AI improves, humans (without technological assistance) are increasingly unable to identify and avoid deepfakes.

The law also has a role to play in the prohibition and prosecution of deepfakes. For this approach to be a true deterrent, prosecuting authorities need to be globally aligned (always difficult), sufficiently resourced, and supported by legal penalties with real financial or criminal impact.

A simple fix?

Expensive technology investments, major workflow changes to authorisations, and other technology-driven defence protocols are always going to be fallible.

Stirling & Rose’s recommendation for management is to create unique verbal codewords or physical gestures known only to staff at the company. These should be shared in person (where possible) and never digitally recorded. Video-calling protocols should then require the ‘requester’ to provide the established answer before any financial transfers or sensitive actions proceed.

Suggestions for strong codewords include randomly generated passphrase strings, or agreeing on innocuous phrases like ‘How is Aunt Vivian?’ Code gestures could be distinctive hand signals or poses. The cues should be changed periodically to stay secure. Scammers with even the most powerful AI tools can’t circumvent secret phrases and gestures they don’t know exist.
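If randomly generated passphrase strings are preferred, generation is trivial to do securely. The following is a minimal Python sketch using the standard-library `secrets` module; the short wordlist is purely illustrative, and a real deployment would draw from a long curated list, with the result shared only face-to-face.

```python
import secrets

# Illustrative wordlist only – a real deployment would use a large,
# curated list (e.g. a diceware-style list) to maximise entropy.
WORDS = ["ember", "lantern", "orchid", "quarry", "saffron", "tundra",
         "velvet", "willow", "zephyr", "harbour", "mosaic", "pebble"]

def generate_codephrase(n_words: int = 4) -> str:
    """Build a passphrase using a cryptographically secure random source."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

if __name__ == "__main__":
    # Share the output in person and never store it digitally.
    print(generate_codephrase())
```

Using `secrets` rather than the ordinary `random` module matters here: the latter is predictable and unsuitable for anything security-sensitive.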

Of course, this solution should complement cybersecurity, multi-factor authentication and fraud prevention measures – but it provides a reliable last line of defence if internal systems fail or are simply outsmarted.

Stirling & Rose is uniquely positioned to guide clients in this new landscape, with extensive experience in AI alongside corporate compliance and risk management. The cross-pollination of these domains will only continue to grow, and proactive governance is just one of myriad considerations.

For now, a secret code may be the simple and cheap solution to what may be a very expensive future problem.

Please get in touch with us at info@stirlingandrose.com or on our LinkedIn to discuss how we can help your organisation adapt with AI. 


Stirling & Rose is an end-to-end corporate law advisory service for the lawyers, the technologists, the founders, and the policy makers.

We specialise in providing corporate advisory services to emerging technology companies.

We are experts in artificial intelligence, digital assets, smart legal contracts, regulation, corporate fundraising, AOs/DAOs, space, quantum, digital identity, robotics, privacy and cybersecurity.

When you pursue new frontiers, we bring the legal infrastructure.

Want to discuss the digital future?

Get in touch at info@stirlingandrose.com | 1800 178 218