
Last March, a finance worker in Hong Kong wired $25 million to fraudsters after attending what he thought was a video conference with his company’s CFO. Everyone on the call looked real, sounded real, and knew internal details only executives would know. The problem? Every participant except the victim himself was a deepfake.
This wasn’t science fiction. It was a Tuesday afternoon, and it exposed a terrifying gap in our legal framework. When artificial intelligence can perfectly replicate your face, voice, and mannerisms, who’s really responsible when fraud happens? Our laws were written for a world where seeing meant believing, but that world no longer exists. Financial institutions are scrambling to update their verification protocols, and regulators in several jurisdictions are drafting responses, yet the legal system is barely keeping pace with the technology.
The identity crisis courts can’t ignore
Traditional fraud law hinges on a simple concept: proof of identity. Someone forges a signature, impersonates another person, or uses stolen credentials. But deepfakes shatter this framework entirely. When a synthetic video of a CEO authorizes a wire transfer, who committed the crime? The person who created the deepfake? The one who deployed it? What if the AI tool was commercially available and used for what its makers claim was a “legitimate” purpose?
The problem extends beyond criminal courts. Every area of law that depends on verifying identity is now exposed: civil litigation, contract disputes, insurance claims. A business partner can deny ever agreeing to terms and argue that the video of them signing was fabricated. How do you prove a negative? How do you demonstrate that something is real when the technology to fake it produces results indistinguishable from reality?
When technology moves faster than legislation
Lawmakers are trying to catch up, but they’re hampered by the speed of technological advancement. By the time a bill becomes law, the technology it was designed to regulate has already evolved.
The European Union’s AI Act tackles some of these problems by requiring that AI-generated content be disclosed, but how that requirement will work in practice remains unclear. The law says deepfakes must be clearly labeled, yet there is no single standard for what that label should look like or how it should be embedded in the content. A small watermark that is easy to strip out? A metadata tag that social media platforms might discard on upload? The technical solutions are as fragmented as the legal ones.
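To see why technologists are skeptical of the metadata approach, here is a minimal Python sketch using the Pillow imaging library; the `ai_generated` label key is a made-up placeholder, not any official standard. A provenance tag stored as PNG metadata survives a save and a reload, but it vanishes the moment the file is re-encoded to JPEG, which is roughly what many platform upload pipelines do:

```python
from io import BytesIO
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for a generated video frame or image.
frame = Image.new("RGB", (64, 64), color=(128, 128, 128))

# Attach a provenance label as a PNG text chunk: the metadata-tag approach.
meta = PngInfo()
meta.add_text("ai_generated", "true")  # hypothetical label key, for illustration

buf = BytesIO()
frame.save(buf, format="PNG", pnginfo=meta)
buf.seek(0)

labeled = Image.open(buf)
print(labeled.text)  # {'ai_generated': 'true'}: the label survives so far

# Simulate a typical platform pipeline step: re-encode to JPEG.
jpeg_buf = BytesIO()
labeled.convert("RGB").save(jpeg_buf, format="JPEG")
jpeg_buf.seek(0)

reencoded = Image.open(jpeg_buf)
# JPEG re-encoding silently drops the PNG text chunk: the label is gone.
print(getattr(reencoded, "text", {}))  # {}
```

Provenance standards such as C2PA aim to make these tags more durable and verifiable, but a tag that honest tools preserve is still a tag that a determined fraudster can strip before distribution.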
| Jurisdiction | Primary approach | Key limitation |
| --- | --- | --- |
| United States | State-by-state laws targeting specific uses (revenge porn, election interference) | No federal framework; inconsistent enforcement |
| European Union | Mandatory labeling under AI Act | Unclear technical standards; cross-border enforcement challenges |
| United Kingdom | Existing fraud and impersonation laws applied case-by-case | Laws predate deepfake technology; slow judicial interpretation |
| China | Strict content regulation and platform liability | Focused on political control rather than commercial fraud |
Financial institutions on the front lines
Banks and financial institutions are arguably facing the most immediate threat. They’re the ones transferring millions based on video calls and voice authorizations. Some have responded by abandoning certain verification methods entirely. One major European bank recently announced it would no longer accept video identification for high-value transactions, returning instead to in-person verification – a step that feels like retreating into the past.
Others are fighting technology with technology, deploying AI detection tools to spot deepfakes. But this creates an arms race in which every improvement in detection is answered by an improvement in generation. Researchers at a London university reported earlier this year that their deepfake detection algorithm, 94% accurate in January, had fallen to 63% accuracy by June as generative models improved.
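What drives that kind of decay is distribution shift: a detector learns the statistical artifacts of yesterday’s generators, and tomorrow’s generators simply stop producing them. The toy sketch below, which uses entirely synthetic data and assumes NumPy and scikit-learn are available, reproduces the effect in miniature: a classifier trained against fakes from an older “generator” scores well on that generation and much worse once the fake distribution moves closer to real media:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def sample(n, fake_mean):
    """Toy feature vectors: real media clusters near 0, fakes near fake_mean."""
    real = rng.normal(0.0, 1.0, size=(n, 8))
    fake = rng.normal(fake_mean, 1.0, size=(n, 8))
    X = np.vstack([real, fake])
    y = np.array([0] * n + [1] * n)  # 0 = real, 1 = fake
    return X, y

# Train on artifacts left by an older generator (well separated from real).
X_train, y_train = sample(500, fake_mean=2.0)
clf = LogisticRegression().fit(X_train, y_train)

# Evaluate on fakes from the same generation (high accuracy)...
X_old, y_old = sample(500, fake_mean=2.0)
# ...and on a newer generator whose artifacts look more like real media.
X_new, y_new = sample(500, fake_mean=0.5)

print("old-generator accuracy:", accuracy_score(y_old, clf.predict(X_old)))
print("new-generator accuracy:", accuracy_score(y_new, clf.predict(X_new)))
```

Nothing in this toy example depends on deepfakes specifically; it is the general reason any static detector loses ground against an adversary that keeps training.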
Where do we go from here?
The uncomfortable truth is that our legal framework for identity verification is fundamentally broken, and there’s no quick fix. We need new evidentiary standards that account for the possibility that any digital media could be synthetic. We need international cooperation on standards and enforcement, because deepfake fraud doesn’t respect borders. And we need it all in place before the technology grows so sophisticated that distinguishing real from fake becomes genuinely impossible.
The Hong Kong case should serve as a wake-up call. $25 million lost because someone looked and sounded exactly like who they claimed to be. That fraud was discovered quickly, investigated thoroughly, and still, no one has been held accountable because the legal framework simply doesn’t know how to handle it.
Until lawmakers, courts, and technology companies align on solutions, every video call is potentially suspect, every voice authorization questionable, and every piece of digital evidence uncertain. The deepfake crisis isn’t coming – it’s already here. The only question is whether our legal system will adapt quickly enough to address it.