Deepfake scams do not need to "hack" your systems to cause real damage. They just need to hack trust.
In the Arup case (reported by Hong Kong police and covered widely), a finance employee joined what looked like a normal video call with the company's CFO and colleagues. The request sounded urgent and confidential. The faces looked familiar. The employee transferred funds, and the company lost about US$25 million.
This is the uncomfortable truth about deepfakes: when the video looks real, our brains stop asking the right questions.
Across the reporting, the pattern is consistent:

- A live video call with what appears to be senior leadership.
- A request framed as urgent and confidential.
- Familiar faces and voices that lower the target's guard.
- Pressure to transfer funds before anyone can double-check.
The key detail: this was not "reckless" behavior. It was a cautious person who got overridden by a convincing "social proof" moment.
Most finance controls assume one of two things is true:

- You can recognize the people you work with by sight and sound.
- A request confirmed by several colleagues at once can be trusted.
Deepfakes break both assumptions.
Attackers do not need to perfectly imitate one person if they can overwhelm you with credibility signals: multiple faces, multiple voices, a realistic meeting vibe, and a tight timeline. The goal is to make you feel like you are the only one slowing things down.
Yes, there are sometimes visual tells (odd lighting, lip-sync issues). But relying on humans to catch those details is not a long-term defense, especially in finance, crisis management, and incident response, where people are busy and the pressure is real.
A stronger approach is to change the rule from:

"If it looks and sounds like them, it is them."

to:

"If the request is high-risk, verify through an independent channel, no matter how real it looks."
That is exactly where VerifyHuman fits.
VerifyHuman is designed around a simple idea: trust is something you build once, then you can verify quickly when it matters.
That matters for finance teams because the highest-risk moments are predictable: approvals, bank detail changes, urgent payments, and anything "confidential".
VerifyHuman is not meant to magically identify a total stranger. Instead, it supports a trust-building process where two parties establish a trusted connection ahead of time. Once that relationship is established, future checks become fast and meaningful. In the Enterprise version, this trust is established by the organization beforehand.
If someone claims to be a leader on a video call, VerifyHuman adds a quick step that is hard to fake in real time: a short, time-bound verification check (for example, using a QR flow). If the person cannot complete it, you do not proceed, even if the face looks right.
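VerifyHuman's internal mechanism is not public, so the following is only a minimal sketch of what a time-bound check can look like: an HMAC tag bound to an identity and an expiry time, using a shared secret assumed to have been established during the trust-building step. All names here are illustrative, not VerifyHuman's API.

```python
import hashlib
import hmac
import secrets
import time

# Assumption: a secret shared once, during the calm trust-building step.
SHARED_SECRET = secrets.token_bytes(32)

def issue_challenge(identity: str, ttl_seconds: int = 60) -> tuple[str, int]:
    """Create a short-lived challenge tag bound to one identity."""
    expires = int(time.time()) + ttl_seconds
    msg = f"{identity}:{expires}".encode()
    tag = hmac.new(SHARED_SECRET, msg, hashlib.sha256).hexdigest()
    return tag, expires

def verify_challenge(identity: str, tag: str, expires: int) -> bool:
    """Accept only if the tag matches and the time window has not lapsed."""
    if time.time() > expires:
        return False  # too late: re-issue a fresh challenge, never extend
    msg = f"{identity}:{expires}".encode()
    expected = hmac.new(SHARED_SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

Because the tag covers both the identity and the expiry, an attacker replaying a stale tag or claiming a different identity fails the check even if their face looks right on the call.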
Deepfake scams do not always require video. Voice impersonation is rising fast, and scammers often use excuses like "the camera is broken". VerifyHuman supports audio-only verification using a simple one-time code shared between trusted parties. In the Enterprise version, each individual's one-time code is established by the organization.
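For the audio-only path, the essential mechanics are just generating a code ahead of time and comparing it safely when it is read aloud. The sketch below is an assumption about how such a check could be built, not VerifyHuman's actual implementation; note the constant-time comparison, which avoids leaking information through timing.

```python
import hmac
import secrets

def generate_one_time_code(length: int = 6) -> str:
    """A numeric code each trusted party stores ahead of time."""
    return "".join(str(secrets.randbelow(10)) for _ in range(length))

def check_spoken_code(stored: str, spoken: str) -> bool:
    """Constant-time comparison; tolerates spaces from a code read aloud."""
    return hmac.compare_digest(stored, spoken.replace(" ", "").strip())
```

A caller who "cannot turn the camera on" but also cannot produce the pre-shared code fails the check, which is exactly the point.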
Here is a lightweight process that finance teams can adopt without turning every payment into a bureaucratic exercise.
Do this once, calmly, when there is no pressure:

- Establish a trusted VerifyHuman connection with everyone who can request or approve payments.
- Agree on the verification flow (QR check on video, one-time code on audio) and write it down.
- Make it explicit that the check applies to everyone, including the CFO, with no exceptions for urgency.
Trigger the gate when any of these are true:

- The payment is above your normal approval threshold.
- Bank or payee details are changing.
- The request is framed as urgent, confidential, or both.
- The request arrives only via a video or voice call, with no prior written trail.
If they cannot verify, do not argue. Do not negotiate.
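The gate itself can be as simple as a checklist function that payment tooling (or a human) runs before anything moves. The field names and threshold below are illustrative assumptions for your own policy, not part of any VerifyHuman API:

```python
from dataclasses import dataclass

APPROVAL_THRESHOLD = 10_000  # illustrative; set this to your team's real limit

@dataclass
class PaymentRequest:
    amount: float
    changes_bank_details: bool
    marked_urgent: bool
    marked_confidential: bool
    has_written_trail: bool

def needs_verification_gate(req: PaymentRequest) -> bool:
    """Trigger the gate when any high-risk signal is present."""
    return (
        req.amount >= APPROVAL_THRESHOLD
        or req.changes_bank_details
        or req.marked_urgent
        or req.marked_confidential
        or not req.has_written_trail
    )
```

Any single signal is enough to trigger verification; the Arup-style request (huge amount, urgent, confidential, no written trail) would trip the gate several times over.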
Hong Kong authorities have linked deepfake-assisted fraud to broader identity abuse. It is a reminder that "seeing a face" is no longer the same as "knowing a person".
Want a simple "deepfake-safe" approval flow for your team?
Deepfake scams are not just a tech problem — they are a workflow problem. The goal is to make the safe action the easy action.
Learn more in our FAQ