Deepfake Zoom Meetings + ClickFix Attacks: Where VerifyHuman Helps

The new scam pattern: "Let's hop on a quick call"

A recent Google Cloud / Mandiant threat intelligence write-up describes a modern, high-conversion attack chain aimed at crypto and FinTech teams: a trusted contact reaches out, schedules a short meeting, and then uses a fake Zoom call plus AI-generated video to create urgency and credibility. Once the victim is "in the meeting", the attacker manufactures a technical problem ("audio issues") and pushes the victim into a ClickFix flow — running "troubleshooting" commands that actually start the infection chain.

This is a big deal because it is not just malware — it is identity + trust manipulation. The attacker does not need to break your firewall if they can convince you to break your own device.

Quick recap (in plain language)

Based on the report, the sequence looks like this:

  • Compromise a real person's messaging account and use it to contact targets.
  • Build rapport and schedule a meeting.
  • Send a link leading to a spoofed Zoom site hosted on attacker infrastructure.
  • Use a convincing video persona (victim reported a deepfake CEO) to keep the target engaged.
  • Trigger a ClickFix moment: "Run these commands to fix your audio".
  • Victim runs the commands → malware chain begins → data theft.

If you have ever helped someone troubleshoot on a call, you can see why this works: it feels normal.

Where VerifyHuman fits: the trust gap this attack exploits

VerifyHuman is built for one core problem: proving the person on the other side of a call is the real person, right now.

This attack chain relies on you accepting visual presence as proof. But deepfakes and pre-recorded video break that assumption.

VerifyHuman adds a second channel of proof that is hard to fake in real time:

  • A trust-building step up front. You do not "verify strangers." You first establish trust with the real person, and only then can future checks be meaningful.
  • Time-bound QR verification for video calls: a quick check that expires fast.
  • Audio-only verification when cameras are off: a simple one-time code both sides can confirm.
  • Human-friendly challenge steps that work even when you are tired, rushed, or distracted.

In other words: even if an attacker can generate a convincing face and voice, they still cannot pass a real-time check that is tied to a relationship you already established.

The exact moments in the attack where VerifyHuman would have helped

1) The "trusted contact" moment

When someone messages you from a known account, your brain relaxes.
Better default: treat "a familiar username" as not enough. If the message is asking for a meeting, money, access, or urgency, do a quick VerifyHuman check.

  • If the account is hijacked, the attacker cannot complete a check that depends on the trust you already built with the real person.
  • You get a clean "verified / not verified" outcome without debating tone, grammar, or vibes.

2) The "fake Zoom meeting" moment

Verify identity before you treat the call as legitimate. If they stall, deflect, or can't complete it, you end the call before the attacker gets to the ClickFix step.

3) The "audio issue — run these commands" moment

ClickFix works because it reframes a security boundary as "helpful troubleshooting." VerifyHuman can prevent you from ever entering the attacker's scripted funnel. If the person cannot verify, you do not follow their instructions, especially not "paste this into Terminal."

What to do today

If you are a founder, developer, or exec who takes a lot of calls, adopt these defaults:

  • Do not treat "familiar account" as proof. Hijacked accounts are exactly how these scams start.
  • Build trust once, then verify fast for any sensitive conversation: money, access, credentials, or urgent requests.
  • Never run commands from a call, even if the person seems legitimate.
  • Treat "quick call" + "audio issues" as a red-flag combo.

Want a lightweight way to verify who is really on the call?

If deepfakes are part of the threat landscape, we need a better default than "they looked real on Zoom."

Learn more in our FAQ