AI-powered fraud involving cloned voices and fake meeting participants rose 1,210% over the past year, according to voice security firm Pindrop, which says attackers are increasingly using synthetic audio and deepfake-style impersonation to trick employees, customers and call center staff.
The warning, reported by Infosecurity Magazine, points to two fast-growing channels: voice fraud over phone calls and “virtual meeting fraud,” where criminals use manipulated audio or video to pose as executives, co-workers or trusted contacts in video conferences. The shift matters because many organizations still treat a familiar voice or face on a call as informal proof of identity.
Pindrop’s findings align with recent real-world cases. In one of the most cited examples, a Hong Kong employee was reportedly convinced to transfer about $25 million after joining a video meeting populated by deepfake versions of colleagues and a senior executive, according to Reuters. The case showed how business email compromise tactics can be amplified by AI-generated voice and video, making fraudulent requests far harder to spot.
The technical barrier has also dropped. Attackers can now pull voice samples from earnings calls, interviews, social media clips and voicemail greetings, then use AI tools to generate convincing speech on demand. In meeting scams, they can combine stolen profile images, compromised collaboration accounts and synthetic media to create a realistic but fake presence on Zoom or Teams. Traditional warning signs still apply, though they can be subtler: unusual urgency, requests to bypass approval steps, strange cadence in speech, limited facial movement, or pressure to move conversations off normal channels.
The immediate risk is financial loss through wire fraud, payment diversion and account takeover. Longer term, the trend undermines trust in remote communications and weakens voice-based verification, including some call center authentication flows. Organizations reviewing defenses should focus less on whether a voice sounds real and more on process controls: callback verification using known numbers, dual approval for payments, and out-of-band checks for sensitive requests. For remote staff handling sensitive conversations, a trusted VPN may help reduce exposure to adjacent risks such as account compromise, but it will not solve impersonation fraud on its own.
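The process controls described above can be expressed as simple policy logic rather than anything AI-specific. The sketch below is purely illustrative: the class, function names and the $10,000 threshold are assumptions for this example, not any vendor's product or Pindrop's recommendation.

```python
from dataclasses import dataclass, field

# Illustrative sketch of "dual approval plus out-of-band callback" logic.
# All names and thresholds here are hypothetical.

@dataclass
class PaymentRequest:
    requester: str
    amount_usd: float
    approvals: set = field(default_factory=set)
    callback_verified: bool = False  # set only after calling back a known number

    def approve(self, approver: str) -> None:
        # An approver must be independent of the requester.
        if approver == self.requester:
            raise ValueError("requester cannot approve their own payment")
        self.approvals.add(approver)

    def can_execute(self, dual_approval_threshold: float = 10_000) -> bool:
        # Below the threshold, one independent approval suffices; above it,
        # require two approvers plus an out-of-band callback verification.
        if self.amount_usd < dual_approval_threshold:
            return len(self.approvals) >= 1
        return len(self.approvals) >= 2 and self.callback_verified

req = PaymentRequest(requester="alice", amount_usd=250_000)
req.approve("bob")
print(req.can_execute())   # still blocked: needs a second approver and a callback
req.approve("carol")
req.callback_verified = True
print(req.can_execute())
```

The point of a control like this is that it does not depend on judging whether a voice or face is genuine: even a perfect deepfake on a video call cannot satisfy a second approver and a callback to a number on file.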
Pindrop’s numbers add to a growing body of evidence that AI impersonation is moving from novelty to routine criminal tradecraft, especially anywhere trust is built over phone calls or video meetings.