The rise of fake help desk calls

December 3, 2025

by imper.ai

A help desk agent answers an incoming call.

On the other end, “Sara” sounds rushed, apologetic – and extremely plausible. She knows the internal project names. She references a recent outage. She even uses the same casual phrases Sara always uses. She just needs a quick password reset. It happens to all of us at some point, right?

Except it isn’t Sara on the line.

Impersonation has always been the core of social engineering. In fact, around 60% of social engineering attacks involve impersonation – pretending to be a trusted colleague, vendor or authority figure.

What’s changed is the speed and precision with which attackers can now pull it off.

AI-powered tools mean that attackers can scrape vast amounts of personal and organizational data in minutes, including:

  • What “Sara” sounds like
  • How she writes
  • What tools she uses
  • What pressures she’s under
  • Even the internal jargon she’d naturally reference


And here’s the twist: When the time comes to carry out the attack, they don’t actually need a cloned voice. They just need to sound confident – and convincingly human.

This is the real threat facing modern help desks: not AI replacing humans, but AI empowering attackers to impersonate them faster, more accurately and at scale.


When trust becomes the target

It’s common to think of impersonation as the new frontier of social engineering – but it’s actually the foundation. Today’s threat actors don’t necessarily need to breach software or deploy malicious code. Instead, they exploit trust. And that shift is quietly powerful.

In recent years, cybercrime has moved away from technical break-ins and toward human compromise.

According to the Verizon 2024/25 Data Breach Investigations Report (DBIR), 68% of breaches involved a non-malicious human element – that is, someone being manipulated or making a mistake.

At the same time, about 17% of confirmed breaches stemmed from social engineering, putting it firmly among attackers’ favorite entry points.

So, what’s changed?

  • Attackers now use AI and big-data tools to conduct research at scale – collecting what someone like “Sara” says, the emails she subscribes to, how she writes and what tools she uses.
  • Then they impersonate her voice, tone and workflow – they just need to sound right.
  • The result: The barrier to entry has plummeted, meaning fewer technical exploits and more social finesse.


Why the help desk is a prime target

For attackers, the help desk is the perfect storm of high trust and high pressure. It’s one of the few places in an organization where strangers routinely ask for sensitive actions – and agents are expected to help fast.

Help desk teams exist to unblock people. They’re trained to solve problems quickly, reset credentials, grant temporary access and keep operations running smoothly. But of course, that makes them a natural target.

Attackers know this and they weaponize it.


One login is all they need to move laterally

A single credential reset might feel like a small thing. But to an attacker, it’s a valuable foothold.
With one valid login, a threat actor can:

  • Blend into normal traffic
  • Explore internal resources
  • Identify higher-privileged accounts
  • Exploit weak segmentation
  • Pivot to other machines inside the network


This ability to move laterally is exactly how attackers escalate from a simple impersonation call to a full-scale breach. They rarely stop at the first account; they use it as a stepping stone to reach someone with more power and more valuable data.


A real-world example: Clorox–Cognizant

During the Clorox–Cognizant breach, attackers exploited help desk processes to reset a password – and that was the tipping point. From there, they navigated internally and accessed critical systems, including those behind Clorox’s supply chain.

Clorox ended up suing Cognizant (which managed its IT help desk), holding the firm responsible for a cyberattack that crippled Clorox’s production capability and cost the company $380 million.

And it wasn’t even a sophisticated exploit. It was a social one.


Why traditional safeguards fail

Most organizations assume their existing safeguards – MFA, caller verification scripts, collaboration platform logs, even voice recognition cues – are enough. But modern attackers are slipping straight through the gaps between systems.

Why is this happening?

  1. Collaboration tools weren’t built to verify identity
    Slack, Teams, Zoom, email – these platforms connect people, but they don’t confirm that the person behind an account is who they claim to be.

    Attackers exploit this by using:
    • Compromised accounts
    • Newly created lookalike accounts
    • Hijacked session cookies
    • Convincing usernames or display names

      Once inside a communication channel, they can sound authoritative and appear legitimate. There is no built-in identity assurance, just the illusion of it.
  2. Attackers spoof multiple channels to stack legitimacy
    Modern social engineering rarely comes through just one vector. Attackers now combine channels to create urgency and credibility simultaneously.

    And the methods go both ways:

    When impersonating the employee
    They may:
    • Gather intel using AI reconnaissance
    • Call the help desk pretending to be the employee
    • Reference real projects, colleagues and workflows
    • Push for a password reset or MFA change

      But attackers also impersonate IT to trick employees

      A second pattern is growing fast: Attackers pose as the IT help desk and call employees directly.

      A typical sequence looks like this:
    • They trigger an MFA bomb, flooding the employee with approval requests.
    • They immediately follow up with a ‘helpful’ phone call, pretending to be IT.
    • They claim something is wrong with the employee’s MFA device, laptop or VPN.
    • They instruct the employee to download a remote-access tool like Quick Assist, AnyDesk or TeamViewer.
    • Once installed, the attacker gains full control of the device – and therefore the network. (A minimal detection sketch for this push-flood pattern appears after this section.)
  3. Voice authentication is unreliable in the age of synthesis
    Relying on vocal familiarity (‘that sounds like Sara’) is no longer enough. Even without high-end voice cloning, attackers can:
    • Mimic tone and cadence
    • Reuse scraped audio snippets
    • Sound rushed or emotionally charged to suppress scrutiny


Even when AI voice-synthesis detection tools exist, they operate in a constant cat-and-mouse dynamic. As generative models evolve, attackers can quickly outpace any system that relies on analyzing the audio itself.

This is why organizations can’t depend on voice analysis alone – it’s a moving target.
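
Returning to the MFA-bombing pattern above: one practical safety net is simply rate-flagging push requests before an agent ever picks up the phone. The sketch below is illustrative only – the event format, window and threshold are assumptions, not a recommended configuration – but it shows the idea: flag any user whose authenticator is being flooded, so resets for that account can be paused until a callback on a verified number succeeds.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Illustrative sketch only: a simple detector for MFA "push bombing",
# assuming a hypothetical stream of (user, timestamp) push events.
# A real deployment would consume the identity provider's logs instead.

WINDOW = timedelta(minutes=5)   # look-back window (assumed value)
THRESHOLD = 5                   # pushes per window before flagging (assumed value)

class PushFloodDetector:
    def __init__(self, window=WINDOW, threshold=THRESHOLD):
        self.window = window
        self.threshold = threshold
        self.events = defaultdict(deque)  # user -> recent push timestamps

    def record_push(self, user: str, ts: datetime) -> bool:
        """Record an MFA push request; return True if it looks like a flood."""
        q = self.events[user]
        q.append(ts)
        # Drop events that have fallen out of the look-back window.
        while q and ts - q[0] > self.window:
            q.popleft()
        return len(q) >= self.threshold

detector = PushFloodDetector()
now = datetime.now()
for i in range(6):
    flooded = detector.record_push("sara", now + timedelta(seconds=20 * i))
print("flag for follow-up:", flooded)  # True: six pushes in two minutes
```

The design choice worth noting: the signal here is volume and timing, not audio or voice – exactly the kind of machine-checkable evidence a rushed, pressured human can’t be talked out of.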


From awareness to assurance

For years, organizations have relied on security awareness training, verification scripts and manual checks to defend against social engineering. But awareness alone can’t keep up with attackers who move faster, learn faster and impersonate more convincingly than ever.

It’s time to move from human awareness to machine-backed assurance.


Real-time impersonation detection: built for the help desk frontline

imper.ai gives humans the seamless safety net they’ve never had.

Instead of depending on agents to spot subtle cues of deception, imper.ai analyzes signals that are extremely difficult for attackers to mimic:

  • Device fingerprints: is the request coming from a known device?
  • Network diagnostics: is the network signature consistent with the real user’s usual environment?
  • Behavioral metrics: are typing patterns, navigation flows and interaction habits typical for this person?


Within seconds, imper.ai can assess whether the interaction shows signs of impersonation.
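
To make that concrete, here is a deliberately simplified sketch of how signals like these could be folded into one score. It is not imper.ai’s implementation – the field names, weights and threshold are invented for illustration – but it captures the logic: each mismatch adds risk, and a high combined score routes the request to stricter verification instead of an instant reset.

```python
# Hypothetical sketch of combining identity signals into a single risk
# score. NOT imper.ai's actual implementation; names, weights and the
# threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class SessionSignals:
    device_known: bool    # device fingerprint seen before for this user?
    network_match: float  # 0..1 similarity to the user's usual network signature
    behavior_match: float # 0..1 similarity of typing/navigation patterns

def impersonation_risk(s: SessionSignals) -> float:
    """Return a 0..1 risk score; higher means more likely an impostor."""
    risk = 0.0
    risk += 0.0 if s.device_known else 0.4  # unknown device is a strong signal
    risk += (1.0 - s.network_match) * 0.3   # unfamiliar network adds risk
    risk += (1.0 - s.behavior_match) * 0.3  # atypical behavior adds risk
    return min(risk, 1.0)

caller = SessionSignals(device_known=False, network_match=0.2, behavior_match=0.3)
score = impersonation_risk(caller)
print(f"risk={score:.2f}",
      "-> escalate to verified callback" if score > 0.5 else "-> proceed")
```

In a real system the weights would likely be tuned or learned from historical sessions rather than hand-set, but the structure – several independent, hard-to-fake signals combined – is what keeps the check robust even when any single signal is spoofed.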


Final thoughts

Impersonation has always been the heart of social engineering – and today, AI has supercharged its speed and accuracy.

Help desks sit directly in the path of these attacks, expected to deliver both efficiency and perfect judgment under pressure.

imper.ai exists to fix that. By providing real-time, invisible identity assurance, imper.ai turns trust from a vulnerability into a defense, empowering frontline teams to work with confidence and speed.

Help desk employees shouldn’t be expected to outsmart AI-powered attackers – but they should be equipped with technology that protects them automatically.