Four DPRK IT Workers Entered The Hiring Pipeline. Infrastructure-Layer Detection Stopped Them.

April 13, 2026

by imper.ai

In Q1 2026, imper.ai processed pre-employment verification for 600 candidates at a global enterprise. Four were running confirmed DPRK IT worker tradecraft – a form of candidate fraud that existing hiring controls are not designed to detect. All four were flagged automatically at the pre-credential stage, from network, device, and identity signals alone.

What DPRK IT worker hiring fraud looks like in practice

DPRK IT workers have been targeting US enterprise hiring pipelines since at least 2022. The program is structured and profitable: individual workers earn up to $300,000 annually at US companies, with the regime capturing up to 90% of those earnings. The DOJ, OFAC, Mandiant, Unit 42, Microsoft, and GitLab have all published research on the program. The TTP set has stayed remarkably stable across that entire period – Astrill VPN, remote access tooling, VoIP-provisioned identities, multi-hop proxy routing.

One number worth sitting with: nearly every Fortune 500 CISO interviewed by Mandiant has acknowledged hiring at least one DPRK IT worker. The program works not because it’s technically sophisticated, but because the hiring pipeline has no instrumentation at the layer where these signals live. Hiring pipeline security has historically meant background checks and document verification. Neither operates at the device or network layer, where DPRK IT worker tradecraft is visible.

The compliance angle compounds the risk. Paying a DPRK IT worker’s salary is an OFAC sanctions violation regardless of whether the employer knew. Same unmonitored gap, two distinct exposures. For legal and compliance teams, this reframes the hiring pipeline from an HR problem to a sanctions risk — one that originates from candidate fraud at the sourcing stage, not a post-hire security event.

How DPRK IT workers were detected: network, device, and identity signals

Detection ran across three correlated signal layers. No single indicator was treated as sufficient on its own.

Network layer

Astrill VPN was present across all four candidates – the same tooling documented in DPRK IT worker investigations by Mandiant, GitLab, Recorded Future, Team Cymru, and Microsoft.

One candidate was routing through a multi-hop chain: Astrill into a Japan exit node, then forwarded through a US-facing proxy. Another was running three VPN providers simultaneously: Astrill at the OS level, plus CyberGhost and Urban VPN as active browser extensions.

Latency inconsistencies were present across multiple candidates relative to their claimed US locations. This matters because latency is physics — a VPN exit node can make a session appear US-based, but it can’t change the round-trip time between the device and infrastructure near the candidate’s actual location. That delta isn’t spoofable at scale.
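The latency argument above can be sketched as a plausibility check. This is an illustrative model, not imper.ai's implementation: the fiber propagation constant, the slack value, and the function names are all assumptions. The core idea is that round-trip time has a physical floor set by distance, so a device far from its claimed location cannot produce an RTT consistent with that claim.

```python
# Rough latency plausibility check - a sketch of the physics described above.
# The constant, slack, and helper names are illustrative assumptions.

FIBER_KM_PER_MS = 100  # light in fiber covers roughly 100 km per millisecond

def min_rtt_ms(distance_km: float) -> float:
    """Physical lower bound on round-trip time for a given distance."""
    return 2 * distance_km / FIBER_KM_PER_MS

def latency_is_plausible(measured_rtt_ms: float,
                         claimed_distance_km: float,
                         slack_ms: float = 60.0) -> bool:
    """A device genuinely near the claimed location should see an RTT within
    some slack of the physical minimum; a detour through an overseas VPN
    entry point adds round-trip time that cannot be hidden."""
    return measured_rtt_ms <= min_rtt_ms(claimed_distance_km) + slack_ms

# Hypothetical example: candidate claims a location ~1,480 km from the probe.
# Physical minimum RTT is ~29.6 ms, so 210 ms is far outside plausible range.
print(latency_is_plausible(210.0, 1480))  # False: flag for review
print(latency_is_plausible(45.0, 1480))   # True: consistent with the claim
```

Real measurements would need multiple probes and repeated samples to separate VPN detours from ordinary network jitter; the delta the article describes is the gap between measured RTT and this physical floor.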

Device layer

AnyDesk was running on two candidates’ interview devices. That’s consistent with real-time third-party session control — a technique documented across Recorded Future, Microsoft, and Mandiant research on this actor set.
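A device-layer indicator like this reduces to matching the interview device's process list against known remote-access tooling. The sketch below assumes a process list is already available (via an agent or browser telemetry, which is out of scope here); the tool list and function names are illustrative.

```python
# Illustrative device-layer check: flag known remote-access tools in a
# process list. Tool names come from public DPRK IT worker reporting;
# how the process list is collected is a separate (assumed) capability.

REMOTE_ACCESS_TOOLS = {"anydesk", "teamviewer", "rustdesk"}

def remote_access_indicators(process_names: list[str]) -> set[str]:
    """Return which known remote-access tools appear in the process list."""
    running = [p.lower() for p in process_names]
    return {tool for tool in REMOTE_ACCESS_TOOLS
            if any(tool in proc for proc in running)}

hits = remote_access_indicators(["chrome.exe", "AnyDesk.exe", "explorer.exe"])
print(hits)  # {'anydesk'}
```

As the article stresses, a hit here is one correlated signal, not a verdict on its own: plenty of legitimate candidates have remote-access software installed.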

Identity layer

All four presented VoIP phone numbers and low-tenure email accounts with no meaningful activity history. Minimal account age, no verifiable identity footprint, virtual phone provisioning — the fingerprint of a constructed persona.
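The identity-layer heuristic can be sketched as a simple additive score: each signal is weak alone, but a VoIP number plus a young, inactive email account is the constructed-persona fingerprint described above. Field names, thresholds, and point weights here are illustrative assumptions, not imper.ai's model.

```python
# Sketch of the identity-layer heuristic. Weights and the 180-day tenure
# threshold are illustrative; only the combination of signals matters.

from dataclasses import dataclass

@dataclass
class IdentityProfile:
    phone_is_voip: bool          # number provisioned by a VoIP carrier
    email_age_days: int          # tenure of the candidate's email account
    email_activity_events: int   # verifiable activity tied to the address

def constructed_persona_score(p: IdentityProfile) -> int:
    points = 0
    if p.phone_is_voip:
        points += 4              # virtual phone provisioning
    if p.email_age_days < 180:
        points += 3              # low-tenure account
    if p.email_activity_events == 0:
        points += 3              # no meaningful activity history
    return points

candidate = IdentityProfile(phone_is_voip=True, email_age_days=45,
                            email_activity_events=0)
print(constructed_persona_score(candidate))  # 10: all three signals fire
```

A long-tenured address with real activity and a carrier-provisioned number scores zero, which is the point: the heuristic targets the absence of a lived-in identity, not any single attribute.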

Every TTP observed maps directly to published threat intelligence. These findings are live field corroboration of a pattern the community has tracked since 2022, caught automatically within an active hiring workflow. Full methodology and per-candidate signal breakdown is in the imper.ai Q1 2026 threat research brief.
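The three layers above only produce a flag in combination — per the article, no single indicator is treated as sufficient on its own. One way such correlation could work is a weighted sum across layers with a threshold that a single layer cannot cross alone; the weights, threshold, and names below are assumptions for illustration, not imper.ai's scoring model.

```python
# Hypothetical multi-layer correlation: flag only when independent signal
# layers agree. Layer names follow the sections above; weights and the
# threshold are illustrative.

LAYER_WEIGHTS = {"network": 0.45, "device": 0.25, "identity": 0.30}
FLAG_THRESHOLD = 0.6  # no single layer can cross this alone

def candidate_risk(signals: dict[str, bool]) -> float:
    """signals maps each layer to whether any indicator fired there."""
    return sum(w for layer, w in LAYER_WEIGHTS.items() if signals.get(layer))

def should_flag(signals: dict[str, bool]) -> bool:
    return candidate_risk(signals) >= FLAG_THRESHOLD

# A network-only hit (e.g. an ordinary privacy-conscious VPN user) stays
# below threshold; network plus identity together cross it.
print(should_flag({"network": True}))                    # False
print(should_flag({"network": True, "identity": True}))  # True
```

Requiring cross-layer agreement is what keeps the false-positive rate workable: a lone VPN user, or a lone candidate with a new email address, never reaches the threshold by design.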

Why deepfake detection is insufficient for DPRK IT worker hiring fraud

A lot of security teams land on deepfake detection as the answer to hiring fraud. Two problems with that: it requires a live video interview to run, and all four candidates here were flagged before any interview was scheduled. More fundamentally, DPRK IT workers don’t always use deepfakes — some appear on camera as themselves, using a real person as a front. Video analysis misses that entirely. And even when deepfakes are in play, it’s a probabilistic control in an arms race with no stable equilibrium.

Infrastructure signals don’t have that problem. Latency is physics, VPN fingerprinting is behavioral, remote access tooling is binary — none of it shifts with the AI landscape. There’s also a legal dimension: analyzing candidate video creates real exposure under biometric data laws in many jurisdictions, and a lot of legal and HR teams simply can’t approve it. Network and device-layer instrumentation produces the same detection outcome without touching the video content.

Why pre-credential is the only window that matters

Once a fraudulent candidate gets credentials, they’re inside with legitimately issued access. No anomalous login. No lateral movement. The attacker is the employee.

imper.ai for Hiring embeds directly into the ATS and interview workflow and produces a risk score before credentials are issued — no document uploads, no biometric enrollment, no recruiter workflow changes.

All four were stopped before they were hired. That’s the window.



imper.ai Threat Research | Q1 2026

Attribution confidence: moderate-to-high, based on TTP alignment with published DPRK IT worker research. imper.ai does not claim definitive attribution absent additional corroborating intelligence.