The Configuration Drift Behind the Teams Helpdesk Breach

April 27, 2026


What happened

On April 22, 2026, Google's Threat Intelligence Group and Mandiant disclosed a campaign by a threat actor they're tracking as UNC6692. The group breached enterprise networks by impersonating IT helpdesk staff over Microsoft Teams, ultimately exfiltrating Active Directory databases and achieving full domain compromise.

What's notable about UNC6692 is what they didn't do. They didn't use a zero-day. They didn't exploit a software vulnerability. They didn't bypass any of the security controls Microsoft has built into Teams.

They used the controls as configured.

The attack chain, in brief:

  1. Mass email bombing to overwhelm and distract the target
  2. A Teams chat from an external tenant, with the attacker posing as IT
  3. A link to a fake "patch" hosted on AWS S3
  4. Credential harvesting and a malicious browser extension installed on the endpoint
  5. Lateral movement, LSASS dumping, and ultimately exfiltration of the AD database

Microsoft's own April 2026 advisory was clear about how this works: the campaign abuses legitimate external collaboration features, and victims override clearly presented warnings to allow it to succeed.

So if Microsoft built the right controls and gave the right guidance, why did this work?

The answer is drift

The controls UNC6692 needed to be loose were exactly the ones that should be locked down: external chat with unmanaged tenants, guest user permissions, sideloading, URL reputation checks in messages.

These aren't obscure settings. They're documented, defaulted reasonably, and well-understood by Teams admins. The problem is that they don't stay where you put them.

A new project requires guest access for a vendor. Someone's policy gets relaxed. The change is made in good faith, scoped to one team, intended to be temporary. Six months later, the temporary policy is the global policy. No one revisited it. No one alerted on it. The tenant's posture has quietly drifted away from the secure baseline your security team documented and approved.  

This isn't really about Microsoft. It's about the gap between the configuration your team thinks is in place and the configuration that's actually live in production right now. That gap exists in every collaboration platform, every identity provider, and every email security tool.  
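The gap between documented and live configuration is mechanical to check once both are in hand. As a minimal sketch (setting names and values here are illustrative, not actual Teams API output), drift detection reduces to diffing an approved baseline against a live snapshot:

```python
# Minimal sketch of drift detection: diff a documented secure baseline
# against a live configuration snapshot. Setting names and values are
# illustrative placeholders, not real Teams policy output.

def find_drift(baseline: dict, live: dict) -> dict:
    """Return settings whose live value differs from the approved baseline."""
    return {
        setting: {"expected": expected, "actual": live.get(setting)}
        for setting, expected in baseline.items()
        if live.get(setting) != expected
    }

baseline = {
    "external_chat_with_unmanaged_tenants": "blocked",
    "guest_user_permissions": "restricted",
    "app_sideloading": "disabled",
}

# Six months of good-faith, "temporary" changes later:
live = {
    "external_chat_with_unmanaged_tenants": "allowed",
    "guest_user_permissions": "restricted",
    "app_sideloading": "enabled",
}

for setting, diff in find_drift(baseline, live).items():
    print(f"{setting}: expected {diff['expected']}, found {diff['actual']}")
```

The hard part in practice is not the diff but keeping the baseline authoritative and the snapshot fresh, which is the part that quietly stops happening.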

UNC6692 is a campaign that finds that gap.

What Reach customers see in their environment

Reach surfaces this gap in two ways.

Hardening findings evaluate your current Teams configuration against a security baseline, but more importantly, they bring context to the gap. Reach shows which users are affected, what the risk is if a control stays loose, and what would change if you tightened it. That context is the difference between a finding you act on and a finding you ignore. For the controls UNC6692's chain depends on, relevant findings include:

  • Malicious URL protection in Teams messages — directly in the path of step 3 of UNC6692's chain, where the attacker delivers the link to the fake "Mailbox Repair Utility" over chat. With this on, Microsoft warns users when they click unsafe links in Teams.
  • Meetings with unmanaged Microsoft accounts — restricts the surface area for external accounts to interact with internal users.
  • External participants give/request control — limits what external users can do once they're in a session.

Drift detection alerts when these and adjacent settings change after the fact. Across our customer base, the policies that drift most often in Teams environments include:

  • AllowChannelSharingToExternalUser — whether channels can be shared externally
  • AllowSharedChannelCreation — whether new shared channels with external orgs can be created
  • AllowSideLoading — whether non-Microsoft-vetted apps can be installed in Teams
  • UrlReputationCheck — whether URL reputation scanning is enforced on Teams messages

A loosening of any one of these is a yellow flag. Two or more loosened simultaneously is the configuration profile UNC6692 needs.
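That severity logic can be sketched directly. The policy names below mirror the list above, but the secure values and the scoring rule are a simplified illustration, not Reach's actual scoring:

```python
# Hypothetical severity heuristic for the four drift-prone Teams policies:
# one loosened control is a warning; two or more loosened together form the
# configuration profile the UNC6692 chain depends on. Secure values shown
# here are illustrative assumptions, not official Microsoft defaults.

DRIFT_PRONE = {
    "AllowChannelSharingToExternalUser": False,  # secure: no external sharing
    "AllowSharedChannelCreation": False,         # secure: no new shared channels
    "AllowSideLoading": False,                   # secure: no unvetted apps
    "UrlReputationCheck": True,                  # secure: URL scanning enforced
}

def assess(live: dict) -> str:
    """Classify a live policy snapshot against the secure values."""
    loosened = [name for name, secure in DRIFT_PRONE.items()
                if live.get(name) != secure]
    if len(loosened) >= 2:
        return f"red: attack-enabling combination {loosened}"
    if loosened:
        return f"yellow: {loosened[0]} loosened"
    return "green: matches secure baseline"

print(assess({"AllowChannelSharingToExternalUser": False,
              "AllowSharedChannelCreation": False,
              "AllowSideLoading": True,          # loosened
              "UrlReputationCheck": False}))     # loosened -> red
```

The point of the threshold is combinatorial: each setting on its own has a legitimate business reason to be relaxed, but the attack chain only works when several are relaxed at once.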

What we don't claim

Reach isn't your EDR, your SIEM, or your incident response tool, and we don't pretend to be. Those tools watch what's happening. Reach watches what's possible — the configuration assumptions an attacker like UNC6692 needs to be true before they ever send the first message.

By the time a malicious binary is executing on an endpoint, you're paying full price for the breach: incident response, forensics, customer notification, regulatory exposure. Stopping the attack before it starts is a fundamentally different economic proposition from detecting it mid-flight.

Reach evaluates how the controls across your stack are actually configured right now — your collaboration platforms, your identity providers, your email security, and yes, your EDR. UNC6692's chain depends on a specific set of configuration assumptions across that whole stack. Reach makes those assumptions visible, scores them, and tells you when they change.

The longer those assumptions stay loose, the more campaigns find them.

Configurations drift. Attackers walk through the gap.

UNC6692 is one campaign. There will be others, and they'll abuse different combinations of legitimate features, because the pattern itself is durable.

The findings in your Reach console aren't just a checklist. They're the configuration assumptions that adversaries depend on staying loose. Watching them is upstream security work — quieter than detection, less dramatic than response, and the reason a campaign like this either lands or doesn't.

If you want to see what your environment looks like through this lens, that's a conversation we'd like to have.
