The call comes in at 2 AM. Your SIEM just fired on something suspicious — a Windows host quietly checking in with an external server every 45 seconds. No user interaction, no application prompt, just a steady rhythmic heartbeat. By the time the analyst opens the ticket, the attacker has been inside your network for twelve hours.

That heartbeat is a Cobalt Strike beacon.

TL;DR

  • Cobalt Strike beacons leave network-level fingerprints: known JARM hashes, JA3/JA3S signatures, and predictable HTTP patterns.
  • At the host level, watch for named pipes matching \msagent_* / \postex_*, rundll32.exe with no arguments, and Sysmon events 8, 10, 17, 18 in sequence.
  • Malleable C2 profiles disguise beacon traffic as jQuery, OneDrive, or Amazon — but URI patterns, User-Agents, and TLS certificates still expose them.
  • RITA detects the beaconing rhythm even when jitter is applied, by scoring statistical periodicity across hundreds of connections.
  • Cracked copies of Cobalt Strike circulate freely in the threat actor ecosystem — you will encounter it regardless of attacker budget.

Why This Matters

Cobalt Strike shows up in the majority of ransomware intrusions and nation-state operations analyzed in incident response reports. It is not a niche tool for sophisticated groups — it is the standard. Cracked copies have been freely available since at least 2020, meaning underfunded criminal groups and APT teams are operating the same framework.

If you are a SOC analyst, blue teamer, or anyone responsible for endpoint or network security, Cobalt Strike will be part of your threat landscape. Understanding how it behaves — at the network level, at the host level, and under customization — is the starting point for detecting it.



What Is Cobalt Strike?

Cobalt Strike is a commercial penetration testing framework originally built for adversary simulation. Red teams use it to emulate real attackers inside a corporate environment, testing whether defenses would catch them. It provides a full post-exploitation toolkit: interactive sessions, lateral movement, privilege escalation, credential theft, and data exfiltration.

The core component is the beacon — a small agent that runs silently on a compromised host and “calls home” to a team server controlled by the operator. The beacon receives commands and executes them, making Cobalt Strike a classic C2 (Command and Control) framework. Think of it as a very capable remote desktop, except the victim never sees a window.

Cobalt Strike maps nearly completely to the MITRE ATT&CK framework, covering techniques from initial access all the way through exfiltration. That coverage is precisely why real attackers — not just red teamers — favor it.


The Beacon: How C2 Communication Works

The beacon is designed to be patient. In default configuration it checks in with the team server roughly every 45 seconds, with a 37% jitter — a randomization that makes the interval slightly unpredictable. Instead of exactly 45 seconds every time, the sleep might be 38 seconds, then 51, then 44. The randomization is intended to defeat simple threshold-based detection.
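The sleep-and-jitter behavior can be modeled in a few lines. The symmetric-jitter formula below is an illustrative simplification, not the beacon's actual algorithm:

```python
import random

def next_checkin(sleep: float = 45.0, jitter: float = 0.37) -> float:
    """Return one jittered sleep interval. Symmetric +/- jitter is
    an illustrative model; the real beacon's formula may differ."""
    return sleep * (1.0 + random.uniform(-jitter, jitter))

# Ten simulated check-in intervals, all clustered around 45 seconds
intervals = [round(next_checkin(), 1) for _ in range(10)]
print(intervals)
```

Every interval lands between roughly 28 and 62 seconds — unpredictable from one check-in to the next, but tightly clustered when viewed in aggregate, which is exactly what statistical detection exploits.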

Default HTTP Patterns

Out of the box, before any customization, Cobalt Strike beacon HTTP traffic has these characteristics:

  • GET request — beacon retrieves pending commands from the team server
  • POST request — beacon sends back results and collected data
  • URIs — default paths include /jquery-3.3.1.min.js and /jquery-3.3.2.min.js
  • User-Agent — defaults to Mozilla/5.0 (Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko, a spoofed IE11 string. This is almost always changed by real operators via a Malleable C2 profile — modern deployments use Chrome, Firefox, or application-specific UAs to blend in.

A GET request every 45 seconds to an external IP is already unusual enough to alert on regardless of the User-Agent. Experienced operators customize these defaults, but the customization itself creates other detectable inconsistencies, as covered below.
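These defaults are simple enough to match directly in proxy logs. A minimal sketch — the function name and field layout are assumptions; adapt them to your proxy's log schema:

```python
# Flag log entries that match Cobalt Strike's out-of-the-box HTTP
# defaults: a known stager URI combined with the spoofed IE11 UA.
DEFAULT_URIS = {"/jquery-3.3.1.min.js", "/jquery-3.3.2.min.js"}
DEFAULT_UA = "Mozilla/5.0 (Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko"

def is_default_beacon_request(uri: str, user_agent: str) -> bool:
    """True when both the URI and User-Agent match CS defaults."""
    return uri in DEFAULT_URIS and user_agent == DEFAULT_UA
```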


Network-Level Detection

JA3 / JA3S Fingerprinting

JA3 fingerprints TLS connections by hashing specific values from the TLS ClientHello packet — cipher suites, extensions, TLS version. Think of it as a behavioral fingerprint for how a program initiates an encrypted connection. Different software libraries produce different fingerprints, even when connecting to the same destination.
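The construction can be sketched as follows: join the ClientHello fields with commas (dashes within each list) and MD5-hash the result. The numeric values below are illustrative, not a real client's hello, and GREASE-value stripping is omitted for brevity:

```python
import hashlib

def ja3_hash(version, ciphers, extensions, curves, point_formats):
    """Build the JA3 string (comma-separated fields, dash-separated
    lists) and return its MD5 hex digest."""
    fields = [
        str(version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    ja3_string = ",".join(fields)  # e.g. "771,4865-4866,0-23-65281,29-23,0"
    return hashlib.md5(ja3_string.encode()).hexdigest()

# Illustrative values only -- not an actual beacon's ClientHello
print(ja3_hash(771, [4865, 4866], [0, 23, 65281], [29, 23], [0]))
```

The same client library always emits the same ClientHello, so the hash is stable across destinations — which is what makes it usable as a fingerprint.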

Cobalt Strike beacons produce consistent JA3 hashes in default configurations:

Fingerprint Type                   Hash Value
JA3 (client)                       72a589da586844d7f0818ce684948eea
JA3 (client, alternate)            a0e9f5d64349fb13191bc781f81f42e1
JA3S (server response)             ae4edc6faf64d08308082ad26be60767
JA3S (server response, alternate)  b742b407517bac9536a77a7b0fee28e9

Zeek (via the JA3 package) and Suricata (with JA3 support enabled) compute JA3 hashes automatically. A Zeek notice to alert on the default beacon fingerprint:

# Alert when a connection matches the known Cobalt Strike JA3 fingerprint.
# Assumes a JA3 package (e.g. salesforce/ja3) that adds ja3 to the SSL record.
redef enum Notice::Type += { CobaltStrike_JA3_Match };

event ssl_established(c: connection)
    {
    if ( c$ssl?$ja3 && c$ssl$ja3 == "72a589da586844d7f0818ce684948eea" )
        NOTICE([$note=CobaltStrike_JA3_Match,
                $msg="Possible Cobalt Strike beacon (JA3 match)",
                $conn=c]);
    }

What this does: Every TLS connection gets a JA3 hash computed automatically. If it matches a known beacon fingerprint, the connection is flagged for review. Operators can modify their TLS configuration to shift this hash, but many don’t bother — especially with cracked or default builds.

JARM Fingerprinting

While JA3 looks at the client, JARM actively fingerprints the server by sending ten crafted TLS ClientHello packets and hashing the combined responses. It identifies the specific TLS stack running on the remote end.

The JARM fingerprint for a default Cobalt Strike team server is:

07d14d16d21d21d00042d41d00041de5fb3038104f457d92ba02e9311512c2

You can actively scan suspected C2 infrastructure:

# Probe a suspected C2 server and compare against known CS JARM
python3 jarm.py 192.168.1.100 -p 443
# Match against: 07d14d16d21d21d00042d41d00041de5fb3038104f457d92ba02e9311512c2

Plain explanation: Run this against any external IP that your endpoints are maintaining persistent connections with. A JARM match is a strong indicator — not definitive proof on its own, but enough to trigger deeper investigation. Operators can change the TLS configuration to alter the JARM hash, so absence of a match doesn’t rule out Cobalt Strike.

Beaconing Rhythm Detection with RITA

Even with jitter applied, beacons have a statistical rhythm. RITA (Real Intelligence Threat Analytics) analyzes Zeek logs and scores connections based on periodicity, byte consistency, and connection frequency.

The key insight: jitter changes the exact intervals but not the statistical pattern. A beacon sleeping between 35 and 55 seconds still looks periodic when you analyze 200 connections over three hours. RITA will surface it.
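RITA's approach can be approximated with a few lines of statistics. The sketch below is a simplification — RITA combines several sub-scores, including byte-size consistency, while this version scores only the dispersion of inter-connection intervals:

```python
import random
import statistics

def beacon_score(timestamps):
    """Score how periodic a connection series is, loosely in the
    spirit of RITA's beacon analysis (interval dispersion only)."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(deltas) < 2:
        return 0.0
    mid = statistics.median(deltas)
    if mid <= 0:
        return 0.0
    # Median absolute deviation of the intervals, normalized by the
    # median interval: tight clustering => score near 1
    mad = statistics.median([abs(d - mid) for d in deltas])
    return max(0.0, 1.0 - mad / mid)

# Simulate a jittered 45-second beacon: 200 connections, sleeps in [38, 52]
random.seed(7)
ts, t = [], 0.0
for _ in range(200):
    t += random.uniform(38.0, 52.0)
    ts.append(t)
print(round(beacon_score(ts), 2))  # high score despite the jitter
```

Human-driven traffic produces wildly uneven intervals and scores near zero; the jittered beacon still scores high because its intervals cluster tightly around the median.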

# Import Zeek logs into RITA and show beaconing scores
rita import /var/log/zeek/ my_dataset
rita show-beacons my_dataset | head -30
# Output columns: Score | Source IP | Dest IP | ConnCount | ...
# Score > 0.8 with high ConnCount = investigate immediately

Plain explanation: Scores range from 0 to 1. Above 0.8 with hundreds of connections means statistically regular traffic — the hallmark of an automated beacon. Legitimate applications do not connect to the same external IP hundreds of times in a perfectly paced rhythm.


Host-Level Detection

Named Pipes

Cobalt Strike’s SMB beacon uses named pipes to relay commands and output between beacons on the same compromised network. Named pipes are a Windows IPC (Inter-Process Communication) mechanism — essentially a private channel for two processes to exchange data. Attackers use them to route traffic from isolated hosts through a pivot point that has internet access.

Default pipe names follow predictable patterns:

Pipe Pattern     Purpose
\msagent_*       Post-exploitation job relay
\postex_*        Shell command output
\status_*        Beacon status reporting
\MSSE-*-server   Default SMB listener

Sysmon Events 17 (pipe created) and 18 (pipe connected) log this activity — but these events are off by default in Sysmon and must be explicitly enabled. A Sigma rule to detect default pipe patterns:

title: Cobalt Strike Default Named Pipe Patterns
id: c4b890d3-9b6c-4e2b-a4ab-6f4b3c1d6e2a
status: experimental
logsource:
  product: windows
  category: pipe_created   # Sysmon Event ID 17
detection:
  selection:
    PipeName|re: '\\(msagent_|postex_|status_|MSSE-.*-server)'
  condition: selection
level: high
tags:
  - attack.execution
  - attack.t1055

Plain explanation: Legitimate Windows software rarely creates pipes with these naming patterns. A hit on this rule almost always warrants investigation. Note that operators can rename their pipes — the defaults are used by lazy or automated deployments.
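Outside a SIEM, the same patterns can be matched in a log-processing script. A minimal sketch over Sysmon pipe names — the event field layout is an assumption:

```python
import re

# Default Cobalt Strike pipe patterns from the table above.
# Sysmon Event ID 17/18 logs pipe names like "\msagent_d4".
CS_PIPE_RE = re.compile(r"^\\(msagent_|postex_|status_|MSSE-.*-server)")

def is_suspicious_pipe(pipe_name: str) -> bool:
    """True when a pipe name matches a default CS naming pattern."""
    return CS_PIPE_RE.search(pipe_name) is not None
```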

Process Injection and SpawnTo

When Cobalt Strike runs post-exploitation modules, it injects code into another process rather than running it in the beacon process itself. This limits exposure — if the injected process is killed, the beacon continues. The default injection target (spawnto) is dllhost.exe.

This creates a detectable sequence:

Sysmon Event         ID   What It Captures
ProcessAccess        10   Beacon process opens a handle to the injection target
CreateRemoteThread   8    A thread is injected into the target process
PipeCreated          17   Named pipe opened for output relay
NetworkConnect       3    The injected process phones home

Seeing events 10 → 8 → 17 → 3 on the same process within a short window, with dllhost.exe as the target, is a high-confidence Cobalt Strike indicator.
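The correlation can be sketched as a small sequence matcher. Event records here are simplified (event_id, process_guid, timestamp) tuples — adapt the field names to your SIEM's schema:

```python
from collections import defaultdict

# The Sysmon sequence described above: 10 -> 8 -> 17 -> 3 from the
# same source process within a short time window.
INJECTION_SEQUENCE = [10, 8, 17, 3]

def find_injection_chains(events, window=60.0):
    """Return process GUIDs whose events contain the injection
    sequence, in order, within `window` seconds of the first hit."""
    by_proc = defaultdict(list)
    for event_id, guid, ts in sorted(events, key=lambda e: e[2]):
        by_proc[guid].append((event_id, ts))
    hits = []
    for guid, evs in by_proc.items():
        idx, start = 0, None
        for event_id, ts in evs:
            if event_id == INJECTION_SEQUENCE[idx]:
                if idx == 0:
                    start = ts
                if ts - start <= window:
                    idx += 1
                    if idx == len(INJECTION_SEQUENCE):
                        hits.append(guid)
                        break
    return hits
```

In production you would additionally require the injection target to be dllhost.exe (or whatever spawnto value you are hunting), but the windowed sequence match is the core of the detection.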

Sigma Rule: Rundll32 Without Arguments

A common staging pattern runs an encoded shellcode stage via rundll32.exe with no actual DLL path — just the binary name and nothing else:

title: Rundll32 Without Command-Line Arguments
id: a2f1d8e4-6b2c-4f3a-8d1e-9c4b2a7f3e5d
status: stable
logsource:
  product: windows
  category: process_creation   # Sysmon Event ID 1
detection:
  selection:
    Image|endswith: '\rundll32.exe'
    CommandLine: 'rundll32.exe'
  condition: selection
level: high
tags:
  - attack.defense_evasion
  - attack.t1218.011

Plain explanation: A legitimate invocation of rundll32.exe always includes a DLL path and export function as arguments — for example, rundll32.exe shell32.dll,Control_RunDLL. If you see the process run with no arguments at all, a reflective loader injected it. This is not normal application behavior.

Windows Event Log Indicators

Beyond Sysmon, standard Windows Security and System logs expose several beacon-linked behaviors:

Event ID   Log        What It Flags
4688       Security   Process creation — requires command-line logging enabled
4697       Security   Service installed — from GetSystem via named-pipe impersonation
7045       System     New service installed — same GetSystem path
4624       Security   Logon Type 9 + Negotiate package = Pass-the-Hash attempt

The GetSystem privilege escalation technique installs a temporary service with a 7-character random alphanumeric name in C:\Windows\. The service is removed after the escalation succeeds, but the event log entry (4697/7045) remains. Hunting for short random-named service installations is a high-fidelity signal.
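A hunting filter for that artifact is straightforward, though it needs baselining: legitimate 7-character service names exist (Spooler, for one), so the allowlist below is an assumed starting point, not a complete one:

```python
import re

# GetSystem-style artifact: a service whose name is exactly seven
# alphanumeric characters, as recorded in Event ID 7045 / 4697.
RANDOM_SVC_RE = re.compile(r"^[A-Za-z0-9]{7}$")

# Baseline your own environment first; this set is illustrative.
KNOWN_GOOD = {"Spooler", "Netman7"}

def is_getsystem_candidate(service_name: str) -> bool:
    """True for 7-char alphanumeric service names not on the allowlist."""
    if service_name in KNOWN_GOOD:
        return False
    return bool(RANDOM_SVC_RE.match(service_name))
```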


Malleable C2 Profiles

A Malleable C2 profile completely reshapes how the beacon communicates. Operators define custom HTTP headers, URI paths, User-Agent strings, and response transforms to make beacon traffic blend in with legitimate application traffic. The same beacon binary can look like a jQuery CDN request, a Microsoft OneDrive sync, or an Amazon S3 API call — depending on the profile loaded.

Common profile families seen in real intrusions:

Profile Theme        Traffic It Imitates
jQuery               CDN requests for jquery-3.3.1.min.js, IE11 User-Agent
Amazon AWS           S3-style URIs, AWS SDK headers
Microsoft OneDrive   OneDrive Graph API endpoints
OCSP                 Certificate status protocol patterns
Google               Search/Analytics-style URI parameters

Detecting Profile-Based Traffic

Even with customization, profiles have exploitable weaknesses:

1. UA / TLS mismatch. A profile spoofing an IE11 User-Agent (Trident/7.0) cannot produce a TLS 1.3 JA3 fingerprint. IE11 does not support TLS 1.3. If proxy logs show an IE11 UA but Zeek records a modern JA3 hash, the UA is spoofed.

2. Domain fronting tells. Profiles that use CDN domain fronting (e.g., routing through CloudFront while hitting a different backend) can be caught by comparing the HTTP Host header against the actual resolved IP. A Host: something.cloudfront.net header that resolves to a non-Amazon IP is fronting.

3. Timing still leaks. No matter how convincing the HTTP wrapper, the beacon’s sleep cycle remains. Proxy logs showing a host making requests to d3g9mqev8bfbta.cloudfront.net every 40–50 seconds for six hours is not CDN sync behavior.
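The first of those checks reduces to a simple correlation between the proxy-log User-Agent and the TLS version Zeek recorded for the same connection. A sketch — the version-string format is an assumption; match it to your Zeek field values:

```python
# An IE11 User-Agent (Trident/7.0) cannot negotiate TLS 1.3, so the
# combination of the two is evidence of a spoofed UA.
IE11_MARKER = "Trident/7.0"

def ua_tls_mismatch(user_agent: str, tls_version: str) -> bool:
    """True when the claimed browser cannot speak the observed TLS version."""
    return IE11_MARKER in user_agent and tls_version == "TLSv1.3"
```

The same idea generalizes: any UA/JA3 pairing that a real install of the claimed browser could not produce is worth a lookup table entry.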

A Sigma rule for the Amazon-themed Malleable C2 profile:

title: Cobalt Strike Malleable C2 — Amazon Profile (Proxy)
id: e3d4f5a6-7b8c-4d2e-9f1a-2b3c4d5e6f7a
status: experimental
logsource:
  product: proxy
  category: web
detection:
  selection:
    cs-uri-stem|contains:
      - '/s/ref=nb_sb_noss_1/'
      - '/field-keywords='
    cs-host|endswith: '.cloudfront.net'
  filter:
    dst_ip|cidr: '13.224.0.0/14'   # Legitimate AWS CloudFront range
  condition: selection and not filter
level: medium
tags:
  - attack.command_and_control
  - attack.t1071.001
  - attack.t1090.004

Plain explanation: This flags traffic that looks like an Amazon CDN request but is going to an IP outside Amazon’s actual infrastructure. Real Amazon CDN traffic stays within Amazon’s IP ranges.


Hunting for Team Servers in the Wild

Cobalt Strike team servers expose identifiable characteristics that enable proactive hunting via Shodan and Censys — even before they are used in an attack.

Default Self-Signed Certificate

Fresh Cobalt Strike installations come with a hardcoded default TLS certificate:

  • Issuer / Subject CN: jquery.com
  • Serial number: 146473198
  • Organization: Strategic Cyber LLC (older versions)

No legitimate jQuery infrastructure uses this certificate. A Shodan query to find exposed team servers:

ssl.cert.serial:146473198 AND ssl.cert.subject.cn:jquery.com

Plain explanation: Running this query surfaces team servers that operators forgot — or didn’t bother — to reconfigure. This is threat intelligence value: you can track infrastructure before it’s weaponized, feed the IPs into block lists, or identify patterns in attacker hosting preferences.

Port 50050

The Cobalt Strike team server console listens on TCP port 50050 by default. Shodan indexes open ports globally. Searching for this port alongside other indicators surfaces candidate team servers.

Hunting team servers is a passive threat intelligence activity — the point is to map attacker infrastructure for blocking and attribution, not to interact with it.


What You Can Do Today

Within 24 hours:

  1. Enable Sysmon Events 17 and 18 (named pipe create/connect) in your Sysmon configuration. These are off by default and critical for detecting CS pipe-based beaconing. The SwiftOnSecurity Sysmon config is a solid baseline.

  2. Enable PowerShell Script Block Logging via Group Policy: Computer Configuration → Administrative Templates → Windows Components → Windows PowerShell → Turn on Script Block Logging. CS payloads frequently arrive as encoded PowerShell stagers.

  3. Check proxy logs for rundll32.exe or dllhost.exe making outbound HTTPS connections. Neither process has a legitimate reason to initiate external network connections.

Within one week:

  1. Deploy RITA against your Zeek logs. Beaconing analysis is largely automated — run it nightly against the past 24 hours of logs and alert on scores above 0.7 with more than 100 connections.

  2. Import Cobalt Strike Sigma rules from the SigmaHQ repository into your SIEM. Filter for rules tagged attack.t1055, attack.t1071, and attack.t1218 — these cover process injection, C2 protocols, and LOLBin abuse.

  3. Create a named pipe alert in your SIEM or EDR for creation events matching \msagent_*, \postex_*, and \MSSE-*. Baseline your environment first to identify any legitimate matches before setting severity.

Ongoing:

  1. Track CS JARM fingerprints in threat intelligence feeds. Community MISP sharing lists active Cobalt Strike team server IPs and TLS fingerprints. New infrastructure appears regularly and can be pre-blocked.

  2. Read The DFIR Report intrusion timelines. Real-world CS intrusions are documented in detail, including the exact sequence of artifacts left behind. Understanding the attacker timeline is the fastest way to close detection gaps.


