You pay €15, enter a phone number, and seconds later receive a detailed call history — names, timestamps, durations, even WhatsApp logs. Except none of it is real. Every name, every number, every timestamp was randomly generated from a hardcoded list baked into the app itself.

That’s CallPhantom. Twenty-eight apps on Google Play. Over 7.3 million downloads. Real payments for data that was never possible to retrieve in the first place.

TL;DR

  • ESET identified 28 fake “call history” apps on Google Play with 7.3M+ combined downloads
  • Apps advertised impossible services — accessing another person’s call logs, SMS, or WhatsApp history
  • All “results” were randomly generated from hardcoded templates; no real data was ever retrieved
  • Several apps deliberately bypassed Google Play’s official billing (UPI, embedded card forms) to block refunds
  • Firebase Realtime Database was used as C2 infrastructure to rotate payment accounts without app updates
  • Google removed all the apps after ESET reported them in December 2025; victims who paid through Google Play’s official billing may be eligible for refunds

Why This Matters

CallPhantom is not an isolated incident — it’s a blueprint. The campaign operated for months, reached millions of users, and generated revenue through a technically simple but socially sophisticated attack. No malware in the traditional sense, no sensitive permissions, no data exfiltration. Just psychological manipulation and a deliberately broken payment pipeline.

For security teams, the threat model here is social engineering at scale via an app store. For individuals, it’s a reminder that “available on Google Play” does not equal “safe.” For incident responders, the billing bypass techniques used by CallPhantom represent a pattern increasingly seen in mobile fraud campaigns.


The Impossible Promise

The CallPhantom apps advertised a service that is technically impossible under normal Android permissions: viewing another person’s call history, SMS records, and WhatsApp logs — for any phone number the user supplied.

This is worth pausing on. Android’s permission model specifically prevents apps from accessing call logs or messages belonging to other users or devices. An app would need either:

  • Physical access to the target device with unlocked screen, or
  • Exploitation of OS-level vulnerabilities (the kind nation-state actors use, not €15 apps)

The CallPhantom apps requested neither. They had minimal permissions and zero capability to retrieve real communications data. The product did not and could not exist.

ESET researchers, who named and investigated the campaign as part of their work with the App Defense Alliance, identified two operational clusters:

Cluster A — showed partial “results” upfront using generated data, then required payment to “unlock” the full history.

Cluster B — collected the user’s email address, promising to deliver the logs there after payment. No delivery ever occurred.

Both clusters converged on the same outcome: payment made, fabricated data displayed, no recourse.


How the Fake Data Was Generated

The technical implementation of the fraud was deliberately primitive — which is itself a finding.

The apps contained hardcoded datasets embedded directly in the application code:

  • Phone number pools consisting almost entirely of Indian mobile numbers (country code +91)
  • Name lists with common Indian and South/Southeast Asian names
  • Timestamp generators that created plausible-looking call times and durations
  • “App type” labels including calls, SMS, and WhatsApp entries

When a user entered a target phone number and paid, the app did not query any server, access any database, or communicate with any backend data source. It pulled randomly from these local lists and assembled a fake history on-device.
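The on-device generation step can be sketched roughly as follows. This is an illustrative reconstruction, not decompiled app code: every name, number pool, and field name here is a stand-in for the hardcoded lists ESET describes.

```python
import random
from datetime import datetime, timedelta

# Illustrative stand-ins for the apps' hardcoded pools; the real apps
# shipped much larger lists of Indian names and +91 mobile numbers.
NAMES = ["Rahul", "Priya", "Amit", "Sneha", "Vikram"]
NUMBERS = ["+91" + "".join(random.choices("0123456789", k=10)) for _ in range(20)]
APP_TYPES = ["Call", "SMS", "WhatsApp"]

def fake_history(target_number: str, entries: int = 10) -> list[dict]:
    """Assemble a plausible-looking log purely from local data.
    Note that target_number is never used to look anything up."""
    now = datetime.now()
    history = []
    for _ in range(entries):
        history.append({
            "name": random.choice(NAMES),
            "number": random.choice(NUMBERS),
            "type": random.choice(APP_TYPES),
            "timestamp": (now - timedelta(
                minutes=random.randint(5, 60 * 24 * 30))).isoformat(timespec="minutes"),
            "duration_sec": random.randint(10, 900),
        })
    return history
```

The input the victim cares about — the target phone number — is a dead parameter. Any number produces equally convincing output, which is exactly why the scam could not be disproven from inside the app.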

The irony: the apps didn’t even need network access to generate the “results.” The entire scam could run fully offline.

This approach served two purposes for the operators: it kept the app technically simple (no server infrastructure to maintain for results), and it made detection harder because there were no suspicious API calls to analyze in transit.


Payment Methods and the Billing Bypass

This is where CallPhantom becomes more technically interesting. ESET identified three distinct payment methods across the 28 apps — and two of them deliberately violated Google Play’s payment policies.

Method 1: Google Play Official Billing

Some apps used Google’s in-app purchase system legitimately. This is the safest path for users because:

  • Subscriptions are tracked in the Play Store
  • Refunds can be requested through Google
  • When Google removed the apps, active subscriptions were automatically cancelled

Method 2: Third-Party UPI Payments

UPI (Unified Payments Interface) is India’s dominant mobile payment system — instant bank-to-bank transfers. Several CallPhantom apps embedded hardcoded UPI payment URLs or, more cleverly, fetched them dynamically from Firebase Realtime Database.

The Firebase approach is significant: operators could rotate payment accounts without releasing an app update. If one UPI account was flagged or frozen, they simply updated the Firebase value and all app instances pointed to a new account within minutes. No app review. No detection window.

Victims who paid via UPI have no recourse through Google — these are bank transfers to an anonymous UPI ID, and Indian banking fraud recovery is slow and uncertain.

Method 3: Embedded Card Payment Forms

The most aggressive approach: some apps contained full card payment checkout forms built directly into the app interface. Users entered credit/debit card numbers, expiry dates, and CVVs directly into a screen controlled by the app operators.

This bypasses Google Play entirely. It also means:

  • No purchase record in Google Play
  • Card data potentially harvested (not confirmed by ESET, but the capability existed)
  • Zero refund path through Google

| Payment Method | Google Play Protection | Refund Available | Operator Flexibility |
|---|---|---|---|
| Google Play Billing | Yes | Yes (via Play) | Low |
| UPI (hardcoded) | No | Unlikely | Medium |
| UPI (Firebase-dynamic) | No | Unlikely | High |
| Embedded card form | No | No (bank only) | High |

The billing bypass is a deliberate fraud design choice, not an oversight.


Firebase as Command and Control

ESET identified two Firebase Realtime Database instances used as C2 (Command and Control) infrastructure, hosted on Google LLC infrastructure and active between April and May 2025.

Firebase is a legitimate Google cloud service used by millions of apps worldwide. Using it as C2 gives attackers several advantages:

  1. Traffic blending — Firebase traffic looks identical to legitimate app traffic
  2. No dedicated C2 server — operators don’t need to maintain infrastructure
  3. Dynamic configuration — payment URLs, app behavior, and targeting can be changed in real-time
  4. Detection evasion — network-level blocking of Firebase would break thousands of legitimate apps

In CallPhantom, Firebase served primarily to deliver rotating payment account details. This maps to MITRE ATT&CK T1437.001 — Application Layer Protocol: Web Protocols, which covers adversaries using legitimate web services for C2 communications to blend with normal traffic.


Why It Worked: The Social Engineering Layer

The technical components of CallPhantom are simple. The social engineering is more sophisticated.

Exploiting distrust and suspicion. The target demographic was primarily users in India and the Asia-Pacific region who suspected a partner, family member, or employee of hiding communications. The apps offered a resolution to an emotionally charged situation — for a few euros.

Leveraging “impossible = expensive” logic. Users may have assumed that accessing another device’s call history would require specialized tools. A price tag of €5–80 felt consistent with that assumption. The premium pricing actually increased perceived legitimacy.

Plausible partial results. Cluster A apps showed “partial” data before payment. Seeing generated phone numbers on-screen — even fake ones — created an illusion of functionality. The brain completes the pattern: if partial data appeared, full data exists behind the paywall.

Review manipulation. With 7.3 million downloads, the apps had review ecosystems. Users who felt embarrassed about being scammed (they were trying to spy on someone) were less likely to leave negative reviews. The social stigma of the use case suppressed victim reporting.


MITRE ATT&CK Mapping

| Technique | ID | Description |
|---|---|---|
| Application Layer Protocol: Web Protocols | T1437.001 | Firebase Realtime Database used for C2 communication |
| Masquerading | T1655 | Apps presented as legitimate utility tools |
| Stored Data Manipulation | T1641 | Fabricated data presented as real results |
| Financial Theft | T1643 | Direct monetization through fraudulent billing |

Detection and Indicators

CallPhantom apps have been removed from Google Play, but the pattern will reappear. Detection logic for similar campaigns:

For Enterprise/MDM Environments

Unusual payment flows in app network traffic:

# Look for UPI payment URLs in app network traffic
# Pattern: upi://pay?pa=<id>&pn=<name>&am=<amount>
# Or dynamic fetch of payment URLs from Firebase

Firebase Realtime Database fetches from new/unrecognized apps:

# Suspicious Firebase pattern
https://<project-id>.firebaseio.com/<path>.json
# Flag: new app querying Firebase for config data
# that is then displayed in a payment UI
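The two indicator patterns above can be combined into a simple log scanner. This is a starting-point sketch for proxy or MDM traffic logs, not a production detector — the regexes and log format are assumptions you would tune to your environment.

```python
import re

# Pattern 1: UPI payment deep links (upi://pay?pa=<id>&pn=<name>&am=<amount>)
UPI_DEEPLINK = re.compile(r"upi://pay\?\S*?\bpa=([\w.\-]+@\w+)")
# Pattern 2: Firebase Realtime Database REST reads (<project>.firebaseio.com/...json)
FIREBASE_REST = re.compile(r"https://([\w\-]+)\.firebaseio\.com/\S*\.json")

def scan_traffic(lines):
    """Return (kind, identifier) hits for UPI payment intents and
    Firebase Realtime Database reads found in decoded traffic logs."""
    hits = []
    for line in lines:
        if (m := UPI_DEEPLINK.search(line)):
            hits.append(("upi", m.group(1)))        # captured UPI ID
        if (m := FIREBASE_REST.search(line)):
            hits.append(("firebase", m.group(1)))   # captured project ID
    return hits
```

Neither pattern is malicious on its own — UPI links and Firebase reads are everywhere in legitimate Indian consumer apps. The signal is the combination flagged above: a new or unrecognized app fetching Firebase config that then surfaces in a payment UI.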

Known CallPhantom ESET detection family: Android/CallPhantom

Red Flags for Manual App Review

  • App claims to access another person’s device data with no explanation of how
  • Minimal permissions but expensive paid features
  • Payment flow redirects outside Google Play (browser opens, UPI app launches)
  • No verifiable company behind the app (privacy policy points to generic template sites)
  • Category mismatch: utility app with subscription pricing in the €10–80 range

Google Play Protect

As of 2025, Google Play Protect’s enhanced fraud protection analyzes apps that request sensitive permissions related to financial fraud. In 2025, Google blocked over 1.75 million policy-violating apps and banned 80,000+ developer accounts. CallPhantom slipped through by avoiding sensitive permissions altogether — the fraud was entirely social.


What Victims Can Do

If you paid through Google Play billing:

  1. Open Google Play → Profile → Payments & subscriptions → Subscriptions
  2. Check for any active subscriptions from removed apps (Google auto-cancelled these)
  3. Request a refund via Google Play support if charged after December 2025

If you paid via UPI:

  1. Report to your bank’s fraud department immediately
  2. File a complaint at India’s Cyber Crime Portal: cybercrime.gov.in
  3. UPI transfers are typically not reversible, but reporting builds a case trail

If you entered card details into an embedded form:

  1. Contact your bank or card issuer immediately
  2. Request a chargeback under fraudulent transaction
  3. Monitor for unauthorized charges — card data may have been harvested
  4. Consider requesting a new card number

Broader Lessons for App Store Security

CallPhantom raises uncomfortable questions about app store vetting at scale.

The review gap: Google reviews millions of apps. CallPhantom apps had minimal permissions, clean-looking code, and no traditional malware indicators. The fraud was in the business logic — what the app claimed to do versus what it actually did. Static analysis won’t catch that.

The billing bypass problem: Google’s payment policy exists partly to protect users. When apps route payments outside the ecosystem, that protection disappears. CallPhantom used this deliberately. Google has cracked down on billing policy violations, but enforcement at 7.3 million downloads suggests the gap between policy and detection remains wide.

The impossible-service category: A class of apps consistently offers services that are technically impossible — “find who owns this number,” “see who viewed your profile,” “track a phone without installing anything.” These have been a persistent vector for subscription fraud for years. The impossibility of the service is the point: victims can’t verify results, and operators can generate any output they want.

The App Defense Alliance matters: ESET’s ability to report directly to Google as an alliance partner accelerated removal. The standard user-report pathway is slower. Industry collaboration on threat intelligence sharing is how campaigns like this get shut down sooner.



Sources