You installed Ollama because you wanted your AI conversations to stay private — off the cloud, on your machine, nobody watching. Good instinct.

But right now, unless you’ve specifically configured it otherwise, Ollama is accepting connections from every device on your Wi-Fi network. The coffee shop you worked from last Tuesday. Your neighbor’s computer. Anyone who happens to be on the same network as you.

TL;DR

  • Ollama and most local AI tools listen on 0.0.0.0 by default — meaning your entire local network, not just your computer
  • No authentication is required. Anyone on your network can use your GPU, access your models, and query cloud-connected ones at your expense
  • Censys found over 10,600 exposed Ollama instances on the public internet — many belonging to people who had no idea
  • This is not an Ollama bug — it’s an industry-wide pattern with developer tools going back decades
  • The fix takes 30 seconds, but only if you know it’s needed

Why You Should Care Even If You’re “Not a Target”

You’re probably not running a bank. You don’t have state secrets. Why would anyone bother with your local AI setup?

Because they don’t have to bother specifically with you. Attackers scan entire IP ranges automatically. A tool finds your open Ollama instance, recognizes it as a GPU resource, and starts sending inference requests — using your electricity, your hardware, and potentially your cloud API credits.

If you have cloud-connected models like kimi-k2.5:cloud or glm-5:cloud, those calls route through Ollama’s cloud infrastructure. Someone else using your open instance could rack up charges on your account. And they don’t need to hack anything — they just connect to a door you left open.


What’s Actually Happening Under the Hood

When you install a server application — any server application — it needs to decide which network interfaces to listen on. There are two options:

  • 127.0.0.1 (localhost): Only your own computer can connect. Nothing from the network gets in.
  • 0.0.0.0 (all interfaces): Your computer, your phone, your smart TV, your neighbor’s laptop, anyone on the same Wi-Fi — all can connect.

Ollama defaults to 0.0.0.0:11434. So does LM Studio’s server mode. So did MongoDB for years. So did Redis. So did Elasticsearch.

This isn’t a bug in the traditional sense. It’s a design choice optimized for developer convenience: “I want to test this from my phone / VM / other machine without configuring anything.” The problem is that the same setting that makes development easy makes deployment dangerous — and these tools are increasingly installed by people who are neither developers nor security professionals.
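The difference is easy to see with a few lines of Python — a minimal sketch using the standard socket module (port 0 asks the OS for any free port):

```python
import socket

def make_listener(bind_addr: str) -> socket.socket:
    """Open a TCP listener on the given interface; port 0 lets the OS pick."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((bind_addr, 0))
    s.listen(1)
    return s

# Loopback only: reachable solely from this machine.
loopback_only = make_listener("127.0.0.1")
# All interfaces: reachable from anything on the same network.
all_interfaces = make_listener("0.0.0.0")

print(loopback_only.getsockname()[0])   # 127.0.0.1
print(all_interfaces.getsockname()[0])  # 0.0.0.0

loopback_only.close()
all_interfaces.close()
```

Both servers run the same code on the same machine; the single string passed to bind() is the entire difference between "private" and "shared with the whole network."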


The Numbers Are Not Small

This isn’t a theoretical concern. In September 2025, Cisco Talos used Shodan to find 1,139 publicly exposed Ollama endpoints in about ten minutes. Of those, 214 responded to model queries with zero credentials required.

Censys, a broader internet scanning platform, found 10,600 exposed Ollama instances across 1,229 different networks worldwide. These are machines with Ollama directly accessible from the public internet — not just a local Wi-Fi network, but anyone with a browser.

The geographic breakdown: 36.6% in the United States, 22.5% in China, 8.9% in Germany.

Most of these people almost certainly had no idea.


Ollama Is Just the Current Example

Ollama gets attention right now because AI is hot and the installs are surging. But the underlying pattern has been repeating for twenty years.

MongoDB (the 2017 ransom wave): For years it shipped with no authentication by default, listening on all interfaces. Security researcher Bob Diachenko found over 27,000 publicly exposed databases in 2017 alone. Hundreds of thousands of records were stolen. MongoDB eventually changed the default, but only after years of breaches.

Redis: Shipped for years with no password and bind 0.0.0.0. Attackers discovered they could use Redis’s CONFIG SET command to write SSH keys to the server’s filesystem, turning an open database into a full system compromise — remote code execution with no exploit required.

Elasticsearch: No authentication, no TLS, no access control out of the box. Billions of records — healthcare data, financial records, personal information — leaked from misconfigured instances. The Have I Been Pwned corpus was itself populated partly by data from Elasticsearch breaches.

Jupyter Notebook: The popular data science tool runs a web server that executes arbitrary Python code. By default it listens on 0.0.0.0:8888. Anyone who reaches the port gets a Python interpreter. On a researcher’s machine, that’s their entire filesystem and any credentials stored in environment variables.

Tool                    Default port    Default binding    Auth required    Risk if exposed
Ollama                  11434           0.0.0.0            None             Model access, GPU abuse, cloud costs
LM Studio               1234            0.0.0.0            None             Model access, inference abuse
Open WebUI              3000/8080       0.0.0.0            Optional         UI access, model control
Jupyter Notebook        8888            0.0.0.0            Token only       Remote code execution
LocalAI                 8080            0.0.0.0            None             Full API access
Docker API (TCP)        2375            0.0.0.0            None             Root-equivalent system access
Redis (old default)     6379            0.0.0.0            None             Data theft, RCE via CONFIG
MongoDB (old default)   27017           0.0.0.0            None             Full database access

The pattern is always the same: tool built for developers, optimized for ease of use, assumes the user will handle network security separately. The user never does.
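If you want a quick first look at whether any of these tools are running on your own machine, you can probe the default ports from the table. A sketch in Python (the port-to-tool mapping is taken from the table above; a hit on localhost only proves something is listening, not which interfaces it is bound to — the netstat checks later in this article answer that):

```python
import socket

# Default ports from the table above
DEFAULT_PORTS = {
    11434: "Ollama",
    1234: "LM Studio",
    3000: "Open WebUI",
    8080: "Open WebUI / LocalAI",
    8888: "Jupyter Notebook",
    2375: "Docker API (TCP)",
    6379: "Redis",
    27017: "MongoDB",
}

def probe(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if something accepts a TCP connection on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    for port, tool in DEFAULT_PORTS.items():
        if probe("127.0.0.1", port):
            print(f"port {port} open — possibly {tool}")
```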


The “But I’m on My Home Network” Argument

Fair point — your home network is not the internet. But consider what “home network” actually means:

Every device shares the network. Your smart TV, your IoT thermostat, your spouse’s work laptop. Any compromised device on that network has a direct path to your open Ollama instance. If your router gets exploited (routers are frequently targeted), an attacker gains LAN access and can reach everything you thought was “internal.”

Coffee shops, hotels, airports. Every time you connect to a public Wi-Fi with Ollama running, you’re sharing that service with hundreds of strangers. Most people don’t think to stop Ollama before leaving the house.

Your router might be forwarding ports without you knowing. UPnP (Universal Plug and Play) — which we also found enabled on the Windows machine in this audit — allows applications to automatically open ports on your router. An application can request port forwarding without your knowledge or consent, punching holes directly from the internet to your local service.


How Attackers Find These Instances

You don’t need to be specifically targeted. Shodan and Censys are search engines for internet-connected devices. They continuously scan the entire IPv4 address space and index everything that responds.

Searching for port:11434 "Ollama is running" returns thousands of results. For each one, an attacker gets:

  1. The IP address of the machine
  2. A list of all installed models (via /api/tags)
  3. The ability to send inference requests — for free
  4. For cloud-connected models, potential access to the owner’s API-connected accounts

No exploitation required. No vulnerability to patch. Just a door that was left open.
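You can see exactly what a scanner sees by querying the /api/tags endpoint yourself. A sketch using Python’s standard library — run it from another device, pointed at your machine’s LAN address (192.168.1.50 below is a placeholder):

```python
import json
import urllib.request

def list_models(host: str, port: int = 11434, timeout: float = 3.0) -> list:
    """Fetch the model list from an Ollama endpoint — no credentials needed.
    This is exactly what a scanner does after spotting an open port."""
    url = f"http://{host}:{port}/api/tags"
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]

# From another device on the network:
#   print(list_models("192.168.1.50"))
# A connection error here is the result you want.
```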


What You Can Do Right Now

Fix Ollama

This is a one-time change that takes 30 seconds:

Windows:

# Set the environment variable permanently for your user account
[System.Environment]::SetEnvironmentVariable("OLLAMA_HOST", "127.0.0.1", "User")

# Then restart Ollama from the system tray icon (right-click → Quit, then relaunch)

macOS:

# The Ollama menu bar app does not read your shell profile —
# set the variable with launchctl instead:
launchctl setenv OLLAMA_HOST 127.0.0.1

# Then restart Ollama — quit from the menu bar icon, then relaunch

# If you run `ollama serve` from a terminal, add to your shell
# profile (~/.bashrc, ~/.zshrc, etc.) instead:
export OLLAMA_HOST=127.0.0.1

# If installed via Homebrew:
# brew services restart ollama

Linux:

# Create a systemd override instead of editing the unit file directly:
sudo systemctl edit ollama.service

# In the editor that opens, add:
# [Service]
# Environment="OLLAMA_HOST=127.0.0.1"

sudo systemctl daemon-reload
sudo systemctl restart ollama

Verify it worked — from another device on the same network:

curl http://<your-machine-ip>:11434/
# Should time out or refuse connection — NOT return "Ollama is running"

Fix LM Studio

In LM Studio: Settings → Local Server → Hostname → change from 0.0.0.0 to 127.0.0.1.

Fix Jupyter Notebook

Windows:

# Generate config file
jupyter notebook --generate-config

# Config file is located at:
# C:\Users\<YourUsername>\.jupyter\jupyter_notebook_config.py
# Open it in Notepad and add these lines:
c.NotebookApp.ip = '127.0.0.1'  # Classic Jupyter Notebook
c.ServerApp.ip = '127.0.0.1'    # JupyterLab

macOS / Linux:

jupyter notebook --generate-config

# Edit ~/.jupyter/jupyter_notebook_config.py and add:
c.NotebookApp.ip = '127.0.0.1'  # Classic Jupyter Notebook
c.ServerApp.ip = '127.0.0.1'    # JupyterLab

Check What’s Actually Open on Your Machine

Run this and look at the local-address column. Anything showing 0.0.0.0 (or * in lsof output) is accessible from your network:

Windows:

netstat -ano | findstr "LISTENING" | findstr "0.0.0.0"

Linux:

ss -tlnp | grep "0.0.0.0"
# or
netstat -tlnp | grep "0.0.0.0"

macOS (ss isn’t available and netstat lacks the -p flag there; use lsof):

lsof -iTCP -sTCP:LISTEN -n -P
# Lines showing *:PORT are bound to all interfaces

The output will look something like this:

TCP    0.0.0.0:135     0.0.0.0:0    LISTENING    1234
TCP    0.0.0.0:445     0.0.0.0:0    LISTENING    4
TCP    0.0.0.0:11434   0.0.0.0:0    LISTENING    9876
TCP    0.0.0.0:8888    0.0.0.0:0    LISTENING    5432

The number after 0.0.0.0: is the port. Here’s how to read it:

  • Ports 135 and 445 — Standard Windows system ports. Normal, ignore these.
  • Port 11434 — That’s Ollama. It should not be here after you apply the fix above.
  • Port 8888 — That’s likely Jupyter Notebook. Anyone on your network can reach it.
  • Any port you don’t recognize — Look it up, then take the last number (the process ID) and find out which application it belongs to:

Windows — take the PID number from the last column and run:

Get-Process -Id <PID> | Select-Object Name, Path

macOS / Linux:

# Replace <PID> with the number from the last column
ps -p <PID> -o comm=
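If you’d rather audit the listing in bulk, a short script can do the reading for you. A sketch that assumes the Windows netstat -ano line format shown in the sample output above:

```python
# Ports used by standard Windows services; normally fine to ignore
WINDOWS_SYSTEM_PORTS = {135, 139, 445}

def flag_open_listeners(netstat_output: str) -> list:
    """Parse `netstat -ano` output and return (port, pid) pairs for
    TCP listeners bound to 0.0.0.0, skipping Windows system ports."""
    flagged = []
    for line in netstat_output.splitlines():
        parts = line.split()
        # Expected shape: TCP  0.0.0.0:11434  0.0.0.0:0  LISTENING  9876
        if len(parts) == 5 and parts[0] == "TCP" and parts[3] == "LISTENING":
            addr, _, port = parts[1].rpartition(":")
            if addr == "0.0.0.0" and int(port) not in WINDOWS_SYSTEM_PORTS:
                flagged.append((int(port), int(parts[4])))
    return flagged
```

Feed it the captured netstat output; every (port, pid) pair it returns is a listener worth identifying with the PID lookup commands above.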

Disable UPnP on Your Router

Log into your router’s admin panel (usually 192.168.1.1 or 192.168.0.1) and look for UPnP in the advanced settings. Disable it. Very few home applications genuinely need it (game consoles use it for NAT traversal, but you can add port-forwarding rules manually instead), and it’s a persistent security weakness that lets applications silently open ports on your behalf.
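To confirm nothing on your LAN still answers UPnP discovery after you disable it, you can send a standard SSDP search yourself. A Python sketch using only the standard library (the three-second timeout is an arbitrary choice):

```python
import socket

SSDP_ADDR = ("239.255.255.250", 1900)  # standard SSDP multicast address
M_SEARCH = "\r\n".join([
    "M-SEARCH * HTTP/1.1",
    "HOST: 239.255.255.250:1900",
    'MAN: "ssdp:discover"',
    "MX: 2",
    "ST: urn:schemas-upnp-org:device:InternetGatewayDevice:1",
    "",
    "",
]).encode()

def find_upnp_gateways(timeout: float = 3.0) -> list:
    """Send one SSDP search and return the addresses of devices answering
    as an Internet Gateway Device. An empty list after disabling UPnP on
    the router is the result you want."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    found = []
    try:
        sock.sendto(M_SEARCH, SSDP_ADDR)
        while True:
            _, (addr, _) = sock.recvfrom(4096)
            found.append(addr)
    except socket.timeout:
        pass
    finally:
        sock.close()
    return found

# print(find_upnp_gateways() or "no UPnP gateway responded")
```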


If You Actually Need Remote Access to Your Local AI

Sometimes you legitimately want to access Ollama from another machine — a server in another room, a colleague on your network. The answer is not 0.0.0.0.

Use a VPN (recommended for most users): Tailscale or WireGuard creates a private network between your devices. Ollama stays on 127.0.0.1, and only devices on your VPN can reach your machine at all. Tailscale has a free tier and takes about five minutes to set up on Windows — it’s the most practical option for non-technical users.

Use a reverse proxy with authentication (advanced):

On Linux/macOS, nginx works well:

server {
    listen 11435;

    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location / {
        proxy_pass http://127.0.0.1:11434;
    }
}

On Windows, Caddy is easier to set up than nginx and handles authentication with a simple config file:

:11435 {
    basicauth {
        # Generate hash with: caddy hash-password
        username $2a$14$...hashedpassword...
    }
    reverse_proxy localhost:11434
}

Use SSH tunneling for temporary access:

Works on Windows (PowerShell), macOS, and Linux — SSH is built into all three:

# Run this on the machine that wants to access Ollama remotely
ssh -L 11434:localhost:11434 user@your-machine-ip
# Now localhost:11434 on this machine tunnels to Ollama on the remote machine

The Bigger Picture: Developer Tools Were Never Designed for You

The uncomfortable truth is that most of these tools were built by developers, for developers, in an era when “local tool” meant “running on a server you control in a datacenter.” The mental model was: the network is your problem, we’ll build the functionality.

That model broke down when the same tools started being installed by data scientists, AI enthusiasts, students, and professionals who have no reason to know what 0.0.0.0 means. The installation experience says “getting started is easy” and it is — but the security configuration that should be mandatory is buried in documentation nobody reads.

Redis changed its defaults after years of breaches. MongoDB did too. Elasticsearch added security-by-default in version 8.0. Each change came after significant real-world harm.

Ollama is on that trajectory. A GitHub issue requesting a “secure mode” with safe defaults has been open for some time and is actively discussed. The change will probably come. But until it does, the default installation is an open service on your local network, and the security is entirely your responsibility.

The rule of thumb: any application that runs a server should be assumed to be listening on your network until you verify otherwise. Check it. Fix it. Then check again after the next update, because defaults have a way of being reset.


What You Can Do Today — Checklist

  • Set OLLAMA_HOST=127.0.0.1 and restart Ollama
  • Check LM Studio, Open WebUI, and any other local AI tool for binding settings
  • Run netstat -ano and audit everything listening on 0.0.0.0
  • Disable UPnP on your router
  • If you need remote access, use a VPN or reverse proxy with auth — never 0.0.0.0
  • Verify from another device that Ollama’s port is no longer reachable