Exposing Docker containers directly to the internet without proper controls can lead to security risks. A secure setup limits attack surface while keeping services accessible on a VPS.
Step 1: Avoid Binding Containers to All Interfaces
Binding containers to 0.0.0.0 exposes them to the public internet. Limit exposure to the VPS only when possible.
docker run -d -p 127.0.0.1:3000:3000 my-app
This makes the service reachable only from the VPS itself, via the loopback interface.
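The same loopback binding can be expressed in a Compose file. As a sketch (the service and image names are illustrative):

```yaml
# docker-compose.yml -- publish the port on the host's loopback only
services:
  my-app:
    image: my-app
    ports:
      - "127.0.0.1:3000:3000"   # host loopback only, not 0.0.0.0
```

Without the `127.0.0.1:` prefix, Compose publishes the port on all interfaces, just like `-p 3000:3000`.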
Step 2: Use a Reverse Proxy for Public Access
A reverse proxy exposes a single secure entry point.
sudo apt update
sudo apt install nginx -y
Proxy traffic to the container:
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
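Two more headers are commonly set in the same location block so the application behind the proxy sees the original client address and scheme (a sketch; add these only if the app actually reads them):

```nginx
# Inside the same location block:
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
```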
Step 3: Restrict Access Using Firewall Rules
Limit which ports are publicly reachable: 80 for HTTP, plus 443 once HTTPS is enabled in Step 4.
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw deny 3000/tcp
Only the reverse proxy ports remain exposed. Be aware that Docker writes its own iptables rules, so ports published on 0.0.0.0 can bypass UFW entirely; binding to 127.0.0.1 as in Step 1 is the more reliable way to keep port 3000 off the internet.
Step 4: Enable HTTPS for Containers
Encrypt traffic before exposing services.
sudo apt install certbot python3-certbot-nginx -y
sudo certbot --nginx
HTTPS encrypts traffic in transit and helps prevent man-in-the-middle (MITM) attacks.
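Certbot's nginx plugin usually rewrites the server block itself. The result looks roughly like this (a sketch; the certificate paths assume a certificate issued for example.com):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # Paths created by certbot for example.com (illustrative)
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```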
Step 5: Isolate Containers Using Custom Networks
Use a private Docker network to isolate internal services.
docker network create secure-net
docker run -d --network secure-net my-app
Only containers on the same network can communicate.
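A typical layout keeps only the web service published (on loopback, for the proxy) while internal services publish nothing at all. A Compose sketch, with an assumed postgres backend:

```yaml
services:
  my-app:
    image: my-app
    networks: [secure-net]
    ports:
      - "127.0.0.1:3000:3000"   # reachable only via the reverse proxy
  db:
    image: postgres:16          # illustrative backend service
    networks: [secure-net]      # no ports: reachable only from secure-net

networks:
  secure-net:
    driver: bridge
```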
Step 6: Avoid Running Containers as Root
Limit container privileges where possible.
docker run -d --user 1000:1000 my-app
This reduces impact if a container is compromised.
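The same UID can be baked into the image so the container never starts as root, even without the `--user` flag. A Dockerfile sketch (base image and entrypoint are assumptions):

```dockerfile
FROM node:20-alpine                               # assumed base image
RUN addgroup -g 1000 app && adduser -u 1000 -G app -D app
USER app                                          # drop root before the app starts
CMD ["node", "server.js"]                         # assumed entrypoint
```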
Step 7: Monitor Exposed Containers
Regularly check which ports are publicly exposed.
docker ps
ss -tulnp
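The ss output can be filtered down to what matters: listeners bound to all interfaces. A sketch (column positions may vary slightly between distributions):

```shell
# Print listening sockets bound to all interfaces (0.0.0.0 or [::]) --
# these are the ones reachable from the public internet.
ss -tuln | awk 'NR > 1 && ($5 ~ /^0\.0\.0\.0:/ || $5 ~ /^\[::\]:/) { print $5 }'
```

If this prints a container port you expected to be private, revisit the loopback binding from Step 1.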
Optional Step: Limit Access by IP
Restrict access to specific IP addresses.
sudo ufw allow from 203.0.113.10 to any port 80
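The same restriction can also be enforced at the reverse proxy instead of (or in addition to) the firewall. A sketch for the nginx location block from Step 2:

```nginx
location / {
    allow 203.0.113.10;               # permitted client
    deny all;                         # everyone else receives 403
    proxy_pass http://127.0.0.1:3000;
}
```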