Last year I moved my homelab from a single Docker Compose stack to a K3s cluster. It took a weekend, broke half my services, and taught me more about container security than any course I’ve taken. Here’s what I learned about when each tool actually makes sense—and the security traps in both.
The real question: how big is your homelab?
🎯 Quick Answer: Use Docker Compose for homelabs with fewer than 10 containers—it’s simpler and has a smaller attack surface. Switch to K3s when you need multi-node scheduling, automatic failover, or network policies for workload isolation.
I ran Docker Compose for two years. Password manager, Jellyfin, Gitea, a reverse proxy, some monitoring. Maybe 12 containers. It worked fine. The YAML was readable, `docker compose up -d` got everything running in seconds, and I could debug problems by reading one file.
Then I hit ~25 containers across three machines. Compose started showing cracks—no built-in way to schedule across nodes, no health-based restarts that actually worked reliably, and secrets management was basically “put it in an .env file and hope nobody reads it.”
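One stopgap worth doing even before any migration: make sure that .env file is at least readable only by you. A minimal sketch; `.env` here stands in for wherever your stack keeps it:

```shell
# Stopgap for plaintext Compose secrets: make the .env file owner-only.
touch .env            # stands in for your stack's real .env
chmod 600 .env
stat -c '%a' .env     # prints: 600
```

It doesn't fix the underlying problem (secrets still sit in plaintext on disk), but it stops any other local user or service account from reading them.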
That’s when I looked at Kubernetes seriously. Not because it’s trendy, but because I needed workload isolation, proper RBAC, and network policies that Docker’s bridge networking couldn’t give me.
Docker Compose security: what most people miss
Compose is great for getting started, but it has security defaults that will bite you. The biggest one: containers run as root by default. Most people never change this.
Here’s the minimum I run on every Compose service now:
```yaml
version: '3.8'
services:
  app:
    image: my-app:latest
    user: "1000:1000"
    read_only: true
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '0.5'
    networks:
      - isolated
    logging:
      driver: json-file
      options:
        max-size: "10m"

networks:
  isolated:
    driver: bridge
```
The key additions most tutorials skip: read_only: true prevents containers from writing to their filesystem (mount specific writable paths if needed), no-new-privileges blocks privilege escalation, and cap_drop: ALL removes Linux capabilities you almost certainly don’t need.
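Before `docker compose up`, I also like a crude audit that flags any of those keys missing from the file. This is only a grep sketch (presence-only, with an assumed ./docker-compose.yml path), not a real linter:

```shell
# Rough hardening audit: flag Compose security keys that are absent anywhere
# in the file. Presence-only sketch: one service setting a key satisfies the
# check even if another service omits it.
for key in "user:" "read_only:" "no-new-privileges" "cap_drop:"; do
  grep -q "$key" docker-compose.yml 2>/dev/null || echo "missing: $key"
done
```

Treat it as a tripwire for the "forgot to copy the hardening block" case, nothing more; a YAML-aware check is the right tool for per-service coverage.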
Other things I do with Compose that aren’t optional anymore:
- Network segmentation. Separate Docker networks for databases, frontend services, and monitoring. My Postgres container can’t talk to Traefik directly—it goes through the app layer only.
- Image scanning. I run Trivy on every image before deploying. One `trivy image my-app:latest` catches CVEs that would otherwise sit there for months.
- TLS everywhere. Even internal services get certificates via Let’s Encrypt and Traefik’s ACME resolver.
Scan your images before they run—it takes 10 seconds and catches the obvious stuff:
```bash
# Quick scan
trivy image my-app:latest

# Fail CI if HIGH/CRITICAL vulns found
trivy image --exit-code 1 --severity HIGH,CRITICAL my-app:latest
```
Kubernetes: when the complexity pays off
I use K3s specifically because full Kubernetes is absurd for a homelab. K3s strips out the cloud-provider bloat and runs the control plane in a single binary. My cluster runs on a TrueNAS box with 32GB RAM—plenty for ~40 pods.
The security features that actually matter for homelabs:
RBAC — I can give my partner read-only access to monitoring dashboards without exposing cluster admin. Here’s a minimal read-only role:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: monitoring
  name: dashboard-viewer
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: viewer-binding
  namespace: monitoring
subjects:
  - kind: User
    name: reader
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dashboard-viewer
  apiGroup: rbac.authorization.k8s.io
```
Network policies — This is the killer feature. In Compose, network isolation is coarse (whole networks). In Kubernetes, I can say “this pod can only talk to that pod on port 5432, nothing else.” If a container gets compromised, lateral movement is blocked.
Namespaces — I run separate namespaces for media, security tools, monitoring, and databases. Each namespace has its own resource quotas and network policies. A runaway Jellyfin transcode can’t starve my password manager.
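That per-namespace quota is a standard ResourceQuota object. A sketch for the media namespace; the numbers are illustrative, not a recommendation:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: media-quota
  namespace: media
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
    pods: "15"
```

With this in place, a runaway workload hits the namespace ceiling instead of starving the whole node.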
The tradeoff is real though. I spent a full day debugging a network policy that was silently dropping traffic between my app and its database. The YAML looked right. Turned out I had a label mismatch—app: postgres vs app: postgresql. Kubernetes won’t warn you about this. It just drops packets.
Networking: the part everyone gets wrong
Whether you’re on Compose or Kubernetes, your reverse proxy config matters more than most security settings. I use Traefik for both setups. Here’s my Compose config for automatic TLS:
```yaml
version: '3.8'
services:
  traefik:
    image: traefik:v3.0
    command:
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--entrypoints.web.http.redirections.entryPoint.to=websecure"
      - "--certificatesresolvers.letsencrypt.acme.tlschallenge=true"
      - "--certificatesresolvers.letsencrypt.acme.email=you@example.com"
      - "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
    volumes:
      - "./letsencrypt:/letsencrypt"
    ports:
      - "80:80"
      - "443:443"
```
Key detail: that HTTP-to-HTTPS redirect on the web entrypoint. Without it, you’ll have services accessible over plain HTTP and not realize it until someone sniffs your traffic.
For storage, encrypt volumes at rest. If you’re on ZFS (like my TrueNAS setup), native encryption handles this. For Docker volumes specifically:
```bash
# Create a volume backed by encrypted storage
docker volume create --driver local \
  --opt type=none \
  --opt o=bind \
  --opt device=/mnt/encrypted/app-data \
  my_secure_volume
```
My Homelab Security Hardening Checklist
After running both Docker Compose and K3s in production for over a year, I’ve distilled my security hardening into a checklist I apply to every new service. The specifics differ between the two platforms, but the principles are the same: minimize attack surface, enforce least privilege, and assume every container will eventually be compromised.
Docker Compose hardening — here’s my battle-tested template with every security flag I use. This goes beyond the basics I showed earlier:
```yaml
version: '3.8'
services:
  secure-app:
    image: my-app:latest
    user: "1000:1000"
    read_only: true
    security_opt:
      - no-new-privileges:true
      - seccomp:seccomp-profile.json
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE  # Only if binding to ports below 1024
    tmpfs:
      - /tmp:size=64M,noexec,nosuid
      - /run:size=32M,noexec,nosuid
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '0.5'
        reservations:
          memory: 128M
          cpus: '0.1'
    healthcheck:
      test: ["CMD", "wget", "--spider", "-q", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s
    restart: unless-stopped
    networks:
      - app-tier
    volumes:
      - app-data:/data  # Only specific paths are writable
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"

volumes:
  app-data:
    driver: local

networks:
  app-tier:
    driver: bridge
    internal: true  # No direct internet access
```
The key additions here: seccomp:seccomp-profile.json loads a custom seccomp profile that restricts which syscalls the container can make. The default Docker seccomp profile blocks about 44 syscalls, but you can tighten it further for specific workloads. The tmpfs mounts with noexec prevent anything written to temp directories from being executed—this blocks a whole class of container escape techniques. And internal: true on the network means the container can only reach other containers on the same network, not the internet directly.
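For reference, a custom seccomp profile is just JSON: a default action plus an allowlist of syscalls. This skeleton shows the shape; the syscall list is illustrative and far too short for a real service, so start from Docker's default profile and trim rather than building up from scratch:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "openat", "close", "fstat",
                "mmap", "munmap", "brk", "futex", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

Anything not in the allowlist fails with an errno, which is exactly what you want when a compromised process starts probing for escape routes.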
K3s hardening — Kubernetes gives you Pod Security Standards, which replaced the old PodSecurityPolicy. Here’s how I enforce them per-namespace, plus a NetworkPolicy that locks things down:
```bash
# Label the namespace to enforce the restricted security standard
kubectl label namespace production \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/audit=restricted
```

```yaml
# NetworkPolicy: only allow specific ingress/egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: strict-app-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web-frontend
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-system
        - podSelector:
            matchLabels:
              app: traefik
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: api-backend
      ports:
        - protocol: TCP
          port: 3000
    - to:  # Allow DNS resolution
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
```
That NetworkPolicy says: my web frontend can only receive traffic from Traefik on port 8080, can only talk to the API backend on port 3000, and can resolve DNS. Everything else is blocked. If someone compromises the frontend container, they can’t reach the database, can’t reach other namespaces, can’t phone home to an external server.
For secrets management on K3s, I use SOPS with age encryption. The workflow looks like this:
```bash
# Encrypt a Kubernetes secret with SOPS + age
sops --encrypt --age age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p \
  secret.yaml > secret.enc.yaml

# Decrypt and apply in one step
sops --decrypt secret.enc.yaml | kubectl apply -f -
```

```yaml
# In your git repo, .sops.yaml configures which files get encrypted
creation_rules:
  - path_regex: .*\.secret\.yaml$
    age: age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p
```
This means secrets are encrypted at rest in your git repo—no more plaintext passwords in .env files that accidentally get committed. The age key lives only on the nodes that need to decrypt, never in version control.
Side-by-side comparison:
- Least privilege: Compose uses `cap_drop: ALL` plus seccomp profiles. K3s uses Pod Security Standards with restricted enforcement.
- Network isolation: Compose uses `internal: true` bridge networks. K3s uses NetworkPolicy with explicit allow rules.
- Secrets: Compose relies on Docker secrets or .env files (weak). K3s uses SOPS-encrypted secrets in git (strong).
- Resource limits: Both support CPU/memory limits, but K3s adds namespace-level ResourceQuotas for multi-tenant isolation.
- Runtime protection: Both benefit from Falco, but K3s integrates it as a DaemonSet with richer audit context.
Monitoring and Incident Response
I run Prometheus + Grafana on my homelab, and it’s caught three misconfigurations that would have been security holes. One was a container running with --privileged that I’d forgotten to clean up after debugging. Another was a port binding on 0.0.0.0 instead of 127.0.0.1—exposing an admin interface to my entire LAN. The third was a container that had been restarting every 90 seconds for two weeks without anyone noticing.
Monitoring isn’t just dashboards—it’s your early warning system. Here’s how I set it up differently for Compose vs K3s.
Docker Compose: healthchecks and restart policies. Every service in my Compose files has a healthcheck. If a service fails its health check three times, Docker restarts it automatically. But I also alert on it, because a service that keeps restarting is usually a symptom of something worse:
```yaml
# Prometheus alert rules: restart loops, memory pressure, unusual egress
groups:
  - name: container-alerts
    rules:
      - alert: ContainerRestartLoop
        expr: |
          increase(container_restart_count{name!=""}[1h]) > 5
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Container {{ $labels.name }} restarted {{ $value }} times in 1h"
          description: "Possible crash loop or misconfiguration. Check logs with: docker logs {{ $labels.name }}"
      - alert: ContainerHighMemory
        expr: |
          container_memory_usage_bytes / container_spec_memory_limit_bytes > 0.9
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Container {{ $labels.name }} using >90% of memory limit"
      - alert: UnusualOutboundTraffic
        expr: |
          rate(container_network_transmit_bytes_total[5m]) > 10485760
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "Container {{ $labels.name }} sending >10MB/s outbound — possible exfiltration"
```
That last alert—unusual outbound traffic—has been the most valuable. If a container suddenly starts pushing data out at high volume, something is very wrong. Either it’s been compromised, or there’s a misconfigured backup job hammering your bandwidth.
Kubernetes: liveness/readiness probes and audit logging. K3s gives you more granular health checks. Liveness probes restart unhealthy pods. Readiness probes remove pods from service endpoints until they’re ready to handle traffic. I also enable the Kubernetes audit log, which records every API call—who did what, when, to which resource:
```yaml
# K3s audit policy — log all write operations
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: RequestResponse
    verbs: ["create", "update", "patch", "delete"]
    resources:
      - group: ""
        resources: ["secrets", "configmaps", "pods"]
  - level: Metadata
    verbs: ["get", "list", "watch"]
  - level: None
    resources:
      - group: ""
        resources: ["events"]
```
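The liveness and readiness probes mentioned above are per-container fields in the pod spec. A sketch, with hypothetical /health and /ready endpoints on port 8080:

```yaml
# Container spec fragment: restart on failed liveness, gate traffic on readiness
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 30
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  periodSeconds: 10
```

Keeping the two endpoints separate matters: a pod that is alive but still warming a cache should fail readiness (and receive no traffic) without getting killed by the liveness probe.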
Log aggregation is the other piece. For Compose, I use Loki with Promtail—it’s lightweight and integrates natively with Grafana. For K3s, I’ve tried both the EFK stack (Elasticsearch, Fluentd, Kibana) and Loki. Honestly, Loki wins for homelabs. EFK is powerful but resource-hungry—Elasticsearch alone wants 2GB+ of RAM. Loki runs on a fraction of that and the LogQL query language is good enough for homelab-scale debugging.
The key insight: don’t just collect logs, alert on patterns. A container that suddenly starts logging errors at 10x its normal rate is telling you something. Set up Grafana alert rules on log frequency, not just metrics.
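With Loki, that log-frequency alert is a short LogQL metric query. A sketch; the `job` and `container` labels and the threshold of 2 matching lines per second are assumptions to adapt to your own labels and baseline:

```logql
sum by (container) (rate({job="docker"} |= "error" [5m])) > 2
```

Wire that into a Grafana alert rule and a container that starts spraying errors pages you before the service actually falls over.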
The Migration Path: My Experience
I started with Docker Compose on a single Synology NAS running 8 containers. Jellyfin, Gitea, Vaultwarden, Traefik, a couple of monitoring tools. Everything lived in one docker-compose.yml, and life was simple. Backups were just ZFS snapshots of the Docker volumes directory.
Over about 18 months, I added services. A lot of services. By the time I hit 20+ containers, I was running into real problems. The NAS was out of RAM. I added a second machine and tried to coordinate Compose files across both using SSH and a janky deploy script. It sort of worked, but secrets were duplicated in .env files on both machines, there was no service discovery between nodes, and when one machine rebooted, half the stack broke because of hard-coded dependencies.
That’s when I set up K3s on three nodes: my TrueNAS box as the server node, plus two lightweight worker nodes (old mini PCs I picked up for cheap). The migration took a weekend and broke things in ways I didn’t expect:
- DNS resolution changed completely. In Compose, container names resolve automatically within the same network. In K3s, you need proper Service definitions and namespace-aware DNS (`service.namespace.svc.cluster.local`). Half my apps had hardcoded container names.
- Persistent storage was the biggest pain. Docker volumes “just work” on a single machine. In K3s across nodes, I needed a storage provisioner. I went with Longhorn, which replicates volumes across nodes. The initial sync took hours and I lost one volume because I didn’t set up the StorageClass correctly.
- Traefik config had to be completely rewritten. Compose labels don’t work in K8s. I had to switch to IngressRoute CRDs. Took me a full evening to get TLS working again.
- Resource usage went up. K3s itself, plus Longhorn, plus the CoreDNS and metrics-server components—my baseline overhead went from ~200MB to ~1.2GB before running any actual workloads.
But once it was running, the benefits were immediate. I could drain a node for maintenance and all pods migrated automatically. Secrets were managed centrally with SOPS. Network policies gave me microsegmentation I couldn’t achieve with Compose. And Longhorn meant I had replicated storage—if a disk failed, my data was on two other nodes.
My current setup is a hybrid approach, and I think this is the pragmatic answer for most homelabbers. Simple, single-purpose services that don’t need HA—like my ad blocker or a local DNS cache—still run on Docker Compose on the TrueNAS host. Anything that needs high availability, multi-node scheduling, or strict network isolation runs on K3s. The K3s cluster handles about 30 pods across the three nodes, while Compose manages another 6-7 lightweight services.
If I were starting over today, I’d still begin with Compose. The learning curve is gentler, the debugging is easier, and you’ll learn the fundamentals of container networking and security without fighting Kubernetes abstractions. But I’d plan for K3s from day one—keep your configs clean, use environment variables consistently, and document your service dependencies. When you’re ready to migrate, it’ll be a weekend project instead of a week-long ordeal.
My recommendation: start Compose, graduate to K3s
If you have fewer than 15 containers on one machine, stick with Docker Compose. Apply the security hardening above, scan your images, segment your networks. You’ll be fine.
Once you hit multiple nodes, need proper secrets management (not .env files), or want network-policy-level isolation, move to K3s. Not full Kubernetes—K3s. The learning curve is steep for a week, then it clicks.
I’d also recommend adding Falco for runtime monitoring regardless of which tool you pick. It watches syscalls and alerts on suspicious behavior—like a container suddenly spawning a shell or reading /etc/shadow. Worth the 5 minutes to set up.
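Falco's stock ruleset already covers the shell-spawn example, but custom rules are plain YAML. A sketch of what one looks like, for illustration rather than deployment:

```yaml
# Sketch of a custom Falco rule — the stock ruleset ships a similar
# "Terminal shell in container" rule, so treat this as illustrative.
- rule: Shell Spawned In Container
  desc: A shell process started inside a container
  condition: >
    evt.type = execve and evt.dir = < and
    container.id != host and proc.name in (bash, sh, zsh)
  output: >
    Shell in container (user=%user.name container=%container.name
    cmdline=%proc.cmdline)
  priority: WARNING
```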