Tag: homelab security tools

  • TrueNAS Setup Guide: Enterprise Security for Your Homelab

    Last month I rebuilt my TrueNAS server from scratch after a drive failure. What started as a simple disk replacement turned into a full security audit — and I realized my homelab storage had been running with basically no access controls, no encryption, and SSH root login enabled. Not great.

    Here’s how I set up TrueNAS SCALE with actual security practices borrowed from enterprise environments — without the enterprise complexity.

    Why TrueNAS for Homelab Storage

    📌 TL;DR: This guide explains how to set up a secure TrueNAS SCALE system for a homelab, incorporating enterprise-grade practices like ZFS snapshots, ECC RAM, VLAN network isolation, and dataset encryption. It emphasizes critical hardware choices and network configurations to protect data integrity and prevent unauthorized access.
    🎯 Quick Answer: Secure a TrueNAS SCALE homelab by enabling ZFS dataset encryption, using ECC RAM to prevent silent data corruption, isolating services with VLANs, and scheduling automatic ZFS snapshots for rollback protection.

    TrueNAS runs on ZFS, which handles data integrity better than anything else I’ve used at home. The killer features for me:

    • ZFS snapshots — I accidentally deleted an entire media folder last year. Restored it in 30 seconds from a snapshot. That alone justified the setup.
    • Built-in checksumming — ZFS detects and repairs silent data corruption (bit rot). Your photos from 2015 will still be intact in 2035.
    • Replication — automated offsite backups over encrypted channels.
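As a sketch of how lightweight that recovery is (pool and dataset names here are stand-ins for your own layout):

```shell
# Take a manual snapshot before doing anything risky
zfs snapshot mypool/media@before-cleanup

# List available restore points for the dataset
zfs list -t snapshot -r mypool/media

# Roll back to the snapshot (discards all changes made after it)
zfs rollback mypool/media@before-cleanup
```

For single files, you can also copy them straight out of the hidden .zfs/snapshot directory instead of rolling back the whole dataset.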

    I went with TrueNAS SCALE over Core because I wanted Linux underneath — it lets me run Docker containers (Plex, Home Assistant, Nextcloud) alongside the storage. If you don’t need containers, Core on FreeBSD works fine too.

    Hardware: What Actually Matters

    You don’t need server-grade hardware, but a few things are non-negotiable:

    • ECC RAM — ZFS benefits enormously from error-correcting memory. I run 32GB of ECC. If your board supports it, use it. 16GB is the minimum for ZFS caching to work well.
    • CPU with AES-NI — any modern AMD Ryzen or Intel chip has this. You need it for dataset encryption without tanking performance.
    • NAS-rated drives — I run WD Red Plus 8TB drives in RAID-Z1. Consumer drives aren’t designed for 24/7 operation and will fail faster. CMR (not SMR) matters here.
    • A UPS — ZFS hates unexpected power loss. An APC 1500VA UPS with NUT integration gives you automatic clean shutdowns. I wrote about setting up NUT on TrueNAS separately.

My current build: AMD Ryzen 5 5600G, 32GB Crucial ECC SODIMM, three 8TB WD Reds in RAID-Z1, and a 500GB NVMe as a SLOG device (a separate ZFS intent log, not a read cache). Total cost around $800 — not cheap, but cheaper than losing irreplaceable data.

    Network Isolation First

    Before you even install TrueNAS, get your network right. Your NAS has all your data on it — it shouldn’t sit on the same flat network as your kids’ tablets and smart bulbs.

    I use OPNsense with VLANs to isolate my homelab. The NAS lives on VLAN 10, IoT devices on VLAN 30, and my workstation has cross-VLAN access via firewall rules. If an IoT device gets compromised (and they will eventually), it can’t reach my storage.

    The firewall rule is simple — only allow specific subnets to hit the TrueNAS web UI on port 443:

    # OPNsense/pfSense rule example
    pass in on vlan10 proto tcp from 192.168.10.0/24 to 192.168.10.100 port 443

    If you’re running a Protectli Vault or similar appliance for your firewall, this takes maybe 20 minutes to set up. No excuses.

    Installation and Initial Lockdown

    The install itself is straightforward — download the ISO, flash a USB with Etcher, boot, follow the wizard. Use a separate SSD or USB for the boot device; don’t waste pool drives on the OS.

    Once you’re in the web UI, immediately:

    1. Change the admin password to something generated by your password manager. Not “admin123”.
    2. Enable 2FA — TrueNAS supports TOTP. Set it up before you do anything else.
    3. Disable SSH root login:
    # In /etc/ssh/sshd_config
    PermitRootLogin no

    Create a non-root user for SSH access instead. I use key-based auth only — password SSH is disabled entirely.
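My full sshd lockdown comes down to a handful of lines. A sketch of /etc/ssh/sshd_config, where adminuser is a stand-in for whatever non-root account you created:

```
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
AllowUsers adminuser
```

Restart the SSH service after editing, and test the key login from a second terminal before closing your working session. Locking yourself out of a headless NAS is a bad afternoon.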

    Create Your Storage Pool

# RAID-Z1 with three drives — prefer stable /dev/disk/by-id paths over /dev/sdX,
# since sdX names can change between boots
zpool create mypool raidz1 /dev/sda /dev/sdb /dev/sdc

    RAID-Z1 gives you one drive of redundancy. For more critical data, RAID-Z2 (two-drive redundancy) is worth the capacity trade-off. I run Z1 because I replicate offsite daily — the real backup is the replication, not the RAID.
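The replication itself is native ZFS. A sketch of the daily job, with pool, dataset, and snapshot names as placeholders:

```shell
# First run: full send of the dataset to the offsite box over SSH
zfs send mypool/critical@snap1 | ssh backup@remote zfs receive backuppool/critical

# Later runs: incremental send of only the blocks changed between snapshots
zfs send -i mypool/critical@snap1 mypool/critical@snap2 | \
  ssh backup@remote zfs receive backuppool/critical
```

TrueNAS wraps this in Replication Tasks in the UI, which also handles the snapshot bookkeeping for you.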

    Enterprise Security Practices, Scaled Down

    Access Controls That Actually Work

    Don’t give everyone admin access. Create separate users with specific dataset permissions:

    # Create a limited user for media access
    adduser --home /mnt/mypool/media --shell /bin/bash mediauser
    chmod 750 /mnt/mypool/media

    My wife has read-only access to the photo datasets. The kids’ Plex account can only read the media dataset. Nobody except my admin account can touch the backup datasets. This takes 10 minutes to set up and prevents the “oops I deleted everything” scenario.

    Encrypt Sensitive Datasets

    TrueNAS makes encryption easy — you enable it during dataset creation. I encrypt anything with personal documents, financial records, or credentials. The performance hit with AES-NI hardware is negligible (under 5% in my benchmarks).

    For offsite backups, I use rsync over SSH with forced encryption:

    # Encrypted backup to remote server
    rsync -avz --progress -e "ssh -i ~/.ssh/backup_key" \
      /mnt/mypool/critical/ backup@remote:/mnt/backup/

    VPN for Remote Access

    Never expose your TrueNAS web UI to the internet. I use WireGuard through OPNsense — when I need to check on things remotely, I VPN in first. The firewall blocks everything else. I covered secure remote access patterns in detail before.
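For reference, the client side of that WireGuard setup is a short config file. A sketch with placeholder keys and addresses; OPNsense generates the matching server-side peer for you:

```
[Interface]
PrivateKey = <laptop-private-key>
Address = 10.99.0.2/32

[Peer]
PublicKey = <opnsense-public-key>
Endpoint = vpn.example.com:51820
# Route only the homelab subnet through the tunnel, not all traffic
AllowedIPs = 192.168.10.0/24
PersistentKeepalive = 25
```

Keeping AllowedIPs narrow means your coffee-shop browsing doesn't hairpin through your home connection.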

    Ongoing Maintenance

    Setup is maybe 20% of the work. The rest is keeping it running reliably:

    • ZFS scrubs — I run weekly scrubs on Sunday nights. They catch silent corruption before it becomes a problem. Schedule this in the TrueNAS UI under Tasks → Scrub Tasks.
    • Updates — check for TrueNAS updates monthly. Don’t auto-update a NAS; read the release notes first.
    • Monitoring — I pipe TrueNAS metrics into Grafana via Prometheus. SMART data, pool health, CPU/RAM usage. When a drive starts showing pre-failure indicators, I know before it dies.
    • Snapshot rotation — keep hourly snapshots for 48 hours, daily for 30 days, weekly for 6 months. Automate this in the TrueNAS snapshot policies.

    Test your backups. Seriously. I do a full restore test every quarter — pull a snapshot, restore it to a test dataset, verify the files are intact. An untested backup is not a backup.
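The "verify the files are intact" step is just checksum comparison. A self-contained sketch of the logic, where the temp directories stand in for the live dataset and the clone restored from a snapshot:

```shell
# Demo stand-ins for /mnt/mypool/critical and the restored clone
src=$(mktemp -d)
restored=$(mktemp -d)
echo "irreplaceable data" > "$src/photo.jpg"
cp "$src/photo.jpg" "$restored/photo.jpg"

# Checksum every file in both trees and compare the sorted lists
(cd "$src" && find . -type f -exec sha256sum {} + | sort) > "$src.sums"
(cd "$restored" && find . -type f -exec sha256sum {} + | sort) > "$restored.sums"

if diff -q "$src.sums" "$restored.sums" > /dev/null; then
  echo "restore verified: files intact"
else
  echo "restore FAILED: checksum mismatch"
fi
```

Run it against the real paths after restoring a snapshot to a test dataset; any drifted or corrupted file shows up as a checksum mismatch.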

    Where to Go From Here

    Once your TrueNAS box is running securely, you can start adding services. I run Plex, Nextcloud, Home Assistant, and a Gitea instance all on the same SCALE box using Docker. Each service gets its own dataset with isolated permissions.

    If you want to go deeper on the networking side, I’d start with full network segmentation with OPNsense. For monitoring, check out my post on open-source security monitoring.

    Frequently Asked Questions

    Why choose TrueNAS for a homelab?

    TrueNAS uses ZFS, which offers superior data integrity features like snapshots, checksumming, and automated replication. It also supports additional functionality like Docker containers on TrueNAS SCALE.

    What hardware is recommended for TrueNAS?

    Key recommendations include ECC RAM (16GB minimum), a CPU with AES-NI for encryption, NAS-rated drives (e.g., WD Red Plus), and a UPS to prevent data corruption during power loss.

    How can I secure my TrueNAS setup?

    Use VLANs to isolate your NAS from other devices, configure strict firewall rules, disable root SSH login, and enable dataset encryption. These steps help protect your data from unauthorized access and potential network threats.

    What are the benefits of ZFS in TrueNAS?

    ZFS provides features like snapshots for quick data recovery, built-in checksumming to prevent silent data corruption, and replication for secure offsite backups.

    📋 Disclosure: Some links in this article are affiliate links. If you purchase through them, I earn a small commission at no extra cost to you. I only recommend gear I actually run in my own homelab.

  • Docker Compose vs Kubernetes: Secure Homelab Choices

    Last year I moved my homelab from a single Docker Compose stack to a K3s cluster. It took a weekend, broke half my services, and taught me more about container security than any course I’ve taken. Here’s what I learned about when each tool actually makes sense—and the security traps in both.

    The real question: how big is your homelab?

    📌 TL;DR: Docker Compose is the right call for small single-node homelabs: simpler tooling and a smaller attack surface. K3s pays off once you need multi-node scheduling, RBAC, network policies, or central secrets management. Neither is secure by default; both need deliberate hardening.
    🎯 Quick Answer: Use Docker Compose for homelabs with fewer than 10 containers—it’s simpler and has a smaller attack surface. Switch to K3s when you need multi-node scheduling, automatic failover, or network policies for workload isolation.

    I ran Docker Compose for two years. Password manager, Jellyfin, Gitea, a reverse proxy, some monitoring. Maybe 12 containers. It worked fine. The YAML was readable, docker compose up -d got everything running in seconds, and I could debug problems by reading one file.

    Then I hit ~25 containers across three machines. Compose started showing cracks—no built-in way to schedule across nodes, no health-based restarts that actually worked reliably, and secrets management was basically “put it in an .env file and hope nobody reads it.”

    That’s when I looked at Kubernetes seriously. Not because it’s trendy, but because I needed workload isolation, proper RBAC, and network policies that Docker’s bridge networking couldn’t give me.

    Docker Compose security: what most people miss

    Compose is great for getting started, but it has security defaults that will bite you. The biggest one: containers run as root by default. Most people never change this.

    Here’s the minimum I run on every Compose service now:

    version: '3.8'
    services:
      app:
        image: my-app:latest
        user: "1000:1000"
        read_only: true
        security_opt:
          - no-new-privileges:true
        cap_drop:
          - ALL
        deploy:
          resources:
            limits:
              memory: 512M
              cpus: '0.5'
        networks:
          - isolated
        logging:
          driver: json-file
          options:
            max-size: "10m"
    
    networks:
      isolated:
        driver: bridge

    The key additions most tutorials skip: read_only: true prevents containers from writing to their filesystem (mount specific writable paths if needed), no-new-privileges blocks privilege escalation, and cap_drop: ALL removes Linux capabilities you almost certainly don’t need.

    Other things I do with Compose that aren’t optional anymore:

    • Network segmentation. Separate Docker networks for databases, frontend services, and monitoring. My Postgres container can’t talk to Traefik directly—it goes through the app layer only.
    • Image scanning. I run Trivy on every image before deploying. One trivy image my-app:latest catches CVEs that would otherwise sit there for months.
    • TLS everywhere. Even internal services get certificates via Let’s Encrypt and Traefik’s ACME resolver.

    Scan your images before they run—it takes 10 seconds and catches the obvious stuff:

    # Quick scan
    trivy image my-app:latest
    
    # Fail CI if HIGH/CRITICAL vulns found
    trivy image --exit-code 1 --severity HIGH,CRITICAL my-app:latest

    Kubernetes: when the complexity pays off

    I use K3s specifically because full Kubernetes is absurd for a homelab. K3s strips out the cloud-provider bloat and runs the control plane in a single binary. My cluster runs on a TrueNAS box with 32GB RAM—plenty for ~40 pods.

    The security features that actually matter for homelabs:

    RBAC — I can give my partner read-only access to monitoring dashboards without exposing cluster admin. Here’s a minimal read-only role:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      namespace: monitoring
      name: dashboard-viewer
    rules:
    - apiGroups: [""]
      resources: ["pods", "services"]
      verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: viewer-binding
      namespace: monitoring
    subjects:
    - kind: User
      name: reader
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: dashboard-viewer
      apiGroup: rbac.authorization.k8s.io

    Network policies — This is the killer feature. In Compose, network isolation is coarse (whole networks). In Kubernetes, I can say “this pod can only talk to that pod on port 5432, nothing else.” If a container gets compromised, lateral movement is blocked.

    Namespaces — I run separate namespaces for media, security tools, monitoring, and databases. Each namespace has its own resource quotas and network policies. A runaway Jellyfin transcode can’t starve my password manager.

    The tradeoff is real though. I spent a full day debugging a network policy that was silently dropping traffic between my app and its database. The YAML looked right. Turned out I had a label mismatch—app: postgres vs app: postgresql. Kubernetes won’t warn you about this. It just drops packets.
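Since that incident, label verification is the first thing I do when a policy misbehaves. A quick sketch (the namespace and label values here match my setup; swap in your own):

```shell
# What labels do the pods actually carry?
kubectl get pods -n production --show-labels

# What do the policies think they select?
kubectl describe networkpolicy -n production

# Dry-run the selector itself — empty output means the policy matches nothing
kubectl get pods -n production -l app=postgres
```

If the third command returns nothing while your pods are clearly running, you have a selector mismatch, not a connectivity problem.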

    Networking: the part everyone gets wrong

    Whether you’re on Compose or Kubernetes, your reverse proxy config matters more than most security settings. I use Traefik for both setups. Here’s my Compose config for automatic TLS:

    version: '3.8'
    services:
      traefik:
        image: traefik:v3.0
        command:
          - "--entrypoints.web.address=:80"
          - "--entrypoints.websecure.address=:443"
          - "--entrypoints.web.http.redirections.entryPoint.to=websecure"
          - "--certificatesresolvers.letsencrypt.acme.tlschallenge=true"
- "--certificatesresolvers.letsencrypt.acme.email=you@example.com"
          - "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
        volumes:
          - "./letsencrypt:/letsencrypt"
        ports:
          - "80:80"
          - "443:443"

    Key detail: that HTTP-to-HTTPS redirect on the web entrypoint. Without it, you’ll have services accessible over plain HTTP and not realize it until someone sniffs your traffic.
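A thirty-second check catches this: curl the plain-HTTP port and make sure you get bounced to HTTPS (the hostname is a placeholder):

```shell
# Expect a 301/308 with a Location: https://... header
curl -sI http://nas.example.lan/ | head -n 5

# Confirm the TLS endpoint actually answers
curl -sIo /dev/null -w '%{http_code}\n' https://nas.example.lan/
```

If the first request returns 200 instead of a redirect, that service is serving plaintext.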

    For storage, encrypt volumes at rest. If you’re on ZFS (like my TrueNAS setup), native encryption handles this. For Docker volumes specifically:

    # Create a volume backed by encrypted storage
    docker volume create --driver local \
      --opt type=none \
      --opt o=bind \
      --opt device=/mnt/encrypted/app-data \
      my_secure_volume

    My Homelab Security Hardening Checklist

    After running both Docker Compose and K3s in production for over a year, I’ve distilled my security hardening into a checklist I apply to every new service. The specifics differ between the two platforms, but the principles are the same: minimize attack surface, enforce least privilege, and assume every container will eventually be compromised.

    Docker Compose hardening — here’s my battle-tested template with every security flag I use. This goes beyond the basics I showed earlier:

    version: '3.8'
    services:
      secure-app:
        image: my-app:latest
        user: "1000:1000"
        read_only: true
        security_opt:
          - no-new-privileges:true
          - seccomp:seccomp-profile.json
        cap_drop:
          - ALL
        cap_add:
          - NET_BIND_SERVICE    # Only if binding to ports below 1024
        tmpfs:
          - /tmp:size=64M,noexec,nosuid
          - /run:size=32M,noexec,nosuid
        deploy:
          resources:
            limits:
              memory: 512M
              cpus: '0.5'
            reservations:
              memory: 128M
              cpus: '0.1'
        healthcheck:
          test: ["CMD", "wget", "--spider", "-q", "http://localhost:8080/health"]
          interval: 30s
          timeout: 5s
          retries: 3
          start_period: 10s
        restart: unless-stopped
        networks:
          - app-tier
        volumes:
          - app-data:/data    # Only specific paths are writable
        logging:
          driver: json-file
          options:
            max-size: "10m"
            max-file: "3"
    
    volumes:
      app-data:
        driver: local
    
    networks:
      app-tier:
        driver: bridge
        internal: true        # No direct internet access

    The key additions here: seccomp:seccomp-profile.json loads a custom seccomp profile that restricts which syscalls the container can make. The default Docker seccomp profile blocks about 44 syscalls, but you can tighten it further for specific workloads. The tmpfs mounts with noexec prevent anything written to temp directories from being executed—this blocks a whole class of container escape techniques. And internal: true on the network means the container can only reach other containers on the same network, not the internet directly.
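A custom profile is plain JSON. A minimal illustrative sketch: allow-by-default, with a handful of syscalls containers rarely need blocked. This is a starting point, not a vetted profile:

```
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    {
      "names": ["mount", "umount2", "ptrace", "kexec_load", "reboot", "swapon"],
      "action": "SCMP_ACT_ERRNO"
    }
  ]
}
```

For real hardening, start from Docker's default profile and remove what your workload doesn't need, rather than building up from allow-all.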

    K3s hardening — Kubernetes gives you Pod Security Standards, which replaced the old PodSecurityPolicy. Here’s how I enforce them per-namespace, plus a NetworkPolicy that locks things down:

    # Label the namespace to enforce restricted security standard
    kubectl label namespace production \
      pod-security.kubernetes.io/enforce=restricted \
      pod-security.kubernetes.io/warn=restricted \
      pod-security.kubernetes.io/audit=restricted
    
    # NetworkPolicy: only allow specific ingress/egress
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: strict-app-policy
      namespace: production
    spec:
      podSelector:
        matchLabels:
          app: web-frontend
      policyTypes:
        - Ingress
        - Egress
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  name: ingress-system
            - podSelector:
                matchLabels:
                  app: traefik
          ports:
            - protocol: TCP
              port: 8080
      egress:
        - to:
            - podSelector:
                matchLabels:
                  app: api-backend
          ports:
            - protocol: TCP
              port: 3000
        - to:                            # Allow DNS resolution
            - namespaceSelector: {}
              podSelector:
                matchLabels:
                  k8s-app: kube-dns
          ports:
            - protocol: UDP
              port: 53

    That NetworkPolicy says: my web frontend can only receive traffic from Traefik on port 8080, can only talk to the API backend on port 3000, and can resolve DNS. Everything else is blocked. If someone compromises the frontend container, they can’t reach the database, can’t reach other namespaces, can’t phone home to an external server.

    For secrets management on K3s, I use SOPS with age encryption. The workflow looks like this:

    # Encrypt a Kubernetes secret with SOPS + age
    sops --encrypt --age age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p \
      secret.yaml > secret.enc.yaml
    
    # Decrypt and apply in one step
    sops --decrypt secret.enc.yaml | kubectl apply -f -
    
    # In your git repo, .sops.yaml configures which files get encrypted
    creation_rules:
      - path_regex: .*\.secret\.yaml$
        age: age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p

    This means secrets are encrypted at rest in your git repo—no more plaintext passwords in .env files that accidentally get committed. The age key lives only on the nodes that need to decrypt, never in version control.

    Side-by-side comparison:

    • Least privilege: Compose uses cap_drop: ALL + seccomp profiles. K3s uses Pod Security Standards with restricted enforcement.
    • Network isolation: Compose uses internal: true bridge networks. K3s uses NetworkPolicy with explicit allow rules.
    • Secrets: Compose relies on Docker secrets or .env files (weak). K3s uses SOPS-encrypted secrets in git (strong).
    • Resource limits: Both support CPU/memory limits, but K3s adds namespace-level ResourceQuotas for multi-tenant isolation.
    • Runtime protection: Both benefit from Falco, but K3s integrates it as a DaemonSet with richer audit context.

    Monitoring and Incident Response

    I run Prometheus + Grafana on my homelab, and it’s caught three misconfigurations that would have been security holes. One was a container running with --privileged that I’d forgotten to clean up after debugging. Another was a port binding on 0.0.0.0 instead of 127.0.0.1—exposing an admin interface to my entire LAN. The third was a container that had been restarting every 90 seconds for two weeks without anyone noticing.

    Monitoring isn’t just dashboards—it’s your early warning system. Here’s how I set it up differently for Compose vs K3s.

    Docker Compose: healthchecks and restart policies. Every service in my Compose files has a healthcheck. If a service fails its health check three times, Docker restarts it automatically. But I also alert on it, because a service that keeps restarting is usually a symptom of something worse:

    # Prometheus alert rule: container restarting too often
    groups:
      - name: container-alerts
        rules:
          - alert: ContainerRestartLoop
            expr: |
              increase(container_restart_count{name!=""}[1h]) > 5
            for: 10m
            labels:
              severity: warning
            annotations:
              summary: "Container {{ $labels.name }} restarted {{ $value }} times in 1h"
              description: "Possible crash loop or misconfiguration. Check logs with: docker logs {{ $labels.name }}"
    
          - alert: ContainerHighMemory
            expr: |
              container_memory_usage_bytes / container_spec_memory_limit_bytes > 0.9
            for: 5m
            labels:
              severity: critical
            annotations:
              summary: "Container {{ $labels.name }} using >90% of memory limit"
    
          - alert: UnusualOutboundTraffic
            expr: |
              rate(container_network_transmit_bytes_total[5m]) > 10485760
            for: 2m
            labels:
              severity: critical
            annotations:
              summary: "Container {{ $labels.name }} sending >10MB/s outbound — possible exfiltration"

    That last alert—unusual outbound traffic—has been the most valuable. If a container suddenly starts pushing data out at high volume, something is very wrong. Either it’s been compromised, or there’s a misconfigured backup job hammering your bandwidth.

    Kubernetes: liveness/readiness probes and audit logging. K3s gives you more granular health checks. Liveness probes restart unhealthy pods. Readiness probes remove pods from service endpoints until they’re ready to handle traffic. I also enable the Kubernetes audit log, which records every API call—who did what, when, to which resource:

    # K3s audit policy — log all write operations
    apiVersion: audit.k8s.io/v1
    kind: Policy
    rules:
      - level: RequestResponse
        verbs: ["create", "update", "patch", "delete"]
        resources:
          - group: ""
            resources: ["secrets", "configmaps", "pods"]
      - level: Metadata
        verbs: ["get", "list", "watch"]
      - level: None
        resources:
          - group: ""
            resources: ["events"]

    Log aggregation is the other piece. For Compose, I use Loki with Promtail—it’s lightweight and integrates natively with Grafana. For K3s, I’ve tried both the EFK stack (Elasticsearch, Fluentd, Kibana) and Loki. Honestly, Loki wins for homelabs. EFK is powerful but resource-hungry—Elasticsearch alone wants 2GB+ of RAM. Loki runs on a fraction of that and the LogQL query language is good enough for homelab-scale debugging.

    The key insight: don’t just collect logs, alert on patterns. A container that suddenly starts logging errors at 10x its normal rate is telling you something. Set up Grafana alert rules on log frequency, not just metrics.
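With Loki, that error-rate pattern is a one-line LogQL expression you can alert on in Grafana (the container name is a placeholder):

```
# Error lines per second for one container, 5-minute window
sum(rate({container="vaultwarden"} |= "error" [5m]))
```

Alert when this climbs well above the service's normal baseline; absolute thresholds age badly as services change.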

    The Migration Path: My Experience

    I started with Docker Compose on a single Synology NAS running 8 containers. Jellyfin, Gitea, Vaultwarden, Traefik, a couple of monitoring tools. Everything lived in one docker-compose.yml, and life was simple. Backups were just ZFS snapshots of the Docker volumes directory.

    Over about 18 months, I added services. A lot of services. By the time I hit 20+ containers, I was running into real problems. The NAS was out of RAM. I added a second machine and tried to coordinate Compose files across both using SSH and a janky deploy script. It sort of worked, but secrets were duplicated in .env files on both machines, there was no service discovery between nodes, and when one machine rebooted, half the stack broke because of hard-coded dependencies.

    That’s when I set up K3s on three nodes: my TrueNAS box as the server node, plus two lightweight worker nodes (old mini PCs I picked up for cheap). The migration took a weekend and broke things in ways I didn’t expect:

    • DNS resolution changed completely. In Compose, container names resolve automatically within the same network. In K3s, you need proper Service definitions and namespace-aware DNS (service.namespace.svc.cluster.local). Half my apps had hardcoded container names.
    • Persistent storage was the biggest pain. Docker volumes “just work” on a single machine. In K3s across nodes, I needed a storage provisioner. I went with Longhorn, which replicates volumes across nodes. The initial sync took hours and I lost one volume because I didn’t set up the StorageClass correctly.
    • Traefik config had to be completely rewritten. Compose labels don’t work in K8s. I had to switch to IngressRoute CRDs. Took me a full evening to get TLS working again.
    • Resource usage went up. K3s itself, plus Longhorn, plus the CoreDNS and metrics-server components—my baseline overhead went from ~200MB to ~1.2GB before running any actual workloads.

    But once it was running, the benefits were immediate. I could drain a node for maintenance and all pods migrated automatically. Secrets were managed centrally with SOPS. Network policies gave me microsegmentation I couldn’t achieve with Compose. And Longhorn meant I had replicated storage—if a disk failed, my data was on two other nodes.

    My current setup is a hybrid approach, and I think this is the pragmatic answer for most homelabbers. Simple, single-purpose services that don’t need HA—like my ad blocker or a local DNS cache—still run on Docker Compose on the TrueNAS host. Anything that needs high availability, multi-node scheduling, or strict network isolation runs on K3s. The K3s cluster handles about 30 pods across the three nodes, while Compose manages another 6-7 lightweight services.

    If I were starting over today, I’d still begin with Compose. The learning curve is gentler, the debugging is easier, and you’ll learn the fundamentals of container networking and security without fighting Kubernetes abstractions. But I’d plan for K3s from day one—keep your configs clean, use environment variables consistently, and document your service dependencies. When you’re ready to migrate, it’ll be a weekend project instead of a week-long ordeal.

    My recommendation: start Compose, graduate to K3s

    If you have fewer than 15 containers on one machine, stick with Docker Compose. Apply the security hardening above, scan your images, segment your networks. You’ll be fine.

    Once you hit multiple nodes, need proper secrets management (not .env files), or want network-policy-level isolation, move to K3s. Not full Kubernetes—K3s. The learning curve is steep for a week, then it clicks.

    I’d also recommend adding Falco for runtime monitoring regardless of which tool you pick. It watches syscalls and alerts on suspicious behavior—like a container suddenly spawning a shell or reading /etc/shadow. Worth the 5 minutes to set up.
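On K3s, the quickest route is Falco's official Helm chart. A sketch, assuming Helm is already installed on a machine with cluster access:

```shell
# Add the Falco chart repo and install into its own namespace
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco --namespace falco --create-namespace

# Tail the alerts, e.g. a shell spawned inside a container
kubectl logs -n falco -l app.kubernetes.io/name=falco -f
```

On a plain Docker host, Falco also runs fine as a systemd service or a privileged container; the syscall rules are the same either way.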


    Frequently Asked Questions

    What is Docker Compose vs Kubernetes: Secure Homelab Choices about?

    Last year I moved my homelab from a single Docker Compose stack to a K3s cluster. It took a weekend, broke half my services, and taught me more about container security than any course I’ve taken.

    Who should read this article about Docker Compose vs Kubernetes: Secure Homelab Choices?

    Anyone running containerized services at home — whether you're on a single Docker Compose host today or weighing a move to K3s — will find the security hardening, monitoring, and migration lessons here useful.

    What are the key takeaways from Docker Compose vs Kubernetes: Secure Homelab Choices?

    Stick with Docker Compose below roughly 15 containers on one machine, with hardening applied: non-root users, dropped capabilities, image scanning, and network segmentation. Move to K3s when you need multi-node scheduling, network-policy isolation, or proper secrets management, and expect the migration to break DNS, storage, and ingress configs along the way.

    📦 Disclosure: Some links above are affiliate links. If you buy through them, I earn a small commission at no extra cost to you. I only recommend stuff I actually use. This helps keep orthogonal.info running.
  • Secure Remote Access for Your Homelab

    I manage my homelab remotely every day—30+ Docker containers on TrueNAS SCALE, accessed from coffee shops, airports, and hotel Wi-Fi. After finding brute-force attempts in my logs within hours of opening SSH to the internet, I locked everything down. Here’s exactly how I secure remote access now.

    Introduction to Secure Remote Access

    📌 TL;DR: Learn how to adapt enterprise-grade security practices for safe and efficient remote access to your homelab, ensuring strong protection against modern threats.
    🎯 Quick Answer: Secure remote homelab access with a WireGuard VPN, OPNsense firewall rules, and CrowdSec intrusion prevention. This setup safely manages 30+ Docker containers remotely while blocking unauthorized access at multiple layers.

    🏠 My setup: TrueNAS SCALE · 64GB ECC RAM · dual 10GbE NICs · WireGuard VPN on OPNsense · Authelia for SSO · all services behind reverse proxy with TLS.

    Picture this: You’ve spent weeks meticulously setting up your homelab. Virtual machines are humming, your Kubernetes cluster is running smoothly, and you’ve finally configured that self-hosted media server you’ve been dreaming about. Then, you decide to access it remotely while traveling, only to realize your setup is wide open to the internet. A few days later, you notice strange activity on your server logs—someone has brute-forced their way in. The dream has turned into a nightmare.

    Remote access is a cornerstone of homelab setups. Whether you’re managing virtual machines, hosting services, or experimenting with new technologies, the ability to securely access your resources from anywhere is invaluable. However, unsecured remote access can leave your homelab vulnerable to attacks, ranging from brute force attempts to more sophisticated exploits.

    In this guide, we’ll explore how you can scale down enterprise-grade security practices to protect your homelab. The goal is to strike a balance between strong security and practical usability, so your setup stays safe without becoming a chore to manage.

    Homelabs are often a playground for tech enthusiasts, but they can also serve as critical infrastructure for personal or small business projects. This makes securing remote access even more important. Attackers often target low-hanging fruit, and an unsecured homelab can quickly become a victim of ransomware, cryptojacking, or data theft.

    By implementing the strategies outlined here, you’ll not only protect your homelab but also gain valuable cybersecurity experience that applies to larger-scale environments. Whether you’re a beginner or an experienced sysadmin, there’s something here for everyone.

    💡 Pro Tip: Always start with a security audit of your homelab. Identify services exposed to the internet and prioritize securing those first.

    Key Principles of Enterprise Security

    Before diving into the technical details, let’s talk about the foundational principles of enterprise security and how they apply to homelabs. These practices might sound intimidating, but they’re surprisingly adaptable to smaller-scale environments.

    Zero Trust Architecture

    Zero Trust is a security model that assumes no user or device is trustworthy by default, even if they’re inside your network. Every access request is verified, and permissions are granted based on strict policies. For homelabs, this means implementing controls like authentication, authorization, and network segmentation to ensure only trusted users and devices can access your resources.

    For example, you can use VLANs (Virtual LANs) to segment your network into isolated zones. This prevents devices in one zone from accessing resources in another zone unless explicitly allowed. Combine this with strict firewall rules to enforce access policies.

    Another practical application of Zero Trust is to use role-based access control (RBAC). Assign specific permissions to users based on their roles. For instance, your media server might only be accessible to family members, while your Kubernetes cluster is restricted to your personal devices.
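
    Authelia (used later in this guide for MFA) can enforce exactly this kind of RBAC at the reverse proxy. A minimal sketch of its access-control rules — the domains and group names below are placeholders for illustration:

```yaml
# Sketch of Authelia access_control rules (domains and groups are examples)
access_control:
  default_policy: deny          # Zero Trust: deny unless a rule matches
  rules:
    - domain: media.example.com
      policy: one_factor        # family members authenticate with one factor
      subject: "group:family"
    - domain: "*.lab.example.com"
      policy: two_factor        # admin services require MFA
      subject: "group:admins"
```

    With `default_policy: deny`, anything you forget to list is inaccessible by default — the safe failure mode.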

    Multi-Factor Authentication (MFA)

    MFA is a simple yet powerful way to secure remote access. By requiring a second form of verification—like a one-time code from an app or hardware token—you add an additional layer of security that makes it significantly harder for attackers to gain access, even if they manage to steal your password.

    Consider using apps like Google Authenticator or Authy for MFA. For homelabs, you can integrate MFA with services like SSH, VPNs, or web applications using tools like Authelia or Duo. These tools are lightweight and easy to configure for personal use.
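
    If you want TOTP codes on SSH itself without a full SSO stack, the widely used `pam_google_authenticator` module works with any TOTP app. A rough sketch for a Debian/Ubuntu host — keep a second session open while testing, since a PAM mistake can lock you out:

```shell
# Install the PAM TOTP module and enroll the current user
sudo apt install libpam-google-authenticator
google-authenticator   # interactive: scan the QR code with your TOTP app

# /etc/pam.d/sshd — add this line to require a TOTP code:
#   auth required pam_google_authenticator.so

# /etc/ssh/sshd_config — enable challenge-response prompts:
#   KbdInteractiveAuthentication yes

sudo systemctl restart ssh
```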

    Hardware-based MFA, such as YubiKeys, offers even greater security. These devices generate one-time codes or act as physical keys that must be present to authenticate. They’re particularly useful for securing critical services like SSH or admin dashboards.

    Encryption and Secure Tunneling

    Encryption ensures that data transmitted between your device and homelab is unreadable to anyone who intercepts it. Secure tunneling protocols like WireGuard or OpenVPN create encrypted channels for remote access, protecting your data from prying eyes.

    For example, WireGuard is known for its simplicity and performance. It uses modern cryptographic algorithms to establish secure connections quickly. Here’s a sample configuration for a WireGuard client:

    # WireGuard client configuration
    [Interface]
    PrivateKey = <client-private-key>
    Address = 10.0.0.2/24
    
    [Peer]
    PublicKey = <server-public-key>
    Endpoint = your-homelab-ip:51820
    # 0.0.0.0/0 routes all client traffic through the tunnel; list only
    # your homelab subnets (e.g. 10.0.0.0/24) for split tunneling
    AllowedIPs = 0.0.0.0/0
    

    By using encryption and secure tunneling, you can safely access your homelab even on public Wi-Fi networks.

    💡 Pro Tip: Always use strong encryption algorithms like AES-256 or ChaCha20 for secure communications. Avoid outdated protocols like PPTP.
    ⚠️ What went wrong for me: I once left an SSH port exposed with password auth “just for testing.” Within 6 hours, my Wazuh dashboard lit up with thousands of brute-force attempts from IPs across three continents. I immediately switched to key-only auth and moved SSH behind my WireGuard VPN. Now nothing is directly exposed to the internet—every service goes through the tunnel.

    Practical Patterns for Homelab Security

    Now that we’ve covered the principles, let’s get into practical implementations. These are tried-and-true methods that can significantly improve the security of your homelab without requiring enterprise-level budgets or infrastructure.

    Using VPNs for Secure Access

    A VPN (Virtual Private Network) allows you to securely connect to your homelab as if you were on the local network. Tools like WireGuard are lightweight, fast, and easy to set up. Here’s a basic WireGuard configuration:

    # Install WireGuard
    sudo apt update && sudo apt install wireguard
    
    # Generate keys
    wg genkey | tee privatekey | wg pubkey > publickey
    
    # Configure the server
    sudo nano /etc/wireguard/wg0.conf
    
    # Example configuration
    [Interface]
    PrivateKey = <your-private-key>
    Address = 10.0.0.1/24
    ListenPort = 51820
    
    [Peer]
    PublicKey = <client-public-key>
    AllowedIPs = 10.0.0.2/32
    
    # Bring the tunnel up and enable it at boot
    sudo wg-quick up wg0
    sudo systemctl enable wg-quick@wg0
    

    Once configured, you can connect securely to your homelab from anywhere.

    VPNs are particularly useful for accessing services that don’t natively support encryption or authentication. By routing all traffic through a secure tunnel, you can protect even legacy applications.

    💡 Pro Tip: Use dynamic DNS services like DuckDNS or No-IP to maintain access to your homelab even if your public IP changes.
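
    DuckDNS updates, for example, are a single HTTPS request, so a cron entry is enough to keep the record current (the subdomain and token below are placeholders for your own DuckDNS values):

```shell
# /etc/cron.d/duckdns — refresh the dynamic DNS record every 5 minutes
# (replace 'myhomelab' and YOUR-TOKEN with your own DuckDNS values;
#  an empty ip= lets DuckDNS auto-detect your public IP)
*/5 * * * * root curl -fsS "https://www.duckdns.org/update?domains=myhomelab&token=YOUR-TOKEN&ip=" >/dev/null
```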

    Setting Up SSH with Public Key Authentication

    SSH is a staple for remote access, but using passwords is a recipe for disaster. Public key authentication is far more secure. Here’s how you can set it up:

    # Generate a modern SSH key pair on your local machine
    # (fall back to `ssh-keygen -t rsa -b 4096` for devices without ed25519 support)
    ssh-keygen -t ed25519 -C "[email protected]"
    
    # Copy the public key to your homelab server
    ssh-copy-id user@homelab-ip
    
    # Disable password authentication for SSH
    sudo nano /etc/ssh/sshd_config
    
    # Update the configuration
    PasswordAuthentication no
    PubkeyAuthentication yes
    
    # Apply the change (the service is named `sshd` on some distros)
    sudo systemctl restart ssh
    

    Public key authentication eliminates the risk of brute-force attacks on SSH passwords. Pair it with Fail2Ban to block IPs after repeated failed login attempts.
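
    A minimal Fail2Ban jail for SSH looks like this — the thresholds are a matter of taste, but these are sensible starting values:

```ini
# /etc/fail2ban/jail.local — ban hosts that repeatedly fail SSH auth
[sshd]
enabled  = true
maxretry = 3        # failures allowed...
findtime = 10m      # ...within this window
bantime  = 1h       # how long the ban lasts
```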

    💡 Pro Tip: Use SSH jump hosts to securely access devices behind your homelab firewall without exposing them directly to the internet.
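
    Jump hosts are a one-liner in ~/.ssh/config with `ProxyJump` — a sketch assuming a reachable bastion and one internal host (hostnames and addresses are placeholders):

```
# ~/.ssh/config — reach an internal box through a single exposed bastion
Host bastion
    HostName vpn.example.com
    User admin
    IdentityFile ~/.ssh/id_ed25519

Host nas
    HostName 10.0.0.10
    User admin
    ProxyJump bastion
```

    Now `ssh nas` tunnels transparently through the bastion; only the bastion ever needs to be reachable from outside.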

    Implementing Firewall Rules and Network Segmentation

    Firewalls and network segmentation are essential for limiting access to your homelab. Tools like UFW (Uncomplicated Firewall) make it easy to set up basic rules:

    # Install UFW
    sudo apt update && sudo apt install ufw
    
    # Allow SSH and VPN traffic
    # (tighter: restrict SSH to the VPN subnet with
    #  `sudo ufw allow from 10.0.0.0/24 to any port 22`)
    sudo ufw allow 22/tcp
    sudo ufw allow 51820/udp
    
    # Deny all other traffic by default
    sudo ufw default deny incoming
    sudo ufw default allow outgoing
    
    # Enable the firewall
    sudo ufw enable
    

    Network segmentation can be achieved using VLANs or separate subnets. For example, you can isolate your IoT devices from your critical infrastructure to reduce the risk of lateral movement in case of a breach.
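
    On a Linux box routing between VLANs, UFW’s `route` rules can express this isolation directly — the subnets below are examples to adjust to your own VLAN plan:

```shell
# Block forwarded traffic from the IoT subnet to the management subnet,
# while still letting IoT devices reach the internet
sudo ufw route deny from 192.168.30.0/24 to 192.168.10.0/24
sudo ufw route allow from 192.168.30.0/24 to any
```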

    Tools and Technologies for Homelab Security

    There’s no shortage of tools to help secure your homelab. Here are some of the most effective and homelab-friendly options:

    Open-Source VPN Solutions

    WireGuard and OpenVPN are excellent choices for creating secure tunnels to your homelab. WireGuard is particularly lightweight and fast, making it ideal for resource-constrained environments.

    Reverse Proxies for Secure Web Access

    Reverse proxies like Traefik and NGINX can serve as a gateway to your web services, providing SSL termination, authentication, and access control. For example, Traefik can automatically issue and renew Let’s Encrypt certificates:

    # Traefik static configuration
    entryPoints:
      web:
        address: ":80"
      websecure:
        address: ":443"
    
    certificatesResolvers:
      letsencrypt:
        acme:
          email: [email protected]
          storage: acme.json
          httpChallenge:
            entryPoint: web
    

    Reverse proxies also allow you to expose multiple services on a single IP address, simplifying access management.
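
    With Traefik’s Docker provider enabled, each container declares its own route via labels — a sketch for a hypothetical Nextcloud container (the hostname and resolver name are assumptions matching the config above):

```yaml
# docker-compose labels for a service behind Traefik (illustrative names)
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.nextcloud.rule=Host(`cloud.example.com`)"
  - "traefik.http.routers.nextcloud.entrypoints=websecure"
  - "traefik.http.routers.nextcloud.tls.certresolver=letsencrypt"
```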

    Homelab-Friendly MFA Tools

    For MFA, tools like Authelia or Duo can integrate with your homelab services, adding an extra layer of security. Pair them with password managers like Bitwarden to manage credentials securely.

    Monitoring and Continuous Improvement

    Security isn’t a one-and-done deal—it’s an ongoing process. Regular monitoring and updates are crucial to maintaining a secure homelab.

    Logging and Monitoring

    Set up logging for all remote access activity. Tools like Fail2Ban can analyze logs and block suspicious IPs automatically. Pair this with a centralized logging stack such as ELK or Grafana Loki for better visibility.

    Monitoring tools can also alert you to unusual activity, such as repeated login attempts or unexpected traffic patterns. This allows you to respond quickly to potential threats.
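
    Even before wiring up a full monitoring stack, a one-line pipeline over the auth log shows who is knocking. A sketch for Debian/Ubuntu (RHEL-family systems log to /var/log/secure instead):

```shell
# Top source IPs for failed SSH logins (Debian/Ubuntu log path)
LOG="${LOG:-/var/log/auth.log}"
grep "Failed password" "$LOG" 2>/dev/null \
  | awk '{print $(NF-3)}' \
  | sort | uniq -c | sort -rn | head
```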

    Regular Updates

    Outdated software is a common entry point for attackers. Make it a habit to update your operating system, applications, and firmware regularly. Automate updates where possible to reduce manual effort.

    ⚠️ Warning: Never skip updates for critical software like VPNs or SSH servers. Vulnerabilities in these tools can expose your entire homelab.

    Advanced Security Techniques

    For those looking to take their homelab security to the next level, here are some advanced techniques to consider:

    Intrusion Detection Systems (IDS)

    IDS tools like Snort or Suricata can monitor network traffic for suspicious activity. These tools are particularly useful for detecting and responding to attacks in real time.
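
    Suricata rules are plain text — a sketch of a threshold rule that flags SSH brute forcing (the sid and thresholds are chosen arbitrarily for illustration):

```
# local.rules — alert when one source opens 5+ SSH connections in 60s
alert tcp any any -> $HOME_NET 22 (msg:"Possible SSH brute force"; \
  flags:S; threshold: type threshold, track by_src, count 5, seconds 60; \
  classtype:attempted-recon; sid:1000001; rev:1;)
```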

    Hardware Security Modules (HSM)

    HSMs are physical devices that securely store cryptographic keys. While typically used in enterprise environments, affordable options like YubiHSM can be used in homelabs to protect sensitive keys.

    💡 Pro Tip: Combine IDS with firewall rules to automatically block malicious traffic based on detected patterns.

    Conclusion and Next Steps

    Start with WireGuard. It took me 30 minutes to set up on OPNsense and it immediately eliminated my entire external attack surface. Every service—SSH, web UIs, dashboards—now lives behind the VPN tunnel. Add key-only SSH auth and Authelia for MFA, and you’ve got enterprise-grade remote access for your homelab in an afternoon.

    Here’s what to remember:

    • Always use VPNs or SSH with public key authentication for remote access.
    • Implement MFA wherever possible to add an extra layer of security.
    • Regularly monitor logs and update software to stay ahead of vulnerabilities.
    • Use tools like reverse proxies and firewalls to control access to your services.

    Start small—secure one service at a time, and iterate on your setup as you learn. Security is a journey, not a destination.

    Have questions or tips about securing homelabs? Drop a comment or reach out to me on Twitter. Next week, we’ll explore advanced network segmentation techniques—because a segmented network is a secure network.




