Tag: Linux

  • How to Fix Docker Memory Leaks: Master cgroups and Container Memory Management

    # How to Fix Docker Memory Leaks: A Practical Guide to cgroups for DevOps Engineers

    If you’ve ever encountered memory leaks in Docker containers within a production environment, you know how frustrating and disruptive they can be. Applications crash unexpectedly, services become unavailable, and troubleshooting often leads to dead ends—forcing you to restart containers as a temporary fix. But have you ever stopped to consider why memory leaks happen in the first place? More importantly, how can you address them effectively and prevent them from recurring?

    In this guide, I’ll walk you through the fundamentals of container memory management using **cgroups** (control groups), a powerful Linux kernel feature that Docker relies on to allocate and limit resources. Whether you’re new to Docker or a seasoned DevOps engineer, this practical guide will help you identify, diagnose, and resolve memory leaks with confidence. By the end, you’ll have a clear understanding of how to safeguard your production environment against these silent disruptors.

    ## Understanding Docker Memory Leaks: Symptoms and Root Causes

    Memory leaks in Docker containers can be a silent killer for production environments. As someone who has managed containerized applications, I’ve seen firsthand how elusive these issues can be. To tackle them effectively, it’s essential to understand what constitutes a memory leak, recognize the symptoms, and identify the root causes.

    ### What Is a Memory Leak in Docker Containers?

    A memory leak occurs when an application or process fails to release memory that is no longer needed, causing memory usage to grow over time. In the context of Docker containers, this can happen due to poorly written application code, misconfigured libraries, or improper container memory management.

    Docker uses **cgroups** to allocate and enforce resource limits, including memory, for containers. However, if an application inside a container continuously consumes memory without releasing it, the container may eventually hit its memory limit or degrade in performance. This is especially relevant on modern Linux systems that use **cgroups v2**, which introduces updated parameters for memory management. For example, `memory.max` replaces `memory.limit_in_bytes`, and `memory.current` replaces `memory.usage_in_bytes`. Familiarity with these changes is crucial for effective memory management.
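
    Not sure which cgroup version a host is running? A quick way to check from the shell (and, on recent Docker releases, from Docker itself):

    ```bash
    # "cgroup2fs" means cgroups v2 (unified hierarchy); "tmpfs" means cgroups v1.
    stat -fc %T /sys/fs/cgroup/

    # Docker also reports which cgroup version it detected.
    docker info --format '{{.CgroupVersion}}'
    ```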

    ### Common Symptoms of Memory Leaks in Containerized Applications

    Detecting memory leaks isn’t always straightforward, but there are a few telltale signs to watch for:

    1. **Gradual Increase in Memory Usage**: If you monitor container metrics and notice a steady rise in memory consumption over time, it’s a strong indicator of a leak.
    2. **Container Restarts**: The kernel’s Out of Memory (OOM) killer terminates processes in containers that exceed their memory limits, and Docker’s restart policy then brings the container back up. Frequent restarts are a red flag.
    3. **Degraded Application Performance**: Memory leaks can lead to slower response times or even application crashes as the system struggles to allocate resources.
    4. **Host System Instability**: In extreme cases, memory leaks in containers can affect the host machine, causing system-wide issues.

    ### How Memory Leaks Impact Production Environments

    In production, memory leaks can be catastrophic. Containers running critical services may become unresponsive, leading to downtime. Worse, if multiple containers on the same host experience leaks, the host itself may run out of memory, affecting all applications deployed on it.

    Proactive monitoring and testing are key to mitigating these risks. Tools like **Prometheus**, **Grafana**, and Docker’s built-in `docker stats` command can help you identify abnormal memory usage patterns early. Additionally, setting memory limits for containers using Docker’s `--memory` flag and pairing it with `--memory-swap` prevents leaks from spiraling out of control and reduces excessive swap usage, which can degrade host performance.
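
    As a minimal sketch, here is how such a cap might look when starting a container (the image name `myapp:latest` is only a placeholder):

    ```bash
    # Cap the container at 512 MB of RAM; setting --memory-swap to the same
    # value disallows any additional swap usage.
    docker run -d --name myapp \
      --memory=512m \
      --memory-swap=512m \
      myapp:latest
    ```

    An already-running container can be adjusted in place with `docker update --memory=512m --memory-swap=512m myapp`.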

    ## Introduction to cgroups: The Foundation of Docker Memory Management

    Efficient memory management is critical when working with containerized applications. Containers share the host system’s resources, and without proper control, a single container can monopolize memory, leading to instability or crashes. This is where **cgroups** come into play. As a DevOps engineer or backend developer, understanding cgroups is essential for preventing Docker memory leaks and ensuring robust container memory management.

    Cgroups are a Linux kernel feature that allows you to allocate, limit, and monitor resources such as CPU, memory, and I/O for processes. Docker leverages cgroups to enforce resource limits on containers, ensuring they don’t exceed predefined thresholds. For memory management, cgroups provide fine-grained control through parameters like `memory.max` (cgroups v2) or `memory.limit_in_bytes` (cgroups v1) and `memory.current` (cgroups v2) or `memory.usage_in_bytes` (cgroups v1).

    ### Key cgroup Parameters for Memory Management

    Here are some essential cgroup parameters you should be familiar with; a quick way to inspect them on a running container follows the list:

    1. **memory.max (cgroups v2)**: Defines the maximum amount of memory a container can use. For example, setting this to `512M` ensures the container cannot exceed 512 MB of memory usage, preventing memory overuse.

    2. **memory.current (cgroups v2)**: Displays the current memory usage of a container. Monitoring this value helps identify containers consuming excessive memory, which could indicate a memory leak.

    3. **memory.failcnt (cgroups v1)**: Tracks the number of times a container’s memory usage exceeded the limit set by `memory.limit_in_bytes`. A high fail count signals that the container is consistently hitting its memory limit.
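
    To see these values for a live container, you can read them straight from its cgroup. A minimal sketch, assuming a cgroups v2 host and a container named `myapp` started with `--memory=512m`:

    ```bash
    # On cgroups v2, the container sees its own cgroup mounted at /sys/fs/cgroup.
    docker exec myapp cat /sys/fs/cgroup/memory.max      # expected: 536870912
    docker exec myapp cat /sys/fs/cgroup/memory.current  # live usage in bytes
    ```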

    ### How cgroups Enforce Memory Limits

    Cgroups enforce memory limits by accounting for every page a container’s processes allocate and acting once the limit is reached. If a container tries to use more memory than allowed, the kernel first attempts to reclaim memory (for example, by dropping page cache); if that fails, the OOM killer terminates a process inside the container. This mechanism prevents containers from exhausting the host system’s memory and ensures fair resource distribution across all running containers.
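
    Two quick ways to confirm that the OOM killer has fired for a given container (a sketch; `myapp` is a placeholder, and the second command assumes a cgroups v2 host and a container that is still running):

    ```bash
    # Docker records whether the container was OOM-killed; this works even
    # after the container has exited.
    docker inspect --format '{{.State.OOMKilled}}' myapp

    # The oom_kill counter in memory.events increments every time the kernel
    # kills a process in the container's cgroup.
    docker exec myapp cat /sys/fs/cgroup/memory.events
    ```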

    By leveraging cgroups effectively, you can mitigate the risk of Docker memory leaks and maintain stable application performance. Whether you’re troubleshooting memory issues or optimizing resource allocation, cgroups provide the foundation for reliable container memory management.

    ## Diagnosing Memory Leaks in Docker Containers: Tools and Techniques

    Diagnosing memory leaks in Docker containers requires a systematic approach. In this section, I’ll introduce practical tools and techniques to monitor and analyze memory usage, helping you pinpoint the source of leaks and resolve them effectively.

    ### Monitoring Memory Usage with `docker stats`

    The simplest way to start diagnosing memory leaks is by using Docker’s built-in `docker stats` command. It provides real-time metrics for container resource usage, including memory consumption.

    ```bash
    docker stats
    ```

    This command outputs a table with columns like `MEM USAGE / LIMIT`, showing how much memory a container is using compared to its allocated limit. If you notice a container’s memory usage steadily increasing over time without releasing memory, it’s a strong indicator of a memory leak.

    For example, if a container starts at 100 MB and grows to 1 GB within a few hours without significant workload changes, further investigation is warranted.
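
    For scripting or periodic sampling, `docker stats` can also emit a single snapshot in a custom format, which makes it easy to log memory usage over time:

    ```bash
    # One-shot snapshot instead of the live-updating table.
    docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}"
    ```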

    ### Analyzing cgroup Metrics for Memory Consumption

    For deeper insights, you can read the cgroup files directly. The exact location depends on your cgroup version and driver; on a cgroups v2 host with the systemd driver, a container’s files typically live under `/sys/fs/cgroup/system.slice/docker-<container-id>.scope/`. For example:

    ```bash
    cat /sys/fs/cgroup/system.slice/docker-<container-id>.scope/memory.current
    ```

    This file shows the current memory usage in bytes (on cgroups v1, the equivalent is `memory.usage_in_bytes` under `/sys/fs/cgroup/memory/docker/<container-id>/`). You can also check `memory.stat` for detailed statistics such as page cache and anonymous (RSS-like) memory:

    ```bash
    cat /sys/fs/cgroup/system.slice/docker-<container-id>.scope/memory.stat
    ```

    Look for fields like `anon` and `file` (or `total_rss` and `total_cache` on cgroups v1). If anonymous memory keeps growing, the application inside the container is likely not releasing memory properly.
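
    To capture a trend rather than a single reading, you can sample the file on an interval and timestamp each value; a minimal sketch using the cgroups v2 path from above:

    ```bash
    # Append a timestamped memory reading every 60 seconds.
    while true; do
      echo "$(date -Is) $(cat /sys/fs/cgroup/system.slice/docker-<container-id>.scope/memory.current)" >> mem-usage.log
      sleep 60
    done
    ```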

    ### Advanced Tools for Memory Monitoring: `cAdvisor`, `Prometheus`, and `Grafana`

    While `docker stats` and cgroup metrics are useful for immediate diagnostics, long-term monitoring and visualization require more advanced tools. I recommend integrating **cAdvisor**, **Prometheus**, and **Grafana** for comprehensive memory management.

    #### Setting Up `cAdvisor`

    `cAdvisor` is a container monitoring tool developed by Google. It provides detailed resource usage statistics, including memory metrics, for all containers running on a host. You can run `cAdvisor` as a Docker container:

    ```bash
    docker run \
      --volume=/var/run/docker.sock:/var/run/docker.sock \
      --volume=/sys:/sys \
      --volume=/var/lib/docker/:/var/lib/docker/ \
      --publish=8080:8080 \
      --detach=true \
      --name=cadvisor \
      google/cadvisor:latest
    ```

    Access the `cAdvisor` dashboard at `http://<host-ip>:8080` to identify trends and pinpoint containers with abnormal memory growth.

    #### Integrating Prometheus and Grafana

    For long-term monitoring and alerting, use Prometheus and Grafana. Prometheus collects metrics from `cAdvisor`, while Grafana visualizes them in customizable dashboards. Here’s a basic setup:

    1. Run Prometheus and configure it to scrape metrics from `cAdvisor` (a minimal sketch of this step follows the list).
    2. Use Grafana to create dashboards displaying memory usage trends.
    3. Set alerts in Grafana to notify you when a container’s memory usage exceeds a threshold or grows unexpectedly.
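
    As a minimal sketch of step 1 (the container names `cadvisor` and `prometheus` and the shared Docker network are assumptions for illustration):

    ```bash
    # Write a minimal Prometheus config that scrapes cAdvisor every 15 seconds.
    cat > prometheus.yml <<'EOF'
    scrape_configs:
      - job_name: cadvisor
        scrape_interval: 15s
        static_configs:
          - targets: ['cadvisor:8080']
    EOF

    # Run Prometheus on the same Docker network as the cAdvisor container
    # so the 'cadvisor' hostname resolves.
    docker network create monitoring
    docker network connect monitoring cadvisor
    docker run -d --name prometheus --network monitoring -p 9090:9090 \
      -v "$(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml" \
      prom/prometheus
    ```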

    By combining proactive monitoring, effective use of cgroups, and advanced tools like `cAdvisor`, Prometheus, and Grafana, you can diagnose and resolve Docker memory leaks with confidence. With these strategies, you’ll not only protect your production environment but also ensure consistent application performance.

  • How to install python pip on CentOS Core Enterprise

    Imagine this: You’ve just spun up a fresh CentOS Core Enterprise server for your next big project. You’re ready to automate, deploy, or analyze—but the moment you try pip install, you hit a wall. No pip. No Python package manager. Frustrating, right?

    CentOS Core Enterprise keeps things lean and secure, but that means pip isn’t available out of the box. If you want to install Python packages, you’ll need to unlock the right repositories first. Let’s walk through the process, step by step, and I’ll share some hard-earned tips so you don’t waste time on common pitfalls.

    Step 1: Enable EPEL Repository

    The Extra Packages for Enterprise Linux (EPEL) repository is your gateway to modern Python tools on CentOS. Without EPEL, pip is nowhere to be found.

    sudo yum install epel-release

    Tip: If you’re running on a minimal install, make sure your network is configured and yum is working. EPEL is maintained by Fedora and is safe for enterprise use.

    Step 2: Install pip for Python 2 (Legacy)

    With EPEL enabled, you can now install pip for Python 2. But let’s be real: Python 2 is obsolete. Only use this if you’re stuck maintaining legacy code.

    sudo yum install python-pip

    Gotcha: This will install pip for Python 2.x. Most modern packages require Python 3. If you’re starting fresh, skip ahead.

    Step 3: Install Python 3 and pip (Recommended)

    For new projects, Python 3 is the only sane choice. Here’s how to get both Python 3 and its pip:

    sudo yum install python3-pip
    sudo pip3 install --upgrade pip

    Pro Tip: Always upgrade pip after installing. The default version from yum is often outdated and may not support the latest Python packages.
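
    To confirm what you ended up with, check the versions; pip3 --version also prints which interpreter it is bound to, which helps when multiple Pythons are installed:

    python3 --version
    pip3 --version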

    Final Thoughts

    CentOS Core Enterprise is rock-solid, but it makes you work for modern Python tooling. Enable EPEL, choose Python 3, and always keep pip up to date. If you run into dependency errors or missing packages, double-check your repositories and consider using virtualenv for isolated environments.
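
    A minimal sketch of that isolated-environment approach, using the venv module that ships with Python 3 (the directory name myenv is arbitrary):

    python3 -m venv myenv
    source myenv/bin/activate
    pip install --upgrade pip
    pip install requests flask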

    Now you’re ready to install anything from requests to flask—and get back to building something awesome.

  • Setup latest Elasticsearch and Kibana on CentOS 7 in April 2022

    Imagine this: your boss walks in and says, “We need real-time search and analytics. Yesterday.” You’ve got a CentOS 7 box, and you need Elasticsearch and Kibana running—fast, stable, and secure. Sound familiar? Good. Let’s get straight to business.

    Step 1: Prerequisites—Don’t Skip These!

    Before you touch Elasticsearch, make sure your server is ready. These steps aren’t optional; skipping them will cost you hours later.

    • Set a static IP:

      sudo vi /etc/sysconfig/network-scripts/ifcfg-ens3

      Tip: Double-check your network config. A changing IP will break your cluster.

    • Set a hostname:

      sudo vi /etc/hostname

      Opinion: Use meaningful hostnames. “node1” is better than “localhost”.

    • (Optional) Disable the firewall:

      sudo systemctl disable firewalld --now

      Gotcha: Only do this in a trusted environment. Otherwise, configure your firewall properly.

    • Install Java (optional, since Elasticsearch 8.x bundles its own JVM):

      sudo yum install java-1.8.0-openjdk.x86_64 -y

      Tip: Elasticsearch 8.x bundles its own JVM, but installing Java never hurts for troubleshooting.

    Step 2: Install Elasticsearch 8.x

    Ready for the main event? Let’s get Elasticsearch installed and configured.

    1. Import the Elasticsearch GPG key:

      sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
    2. Add the Elasticsearch repo:

      sudo vi /etc/yum.repos.d/elasticsearch.repo
      [elasticsearch]
      name=Elasticsearch repository for 8.x packages
      baseurl=https://artifacts.elastic.co/packages/8.x/yum
      gpgcheck=1
      gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
      enabled=0
      autorefresh=1
      type=rpm-md

      Tip: Set enabled=0 so you only use this repo when you want to. Avoid accidental upgrades.

    3. Install Elasticsearch:

      sudo yum install --enablerepo=elasticsearch elasticsearch -y
    4. Configure Elasticsearch:

      sudo vi /etc/elasticsearch/elasticsearch.yml
      node.name: "es1"
      cluster.name: cluster1
      script.allowed_types: none

      Opinion: Always set node.name and cluster.name. Defaults are for amateurs.

    5. Set JVM heap size (optional, but recommended for tuning):

      sudo vi /etc/elasticsearch/jvm.options
      -Xms4g
      -Xmx4g

      Tip: Set heap to half your available RAM, max 32GB. Too much heap = slow GC.

    6. Enable and start Elasticsearch:

      sudo systemctl enable elasticsearch.service
      sudo systemctl start elasticsearch.service
    7. Test your installation:

      sudo curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200

      Gotcha: Elasticsearch 8.x enables TLS and authentication out of the box, so a plain http://localhost:9200 request will be rejected. Use the elastic password printed during installation (or reset it with /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic). If the connection itself fails, check SELinux or your firewall.

    Step 3: Install and Configure Kibana

    Kibana is your window into Elasticsearch. Let’s get it running.

    1. Add the Kibana repo:

      sudo vi /etc/yum.repos.d/kibana.repo
      [kibana-8.x]
      name=Kibana repository for 8.x packages
      baseurl=https://artifacts.elastic.co/packages/8.x/yum
      gpgcheck=1
      gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
      enabled=1
      autorefresh=1
      type=rpm-md

      Tip: Keep enabled=1 for Kibana. You’ll want updates.

    2. Install Kibana:

      sudo yum install kibana -y
    3. Generate the enrollment token (for secure setup):

      sudo /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana

      Gotcha: Save this token! You’ll need it when you first access Kibana.

    4. Reload systemd and start Kibana:

      sudo systemctl daemon-reload
      sudo systemctl enable kibana.service
      sudo systemctl restart kibana.service

      Tip: Use restart instead of start to pick up config changes.
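
      Once Kibana is up, a quick sanity check never hurts (Kibana listens on port 5601 and binds to localhost by default; set server.host in /etc/kibana/kibana.yml if you need remote access):

      sudo systemctl status kibana.service
      sudo ss -tlnp | grep 5601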

    Final Thoughts: Don’t Get Burned

    • Security: Elasticsearch 8.x is secure by default. Don’t disable TLS unless you know exactly what you’re doing.
    • Memory: Monitor your heap usage. Elasticsearch loves RAM, but hates swap.
    • Upgrades: Always test upgrades in a staging environment. Elasticsearch upgrades can be breaking.

    If you followed these steps, you’re ready to build powerful search and analytics solutions. Don’t settle for defaults—tune, secure, and monitor your stack. Any questions? I’m Max L, and I don’t believe in half-measures.

  • How to move files around with scp

    Picture this: it’s 3 AM, and you’re debugging an issue on a remote server. Logs are piling up, and you need to download a massive file to analyze it locally. Or maybe you’re deploying a quick patch to a production server and need to upload a configuration file. In moments like these, scp (Secure Copy) is your best friend. It’s simple, reliable, and gets the job done without unnecessary complexity. But like any tool, using it effectively requires more than just knowing the basic syntax.

    In this guide, we’ll go beyond the basics of scp. You’ll learn how to securely transfer files, optimize performance, avoid common pitfalls, and even troubleshoot issues when things go sideways. Whether you’re a seasoned sysadmin or a developer just getting started with remote servers, this article will arm you with the knowledge to wield scp like a pro.

    What is scp?

    scp stands for Secure Copy, and it’s a command-line utility that allows you to transfer files between local and remote systems over an SSH connection. It’s built on top of SSH, which means your data is encrypted during transfer, making it a secure choice for moving sensitive files.

    Unlike modern tools like rsync, scp is straightforward and doesn’t require additional setup. If you have SSH access to a remote machine, you can use scp immediately. However, this simplicity comes with trade-offs, which we’ll discuss later in the article.

    Downloading Files from a Remote Server

    Let’s start with the most common use case: downloading a file from a remote server to your local machine. Here’s the basic syntax:

    scp -i conn.pem azureuser@<server-ip>:/home/azureuser/output.gz ./output.gz

    Here’s what’s happening in this command:

    • -i conn.pem: Specifies the private key file for SSH authentication.
    • [email protected]: The username and IP address of the remote server.
    • :/home/azureuser/output.gz: The absolute path to the file on the remote server.
    • ./output.gz: The destination path on your local machine.

    After running this command, the file output.gz will be downloaded to your current working directory.

    💡 Pro Tip: Use absolute paths on the remote server to avoid confusion, especially when dealing with complex directory structures.

    Real-World Example: Downloading Logs

    Imagine you’re troubleshooting an issue on a remote server, and you need to analyze the logs locally:

    scp -i ~/.ssh/id_rsa admin@prod-server:/var/log/nginx/access.log ./access.log

    This command downloads the Nginx access log to your local machine. If the file is large, consider using the -C option to compress it during transfer:

    scp -C -i ~/.ssh/id_rsa admin@prod-server:/var/log/nginx/access.log ./access.log

    Compression can significantly speed up transfers, especially for text-heavy files like logs.

    Uploading Files to a Remote Server

    Uploading files is just as straightforward. The syntax is almost identical, but the source and destination paths are reversed:

    scp -i conn.pem ./config.yaml azureuser@<server-ip>:/etc/myapp/config.yaml

    In this example:

    • ./config.yaml: The file on your local machine that you want to upload.
    • azureuser@<server-ip>:/etc/myapp/config.yaml: The destination path on the remote server.

    ⚠️ Gotcha: Ensure the destination directory on the remote server exists and has the correct permissions. Otherwise, the command will fail.

    Real-World Example: Deploying Configuration Files

    Let’s say you’re deploying a new configuration file to a production server:

    scp -i ~/.ssh/id_rsa ./nginx.conf admin@prod-server:/etc/nginx/nginx.conf

    After uploading, don’t forget to reload or restart the service to apply the changes:

    ssh -i ~/.ssh/id_rsa admin@prod-server "sudo systemctl reload nginx"

    Advanced scp Options

    scp comes with several options that can make your life easier. Here are some of the most useful ones:

    • -C: Compresses files during transfer, which can speed up the process for large files.
    • -P: Specifies the SSH port if it’s not the default port 22.
    • -r: Recursively copies directories and their contents.
    • -p: Preserves the original access and modification times of the files.

    Example: Copying an Entire Directory

    To copy a directory and all its contents, use the -r option:

    scp -r -i conn.pem ./my_project azureuser@<server-ip>:/home/azureuser/

    This command uploads the entire my_project directory to the remote server.
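
    If the server listens on a non-standard SSH port, add -P (capital P; plain ssh uses a lowercase -p for the same thing). A sketch assuming port 2222:

    scp -P 2222 -p -i conn.pem ./config.yaml azureuser@<server-ip>:/etc/myapp/config.yaml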

    🔐 Security Note: Avoid using scp with password-based authentication. Always use SSH keys for better security.

    Common Pitfalls and Troubleshooting

    While scp is generally reliable, you may encounter issues. Here are some common problems and how to solve them:

    1. Permission Denied

    If you see a “Permission denied” error, check the following:

    • Ensure your SSH key has the correct permissions: chmod 600 ~/.ssh/id_rsa.
    • Verify that your user account has write permissions on the remote server.

    2. Connection Timeout

    If the connection times out, confirm that:

    • The remote server’s SSH service is running.
    • You’re using the correct IP address and port.

    3. Slow Transfers

    For slow transfers, try enabling compression with the -C option. If the issue persists, consider using rsync, which is more efficient for large or incremental transfers.
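
    For comparison, a roughly equivalent rsync command (same key and host as the earlier examples) that only transfers data that changed since the last run:

    rsync -avz -e "ssh -i ~/.ssh/id_rsa" admin@prod-server:/var/log/nginx/ ./nginx-logs/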

    When to Use scp (and When Not To)

    scp is great for quick, one-off file transfers. However, it’s not always the best choice:

    • For large datasets or incremental backups, use rsync.
    • For automated workflows, consider tools like sftp or ansible.

    That said, scp remains a valuable tool in your arsenal, especially for its simplicity and ubiquity.

    Key Takeaways

    • scp is a simple and secure way to transfer files over SSH.
    • Use options like -C, -r, and -p to enhance functionality.
    • Always use SSH keys for authentication to improve security.
    • Be mindful of permissions and directory structures to avoid errors.
    • For large or complex transfers, consider alternatives like rsync.

    Now it’s your turn: What’s your favorite scp trick or tip? Share it in the comments below!

  • How to execute a command(s) or a script via SSH

    Imagine this: you’re sipping coffee at your desk, and you suddenly need to check the status of a remote server. Do you really want to fire up a full-blown remote desktop or wrestle with clunky web dashboards? No way. With SSH, you can execute commands remotely—fast, simple, and scriptable. If you’re not using this technique yet, you’re missing out on one of the best productivity hacks in the sysadmin and developer toolkit.

    Running a Single Command Over SSH

    Want to check the uptime of a remote machine? Just send the command directly and get the output instantly:

    ssh user@<remote-host> 'uptime'

    Tip: The command inside single quotes runs on the remote host, and its output comes right back to your terminal. This is perfect for quick checks or automation scripts.

    Executing Multiple Commands

    Sometimes, you need to run a sequence of commands. You don’t have to SSH in and type them one by one. Use a here document for multi-command execution:

    ssh user@<remote-host> << EOF
    COMMAND1
    COMMAND2
    COMMAND3
    EOF

    Gotcha: Make sure your EOF delimiter is at the start of the line—no spaces! Also, remember that environment variables and shell settings may differ on the remote host.
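
    Here’s a concrete version of that pattern. Quoting the delimiter ('EOF') keeps your local shell from expanding variables before they reach the remote host:

    ssh user@<remote-host> << 'EOF'
    hostname
    uptime
    df -h /
    EOF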

    Running a Local Script Remotely

    Have a script on your local machine that you want to run remotely? You don’t need to copy it over first. Just stream it to the remote shell:

    ssh user@<remote-host> 'bash -s' < myscript.sh

    Pro Tip: This pipes your local myscript.sh directly to bash on the remote machine. If your script needs arguments, you can pass them after bash -s like this:

    ssh user@<remote-host> 'bash -s' -- arg1 arg2 < myscript.sh

    Best Practices and Pitfalls

    • Use SSH keys for authentication—never hardcode passwords in scripts.
    • Quote your commands properly to avoid shell interpretation issues.
    • Test locally before running destructive commands remotely. A misplaced rm -rf can ruin your day.
    • Check exit codes if you’re automating deployments. SSH will return the exit status of the remote command.
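
    For example, a deployment step can branch on the exit status that ssh passes back from the remote command (the nginx service is just an example):

    if ssh user@<remote-host> 'systemctl is-active --quiet nginx'; then
      echo "nginx is running"
    else
      echo "nginx is not active on the remote host"
    fi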

    Why This Matters

    SSH command execution is a game-changer for deployment, automation, and troubleshooting. It’s fast, scriptable, and—when used wisely—secure. So next time you need to automate a remote task, skip the manual steps and use these SSH tricks. Your future self will thank you.

  • Setup k3s on CentOS 7

    Imagine this: you need a lightweight Kubernetes cluster up and running today—no drama, no endless YAML, no “what did I forget?” moments. That’s where k3s shines, especially on CentOS 7. I’ll walk you through the setup, toss in some hard-earned tips, and call out gotchas that can trip up even seasoned pros.

    Step 1: Prerequisites—Get Your House in Order

    Before you touch k3s, make sure your CentOS 7 box is ready. Trust me, skipping this step leads to pain later.

    • Set a static IP and hostname (don’t rely on DHCP for servers!):

      vi /etc/sysconfig/network-scripts/ifcfg-eth0
      vi /etc/hostname
      

      Tip: After editing, restart networking or reboot to apply changes.

    • Optional: Disable the firewall (for labs or trusted networks only):

      systemctl disable firewalld --now
      

      Gotcha: If you keep the firewall, open ports 6443 (Kubernetes API), 10250, and 8472 (Flannel VXLAN).
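
      If you keep firewalld enabled, something like this covers a single-server setup (note that 8472, used by Flannel VXLAN, is UDP):

      firewall-cmd --permanent --add-port=6443/tcp
      firewall-cmd --permanent --add-port=10250/tcp
      firewall-cmd --permanent --add-port=8472/udp
      firewall-cmd --reload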

    Step 2: (Optional) Install Rancher RKE2

    If you want Rancher’s full power, set up RKE2 first. Otherwise, skip to k3s install.

    1. Create config directory:

      mkdir -p /etc/rancher/rke2
      
    2. Edit config.yaml in the directory you just created:

      token: somestringforrancher
      tls-san:
        - 192.168.1.128
      

      Tip: Replace 192.168.1.128 with your server’s IP. The tls-san entry is critical for SSL and HA setups.

    3. Install Rancher:

      curl -sfL https://get.rancher.io | sh -
      
    4. Enable and start the Rancher service:

      systemctl enable rancherd-server.service
      systemctl start rancherd-server.service
      
    5. Check startup status:

      journalctl -eu rancherd-server.service -f
      

      Tip: Look for “Ready” messages. Errors here usually mean a misconfigured config.yaml or missing ports.

    6. Reset Rancher admin password (for UI login):

      rancherd reset-admin
      

    Step 3: Install k3s—The Main Event

    Master Node Setup

    curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -
    
    • Tip: K3S_KUBECONFIG_MODE="644" makes /etc/rancher/k3s/k3s.yaml world-readable. Good for quick access, but not for production security!
    • Get your cluster token (needed for workers):

      sudo cat /var/lib/rancher/k3s/server/node-token
      

    Worker Node Setup

    curl -sfL https://get.k3s.io | \
      K3S_URL="https://<MASTER_IP>:6443" \
      K3S_TOKEN="<TOKEN>" \
      K3S_NODE_NAME="<NODE_NAME>" \
      sh -
    
    • Replace <MASTER_IP> with your master’s IP, <TOKEN> with the value from node-token, and <NODE_NAME> with a unique name for the node.
    • Gotcha: If you see “permission denied” or “failed to connect,” double-check your firewall and SELinux settings. CentOS 7 can be picky.

    Final Thoughts: What’s Next?

    You’ve got a blazing-fast Kubernetes cluster. Next, try kubectl get nodes (grab the kubeconfig from /etc/rancher/k3s/k3s.yaml), deploy a test workload, and—if you’re feeling brave—secure your setup for production. If you hit a snag, don’t waste time: check logs, verify IPs, and make sure your token matches.
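
    A quick smoke test from the master node (k3s ships its own kubectl, so no extra install is needed):

    sudo k3s kubectl get nodes
    sudo k3s kubectl create deployment nginx --image=nginx
    sudo k3s kubectl get pods -w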

    I’m Max L, and I never trust a cluster until I’ve rebooted every node at least once. Happy hacking!