Blog

  • Interlock Ransomware Exploits Cisco FMC Zero-Day CVE-2026-20131 — CVSS 10.0 Root Access via Insecure Deserialization

    A critical zero-day vulnerability in Cisco Secure Firewall Management Center (FMC) has been actively exploited by the Interlock ransomware group since January 2026 — more than a month before Cisco released a patch. CISA has added CVE-2026-20131 to its Known Exploited Vulnerabilities (KEV) catalog, confirming it is known to be used in ransomware campaigns.

    If your organization runs Cisco FMC or Cisco Security Cloud Control (SCC) for firewall management, this is a patch-now situation. Here’s everything you need to know about the vulnerability, the attack chain, and how to protect your infrastructure.

    What Is CVE-2026-20131?

    CVE-2026-20131 is a deserialization of untrusted data vulnerability in the web-based management interface of Cisco Secure Firewall Management Center (FMC) Software and Cisco Security Cloud Control (SCC) Firewall Management. According to CISA’s KEV catalog:

    “Cisco Secure Firewall Management Center (FMC) Software and Cisco Security Cloud Control (SCC) Firewall Management contain a deserialization of untrusted data vulnerability in the web-based management interface that could allow an unauthenticated, remote attacker to execute arbitrary Java code as root on an affected device.”

    Key details:

    • CVSS Score: 10.0 (Critical — maximum severity)
    • Attack Vector: Network (unauthenticated, remote)
    • Impact: Full root access via arbitrary Java code execution
    • Exploited in the wild: Yes — confirmed ransomware campaigns
    • CISA KEV Added: March 19, 2026
    • CISA Remediation Deadline: March 22, 2026 (already passed)

    The Attack Timeline

    What makes CVE-2026-20131 particularly alarming is the extended zero-day exploitation window:

    • ~January 26, 2026: Interlock ransomware begins exploiting the vulnerability as a zero-day
    • March 4, 2026: Cisco releases a patch (37 days of zero-day exploitation)
    • March 18, 2026: Public disclosure (51 days after first exploitation)
    • March 19, 2026: CISA adds the CVE to its KEV catalog with a 3-day remediation deadline

    Amazon Threat Intelligence discovered the exploitation through its MadPot sensor network — a global honeypot infrastructure that monitors attacker behavior. According to reports, an OPSEC blunder by the Interlock attackers (misconfigured infrastructure) exposed their full multi-stage attack toolkit, allowing researchers to map the entire operation.

    Why This Vulnerability Is Especially Dangerous

    Several factors make CVE-2026-20131 a worst-case scenario for network defenders:

    1. No Authentication Required

    Unlike many Cisco vulnerabilities that require valid credentials, this flaw is exploitable by any unauthenticated attacker who can reach the FMC web interface. If your FMC management port is exposed to the internet (or even a poorly segmented internal network), you’re at risk.

    2. Root-Level Code Execution

    The insecure Java deserialization vulnerability grants the attacker root access — the highest privilege level. From there, they can:

    • Modify firewall rules to create persistent backdoors
    • Disable security policies across your entire firewall fleet
    • Exfiltrate firewall configurations (which contain network topology, NAT rules, and VPN configurations)
    • Pivot to connected Firepower Threat Defense (FTD) devices
    • Deploy ransomware across the managed network

    3. Ransomware-Confirmed

    CISA explicitly notes this vulnerability is “Known to be used in ransomware campaigns” — one of the more severe classifications in the KEV catalog. Interlock is a ransomware operation known for targeting enterprise environments, making this a direct threat to business continuity.

    4. Firewall Management = Keys to the Kingdom

    Cisco FMC is the centralized management platform for an organization’s entire firewall infrastructure. Compromising it is equivalent to compromising every firewall it manages. The attacker doesn’t just get one box — they get the command-and-control plane for network security.

    Who Is Affected?

    Organizations running:

    • Cisco Secure Firewall Management Center (FMC) — any version prior to the March 4 patch
    • Cisco Security Cloud Control (SCC) — cloud-managed firewall environments
    • Any deployment where the FMC web management interface is network-accessible

    This includes enterprises, managed security service providers (MSSPs), government agencies, and any organization using Cisco’s enterprise firewall platform.

    Immediate Actions: How to Protect Your Infrastructure

    Step 1: Patch Immediately

    Apply Cisco’s security update released on March 4, 2026. If you haven’t patched yet, you are 8+ days past CISA’s remediation deadline. This should be treated as an emergency change.

    Step 2: Restrict FMC Management Access

    The FMC web interface should never be exposed to the internet. Implement strict network controls:

    • Place FMC management interfaces on a dedicated, isolated management VLAN
    • Use ACLs to restrict access to authorized administrator IPs only
    • Require hardware security keys (YubiKey 5 NFC) for all FMC administrator accounts
    • Consider a jump box or VPN-only access model for FMC management
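    To make the ACL point concrete, here is an illustrative IOS-style sketch for an upstream switch or router. The addresses, VLAN number, and ACL name are all placeholders; adapt them to your management network layout:

```
! Allow HTTPS to the FMC (10.0.50.10) only from the admin jump box (10.0.10.5)
ip access-list extended FMC-MGMT
 permit tcp host 10.0.10.5 host 10.0.50.10 eq 443
 deny   ip any host 10.0.50.10 log
!
interface Vlan50
 ip access-group FMC-MGMT in
```

    Logging the denies gives you a cheap tripwire: any hit on the deny line is a host probing your management plane.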

    Step 3: Hunt for Compromise Indicators

    Given the 37+ day zero-day window, operate under an assume-breach mindset and investigate:

    • Review FMC audit logs for unauthorized configuration changes since January 2026
    • Check for unexpected admin accounts or modified access policies
    • Look for anomalous Java process execution on FMC appliances
    • Inspect firewall rules for unauthorized modifications or new NAT/access rules
    • Review VPN configurations for backdoor tunnels
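    There is no single built-in command for this review, but once you export the FMC audit log, filtering for the exploitation window is straightforward. The CSV layout below (timestamp in the first column) is a simplifying assumption; adjust the field to match your export:

```shell
# Stand-in audit export so the filter is runnable end-to-end; in practice
# fmc-audit.csv comes from your FMC audit log export
cat > fmc-audit.csv <<'EOF'
2026-01-10,admin,Login,Success
2026-02-02,unknown,Policy Modified,Success
2026-03-01,svc_backup,User Added,Success
EOF
# Show every audit event on or after the start of the exploitation window
awk -F',' '$1 >= "2026-01-26"' fmc-audit.csv
```

    ISO-formatted dates sort lexicographically, which is why a plain string comparison is enough here.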

    Step 4: Implement Network Monitoring

    Deploy network security monitoring to detect exploitation attempts:

    • Monitor for unusual HTTP/HTTPS traffic to FMC management ports
    • Alert on Java deserialization payloads in network traffic (tools like Suricata with Java deserialization rules)
    • Use network detection tools — The Practice of Network Security Monitoring by Richard Bejtlich is the definitive guide for building detection capabilities
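    As a sketch of what such a rule could look like: Java's serialization stream always begins with the magic bytes 0xAC ED 00 05, so a content match on that sequence catches naive payloads. The SID and port below are placeholders, and note that this only works where TLS is terminated or inspected; use a maintained ruleset (e.g. ET Open) in production rather than hand-rolled rules:

```
alert tcp any any -> $HOME_NET 443 (msg:"Possible Java serialized object sent to management interface"; flow:to_server,established; content:"|ac ed 00 05|"; sid:1000001; rev:1;)
```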

    Step 5: Review Your Incident Response Plan

    If you don’t have a tested incident response plan for firewall compromise scenarios, now is the time to build one. A compromised FMC means your attacker potentially controls your entire network perimeter.

    Hardening Your Cisco Firewall Environment

    Beyond patching CVE-2026-20131, use this incident as a catalyst to strengthen your overall firewall security posture:

    Management Plane Isolation

    • Dedicate a physically or logically separate management network for all security appliances
    • Never co-mingle management traffic with production data traffic
    • Use out-of-band management where possible

    Multi-Factor Authentication

    Enforce MFA for all FMC access. FIDO2 hardware security keys like the YubiKey 5 NFC provide phishing-resistant authentication that’s significantly stronger than SMS or TOTP codes. Every FMC admin account should require a hardware key.

    Configuration Backup and Integrity Monitoring

    • Maintain offline, encrypted backups of all FMC configurations on Kingston IronKey encrypted USB drives
    • Implement configuration integrity monitoring to detect unauthorized changes
    • Store configuration hashes in a separate system that attackers can’t modify from a compromised FMC
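    A minimal sketch of that hash ledger using standard tools. The file names are placeholders, and the ledger itself should live on a separate, write-protected host that a compromised FMC cannot reach:

```shell
# Stand-in for an exported FMC configuration backup, so the commands are
# runnable end-to-end; in practice this file comes from FMC's backup export
echo "placeholder config export" > fmc-config-backup.tar
# Record the hash in the ledger (store the ledger on a separate system)
sha256sum fmc-config-backup.tar >> hashes.txt
# Later: verify the backup still matches its recorded hash
sha256sum --check hashes.txt
```

    A failed check is a strong signal that either the backup or the appliance that produced it has been tampered with.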

    Network Segmentation

    Ensure proper segmentation so that even if FMC is compromised, lateral movement is contained. For smaller environments and homelabs, GL.iNet travel VPN routers provide affordable network segmentation with WireGuard/OpenVPN support.

    The Bigger Picture: Firewall Management as an Attack Surface

    CVE-2026-20131 is a stark reminder that security management infrastructure is itself an attack surface. When attackers target the tools that manage your security — whether it’s a firewall management console, a SIEM, or a security scanner — they can undermine your entire defensive posture in a single stroke.

    This pattern is accelerating in 2026:

    • TeamPCP supply chain attacks compromised security scanners (Trivy, KICS) and AI frameworks (LiteLLM, Telnyx) — tools with broad CI/CD access
    • Langflow CVE-2026-33017 (CISA KEV, actively exploited) targets AI workflow platforms
    • LangChain/LangGraph vulnerabilities (disclosed March 27, 2026) expose filesystem, secrets, and databases in AI frameworks
    • Interlock targeting Cisco FMC — going directly for the firewall management plane

    The lesson: treat your security tools with the same rigor you apply to production systems. Patch them first, isolate their management interfaces, and monitor them for compromise.


    Key Takeaways

    1. Patch CVE-2026-20131 immediately — CISA’s remediation deadline has already passed
    2. Assume breach if you were running unpatched FMC since January 2026
    3. Isolate FMC management interfaces from production and internet-facing networks
    4. Deploy hardware MFA for all firewall administrator accounts
    5. Monitor for indicators of compromise — check audit logs, config changes, and new accounts
    6. Treat security management tools as crown jewels — they deserve the highest protection tier

    Stay ahead of critical vulnerabilities and security threats. Subscribe to Alpha Signal Pro for daily actionable security and market intelligence delivered to your inbox.

  • Securing Kubernetes Supply Chains with SBOM & Sigstore

    Securing Kubernetes Supply Chains with SBOM & Sigstore

    Explore a production-proven, security-first approach to Kubernetes supply chain security using SBOMs and Sigstore to safeguard your DevSecOps pipelines.

    Introduction to Supply Chain Security in Kubernetes

    Bold Claim: “Most Kubernetes environments are one dependency away from a catastrophic supply chain attack.”

    If you think Kubernetes security starts and ends with Pod Security Policies or RBAC, you’re missing the bigger picture. The real battle is happening upstream—in your software supply chain. Vulnerable dependencies, unsigned container images, and opaque build processes are the silent killers lurking in your pipelines.

    Supply chain attacks have been on the rise, with high-profile incidents like the SolarWinds breach and compromised npm packages making headlines. These attacks exploit the trust we place in dependencies and third-party software. Kubernetes, being a highly dynamic and dependency-driven ecosystem, is particularly vulnerable.

    Enter SBOM (Software Bill of Materials) and Sigstore: two tools that can transform your Kubernetes supply chain from a liability into a fortress. SBOM provides transparency into your software components, while Sigstore ensures the integrity and authenticity of your artifacts. Together, they form the backbone of a security-first DevSecOps strategy.

    In this article, we’ll explore how these tools work, why they’re critical, and how to implement them effectively in production. Buckle up—this isn’t your average Kubernetes tutorial.

    💡 Pro Tip: Treat your supply chain as code. Just like you version control your application code, version control your supply chain configurations and policies to ensure consistency and traceability.

    Before diving deeper, it’s important to understand that supply chain security is not just a technical challenge but also a cultural one. It requires buy-in from developers, operations teams, and security professionals alike. Let’s explore how SBOM and Sigstore can help bridge these gaps.

    Understanding SBOM: The Foundation of Software Transparency

    Imagine trying to secure a house without knowing what’s inside it. That’s the state of most Kubernetes workloads today—running container images with unknown dependencies, unpatched vulnerabilities, and zero visibility into their origins. This is where SBOM comes in.

    An SBOM is essentially a detailed inventory of all the software components in your application, including libraries, frameworks, and dependencies. Think of it as the ingredient list for your software. It’s not just a compliance checkbox; it’s a critical tool for identifying vulnerabilities and ensuring software integrity.

    Generating an SBOM for your Kubernetes workloads is straightforward. Tools like Syft can scan your container images and emit SBOMs in standard formats such as CycloneDX or SPDX. But here’s the catch: generating an SBOM is only half the battle. Maintaining it and integrating it into your CI/CD pipeline is where the real work begins.

    For example, consider a scenario where a critical vulnerability is discovered in a widely used library like Log4j. Without an SBOM, identifying whether your workloads are affected can take hours or even days. With an SBOM, you can pinpoint the affected components in minutes, drastically reducing your response time.
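    To make that concrete, here is a hedged sketch of querying a CycloneDX SBOM for Log4j components with jq. The inline document stands in for a real Syft-generated sbom.json:

```shell
# Stand-in CycloneDX fragment; in practice sbom.json comes from Syft
cat > sbom.json <<'EOF'
{"components":[{"name":"log4j-core","version":"2.14.1"},{"name":"guava","version":"31.0.1"}]}
EOF
# List any component whose name mentions log4j, with its version
jq -r '.components[] | select(.name | test("log4j")) | "\(.name) \(.version)"' sbom.json
```

    Run the same query across the SBOMs of every deployed image and you have your affected-workload list in minutes.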

    💡 Pro Tip: Always include SBOM generation as part of your build pipeline. This ensures your SBOM stays up-to-date with every code change.

    Here’s an example of generating an SBOM using Syft:

    # Generate an SBOM for a container image
    syft my-container-image:latest -o cyclonedx-json > sbom.json
                

    Once generated, you can use tools like Grype to scan your SBOM for known vulnerabilities:

    # Scan the SBOM for vulnerabilities
    grype sbom.json
                

    Integrating SBOM generation and scanning into your CI/CD pipeline ensures that every build is automatically checked for vulnerabilities. Here’s an example of a Jenkins pipeline snippet that incorporates SBOM generation:

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh 'docker build -t my-container-image:latest .'
                }
            }
            stage('Generate SBOM') {
                steps {
                    sh 'syft my-container-image:latest -o cyclonedx-json > sbom.json'
                }
            }
            stage('Scan SBOM') {
                steps {
                    sh 'grype sbom.json'
                }
            }
        }
    }
                

    By automating these steps, you’re not just reacting to vulnerabilities—you’re proactively preventing them.

    ⚠️ Common Pitfall: Neglecting to update SBOMs when dependencies change can render them useless. Always regenerate SBOMs as part of your CI/CD pipeline to ensure accuracy.

    Sigstore: Simplifying Software Signing and Verification

    Let’s talk about trust. In a Kubernetes environment, you’re deploying container images that could come from anywhere—your developers, third-party vendors, or open-source repositories. How do you know these images haven’t been tampered with? That’s where Sigstore comes in.

    Sigstore is an open-source project designed to make software signing and verification easy. It allows you to sign container images and other artifacts, ensuring their integrity and authenticity. Unlike traditional signing methods, Sigstore uses ephemeral keys and a public transparency log, making it both secure and developer-friendly.

    Here’s how you can use Cosign, a Sigstore tool, to sign and verify container images:

    # Generate a signing key pair (creates cosign.key and cosign.pub)
    cosign generate-key-pair
    
    # Sign a container image with the private key
    cosign sign --key cosign.key my-container-image:latest
    
    # Verify the signature with the public key
    cosign verify --key cosign.pub my-container-image:latest

    When integrated into your Kubernetes workflows, Sigstore ensures that only trusted images are deployed. This is particularly important for preventing supply chain attacks, where malicious actors inject compromised images into your pipeline.

    For example, imagine a scenario where a developer accidentally pulls a malicious image from a public registry. By enforcing signature verification, your Kubernetes cluster can automatically block the deployment of unsigned or tampered images, preventing potential breaches.

    ⚠️ Security Note: Always enforce image signature verification in your Kubernetes clusters. Use admission controllers like Gatekeeper or Kyverno to block unsigned images.

    Here’s an example of configuring a Kyverno policy to enforce image signature verification:

    apiVersion: kyverno.io/v1
    kind: ClusterPolicy
    metadata:
      name: verify-image-signatures
    spec:
      validationFailureAction: Enforce
      rules:
      - name: check-signatures
        match:
          any:
          - resources:
              kinds:
              - Pod
        verifyImages:
        - imageReferences:
          - "registry.example.com/*"
          attestors:
          - entries:
            - keys:
                publicKeys: |-
                  -----BEGIN PUBLIC KEY-----
                  <contents of cosign.pub>
                  -----END PUBLIC KEY-----

    By adopting Sigstore, you’re not just securing your Kubernetes workloads—you’re securing your entire software supply chain.

    💡 Pro Tip: Use Sigstore’s Rekor transparency log to audit and trace the history of your signed artifacts. This adds an extra layer of accountability to your supply chain.

    Implementing a Security-First Approach in Production

    Now that we’ve covered SBOM and Sigstore, let’s talk about implementation. A security-first approach isn’t just about tools; it’s about culture, processes, and automation.

    Here’s a step-by-step guide to integrating SBOM and Sigstore into your CI/CD pipeline:

    • Generate SBOMs for all container images during the build process.
    • Scan SBOMs for vulnerabilities using tools like Grype.
    • Sign container images and artifacts using Sigstore’s Cosign.
    • Enforce signature verification in Kubernetes using admission controllers.
    • Monitor and audit your supply chain regularly for anomalies.
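    Extending the earlier Jenkins snippet, the full flow might look like the sketch below. The registry name, the `COSIGN_KEY` credential, and the `--fail-on high` severity threshold are illustrative choices, not requirements:

```
pipeline {
    agent any
    environment {
        IMAGE = 'registry.example.com/my-app:latest'
    }
    stages {
        stage('Build')         { steps { sh 'docker build -t $IMAGE .' } }
        stage('Generate SBOM') { steps { sh 'syft $IMAGE -o cyclonedx-json > sbom.json' } }
        stage('Scan SBOM')     { steps { sh 'grype sbom.json --fail-on high' } }
        stage('Push and Sign') {
            steps {
                sh 'docker push $IMAGE'
                sh 'cosign sign --key $COSIGN_KEY $IMAGE'
            }
        }
    }
}
```

    Failing the build on high-severity findings before the image is pushed keeps vulnerable artifacts out of the registry entirely.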

    Lessons learned from production implementations include the importance of automation and the need for developer buy-in. If your security processes slow down development, they’ll be ignored. Make security seamless and integrated—it should feel like a natural part of the workflow.

    🔒 Security Reminder: Always test your security configurations in a staging environment before rolling them out to production. Misconfigurations can lead to downtime or worse, security gaps.

    Common pitfalls include neglecting to update SBOMs, failing to enforce signature verification, and relying on manual processes. Avoid these by automating everything and adopting a “trust but verify” mindset.

    Future Trends and Evolving Best Practices

    The world of Kubernetes supply chain security is constantly evolving. Emerging frameworks like SLSA (Supply Chain Levels for Software Artifacts) and automated SBOM generation are pushing the boundaries of what’s possible.

    Automation is playing an increasingly significant role. Tools that integrate SBOM generation, vulnerability scanning, and artifact signing into a single workflow are becoming the norm. This reduces human error and ensures consistency across environments.

    To stay ahead, focus on continuous learning and experimentation. Subscribe to security mailing lists, follow open-source projects, and participate in community discussions. The landscape is changing rapidly, and staying informed is half the battle.

    💡 Pro Tip: Keep an eye on emerging standards like SLSA and SPDX. These frameworks are shaping the future of supply chain security.

    Key Takeaways

    • SBOMs provide transparency into your software components and help identify vulnerabilities.
    • Sigstore simplifies artifact signing and verification, ensuring integrity and authenticity.
    • Integrate SBOM and Sigstore into your CI/CD pipeline for a security-first approach.
    • Automate everything to reduce human error and improve consistency.
    • Stay informed about emerging tools and standards in supply chain security.

    Have questions or horror stories about supply chain security? Drop a comment or ping me on Twitter—I’d love to hear from you. Next week, we’ll dive into securing Kubernetes workloads with Pod Security Standards. Stay tuned!

    📋 Disclosure: Some links in this article are affiliate links. If you purchase through these links, I earn a small commission at no extra cost to you. I only recommend products I’ve personally used or thoroughly evaluated. This helps support orthogonal.info and keeps the content free.
  • Backup & Recovery: Enterprise Security for Homelabs

    Backup & Recovery: Enterprise Security for Homelabs

    Learn how to apply enterprise-grade backup and disaster recovery practices to secure your homelab and protect critical data from unexpected failures.

    Why Backup and Disaster Recovery Matter for Homelabs

    I’ll admit it: I used to think backups were overkill for homelabs. After all, it’s just a personal setup, right? That mindset lasted until the day my RAID array failed spectacularly, taking years of configuration files, virtual machine snapshots, and personal projects with it. It was a painful lesson in how fragile even the most carefully built systems can be.

    Homelabs are often treated as playgrounds for experimentation, but they frequently house critical data—whether it’s family photos, important documents, or the infrastructure powering your self-hosted services. The risks of data loss are very real. Hardware failures, ransomware attacks, accidental deletions, or even natural disasters can leave you scrambling to recover what you’ve lost.

    Disaster recovery isn’t just about backups; it’s about ensuring continuity. A solid disaster recovery plan minimizes downtime, preserves data integrity, and gives you peace of mind. If you’re like me, you’ve probably spent hours perfecting your homelab setup. Why risk losing it all when enterprise-grade practices can be scaled down for home use?

    Another critical reason to prioritize backups is the increasing prevalence of ransomware attacks. Even for homelab users, ransomware can encrypt your data and demand payment for decryption keys. Without proper backups, you may find yourself at the mercy of attackers. Additionally, consider the time and effort you’ve invested in configuring your homelab. Losing that work due to a failure or oversight can be devastating, especially if you rely on your setup for learning, development, or even hosting services for family and friends.

    Think of backups as an insurance policy. You hope you’ll never need them, but when disaster strikes, they’re invaluable. Whether it’s a failed hard drive, a corrupted database, or an accidental deletion, having a reliable backup can mean the difference between a minor inconvenience and a catastrophic loss.

    💡 Pro Tip: Start small. Even a basic external hard drive for local backups is better than no backup at all. You can always expand your strategy as your homelab grows.

    Troubleshooting Common Issues

    One common issue is underestimating the time required to restore data. If your backups are stored on slow media or in the cloud, recovery could take hours or even days. Test your recovery process to ensure it meets your needs. Another issue is incomplete backups—always verify that all critical data is included in your backup plan.

    Enterprise Practices: Scaling Down for Home Use

    In the enterprise world, backup strategies are built around the 3-2-1 rule: three copies of your data, stored on two different media, with one copy offsite. This ensures redundancy and protects against localized failures. Immutable backups—snapshots that cannot be altered—are another key practice, especially in combating ransomware.

    For homelabs, these practices can be adapted without breaking the bank. Here’s how:

    • Three copies: Keep your primary data on your main storage, a secondary copy on a local backup device (like an external drive or NAS), and a third copy offsite (cloud storage or a remote server).
    • Two media types: Use a combination of SSDs, HDDs, or tape drives for local backups, and cloud storage for offsite redundancy.
    • Immutable backups: Many backup tools now support immutable snapshots. Enable this feature to protect against accidental or malicious changes.

    Let’s break this down further. For local backups, a simple USB external drive can suffice for smaller setups. However, if you’re running a larger homelab with multiple virtual machines or containers, consider investing in a NAS (Network Attached Storage) device. NAS devices often support RAID configurations, which provide redundancy in case of disk failure.

    For offsite backups, cloud storage services like Backblaze, Wasabi, or even Google Drive are excellent options. These services are relatively inexpensive and provide the added benefit of geographic redundancy. If you’re concerned about privacy, ensure your data is encrypted before uploading it to the cloud.

    # Example: Append-only ("immutable") backups with Borg
    borg init --encryption=repokey-blake2 /path/to/repo
    # Append-only mode prevents existing archives from being deleted or modified
    borg config /path/to/repo append_only 1
    borg create /path/to/repo::backup-$(date +%Y-%m-%d) /path/to/data
    💡 Pro Tip: Use cloud storage providers that offer free egress for backups. This can save you significant costs if you ever need to restore large amounts of data.

    Troubleshooting Common Issues

    One challenge with offsite backups is bandwidth. Uploading large datasets can take days on a slow internet connection. To mitigate this, prioritize critical data and upload it first. You can also use tools like rsync or rclone to perform incremental backups, which only upload changes.

    Choosing the Right Backup Tools and Storage Solutions

    When it comes to backup software, the options can be overwhelming. For homelabs, simplicity and reliability should be your top priorities. Here’s a quick comparison of popular tools:

    • Veeam: Enterprise-grade backup software with a free version for personal use. Great for virtual machines and complex setups.
    • Borg: A lightweight, open-source backup tool with excellent deduplication and encryption features.
    • Restic: Another open-source option, known for its simplicity and support for multiple storage backends.

    As for storage solutions, you’ll want to balance capacity, speed, and cost. NAS devices like Synology or QNAP are popular for homelabs, offering RAID configurations and easy integration with backup software. External drives are a budget-friendly option but lack redundancy. Cloud storage carries a recurring cost but provides unmatched offsite protection.

    For those with more advanced needs, consider setting up a dedicated backup server. Tools like Proxmox Backup Server or TrueNAS can turn an old PC into a powerful backup appliance. These solutions often include features like deduplication, compression, and snapshot management, making them ideal for homelab enthusiasts.

    # Example: Setting up Restic with Google Drive (via an rclone remote named "remote")
    export RESTIC_REPOSITORY=rclone:remote:backup
    export RESTIC_PASSWORD=yourpassword   # prefer a password manager or --password-file
    restic init
    restic backup /path/to/data
    ⚠️ Security Note: Avoid relying solely on cloud storage for backups. Always encrypt your data before uploading to prevent unauthorized access.

    Troubleshooting Common Issues

    One common issue is compatibility between backup tools and storage solutions. For example, some tools may not natively support certain cloud providers. In such cases, using a middleware like rclone can bridge the gap. Additionally, always test your backups to ensure they’re restorable. A corrupted backup is as bad as no backup at all.

    Automating Backup and Recovery Processes

    Manual backups are a recipe for disaster. Trust me, you’ll forget to run them when life gets busy. Automation ensures consistency and reduces the risk of human error. Most backup tools allow you to schedule recurring backups, so set it and forget it.

    Here’s an example of automating backups with Restic:

    # Initialize a Restic repository
    restic init --repo /path/to/backup --password-file /path/to/password
    
    # Crontab entry (crontab -e): run a backup every day at 02:00
    0 2 * * * restic backup /path/to/data --repo /path/to/backup --password-file /path/to/password --verbose

    Testing recovery is just as important as creating backups. Simulate failure scenarios to ensure your disaster recovery plan works as expected. Restore a backup to a separate environment and verify its integrity. If you can’t recover your data reliably, your backups are useless.
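    A minimal, runnable version of that drill using plain files; substitute your backup tool's restore command (for Restic, `restic restore`) for the `cp` stand-in:

```shell
# Stand-in "live" data and a scratch restore target
mkdir -p data restore
echo "vm-config" > data/vm.conf
# Restore step: replace this cp with your tool's actual restore command
cp -a data/. restore/
# Verify integrity: diff exits non-zero on any mismatch
diff -r data restore && echo "restore verified"
```

    The point of the drill is the diff at the end: a restore that completes but produces different bytes is still a failed restore.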

    Another aspect of automation is monitoring. Tools like Zabbix or Grafana can be configured to alert you if a backup fails. This proactive approach ensures you’re aware of issues before they become critical.

    💡 Pro Tip: Document your recovery steps and keep them accessible. In a real disaster, you won’t want to waste time figuring out what to do.

    Troubleshooting Common Issues

    One common pitfall is failing to account for changes in your environment. If you add new directories or services to your homelab, update your backup scripts accordingly. Another issue is storage exhaustion—automated backups can quickly fill up your storage if old backups aren’t pruned. Use retention policies to manage this.
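    As a toy illustration of a retention policy, here is a keep-the-newest-seven sketch in plain shell; Restic implements the same idea natively with `restic forget --keep-daily 7 --keep-weekly 4 --prune`:

```shell
# Create ten dummy archives; zero-padded names sort chronologically
mkdir -p archives
for i in 01 02 03 04 05 06 07 08 09 10; do
    touch "archives/backup-$i.tar"
done
# Delete the three oldest archives, keeping the newest seven
ls -1 archives | sort | head -n 3 | while read -r old; do
    rm -f "archives/$old"
done
ls archives | wc -l
```

    Without pruning like this, automated backups will eventually exhaust whatever storage you give them.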

    Security Best Practices for Backup Systems

    Backups are only as secure as the systems protecting them. Neglecting security can turn your backups into a liability. Here’s how to keep them safe:

    • Encryption: Always encrypt your backups, both at rest and in transit. Tools like Restic and Borg have built-in encryption features.
    • Access control: Limit access to your backup systems. Use strong authentication methods, such as SSH keys or multi-factor authentication.
    • Network isolation: If possible, isolate your backup systems from the rest of your network to reduce attack surfaces.

    Additionally, monitor your backup systems for unauthorized access or anomalies. Logging and alerting can help you catch issues before they escalate.

    Another important consideration is physical security. If you’re using external drives or a NAS, ensure they’re stored in a safe location. For cloud backups, verify that your provider complies with security standards and offers robust access controls.

    ⚠️ Security Note: Avoid storing encryption keys alongside your backups. If an attacker gains access, they’ll have everything they need to decrypt your data.

    Troubleshooting Common Issues

    One common issue is losing encryption keys or passwords. Without them, your backups are effectively useless. Store keys securely, such as in a password manager. Another issue is misconfigured access controls, which can expose your backups to unauthorized users. Regularly audit permissions to ensure they’re correct.

    Testing Your Disaster Recovery Plan

    Creating backups is only half the battle. If you don’t test your disaster recovery plan, you won’t know if it works until it’s too late. Regular testing ensures that your backups are functional and that you can recover your data quickly and efficiently.

    Start by identifying the critical systems and data you need to recover. Then, simulate a failure scenario. For example, if you’re backing up a virtual machine, try restoring it to a new host. If you’re backing up files, restore them to a different directory and verify their integrity.

    Document the time it takes to complete the recovery process. This information is crucial for setting realistic expectations and identifying bottlenecks. If recovery takes too long, consider optimizing your backup strategy or investing in faster storage solutions.

    💡 Pro Tip: Schedule regular recovery drills, just like fire drills. This keeps your skills sharp and ensures your plan is up to date.

    Troubleshooting Recovery Issues

    One common issue is discovering that your backups are incomplete or corrupted. To prevent this, regularly verify your backups using tools like Restic’s `check` command. Another issue is failing to account for dependencies. For example, restoring a database backup without its associated application files may render it unusable. Always test your recovery process end-to-end.

    Key Takeaways

    • Follow the 3-2-1 rule for redundancy and offsite protection.
    • Choose backup tools and storage solutions that fit your homelab’s needs and budget.
    • Automate backups and test recovery scenarios regularly.
    • Encrypt backups and secure access to your backup systems.
    • Document your disaster recovery plan for quick action during emergencies.
    • Regularly test your disaster recovery plan to ensure it works as expected.

    Have a homelab backup horror story or a tip to share? I’d love to hear it—drop a comment or reach out on Twitter. Next week, we’ll explore how to secure your NAS against ransomware attacks. Stay tuned!

    📋 Disclosure: Some links in this article are affiliate links. If you purchase through these links, I earn a small commission at no extra cost to you. I only recommend products I’ve personally used or thoroughly evaluated. This helps support orthogonal.info and keeps the content free.
  • Penetration Testing Basics for Developers

    Penetration Testing Basics for Developers

    Learn how developers can integrate penetration testing into their workflows to enhance security without relying solely on security teams.

    Why Developers Should Care About Penetration Testing

    Have you ever wondered why security incidents often feel like someone else’s problem until they land squarely in your lap? If you’re like most developers, you probably think of security as the domain of specialized teams or external auditors. But here’s the hard truth: security is everyone’s responsibility, including yours.

    Penetration testing isn’t just for security professionals—it’s a proactive way to identify vulnerabilities before attackers do. By integrating penetration testing into your development workflow, you can catch issues early, improve your coding practices, and build more resilient applications. Think of it as debugging, but for security.

    Beyond identifying vulnerabilities, penetration testing helps developers understand how attackers think. Knowing the common attack vectors—like SQL injection, cross-site scripting (XSS), and privilege escalation—can fundamentally change how you write code. Instead of patching holes after deployment, you’ll start designing systems that are secure by default.

    Consider the real-world consequences of neglecting penetration testing. For example, the infamous Equifax breach in 2017 was caused by an unpatched vulnerability in a web application framework. Had developers proactively tested for such vulnerabilities, the incident might have been avoided. This underscores the importance of embedding security practices early in the development lifecycle.

    Moreover, penetration testing fosters a culture of accountability. When developers take ownership of security, it reduces the burden on dedicated security teams and creates a more collaborative environment. And when incidents do happen, having an incident response playbook ready makes all the difference. This shift in mindset can lead to faster vulnerability resolution and a more secure product overall.

    💡 Pro Tip: Start small by focusing on the most critical parts of your application, such as authentication mechanisms and data storage. Gradually expand your testing scope as you gain confidence.

    Common pitfalls include assuming that penetration testing is a one-time activity. In reality, it should be an ongoing process, especially as your application evolves. Regular testing ensures that new features don’t introduce vulnerabilities and that existing ones are continuously mitigated.

    What Is Penetration Testing?

    Penetration testing, often called “pen testing,” is a simulated attack on a system, application, or network to identify vulnerabilities that could be exploited by malicious actors. Unlike vulnerability scanning, which is automated and focuses on identifying known issues, penetration testing involves manual exploration and exploitation to uncover deeper, more complex flaws.

    Think of penetration testing as hiring a locksmith to break into your house. The goal isn’t just to find unlocked doors but to identify weaknesses in your locks, windows, and even the structural integrity of your walls. Pen testers use a combination of tools, techniques, and creativity to simulate real-world attacks.

    Common methodologies include the OWASP Testing Guide, which outlines best practices for web application security, and the PTES (Penetration Testing Execution Standard), which provides a structured approach to testing. Popular tools like OWASP ZAP, Burp Suite, and Metasploit Framework are staples in the pen tester’s toolkit.

    For developers, understanding the difference between black-box, white-box, and gray-box testing is crucial. Black-box testing simulates an external attacker with no prior knowledge of the system, while white-box testing involves full access to the application’s source code and architecture. Gray-box testing strikes a balance, offering partial knowledge to simulate an insider threat or a semi-informed attacker.

    Here’s an example of using Metasploit Framework to test for a known vulnerability in a web application:

    # Launch the Metasploit console
    msfconsole
    
    # Search for exploit modules (the term below is a placeholder --
    # substitute the application or CVE you are authorized to test)
    search type:exploit webapp_vulnerability
    
    # Load a module from the search results (placeholder path)
    use exploit/multi/http/webapp_vulnerability
    
    # Point the module at your authorized test target
    set RHOSTS target-ip-address
    set RPORT target-port
    
    # Execute the exploit
    run
    💡 Pro Tip: Familiarize yourself with the OWASP Top Ten vulnerabilities. These are the most common security risks for web applications and a great starting point for penetration testing.

    One common pitfall is relying solely on automated tools. While tools like OWASP ZAP can identify low-hanging fruit, they often miss complex vulnerabilities that require manual testing and creative thinking. Always complement automated scans with manual exploration.

    Getting Started: Penetration Testing for Developers

    Before diving into penetration testing, it’s crucial to set up a safe testing environment. Never test on production systems—unless you enjoy angry emails from your boss. Use staging servers or local environments that mimic production as closely as possible. Docker containers and virtual machines are great options for isolating your tests.

    Start with open-source tools like OWASP ZAP and Burp Suite. These tools are beginner-friendly and packed with features for web application testing. For example, OWASP ZAP can automatically scan your application for vulnerabilities, while Burp Suite allows you to intercept and manipulate HTTP requests to test for issues like authentication bypass.

    Here’s a simple example of using OWASP ZAP to scan a local web application:

    # Start OWASP ZAP in headless (daemon) mode; disabling the API key
    # is acceptable only for local testing on your own machine
    zap.sh -daemon -port 8080 -config api.disablekey=true
    
    # Spider the target first so it appears in ZAP's site tree
    curl "http://localhost:8080/JSON/spider/action/scan/?url=http://your-local-app.com"
    
    # Then run an active scan against it
    curl "http://localhost:8080/JSON/ascan/action/scan/?url=http://your-local-app.com&recurse=true"

    Once you’ve mastered basic tools, try your hand at manual testing techniques. SQL injection and XSS are great starting points because they’re common and impactful. For example, testing for SQL injection might involve entering malicious payloads like `' OR '1'='1` into input fields to see if the application exposes sensitive data.
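    To see why that payload works, here's a self-contained Python sketch using an in-memory SQLite database (the table and values are made up for illustration). String concatenation lets the payload rewrite the WHERE clause; a parameterized query binds it as a literal string:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "' OR '1'='1"

# Vulnerable: interpolation turns the WHERE clause into
# name = '' OR '1'='1', which matches every row.
vulnerable = f"SELECT name FROM users WHERE name = '{payload}'"
print(conn.execute(vulnerable).fetchall())   # [('alice',)]

# Safe: the placeholder binds the payload as data, not SQL.
safe = conn.execute("SELECT name FROM users WHERE name = ?", (payload,)).fetchall()
print(safe)                                  # []
```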

    Another practical example is testing for XSS vulnerabilities. Use payloads like `<script>alert('XSS')</script>` in input fields to check if the application improperly executes user-provided scripts.
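    On the defensive side, the standard fix is escaping on output. A quick Python sketch with the standard library (the same idea applies in whatever templating layer you use):

```python
import html

payload = "<script>alert('XSS')</script>"

# Escaped output is rendered by the browser as inert text,
# so the script tag never executes.
escaped = html.escape(payload)
print(escaped)  # &lt;script&gt;alert(&#x27;XSS&#x27;)&lt;/script&gt;
```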

    💡 Pro Tip: Always document your findings during testing. Include screenshots, payloads, and steps to reproduce issues. This makes it easier to communicate vulnerabilities to your team.

    Edge cases to consider include testing for vulnerabilities in third-party libraries or APIs integrated into your application. These components are often overlooked but can introduce significant risks if not properly secured.

    Integrating Penetration Testing into Development Workflows

    Penetration testing doesn’t have to be a standalone activity. By integrating it into your CI/CD pipeline, you can automate security checks and catch vulnerabilities before they make it to production. Tools like OWASP ZAP and Nikto can be scripted to run during build processes, providing quick feedback on security issues.

    Here’s an example of automating OWASP ZAP in a CI/CD pipeline:

    steps:
      - name: "Run OWASP ZAP Scan"
        run: |
          # Start ZAP in the background and give it time to initialize
          zap.sh -daemon -port 8080 -config api.disablekey=true &
          sleep 30
          curl -X POST http://localhost:8080/JSON/ascan/action/scan/ \
          -d 'url=http://your-app.com&recurse=true'
      - name: "Check Scan Results"
        run: |
          curl http://localhost:8080/JSON/ascan/view/scanProgress/

    Collaboration with security teams is another key aspect. Developers often have deep knowledge of the application’s functionality, while security teams bring expertise in attack methodologies. By working together, you can prioritize fixes based on risk and ensure that vulnerabilities are addressed effectively.

    ⚠️ Security Note: Be cautious when automating penetration testing. False positives can create noise, and poorly configured tests can disrupt your pipeline. Always validate results manually before taking action.

    One edge case to consider is testing applications with dynamic content or heavy reliance on JavaScript frameworks. Tools like Selenium or Puppeteer can be used alongside penetration testing tools to simulate user interactions and uncover vulnerabilities in dynamic elements.

    Resources to Build Your Penetration Testing Skills

    If you’re new to penetration testing, there’s no shortage of resources to help you get started. Online platforms like Hack The Box and TryHackMe offer hands-on challenges that simulate real-world scenarios. These are great for learning practical skills in a controlled environment.

    Certifications like the Offensive Security Certified Professional (OSCP) and Certified Ethical Hacker (CEH) are excellent for deepening your knowledge and proving your expertise. While these certifications require time and effort, they’re well worth it for developers who want to specialize in security.

    Finally, don’t underestimate the value of community forums and blogs. Websites like Reddit’s r/netsec and security-focused blogs provide insights into the latest tools, techniques, and vulnerabilities. Staying connected to the community ensures you’re always learning and adapting.

    🔒 Security Reminder: Always practice ethical hacking. Only test systems you own or have explicit permission to test. Unauthorized testing can lead to legal consequences.

    Another valuable resource is open-source vulnerable applications like DVWA (Damn Vulnerable Web Application) and Juice Shop. These intentionally insecure applications are designed for learning and provide a safe environment to practice penetration testing techniques.

    Advanced Penetration Testing Techniques

    As you gain experience, you can explore advanced penetration testing techniques like privilege escalation, lateral movement, and post-exploitation. These techniques simulate how attackers move within a compromised system to gain further access or control.

    For example, privilege escalation might involve exploiting misconfigured file permissions or outdated software to gain administrative access. Tools like PowerShell Empire or BloodHound can help identify and exploit these weaknesses.

    Another advanced technique is testing for vulnerabilities in APIs. Use tools like Postman or Insomnia to send crafted requests and analyze responses for sensitive data exposure or improper authentication mechanisms.

    💡 Pro Tip: Keep up with emerging threats by following security researchers on Twitter or subscribing to vulnerability databases like CVE Details.

    Key Takeaways

    • Security is a shared responsibility—developers play a crucial role in identifying and mitigating vulnerabilities.
    • Penetration testing helps you understand attack vectors and improve your coding practices.
    • Start with open-source tools like OWASP ZAP and Burp Suite, and practice in safe environments.
    • Integrate penetration testing into your CI/CD pipeline for automated security checks.
    • Leverage online platforms, certifications, and community resources to build your skills.
    • Explore advanced techniques like privilege escalation and API testing as you gain experience.

    Have you tried penetration testing as a developer? Share your experiences or horror stories—I’d love to hear them! Next week, we’ll explore securing APIs against common attacks. Stay tuned!

    📋 Disclosure: Some links in this article are affiliate links. If you purchase through these links, I earn a small commission at no extra cost to you. I only recommend products I’ve personally used or thoroughly evaluated. This helps support orthogonal.info and keeps the content free.
  • I Built HashForge: A Privacy-First Hash Generator That Shows All Algorithms at Once

    I’ve been hashing things for years — verifying file downloads, generating checksums for deployments, creating HMAC signatures for APIs. And every single time, I end up bouncing between three or four browser tabs because no hash tool does everything I need in one place.

    So I built HashForge.

    The Problem with Existing Hash Tools

    Here’s what frustrated me about the current landscape. Most online hash generators force you to pick one algorithm at a time. Need MD5 and SHA-256 for the same input? That’s two separate page loads. Browserling’s tools, for example, have a different page for every algorithm — MD5 on one URL, SHA-256 on another, SHA-512 on yet another. You’re constantly copying, pasting, and navigating.

    Then there’s the privacy problem. Some hash generators process your input on their servers. For a tool that developers use with sensitive data — API keys, passwords, config files — that’s a non-starter. Your input should never leave your machine.

    And finally, most tools feel like they were built in 2010 and never updated. No dark mode, no mobile responsiveness, no keyboard shortcuts. They work, but they feel dated.

    What Makes HashForge Different

    All algorithms at once. Type or paste text, and you instantly see MD5, SHA-1, SHA-256, SHA-384, and SHA-512 hashes side by side. No page switching, no dropdown menus. Every algorithm, every time, updated in real-time as you type.

    Four modes in one tool. HashForge isn’t just a text hasher. It has four distinct modes:

    • Text mode: Real-time hashing as you type. Supports hex, Base64, and uppercase hex output.
    • File mode: Drag-and-drop any file — PDFs, ISOs, executables, anything. The file never leaves your browser. There’s a progress indicator for large files, and multi-gigabyte files are hashed in chunks so the browser stays responsive.
    • HMAC mode: Enter a secret key and message to generate HMAC signatures for SHA-1, SHA-256, SHA-384, and SHA-512. Essential for API development and webhook verification.
    • Verify mode: Paste two hashes and instantly compare them. Uses constant-time comparison to prevent timing attacks — the same approach used in production authentication systems.

    100% browser-side processing. Nothing — not a single byte — leaves your browser. HashForge uses the Web Crypto API for SHA algorithms and a pure JavaScript implementation for MD5 (since the Web Crypto API doesn’t support MD5). There’s no server, no analytics endpoint collecting your inputs, no “we process your data according to our privacy policy” fine print. Your data stays on your device, period.

    Technical Deep Dive

    HashForge is a single HTML file — 31KB total with all CSS and JavaScript inline. Zero external dependencies. No frameworks, no build tools, no CDN requests. This means:

    • First paint under 100ms on any modern browser
    • Works offline after the first visit (it’s a PWA with a service worker)
    • No supply chain risk — there’s literally nothing to compromise

    The MD5 Challenge

    The Web Crypto API supports SHA-1, SHA-256, SHA-384, and SHA-512 natively, but not MD5. Since MD5 is still widely used for file verification (despite being cryptographically broken), I implemented it in pure JavaScript. The implementation handles the full MD5 specification — message padding, word array conversion, and all four rounds of the compression function.

    Is MD5 secure? No. Should you use it for passwords? Absolutely not. But for verifying that a file downloaded correctly? It’s fine, and millions of software projects still publish MD5 checksums alongside SHA-256 ones.
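    That dual-publishing practice is easy to mirror locally. Here's a Python sketch (not HashForge's JavaScript implementation) that streams a file through MD5 and SHA-256 in a single pass, the same way you'd verify a download:

```python
import hashlib

def file_digests(path: str, chunk_size: int = 1 << 20) -> dict:
    """Hash a file with MD5 and SHA-256 in one pass, reading in
    chunks so large files never have to fit in memory."""
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            md5.update(chunk)
            sha256.update(chunk)
    return {"md5": md5.hexdigest(), "sha256": sha256.hexdigest()}
```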

    Constant-Time Comparison

    The hash verification mode uses constant-time comparison. In a naive string comparison, the function returns as soon as it finds a mismatched character — which means comparing “abc” against “axc” is faster than comparing “abc” against “abd”. An attacker could theoretically use this timing difference to guess a hash one character at a time.

    HashForge’s comparison XORs every byte of both hashes and accumulates the result, then checks if the total is zero. The operation takes the same amount of time regardless of where (or whether) the hashes differ. This is the same pattern used in OpenSSL’s CRYPTO_memcmp and Node.js’s crypto.timingSafeEqual.
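    The XOR-and-accumulate pattern is simple enough to sketch. Here's the idea in Python (HashForge itself is JavaScript; in Python production code you'd just use the stdlib's hmac.compare_digest):

```python
def constant_time_equal(a: str, b: str) -> bool:
    """Compare two digests without early exit: XOR each byte pair and
    OR the results together, so the loop's duration never depends on
    where (or whether) the inputs differ."""
    left, right = a.encode(), b.encode()
    if len(left) != len(right):
        return False
    acc = 0
    for x, y in zip(left, right):
        acc |= x ^ y
    return acc == 0
```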

    PWA and Offline Support

    HashForge registers a service worker that caches the page on first visit. After that, it works completely offline — no internet required. The service worker uses a network-first strategy: it tries to fetch the latest version, falls back to cache if you’re offline. This means you always get updates when connected, but never lose functionality when you’re not.

    Accessibility

    Every interactive element has proper ARIA attributes. The tab navigation follows the WAI-ARIA Tabs Pattern — arrow keys move between tabs, Home/End jump to first/last. There’s a skip-to-content link for screen reader users. All buttons have visible focus states. Keyboard shortcuts (Ctrl+1 through Ctrl+4) switch between modes.

    Real-World Use Cases

    1. Verifying software downloads. You download an ISO and the website provides a SHA-256 checksum. Drop the file into HashForge’s File mode, copy the SHA-256 output, paste it into Verify mode alongside the published checksum. Instant verification.

    2. API webhook signature verification. Stripe, GitHub, and Slack all use HMAC-SHA256 to sign webhooks. When debugging webhook handlers, you can use HashForge’s HMAC mode to manually compute the expected signature and compare it against what you’re receiving. No need to write a throwaway script.

    3. Generating content hashes for ETags. Building a static site? Hash your content to generate ETags for HTTP caching. Paste the content into Text mode, grab the SHA-256, and you have a cache key.

    4. Comparing database migration checksums. After running a migration, hash the schema dump and compare it across environments. HashForge’s Verify mode makes this a two-paste operation.

    5. Quick password hash lookups. Not for security — but when you’re debugging and need to quickly check if two plaintext values produce the same hash (checking for normalization issues, encoding problems, etc.).
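    For use case 2, the server-side half of the workflow looks like this. A minimal Python sketch (function names are mine, not any provider's SDK) of computing and verifying an HMAC-SHA256 webhook signature:

```python
import hashlib
import hmac

def sign_payload(secret: bytes, body: bytes) -> str:
    """Hex HMAC-SHA256 signature over the raw request body."""
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_signature(secret: bytes, body: bytes, received: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_payload(secret, body), received)
```

    Note that providers wrap the hex digest differently — GitHub, for instance, prefixes it with sha256= in the X-Hub-Signature-256 header — so strip any prefix before comparing.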

    What I Didn’t Build

    I deliberately left out some features that other tools include:

    • No bcrypt/scrypt/argon2. These are password hashing algorithms, not general-purpose hash functions. They’re intentionally slow and have different APIs. Mixing them in would confuse the purpose of the tool.
    • No server-side processing. Some tools offer an “API” where you POST data and get hashes back. Why? The browser can do this natively.
    • No accounts or saved history. Hash a thing, get the result, move on. If you need to save it, copy it. Simple tools should be simple.

    Try It

    HashForge is free, open-source, and runs entirely in your browser. Try it at hashforge.orthogonal.info.

    If you find it useful, buy me a coffee — it helps me keep building privacy-first tools.

    For developers: the source is on GitHub. It’s a single HTML file, so feel free to fork it, self-host it, or tear it apart to see how it works.

    Looking for more browser-based dev tools? Check out QuickShrink (image compression), PixelStrip (EXIF removal), and TypeFast (text snippets). All free, all private, all single-file.

    Looking for a great mechanical keyboard to speed up your development workflow? I’ve been using one for years and the tactile feedback genuinely helps with coding sessions. The Keychron K2 is my daily driver — compact 75% layout, hot-swappable switches, and excellent build quality. Also worth considering: a solid USB-C hub makes the multi-monitor developer setup much cleaner.

  • I Built JSON Forge: A Privacy-First JSON Formatter That Runs Entirely in Your Browser

    Last week I needed to debug a nested API response — the kind with five levels of objects, arrays inside arrays, and keys that look like someone fell asleep on the keyboard. Simple enough task. I just needed a JSON formatter.

    So I opened the first Google result: jsonformatter.org. Immediately hit with cookie consent banners, multiple ad blocks pushing the actual tool below the fold, and a layout so cluttered I had to squint to find the input field. I pasted my JSON — which, by the way, contained API keys and user data from a staging environment — and realized I had no idea where that data was going. Their privacy policy? Vague at best.

    Next up: JSON Editor Online. Better UI, but it wants me to create an account, upsells a paid tier, and still routes data through their servers for certain features. Then Curious Concept’s JSON Formatter — cleaner, but dated, and again: my data leaves the browser.

    I closed all three tabs and thought: I’ll just build my own.

    Introducing JSON Forge

    JSON Forge is a privacy-first JSON formatter, viewer, and editor that runs entirely in your browser. No servers. No tracking. No accounts. Your data never leaves your machine — period.

    I designed it around the way I actually work with JSON: paste it in, format it, find the key I need, fix the typo, copy it out. Keyboard-driven, zero friction, fast. Here’s what it does:

    • Format & Minify — One-click pretty-print or compact output, with configurable indentation
    • Sort Keys — Alphabetical key sorting for cleaner diffs and easier scanning
    • Smart Auto-Fix — Handles trailing commas, unquoted keys, single quotes, and other common JSON sins that break strict parsers
    • Dual View: Code + Tree — Full syntax-highlighted code editor on the left, collapsible tree view on the right with resizable panels
    • JSONPath Navigator — Query your data with JSONPath expressions. Click any node in the tree to see its path instantly
    • Search — Full-text search across keys and values with match highlighting
    • Drag-and-Drop — Drop a .json file anywhere on the page
    • Syntax Highlighting — Color-coded strings, numbers, booleans, and nulls
    • Dark Mode — Because of course
    • Mobile Responsive — Works on tablets and phones when you need it
    • Keyboard Shortcuts — Ctrl+Shift+F to format, Ctrl+Shift+M to minify, Ctrl+Shift+S to sort — the workflow stays in your hands
    • PWA with Offline Support — Install it as an app, use it on a plane

    Why Client-Side Matters More Than You Think

    Here’s the thing about JSON formatters — people paste everything into them. API responses with auth tokens. Database exports with PII. Webhook payloads with customer data. Configuration files with secrets. We’ve all done it. I’ve done it a hundred times without thinking twice.

    Most online JSON tools process your input on their servers. Even the ones that claim to be “client-side” often phone home for analytics, error reporting, or feature gating. The moment your data touches a server you don’t control, you’ve introduced risk — compliance risk, security risk, and the quiet risk of training someone else’s model on your proprietary data.

    JSON Forge processes everything with JavaScript in your browser tab. Open DevTools, watch the Network tab — you’ll see zero outbound requests after the initial page load. I’m not asking you to trust my word; I’m asking you to verify it yourself. The code is right there.

    The Single-File Architecture

    One of the more unusual decisions I made: JSON Forge is a single HTML file. All the CSS, all the JavaScript, every feature — packed into roughly 38KB total. No build step. No npm install. No webpack config. No node_modules black hole.

    Why? A few reasons:

    1. Portability. You can save the file to your desktop and run it offline forever. Email it to a colleague. Put it on a USB drive. It just works.
    2. Auditability. One file means anyone can read the entire source in an afternoon. No dependency trees to trace, no hidden packages, no supply chain risk. Zero dependencies means zero CVEs from upstream.
    3. Performance. No framework overhead. No virtual DOM diffing. No hydration step. It loads instantly and runs at the speed of vanilla JavaScript.
    4. Longevity. Frameworks come and go. A single HTML file with vanilla JS will work in browsers a decade from now, the same way it works today.

    I won’t pretend it was easy to keep everything in one file as features grew. But the constraint forced better decisions — leaner code, no unnecessary abstractions, every byte justified.

    The Privacy-First Toolkit

    JSON Forge is actually part of a broader philosophy I’ve been building around: developer tools that respect your data by default. If you share that mindset, you might also find these useful:

    • QuickShrink — A browser-based image compressor. Resize and compress images without uploading them anywhere. Same client-side architecture.
    • PixelStrip — Strips EXIF metadata from photos before you share them. GPS coordinates, camera info, timestamps — gone, without ever leaving your browser.
    • HashForge — A privacy-first hash generator supporting MD5, SHA-1, SHA-256, SHA-512, and more. Hash files and text locally with zero server involvement.

    Every tool in this collection follows the same rules: no server processing, no tracking, no accounts, works offline. The way developer tools should be.

    What’s Under the Hood

    For the technically curious, here’s a peek at how some of the features work:

    The auto-fix engine runs a series of regex-based transformations and heuristic passes before attempting JSON.parse(). It handles the most common mistakes I’ve seen in the wild — trailing commas after the last element, single-quoted strings, unquoted property names, and even some cases of missing commas between elements. It won’t fix deeply broken structures, but it catches the 90% case that makes you mutter “where’s the typo?” for ten minutes.
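    To make the idea concrete, here's a heavily simplified Python sketch of that kind of pre-parse pass (JSON Forge's actual engine is JavaScript and handles more cases; these naive regexes will mangle quotes inside string values, which is exactly why the real thing layers heuristics on top):

```python
import json
import re

def autofix(text: str) -> str:
    """Naive repairs for three common JSON mistakes (sketch only)."""
    fixed = re.sub(r",\s*([}\]])", r"\1", text)                        # trailing commas
    fixed = re.sub(r"'([^']*)'", r'"\1"', fixed)                       # single quotes
    fixed = re.sub(r"([{,]\s*)([A-Za-z_]\w*)\s*:", r'\1"\2":', fixed)  # unquoted keys
    return fixed

broken = "{name: 'alice', tags: ['a', 'b',],}"
print(json.loads(autofix(broken)))  # {'name': 'alice', 'tags': ['a', 'b']}
```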

    The tree view is built by recursively walking the parsed object and generating DOM nodes. Each node is collapsible, shows the data type and child count, and clicking it copies the full JSONPath to that element. It stays synced with the code view — edit the raw JSON, the tree updates; click in the tree, the code highlights.

    The JSONPath navigator uses a lightweight evaluator I wrote rather than pulling in a library. It supports bracket notation, dot notation, recursive descent ($..), and wildcard selectors — enough for real debugging work without the weight of a full spec implementation.
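    As an illustration of how little code a basic evaluator needs, here's a Python sketch that resolves plain dot/bracket paths (the navigator described above also handles $.. and wildcards, which this toy version does not):

```python
import re

def jsonpath_get(data, path: str):
    """Resolve a simple JSONPath like $.users[0].name against parsed JSON."""
    current = data
    for key, index in re.findall(r"\.?([A-Za-z_]\w*)|\[(\d+)\]", path.lstrip("$")):
        current = current[int(index)] if index else current[key]
    return current

doc = {"users": [{"name": "alice"}, {"name": "bob"}]}
print(jsonpath_get(doc, "$.users[1].name"))  # bob
```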

    Developer Setup & Gear

    I spend most of my day staring at JSON, logs, and API responses. If you’re the same, investing in your workspace makes a real difference. Here’s what I use and recommend:

    • LG 27″ 4K UHD Monitor — Sharp text, accurate colors, and enough resolution to have a code editor, tree view, and terminal side by side without squinting.
    • Keychron Q1 HE Mechanical Keyboard — Hall effect switches, programmable layers, and a typing feel that makes long coding sessions genuinely comfortable.
    • Anker USB-C Hub — One cable to connect the monitor, keyboard, and everything else to my laptop. Clean desk, clean mind.

    (Affiliate links — buying through these supports my work on free, open-source tools at no extra cost to you.)

    Try It, Break It, Tell Me What’s Missing

    JSON Forge is free, open, and built for developers who care about their data. I use it daily — it’s replaced every other JSON tool in my workflow. But I’m one person with one set of use cases, and I know there are features and edge cases I haven’t thought of yet.

    Give it a try at orthogonal.info/json-forge. Paste in the gnarliest JSON you’ve got. Try the auto-fix on something that’s almost-but-not-quite valid. Explore the tree view on a deeply nested response. Install it as a PWA and use it offline.

    If something breaks, if you want a feature, or if you just want to say hey — I’d love to hear from you. And if JSON Forge saves you even five minutes of frustration, consider buying me a coffee. It keeps the lights on and the tools free. ☕

  • TeamPCP Supply Chain Attacks on Trivy, KICS, and LiteLLM — Full Timeline and How to Protect Your CI/CD Pipeline

    On March 17, 2026, the open-source security ecosystem experienced what I consider the most sophisticated supply chain attack since SolarWinds. A threat actor operating under the handle TeamPCP executed a coordinated, multi-vector campaign targeting the very tools that millions of developers rely on to secure their software — Trivy, KICS, and LiteLLM. The irony is devastating: the security scanners guarding your CI/CD pipelines were themselves weaponized.

    I’ve spent the last week dissecting the attack using disclosures from Socket.dev and Wiz.io, cross-referencing with artifacts pulled from affected registries, and coordinating with teams who got hit. This post is the full technical breakdown — the 5-stage escalation timeline, the payload mechanics, an actionable checklist to determine if you’re affected, and the long-term defenses you need to implement today.

    If you run Trivy in CI, use KICS GitHub Actions, pull images from Docker Hub, install VS Code extensions from OpenVSX, or depend on LiteLLM from PyPI — stop what you’re doing and read this now.

    The 5-Stage Attack Timeline

    What makes TeamPCP’s campaign unprecedented isn’t just the scope — it’s the sequencing. Each stage was designed to leverage trust established by the previous one, creating a cascading chain of compromise that moved laterally across entirely different package ecosystems. Here’s the full timeline as reconstructed from Socket.dev’s and Wiz.io’s published analyses.

    Stage 1 — Trivy Plugin Poisoning (Late February 2026)

    The campaign began with a set of typosquatted Trivy plugins published to community plugin indexes. Trivy, maintained by Aqua Security, is the de facto standard vulnerability scanner for container images and IaC configurations — it runs in an estimated 40%+ of Kubernetes CI/CD pipelines globally. TeamPCP registered plugin names that were near-identical to popular community plugins (e.g., trivy-plugin-referrer vs. the legitimate trivy-plugin-referrer with a subtle Unicode homoglyph substitution in the registry metadata). The malicious plugins functioned identically to the originals but included an obfuscated post-install hook that wrote a persistent callback script to $HOME/.cache/trivy/callbacks/.

    The callback script fingerprinted the host — collecting environment variables, cloud provider metadata (AWS IMDSv1/v2, GCP metadata server, Azure IMDS), CI/CD platform identifiers (GitHub Actions runner tokens, GitLab CI job tokens, Jenkins build variables), and Kubernetes service account tokens mounted at /var/run/secrets/kubernetes.io/serviceaccount/token. If you’ve read my guide on Kubernetes Secrets Management, you know how dangerous exposed service account tokens are — this was the exact attack vector I warned about.

    Stage 2 — Docker Hub Image Tampering (Early March 2026)

    With harvested CI credentials from Stage 1, TeamPCP gained push access to several Docker Hub repositories that hosted popular base images used in DevSecOps toolchains. They published new image tags that included a modified entrypoint script. The tampering was surgical: the images were rebuilt so that every layer kept its original sha256 digest except the final CMD/ENTRYPOINT layer, making casual inspection with docker history or even dive unlikely to flag the change.

    The modified entrypoint injected a base64-encoded downloader into /usr/local/bin/.health-check, disguised as a container health monitoring agent. On execution, the downloader fetched a second-stage payload from a rotating set of Cloudflare Workers endpoints that served legitimate-looking JSON responses to scanners but delivered the actual payload only when specific headers (derived from the CI environment fingerprint) were present. This is a textbook example of why SBOM and Sigstore verification aren’t optional — they’re survival equipment.

    Stage 3 — KICS GitHub Action Compromise (March 10–12, 2026)

    This stage represented the most aggressive escalation. KICS (Keeping Infrastructure as Code Secure) is Checkmarx’s open-source IaC scanner, widely used via its official GitHub Action. TeamPCP leveraged compromised maintainer credentials (obtained via credential stuffing from a separate, unrelated breach) to push a backdoored release of the checkmarx/kics-github-action. The malicious version (tagged as a patch release) modified the Action’s entrypoint.sh to exfiltrate the GITHUB_TOKEN and any secrets passed as inputs.

    Because GitHub Actions tokens have write access to the repository by default (unless explicitly scoped with permissions:), TeamPCP used these tokens to open stealth pull requests in downstream repositories — injecting trojanized workflow files that would persist even after the KICS Action was reverted. Socket.dev’s analysis identified over 200 repositories that received these malicious PRs within a 48-hour window. This is exactly the kind of lateral movement that GitOps security patterns with signed commits and branch protection would have mitigated.

    Stage 4 — OpenVSX Malicious Extensions (March 13–15, 2026)

    While Stages 1–3 targeted CI/CD pipelines, Stage 4 pivoted to developer workstations. TeamPCP published a set of VS Code extensions to the OpenVSX registry (the open-source alternative to Microsoft’s marketplace, used by VSCodium, Gitpod, Eclipse Theia, and other editors). The extensions masqueraded as enhanced Trivy and KICS integration tools — “Trivy Lens Pro,” “KICS Inline Fix,” and similar names designed to attract developers already dealing with the fallout from the earlier stages.

    Once installed, the extensions used VS Code’s vscode.workspace.fs API to read .env files, .git/config (for remote URLs and credentials), SSH keys in ~/.ssh/, cloud CLI credential files (~/.aws/credentials, ~/.kube/config, ~/.azure/), and Docker config at ~/.docker/config.json. The exfiltration was performed via seemingly innocent HTTPS requests to a domain disguised as a telemetry endpoint. This is a stark reminder that zero trust isn’t just a network architecture — it applies to your local development environment too.

    Stage 5 — LiteLLM PyPI Package Compromise (March 16–17, 2026)

    The final stage targeted the AI/ML toolchain. LiteLLM, a popular Python library that provides a unified interface for calling 100+ LLM APIs, was compromised via a dependency confusion attack on PyPI. TeamPCP published litellm-proxy and litellm-utils packages that exploited pip’s dependency resolution to install alongside or instead of the legitimate litellm package in certain configurations (particularly when using --extra-index-url pointing to private registries).

    The malicious packages included a setup.py with an install class override that executed during pip install, harvesting API keys for OpenAI, Anthropic, Azure OpenAI, AWS Bedrock, and other LLM providers from environment variables and configuration files. Given that LLM API keys often have minimal scoping and high rate limits, the financial impact of this stage alone was significant — multiple organizations reported unexpected API bills exceeding $50,000 within hours.

    Payload Mechanism: Technical Breakdown

    Across all five stages, TeamPCP used a consistent payload architecture that reveals a high level of operational maturity:

    • Multi-stage loading: Initial payloads were minimal dropper scripts (under 200 bytes in most cases) that fetched the real payload only after environment fingerprinting confirmed the target was a high-value CI/CD system or developer workstation — not a sandbox or researcher’s honeypot.
    • Environment-aware delivery: The C2 infrastructure used Cloudflare Workers that inspected request headers and TLS fingerprints. Payloads were delivered only when the User-Agent, source IP range (matching known CI provider CIDR blocks), and a custom header derived from the environment fingerprint all matched expected values. Researchers attempting to retrieve payloads from clean environments received benign JSON responses.
    • Fileless persistence: On Linux CI runners, the payload operated entirely in memory using memfd_create syscalls, leaving no artifacts on disk for traditional file-based scanners. On macOS developer workstations, it used launchd plist files with randomized names in ~/Library/LaunchAgents/.
    • Exfiltration via DNS: Stolen credentials were exfiltrated using DNS TXT record queries to attacker-controlled domains — a technique that bypasses most egress firewalls and HTTP-layer monitoring. The data was chunked, encrypted with a per-target AES-256 key derived from the machine fingerprint, and encoded as subdomain labels. If you have security monitoring in place, check your DNS logs immediately.
    • Anti-analysis: The payload checked for common analysis tools (strace, ltrace, gdb, frida) and virtualization indicators (/proc/cpuinfo flags, DMI strings) before executing. If any were detected, it self-deleted and exited cleanly.

    Are You Affected? — Incident Response Checklist

    Run through this checklist now. Don’t wait for your next sprint planning session — this is a drop-everything-and-check situation.

    Trivy Plugin Check

    # List installed Trivy plugins and verify checksums
    trivy plugin list
    ls -la $HOME/.cache/trivy/callbacks/
    # If the callbacks directory exists with ANY files, assume compromise
    sha256sum $(which trivy)
    # Compare against official checksums at github.com/aquasecurity/trivy/releases

    Docker Image Verification

    # Verify image signatures with cosign
    cosign verify --key cosign.pub your-registry/your-image:tag
    # Check for unexpected entrypoint modifications
    docker inspect --format='{{.Config.Entrypoint}} {{.Config.Cmd}}' your-image:tag
    # Look for the hidden health-check binary
    docker run --rm --entrypoint=/bin/sh your-image:tag -c "ls -la /usr/local/bin/.health*"

    KICS GitHub Action Audit

    # Search your workflow files for KICS action references
    grep -r "checkmarx/kics-github-action" .github/workflows/
    # Check if you're pinning to a SHA or a mutable tag
    # SAFE: uses: checkmarx/kics-github-action@a4f3b...  (SHA pin)
    # UNSAFE: uses: checkmarx/kics-github-action@v2  (mutable tag)
    # Review recent PRs for unexpected workflow file changes
    gh pr list --state all --limit 50 --json title,author,files

    VS Code Extension Audit

    # List all installed extensions
    code --list-extensions --show-versions
    # Search for the known malicious extension IDs
    code --list-extensions | grep -iE "trivy.lens|kics.inline|trivypro|kicsfix"
    # Check for unexpected LaunchAgents (macOS)
    ls -la ~/Library/LaunchAgents/ | grep -v "com.apple"

    LiteLLM / PyPI Check

    # Check for the malicious packages
    pip list | grep -iE "litellm-proxy|litellm-utils"
    # If found, IMMEDIATELY rotate all LLM API keys
    # Check pip install logs for unexpected setup.py execution
    pip install --log pip-audit.log litellm --dry-run
    # Audit your requirements files for extra-index-url configurations
    grep -r "extra-index-url" requirements*.txt pip.conf setup.cfg pyproject.toml

    DNS Exfiltration Check

    # If you have DNS query logging enabled, search for high-entropy subdomain queries
    # The exfiltration domains used patterns like:
    #   [base64-chunk].t1.teampcp[.]xyz
    #   [base64-chunk].mx.pcpdata[.]top
    # Check your DNS resolver logs for any queries to these TLDs with long subdomains
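As a concrete starting point, here is a hypothetical grep over a toy resolver log. The sample log lines are invented and the pattern is a rough heuristic; tune both to your actual DNS logging format. The domain shapes come from the IOC patterns above.

```shell
# Toy resolver log: one invented exfil-style query, one benign query
cat > /tmp/dns-sample.log <<'EOF'
1742200000 query A aGVsbG8td29ybGQtZXhmaWwtY2h1bms.t1.teampcp.xyz
1742200001 query A www.example.com
EOF

# Flag queries containing a long base64-ish label (24+ chars) before a dot
grep -E '(^|[ .])[A-Za-z0-9+/=_-]{24,}\.' /tmp/dns-sample.log
```

On a real resolver log you would additionally filter for the specific TLDs (.xyz, .top) and alert on query volume, since exfiltration shows up as bursts of unique subdomains.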

    If any of these checks return positive results: Treat it as a confirmed breach. Rotate all credentials (cloud provider keys, GitHub tokens, Docker Hub tokens, LLM API keys, Kubernetes service account tokens), revoke and regenerate SSH keys, and audit your git history for unauthorized commits. Follow your organization’s incident response plan. If you don’t have one, my threat modeling guide is a good place to start building one.

    Long-Term CI/CD Hardening Defenses

    Responding to TeamPCP is necessary, but it’s not sufficient. This attack exploited systemic weaknesses in how the industry consumes open-source dependencies. Here are the defenses that would have prevented or contained each stage:

    1. Pin Everything by Hash, Not Tag

    Mutable tags (:latest, :v2, @v2) are a trust-on-first-use model that assumes the registry and publisher are never compromised. Pin Docker images by sha256 digest. Pin GitHub Actions by full commit SHA. Pin npm/pip packages with lockfiles that include integrity hashes. This single practice would have neutralized Stages 2, 3, and 5.
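In a GitHub Actions workflow, digest and SHA pinning looks something like this sketch (the bracketed digest and commit SHAs are placeholders, not real values):

```yaml
jobs:
  iac-scan:
    runs-on: ubuntu-latest
    container:
      # Image pinned by immutable sha256 digest, not a mutable tag
      image: node:20-bookworm@sha256:<digest-you-verified-yourself>
    steps:
      # Actions pinned to a full commit SHA, never @v2 or @latest
      - uses: actions/checkout@<full-40-char-commit-sha>
      - uses: checkmarx/kics-github-action@<full-40-char-commit-sha>
```

The same idea applies in Dockerfiles (FROM python:3.12-slim@sha256:<digest>) and in lockfiles, where npm and pip record integrity hashes automatically.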

    2. Verify Signatures with Sigstore/Cosign

    Adopt Sigstore’s cosign for container image verification and npm audit signatures / pip-audit for package registries. Require signature verification as a gate in your CI pipeline — unsigned artifacts don’t run, period.

    3. Scope CI Tokens to Minimum Privilege

    GitHub Actions’ GITHUB_TOKEN defaults to broad read/write permissions. Explicitly set permissions: in every workflow to the minimum required. Use OpenID Connect (OIDC) for cloud provider authentication instead of long-lived secrets. Never pass secrets as Action inputs when you can use OIDC federation.
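A minimal sketch of both ideas in one workflow; the role ARN, account ID, and commit SHA below are placeholders for illustration:

```yaml
permissions:
  id-token: write   # required for the OIDC token exchange
  contents: read    # and nothing else; the token can no longer push

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # Federate to AWS via OIDC instead of long-lived secret keys
      - uses: aws-actions/configure-aws-credentials@<full-commit-sha>
        with:
          role-to-assume: arn:aws:iam::123456789012:role/ci-deploy
          aws-region: us-east-1
```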

    4. Enforce Network Egress Controls

    Your CI runners should not have unrestricted internet access. Implement egress filtering that allows only connections to known-good registries (Docker Hub, npm, PyPI, GitHub) and blocks everything else. Monitor DNS queries for high-entropy subdomain patterns — this alone would have caught TeamPCP’s exfiltration channel.

    5. Generate and Verify SBOMs at Every Stage

    An SBOM (Software Bill of Materials) generated at build time and verified at deploy time creates an auditable chain of custody for every component in your software. When a compromised package is identified, you can instantly query your SBOM database to determine which services are affected — turning a weeks-long investigation into a minutes-long query.
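As a toy illustration of that minutes-long query (the newline-delimited index format and all the data below are invented for the example):

```shell
# One line per service, each listing its components (made-up data)
cat > /tmp/sbom-index.jsonl <<'EOF'
{"service":"billing-api","components":[{"name":"litellm","version":"1.0.0"}]}
{"service":"docs-site","components":[{"name":"requests","version":"2.32.0"}]}
EOF

# Which services ship the compromised package?
grep '"name":"litellm"' /tmp/sbom-index.jsonl
```

In practice you would generate real SBOMs with a tool like Syft or Trivy at build time and load them into a queryable store; the point is that the lookup becomes a one-liner.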

    6. Use Hardware Security Keys for Publisher Accounts

    Stage 3 was only possible because maintainer credentials were compromised via credential stuffing. Hardware security keys like the YubiKey 5 NFC make phishing and credential stuffing attacks against registry and GitHub accounts virtually impossible. Every developer and maintainer on your team should have one — they cost $50 and they’re the single highest-ROI security investment you can make.

    The Bigger Picture

    TeamPCP’s attack is a watershed moment for the DevSecOps community. It demonstrates that the open-source supply chain is not just a theoretical risk — it’s an active, exploited attack surface operated by sophisticated threat actors who understand our toolchains better than most defenders do.

    The uncomfortable truth is this: we’ve built an industry on implicit trust in package registries, and that trust model is broken. When your vulnerability scanner can be the vulnerability, when your IaC security Action can be the insecurity, when your AI proxy can be the exfiltration channel — the entire “shift-left” security model needs to shift further: to verification, attestation, and zero trust at every layer.

    I’ve been writing about these exact risks for months — from secrets management to GitOps security patterns to zero trust architecture. TeamPCP just proved that these aren’t theoretical concerns. They’re operational necessities.

    Start today. Pin your dependencies. Verify your signatures. Scope your tokens. Monitor your egress. And if you haven’t already, put an SBOM pipeline in place before the next TeamPCP — because there will be a next one.


    📚 Recommended Reading

    If this attack is a wake-up call for you (it should be), these are the resources I recommend for going deeper on supply chain security and CI/CD hardening:

    • Software Supply Chain Security by Cassie Crossley — The definitive guide to understanding and mitigating supply chain risks across the SDLC.
    • Container Security by Liz Rice — Essential reading for anyone running containers in production. Covers image scanning, runtime security, and the Linux kernel primitives that make isolation work.
    • Hacking Kubernetes by Andrew Martin & Michael Hausenblas — Understand how attackers think about your cluster so you can defend it properly.
    • Securing DevOps by Julien Vehent — Practical, pipeline-focused security that bridges the gap between dev velocity and operational safety.
    • YubiKey 5 NFC — Protect your registry, GitHub, and cloud accounts with phishing-resistant hardware MFA. Non-negotiable for every developer.

    🔒 Stay Ahead of the Next Supply Chain Attack

    I built Alpha Signal Pro to give developers and security professionals an edge — AI-powered signal intelligence that surfaces emerging threats, vulnerability disclosures, and supply chain risk indicators before they hit mainstream news. TeamPCP was flagged in Alpha Signal’s threat feed 72 hours before the first public disclosure.

    Get Alpha Signal Pro → — Real-time threat intelligence, curated security signals, and early warning for supply chain attacks targeting your stack.

  • Parsing EXIF Data From JPEG Files in the Browser With Zero Dependencies

    Last year I built PixelStrip, a browser-based tool that reads and strips EXIF metadata from photos. When I started, I assumed I’d pull in exifr or piexifjs and call it a day. Instead, I ended up writing the parser from scratch — because the JPEG binary format is surprisingly approachable once you understand four concepts. Here’s everything I learned.

    Why Parse EXIF Data in the Browser?

    Server-side EXIF parsing is trivial — exiftool handles everything. But uploading photos to a server defeats the purpose if your goal is privacy. The whole point of PixelStrip is that your photos never leave your device. That means the parser must run in JavaScript, in the browser, with no network calls.

    Libraries like exif-js (2.3MB minified, last updated 2019) and piexifjs (89KB but ships with known bugs around IFD1 parsing) exist. But for a single-file webapp where every kilobyte counts, writing a focused parser that handles exactly the tags we need — GPS, camera model, timestamps, orientation — came out smaller and faster.

    JPEG File Structure: The 60-Second Version

    A JPEG file is a sequence of markers. Each marker starts with 0xFF followed by a marker type byte. The ones that matter for EXIF:

    FF D8         → SOI (Start of Image) — always the first two bytes
    FF E1 [len]   → APP1 — this is where EXIF data lives
    FF E0 [len]   → APP0 — JFIF header (we skip this)
    FF DB [len]   → DQT (Quantization table)
    FF C0 [len]   → SOF0 (Start of Frame — image dimensions)
    ...
    FF D9         → EOI (End of Image)
    

    The key insight: EXIF data is just a TIFF file embedded inside the APP1 marker. Once you find FF E1, skip 2 bytes for the length field and 6 bytes for the string Exif\0\0, and you’re looking at a standard TIFF header.

    Step 1: Find the APP1 Marker

    Here’s how to locate it. We use a DataView over an ArrayBuffer — the browser’s native tool for reading binary data:

    function findAPP1(buffer) {
      const view = new DataView(buffer);
    
      // Verify JPEG magic bytes
      if (view.getUint16(0) !== 0xFFD8) {
        throw new Error('Not a JPEG file');
      }
    
      let offset = 2;
      while (offset < view.byteLength - 1) {
        const marker = view.getUint16(offset);
    
        if (marker === 0xFFE1) {
          // Found APP1 — return offset past the marker
          return offset + 2;
        }
    
        if ((marker & 0xFF00) !== 0xFF00) {
          break; // Not a valid marker, bail
        }
    
        // Skip to next marker: 2 bytes marker + length field value
        const segLen = view.getUint16(offset + 2);
        offset += 2 + segLen;
      }
    
      return -1; // No EXIF found
    }
    

    This runs in under 0.1ms on a 10MB file because we’re only scanning marker headers, not reading pixel data.

    Step 2: Parse the TIFF Header

    Inside APP1, after the Exif\0\0 prefix, you hit a TIFF header. The first two bytes tell you the byte order:

    • 0x4949 (“II”) → Intel byte order (little-endian) — used by most smartphones
    • 0x4D4D (“MM”) → Motorola byte order (big-endian) — used by some Nikon/Canon DSLRs

    This is the gotcha that trips up every first-time EXIF parser writer. If you hardcode endianness, your parser works on iPhone photos but breaks on Canon RAW files (or vice versa). You must pass the littleEndian flag to every multi-byte DataView read:

    function parseTIFFHeader(view, tiffStart) {
      const byteOrder = view.getUint16(tiffStart);
      const littleEndian = byteOrder === 0x4949;
    
      // Verify TIFF magic number (42)
      const magic = view.getUint16(tiffStart + 2, littleEndian);
      if (magic !== 0x002A) {
        throw new Error('Invalid TIFF header');
      }
    
      // Offset to first IFD, relative to TIFF start
      const ifdOffset = view.getUint32(tiffStart + 4, littleEndian);
      return { littleEndian, firstIFD: tiffStart + ifdOffset };
    }
    

    Step 3: Walk the IFD (Image File Directory)

    An IFD is just a flat array of 12-byte entries. Each entry has:

    Bytes 0-1:  Tag ID (e.g., 0x0112 = Orientation)
    Bytes 2-3:  Data type (1=byte, 2=ASCII, 3=short, 5=rational...)
    Bytes 4-7:  Count (number of values)
    Bytes 8-11: Value (if ≤4 bytes) or offset to value (if >4 bytes)
    

    The tags we care about for privacy:

    Tag ID   Name              Why It Matters
    0x010F   Make              Device manufacturer
    0x0110   Model             Exact phone/camera model
    0x0112   Orientation       How to rotate the image
    0x0132   DateTime          When photo was modified
    0x8825   GPSInfoIFD        Pointer to GPS sub-IFD
    0x9003   DateTimeOriginal  When photo was taken

    Here’s the IFD walker:

    function readIFD(view, ifdStart, littleEndian, tiffStart) {
      const entries = view.getUint16(ifdStart, littleEndian);
      const tags = {};
    
      for (let i = 0; i < entries; i++) {
        const entryOffset = ifdStart + 2 + (i * 12);
        const tag = view.getUint16(entryOffset, littleEndian);
        const type = view.getUint16(entryOffset + 2, littleEndian);
        const count = view.getUint32(entryOffset + 4, littleEndian);
    
        tags[tag] = readTagValue(view, entryOffset + 8,
          type, count, littleEndian, tiffStart);
      }
    
      return tags;
    }
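The readTagValue helper referenced above isn't shown in the article's listing, so here's a sketch of how it might look, covering only the types this parser needs (ASCII, SHORT, LONG, RATIONAL). Values of 4 bytes or fewer live inline in the entry; larger values are stored elsewhere, with the entry holding a TIFF-relative offset:

```javascript
// Bytes per value for each TIFF field type we handle
const TYPE_SIZES = { 1: 1, 2: 1, 3: 2, 4: 4, 5: 8 };

function readTagValue(view, valueOffset, type, count, littleEndian, tiffStart) {
  const byteLen = (TYPE_SIZES[type] || 1) * count;
  // Inline if it fits in the 4-byte value field, otherwise follow the offset
  const offset = byteLen <= 4
    ? valueOffset
    : tiffStart + view.getUint32(valueOffset, littleEndian);

  switch (type) {
    case 2: { // ASCII — NUL-terminated string
      let s = '';
      for (let i = 0; i < count - 1; i++) {
        s += String.fromCharCode(view.getUint8(offset + i));
      }
      return s;
    }
    case 3: // SHORT (uint16)
      return count === 1
        ? view.getUint16(offset, littleEndian)
        : Array.from({ length: count }, (_, i) =>
            view.getUint16(offset + i * 2, littleEndian));
    case 4: // LONG (uint32)
      return count === 1
        ? view.getUint32(offset, littleEndian)
        : Array.from({ length: count }, (_, i) =>
            view.getUint32(offset + i * 4, littleEndian));
    case 5: // RATIONAL — numerator/denominator pairs, always out-of-line
      return Array.from({ length: count }, (_, i) => {
        const num = view.getUint32(offset + i * 8, littleEndian);
        const den = view.getUint32(offset + i * 8 + 4, littleEndian);
        return den === 0 ? 0 : num / den;
      });
    default: // BYTE and anything we don't care about
      return view.getUint8(offset);
  }
}
```

Error handling and the remaining TIFF types (SBYTE, UNDEFINED, SRATIONAL, etc.) are omitted for brevity.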
    

    Step 4: Extract GPS Coordinates

    GPS data lives in its own sub-IFD, pointed to by tag 0x8825. The coordinates are stored as rational numbers — pairs of 32-bit integers representing numerator and denominator. Latitude 47° 36′ 22.8″ is stored as three rationals: 47/1, 36/1, 228/10.

    function readRational(view, offset, littleEndian) {
      const num = view.getUint32(offset, littleEndian);
      const den = view.getUint32(offset + 4, littleEndian);
      return den === 0 ? 0 : num / den;
    }
    
    function gpsToDecimal(degrees, minutes, seconds, ref) {
      let decimal = degrees + minutes / 60 + seconds / 3600;
      if (ref === 'S' || ref === 'W') decimal = -decimal;
      return Math.round(decimal * 1000000) / 1000000;
    }
    

    When I tested this against 500 photos from five different phone models (iPhone 15, Pixel 8, Samsung S24, OnePlus 12, Xiaomi 14), GPS parsing succeeded on 100% of photos that had location services enabled. The coordinates matched exiftool output to 6 decimal places every time.
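As a quick sanity check, plugging the example coordinate from earlier into the decimal conversion (gpsToDecimal repeated from above so the snippet runs standalone):

```javascript
function gpsToDecimal(degrees, minutes, seconds, ref) {
  let decimal = degrees + minutes / 60 + seconds / 3600;
  if (ref === 'S' || ref === 'W') decimal = -decimal;
  return Math.round(decimal * 1000000) / 1000000;
}

// 47° 36′ 22.8″ N, stored in EXIF as the rationals 47/1, 36/1, 228/10
const lat = gpsToDecimal(47 / 1, 36 / 1, 228 / 10, 'N');
console.log(lat); // 47.606333
```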

    Step 5: Strip It All Out

    Stripping EXIF is conceptually simpler than reading it. You have two options:

    1. Nuclear option: Remove the entire APP1 segment. Copy bytes before FF E1, skip the segment, copy everything after. Result: zero metadata, ~15KB smaller file. But you lose the Orientation tag, which means some photos display rotated.
    2. Surgical option (what PixelStrip uses): Keep the Orientation tag (0x0112), zero out everything else. This means nulling the GPS sub-IFD, blanking ASCII strings (Make, Model, DateTime), and zeroing rational values — without changing any offsets or lengths.

    The surgical approach is harder to implement but produces better results. Users don’t want their photos suddenly displaying sideways.
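For reference, the nuclear option is only a few lines: reuse findAPP1's marker walk, but instead of returning the offset, splice the whole APP1 segment out of the byte array. A sketch:

```javascript
// Remove the entire APP1 segment (marker + length-prefixed payload)
function stripAPP1(buffer) {
  const view = new DataView(buffer);
  const bytes = new Uint8Array(buffer);
  let offset = 2; // skip the FF D8 SOI marker

  while (offset < view.byteLength - 1) {
    const marker = view.getUint16(offset);
    if ((marker & 0xFF00) !== 0xFF00) break; // not a marker, bail
    if (offset + 4 > view.byteLength) break; // no room for a length field

    const segLen = view.getUint16(offset + 2); // includes the length field itself

    if (marker === 0xFFE1) {
      // Splice out 2 marker bytes + segLen segment bytes
      const out = new Uint8Array(bytes.length - 2 - segLen);
      out.set(bytes.subarray(0, offset), 0);
      out.set(bytes.subarray(offset + 2 + segLen), offset);
      return out.buffer;
    }

    offset += 2 + segLen;
  }

  return buffer; // no APP1 found — nothing to strip
}
```

This drops the Orientation tag along with everything else, which is exactly the rotation problem the surgical approach exists to avoid.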

    Performance: How Fast Is Pure JS Parsing?

    I benchmarked the parser against exifr (the current best JS EXIF library) on 100 photos ranging from 1MB to 12MB:

    Metric              Custom Parser       exifr
    Bundle size         2.8KB (minified)    44KB (minified, JPEG-only build)
    Parse time (avg)    0.3ms               1.2ms
    Memory allocation   ~4KB per parse      ~18KB per parse
    GPS accuracy        6 decimal places    6 decimal places

    The custom parser is 4x faster because it skips tags we don’t need. exifr is a general-purpose library that parses everything — MakerNotes, XMP, IPTC — which is great if you need those, overkill if you don’t.

    The Gotchas I Hit (So You Don’t Have To)

    1. Samsung’s non-standard MakerNote offsets. Samsung phones embed a proprietary MakerNote block that uses absolute offsets instead of TIFF-relative offsets. If your IFD walker follows pointers naively, you’ll read garbage data. Solution: bound-check every offset against the APP1 segment length before dereferencing.

    2. Thumbnail images contain their own EXIF data. IFD1 (the second IFD) often contains a JPEG thumbnail — and that thumbnail can have its own APP1 with GPS data. If you strip the main EXIF but forget the thumbnail, you’ve accomplished nothing. Always scan the full APP1 for nested JPEG markers.

    3. Photos edited in Photoshop have XMP metadata too. XMP is a separate XML-based metadata format stored in a different APP1 segment (identified by the http://ns.adobe.com/xap/1.0/ prefix instead of Exif\0\0). A complete metadata stripper needs to handle both.

    Try It Yourself

    The complete parser is about 150 lines of JavaScript. If you want to see it in action — drop a photo into PixelStrip and click “Show Details” to see every EXIF tag before stripping. The EXIF data guide explains why this matters for privacy.

    If you’re building your own tools and want a solid development setup, a 16GB RAM developer laptop handles browser-based binary parsing without breaking a sweat. For heavier workloads — batch processing thousands of images — consider a 32GB desktop setup or an external SSD for fast file I/O.

    What I’d Do Differently

    If I were starting over, I’d use ReadableStream with BYOB readers instead of loading the entire file into an ArrayBuffer. For a 15MB photo, the current approach allocates 15MB of memory upfront. With streaming, you could parse the EXIF data (which lives in the first few KB) and abort the read early — important for mobile devices with tight memory budgets.

    The JPEG format is 32 years old and showing its age. But for now, it’s still 73% of all images on the web (per HTTP Archive, February 2026), and EXIF is baked into every one of them. Understanding the binary format isn’t just an academic exercise — it’s the foundation for building privacy tools that actually work.

    Related reading:

  • I Benchmarked 5 Image Compressors With the Same 10 Photos

    I ran the same 10 images through five different online compressors and measured everything: output file size, visual quality loss, compression speed, and what happened to my data. Two of the five uploaded my photos to servers in jurisdictions I couldn’t identify. One silently downscaled my images. And the one that kept everything local — QuickShrink — actually produced competitive results.

    Here’s the full breakdown.

    The Test Setup

    I selected 10 JPEG photos covering real-world use cases developers actually deal with:

    • Product shots (3 images) — white background e-commerce photos, 3000×3000px, 4-6MB each
    • Screenshots (3 images) — IDE and terminal captures, 2560×1440px, 1-3MB each
    • Photography (2 images) — landscape shots from a Pixel 8, 4000×3000px, 5-8MB each
    • UI mockups (2 images) — Figma exports with gradients and text, 1920×1080px, 2-4MB each

    Total input: 10 files, 38.7MB combined. Target quality: 80% (the sweet spot where file size drops dramatically but human eyes can’t reliably spot the difference).

    The five compressors tested:

    1. TinyPNG — the default most developers reach for
    2. Squoosh — Google’s open-source option (squoosh.app)
    3. Compressor.io — popular alternative with multiple format support
    4. iLoveIMG — widely recommended in “best tools” roundups
    5. QuickShrink — our browser-only compressor at tools.orthogonal.info/quickshrink

    File Size Results: Who Actually Compresses Best?

    Here’s where it gets interesting. I compressed all 10 images at roughly equivalent quality settings (80% or “medium” depending on the tool’s UI), then compared output sizes:

    Average compression ratio (smaller = better):

    • TinyPNG: 72.4% reduction (38.7MB → 10.7MB)
    • Squoosh (MozJPEG): 74.1% reduction (38.7MB → 10.0MB)
    • Compressor.io: 68.9% reduction (38.7MB → 12.0MB)
    • iLoveIMG: 61.3% reduction (38.7MB → 15.0MB)*
    • QuickShrink: 70.2% reduction (38.7MB → 11.5MB)

    *iLoveIMG’s “medium” setting is more conservative than the others. At its “extreme” setting it hit 69%, but also introduced visible banding in gradient-heavy UI mockups.

    Squoosh wins on raw compression thanks to MozJPEG, which is one of the best JPEG encoders ever written. But the margin over TinyPNG and QuickShrink is smaller than you'd expect: roughly four percentage points separate the top three.

    The takeaway: for most developer workflows (blog images, documentation screenshots, product photos), the difference between 70% and 74% compression is irrelevant. You’re saving maybe 200KB per image. What matters more is everything else.

    Speed: Canvas API vs Server-Side Processing

    This is where architectures diverge. TinyPNG, Compressor.io, and iLoveIMG upload your image, process it server-side, then send back the result. Squoosh and QuickShrink process everything client-side — in your browser.

    Average time per image (including upload/download where applicable):

    • TinyPNG: 3.2 seconds (upload 1.8s + processing 0.9s + download 0.5s)
    • Squoosh: 1.4 seconds (local WebAssembly processing)
    • Compressor.io: 4.1 seconds (slower uploads, larger queue)
    • iLoveIMG: 2.8 seconds (fast CDN)
    • QuickShrink: 0.8 seconds (Canvas API, no network)

    QuickShrink is fastest because the Canvas API’s toBlob() method is essentially calling the browser’s built-in JPEG encoder, which is compiled C++ running natively. There’s no WebAssembly overhead (like Squoosh) and obviously no network round-trip (like the server-based tools).

    Here’s what the core compression looks like under the hood:

    // The heart of browser-based compression
    const canvas = document.createElement('canvas');
    const ctx = canvas.getContext('2d');
    canvas.width = img.naturalWidth;
    canvas.height = img.naturalHeight;
    ctx.drawImage(img, 0, 0);
    
    // This single call does all the heavy lifting
    canvas.toBlob(
      (blob) => {
        // blob is your compressed image
        // It never left your machine
        const url = URL.createObjectURL(blob);
        downloadLink.href = url;
      },
      'image/jpeg',
      0.80 // quality: 0.0 to 1.0
    );

    That’s it. The browser’s native JPEG encoder handles quantization, chroma subsampling, Huffman coding — everything. No library, no dependency, no server. The Canvas API has been stable across all major browsers since 2015.

    The Privacy Test: Where Do Your Photos Go?

    This is the part that should bother you. I ran each tool through Chrome DevTools’ Network tab to see exactly what happens when you drop an image:

    • TinyPNG: Uploads to api.tinify.com (Netherlands). Image stored temporarily. Privacy policy says files are deleted after some hours. You’re trusting their word.
    • Squoosh: 100% client-side. Zero network requests during compression. Service worker caches the app for offline use.
    • Compressor.io: Uploads to their servers. I watched a 6MB photo leave my browser. Their privacy page is one paragraph.
    • iLoveIMG: Uploads to api3.ilovepdf.com. Files “deleted after 2 hours.” Servers appear to be in Spain (EU GDPR applies, which is good).
    • QuickShrink: 100% client-side. Zero network requests. Works fully offline once loaded. I tested by enabling airplane mode — still works.

    If you’re compressing screenshots that contain code, terminal output, internal dashboards, or client work — server-side compression means that data hits someone else’s infrastructure. For a personal photo, maybe you don’t care. For a screenshot of your production database? You should care a lot.

    The Hidden Gotcha: Silent Downscaling

    I noticed something odd with iLoveIMG. My 4000×3000px landscape photo came back at 2000×1500px. The file was smaller, sure — but not because of better compression. It was because they halved the dimensions without telling me.

    I double-checked: there was no “resize” option enabled. Their “compress” feature silently caps images at a certain resolution on the free tier. This is a problem if you need full-resolution output for print, retina displays, or product photography.

    None of the other four tools altered image dimensions.
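One way to catch silent downscaling automatically is to read the dimensions straight out of the compressed file's SOF marker (the same marker walk from the EXIF parsing post) and compare them against the original, no decoding required. A sketch:

```javascript
// Read width/height from a JPEG's Start-of-Frame marker without decoding it
function jpegDimensions(buffer) {
  const view = new DataView(buffer);
  if (view.getUint16(0) !== 0xFFD8) throw new Error('Not a JPEG');

  let offset = 2;
  while (offset + 4 <= view.byteLength) {
    const marker = view.getUint16(offset);
    if ((marker & 0xFF00) !== 0xFF00) break;

    // SOF0–SOF3 share the layout: len(2) precision(1) height(2) width(2)
    if (marker >= 0xFFC0 && marker <= 0xFFC3) {
      return {
        height: view.getUint16(offset + 5),
        width: view.getUint16(offset + 7),
      };
    }
    offset += 2 + view.getUint16(offset + 2);
  }
  return null;
}
```

Run both the input and the output through this and flag the tool if the dimensions changed behind your back.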

    When to Use What: My Honest Recommendation

    Use Squoosh when you need maximum compression and don’t mind a slightly more complex UI. The MozJPEG encoder is genuinely better than browser-native JPEG, and it supports WebP, AVIF, and other modern formats. It’s the technically superior tool.

    Use QuickShrink when you want the fastest possible workflow: drop image, download compressed version, done. No format decisions, no sliders, no settings panels. The Canvas API approach trades 3-4% compression efficiency for massive speed gains and zero complexity. I use it daily for blog images and documentation screenshots — exactly the use case where “good enough compression, instantly” beats “perfect compression, eventually.”

    Use TinyPNG when you’re batch-processing hundreds of images through their API and don’t have privacy constraints. Their WordPress plugin and CLI tools are well-maintained. At $0.009/image after the free 500, it’s cheap automation.

    Skip iLoveIMG unless you specifically need their PDF tools. The silent downscaling and middling compression don’t justify using a server-side tool when better client-side options exist.

    Skip Compressor.io — Squoosh does everything it does, client-side, with better compression.

    The Broader Point: Why Client-Side Tools Win

    The web platform in 2026 is absurdly capable. The Canvas API, WebAssembly, the File API, Service Workers — you can build tools that rival desktop apps without a single server-side dependency. And when your tool runs entirely in the user’s browser:

    • No hosting costs — static files on a CDN, done
    • No privacy liability — you never touch user data
    • No scaling problems — every user brings their own compute
    • Offline capable — works on planes, in cafes with bad wifi, wherever

    This is why I build browser-only tools. Not because client-side compression is always technically best — Squoosh’s MozJPEG proves server-grade encoders can run client-side too via WASM. But because the combination of speed, privacy, and simplicity makes it the right default for 90% of developer workflows.

    Try QuickShrink with your own images and see the numbers yourself. And if metadata privacy matters too, run those same photos through PixelStrip — it strips EXIF, GPS, and camera data the same way: entirely in your browser, with nothing uploaded anywhere. For managing code snippets without yet another Electron app, check out TypeFast.

    Tools for Your Developer Setup

    If you’re optimizing your development workflow, the right hardware makes a difference. A high-resolution monitor helps when comparing compression artifacts side-by-side (I use a 4K display and it’s the first upgrade I’d recommend). For photography workflows, a fast SD card reader eliminates the bottleneck of transferring images from camera to computer. And if you’re processing images in bulk for a client project, a portable SSD keeps your originals safe while you experiment with compression settings — never compress your only copy.

  • The Pomodoro Technique Actually Works — If Your Timer Has Streaks

    The Pomodoro Technique — work for 25 minutes, break for 5 — has been around since 1987. The science backs it up: time-boxing reduces procrastination and improves focus. But here’s the problem: most people try it for three days and quit. Not because the technique fails, but because a plain countdown timer gives you zero reason to come back tomorrow.

    Why Streaks Change Everything

    Duolingo built a $12 billion company on one psychological trick: the daily streak. Miss a day and your streak resets to zero. It sounds trivial. It works because loss aversion is 2x stronger than the desire for gain (Kahneman & Tversky, 1979). You don’t open Duolingo because you love Spanish — you open it because you don’t want to lose a 47-day streak.

    The same psychology applies to focus timers. A countdown from 25:00 gives you no stakes. A countdown that says “Day 23 of your focus streak” gives you skin in the game.

    How FocusForge Applies This

    FocusForge adds three layers to the basic Pomodoro timer:

    • XP — every completed session earns experience points (25 XP for a Quick session, 75 XP for a Marathon)
    • Levels — Rookie → Apprentice → Expert → Master → Legend → Immortal. Each level has its own badge.
    • Daily Streaks — complete at least one session per day to maintain your streak. Miss a day, restart from zero.

    The actual Pomodoro technique is unchanged. You still focus for 25 minutes (or 45 or 60). But now there’s a reason to do it consistently.

    👉 Try FocusForge on Google Play — free with optional $1.99 upgrade to remove ads.

    Related Reading

    Want to know more about FocusForge’s design and gamification mechanics? Read the full deep-dive: FocusForge: How Gamification Tricked Me Into Actually Using a Pomodoro Timer. FocusForge is part of our suite of 5 free browser tools that replace desktop apps — including NoiseLog, a sound meter app for documenting noise complaints.