Tag: cybersecurity

  • Docker CVE-2026-34040: 1MB Request Bypasses AuthZ Plugin


    Last week I got a Slack ping from a friend: “Did you see the Docker AuthZ thing?” I hadn’t. Twenty minutes later I was patching every Docker host I manage — and you should too.

    TL;DR: CVE-2026-34040 (CVSS 8.8) lets attackers bypass Docker AuthZ plugins by padding API requests over 1MB, causing the daemon to silently drop the request body. This is an incomplete fix for CVE-2024-41110 from 2024. Update to Docker Engine 29.3.1 or later immediately, and enable rootless mode or user namespace remapping as defense in depth.

    Quick Answer: Run docker version --format '{{.Server.Version}}' — if it shows anything below 29.3.1, you’re vulnerable. Update immediately with sudo apt-get update && sudo apt-get install docker-ce docker-ce-cli. For defense in depth, enable rootless mode or --userns-remap and restrict Docker socket access.

    CVE-2026-34040 (CVSS 8.8) is a high-severity flaw in Docker Engine that lets an attacker bypass authorization plugins by padding an API request to over 1MB. The Docker daemon silently drops the body before forwarding it to the AuthZ plugin, which then approves the request because it sees nothing to block. One HTTP request. Full host compromise.

    Here’s what makes this one particularly annoying: it’s an incomplete fix for CVE-2024-41110, a maximum-severity bug from July 2024. If you patched for that one and assumed you were safe — surprise, you weren’t.

    What’s Actually Happening

    Docker Engine supports AuthZ plugins — third-party authorization plugins that inspect API requests and decide whether to allow or deny them. Think of them as bouncers checking IDs at the door.

    The problem: when an API request body exceeds 1MB, Docker’s daemon drops the body before passing the request to the AuthZ plugin. The plugin sees an empty request, has nothing to object to, and waves it through.

    In practice, an attacker with Docker API access pads a container creation request with junk data until it crosses the 1MB threshold. The AuthZ plugin never sees the actual payload — which creates a privileged container with full host filesystem access.

    According to Cyera Research, this works against every AuthZ plugin in the ecosystem. Not some. All of them.
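    The failure mode is easy to model. Here’s a toy sketch of the shape of the bug (nothing like Docker’s actual code): a gatekeeper that inspects request bodies, sitting behind a forwarder that silently drops any body over the limit.

    ```shell
    # Toy model of the flaw -- NOT Docker's code, just the shape of the bug.
    LIMIT=$((1024 * 1024))   # 1MB body limit

    # Stand-in "AuthZ plugin": denies any body requesting a privileged container
    authz_check() {
      case "$1" in
        *Privileged*) echo deny ;;
        *)            echo allow ;;
      esac
    }

    # Stand-in "daemon": silently drops the body before forwarding if it's too big
    forward_to_plugin() {
      local body="$1"
      [ "${#body}" -gt "$LIMIT" ] && body=""
      authz_check "$body"
    }

    forward_to_plugin '{"HostConfig":{"Privileged":true}}'   # prints: deny

    padding=$(head -c 1100000 /dev/zero | tr '\0' 'A')
    forward_to_plugin "{\"HostConfig\":{\"Privileged\":true},\"pad\":\"$padding\"}"   # prints: allow
    ```

    Note that the gatekeeper never errors — it happily approves a request it never saw. That’s why “the plugin sees nothing to block” is worse than a crash.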

    Why Homelab Operators Should Care

    If you’re running Docker on TrueNAS or any homelab setup, you probably have containers with access to sensitive volumes — media libraries, config files, maybe even SSH keys or cloud credentials.

    A privileged container created through this bypass can mount your host filesystem. That means: AWS credentials, SSH keys, kubeconfig files, password databases, anything on the machine. If you’re running Docker on the same box as your NAS (common in homelab setups), that’s your entire data store exposed.

    I checked my own setup and found I was running Docker Engine 28.x — vulnerable. Yours probably is too if you haven’t updated in the last two weeks.

    The AI Agent Angle (This Is Wild)

    Here’s where it gets interesting. Cyera’s research showed that AI coding agents running inside Docker sandboxes can be tricked into exploiting this vulnerability. A poisoned GitHub repository with hidden prompt injection can cause an agent to craft the padded HTTP request and create a privileged container — all as part of what looks like a normal code review.

    Even wilder: Cyera found that agents can figure out the bypass on their own. When an agent encounters an AuthZ denial while trying to debug a legitimate issue (say, a Kubernetes out-of-memory problem), it has access to Docker API documentation and knows how HTTP works. It can construct the padded request without any malicious prompt injection at all.

    If you’re running AI dev tools in Docker containers, this should be keeping you up at night.

    How to Check If You’re Vulnerable

    Run this:

    docker version --format '{{.Server.Version}}'

    If the output is anything below 29.3.1, you’re vulnerable. The fix is straightforward:

    # On Debian/Ubuntu
    sudo apt-get update && sudo apt-get install docker-ce docker-ce-cli
    
    # On TrueNAS (if using Docker directly)
    # Check your app update mechanism or pull the latest Docker Engine
    
    # Verify the fix
    docker version --format '{{.Server.Version}}'
    # Should show 29.3.1 or later
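    If you manage more than one host, eyeballing version strings gets error-prone. A small helper along these lines (my own sketch, not from the advisory; assumes GNU sort for the -V version comparison) flags anything below 29.3.1:

    ```shell
    #!/bin/sh
    # Sketch: flag Docker server versions below the patched 29.3.1 release.
    FIXED="29.3.1"

    check_docker_version() {
      # Patched if FIXED sorts first, i.e. $1 >= FIXED under version ordering
      if [ "$(printf '%s\n' "$FIXED" "$1" | sort -V | head -n1)" = "$FIXED" ]; then
        echo "patched"
      else
        echo "vulnerable"
      fi
    }

    v=$(docker version --format '{{.Server.Version}}' 2>/dev/null || echo "0")
    echo "Docker $v: $(check_docker_version "$v")"
    ```

    Drop it into your config management or loop it over SSH and you have a one-line fleet audit.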

    Mitigations If You Can’t Patch Right Now

    If immediate patching isn’t possible (maybe you’re waiting for a TrueNAS update to bundle it), here are your options ranked by effectiveness:

    1. Run Docker in rootless mode. This is the strongest mitigation. In rootless mode, even a “privileged” container’s root maps to an unprivileged host UID. The attacker gets a container, but the blast radius drops from “full host compromise” to “compromised unprivileged user.” Docker’s rootless mode docs walk through the setup.

    2. Use --userns-remap. If full rootless mode breaks your setup, user namespace remapping provides similar UID isolation without the full rootless overhead.

    3. Lock down Docker API access. If you’re exposing the Docker socket over TCP (common in Portainer setups), stop. Use Unix socket access with strict group membership. Only users who absolutely need Docker API access should have it.

    4. Don’t rely solely on AuthZ plugins. This CVE makes it clear: AuthZ plugins that inspect request bodies are fundamentally breakable. Layer your defenses — use network policies, AppArmor/SELinux profiles, and container runtime security on top of AuthZ.
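    For options 1 and 2, the setup is shorter than you might expect. A sketch for Debian/Ubuntu — package names assume the official docker-ce repository, so check Docker’s rootless docs for your distro:

    ```shell
    # Option 1: rootless mode -- run the setup tool as your regular user
    # (assumes the docker-ce apt repo is configured)
    sudo apt-get install -y uidmap docker-ce-rootless-extras
    dockerd-rootless-setuptool.sh install
    systemctl --user enable --now docker
    export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock

    # Option 2: userns-remap for the system daemon. Merge this key into
    # /etc/docker/daemon.json (don't clobber existing settings):
    #   { "userns-remap": "default" }
    sudo systemctl restart docker

    # Verify: userns-remap "default" creates a "dockremap" subordinate
    # UID range on the host
    grep dockremap /etc/subuid
    ```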

    What I Changed on My Setup

    After reading the Cyera writeup, I made three changes to my homelab Docker hosts:

    1. Updated to Docker Engine 29.3.1 on all hosts. This was the obvious one.
    2. Enabled user namespace remapping on my TrueNAS Docker instance. I’d been meaning to do this for months — this CVE was the push I needed.
    3. Audited socket exposure. I had one Portainer instance with the Docker socket mounted read-write. I switched it to a read-only socket proxy (Tecnativa’s docker-socket-proxy is solid for this) that filters which API endpoints are accessible.
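    If you want to replicate the socket-proxy change, the general shape looks like this (a sketch; the endpoint toggles are environment variables documented in the docker-socket-proxy README — tune them to what your tooling actually needs):

    ```shell
    # Tecnativa's docker-socket-proxy: mounts the real socket read-only and
    # exposes only whitelisted API endpoints, bound to localhost only
    docker run -d --name socket-proxy \
      --restart unless-stopped \
      -v /var/run/docker.sock:/var/run/docker.sock:ro \
      -e CONTAINERS=1 \
      -e IMAGES=1 \
      -e INFO=1 \
      -e POST=0 \
      -p 127.0.0.1:2375:2375 \
      tecnativa/docker-socket-proxy

    # Point Portainer at tcp://127.0.0.1:2375 instead of mounting the raw socket
    ```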

    The whole process took about 45 minutes across three hosts. Worth every second.

    Frequently Asked Questions

    What exactly is CVE-2026-34040 and how severe is it?

    CVE-2026-34040 is a high-severity (CVSS 8.8) authorization bypass vulnerability in Docker Engine. When an API request body exceeds 1MB, the Docker daemon silently drops the body before forwarding it to AuthZ plugins. The plugin sees an empty request, approves it, and the attacker can create privileged containers with full host filesystem access. It affects every AuthZ plugin in the ecosystem.

    How is this different from CVE-2024-41110?

    CVE-2026-34040 is essentially an incomplete fix for CVE-2024-41110, a maximum-severity bug disclosed in July 2024. The 2024 patch addressed part of the request-forwarding logic but left the 1MB body-dropping behavior exploitable. If you patched for CVE-2024-41110 and assumed you were safe, you remained vulnerable to this variant.

    Am I vulnerable if I don’t use AuthZ plugins?

    If you’re not using any Docker AuthZ plugins, this specific CVE does not directly affect you — the bypass targets the AuthZ plugin inspection mechanism. However, you should still update to 29.3.1 because the underlying body-dropping behavior could affect future features. Additionally, some container management tools (like Portainer with access control) may use AuthZ plugins without explicit configuration.

    Can AI coding agents really exploit this vulnerability?

    Yes. Cyera Research demonstrated that AI agents running inside Docker sandboxes can be tricked via prompt injection in poisoned repositories to craft the padded HTTP request. More concerning, agents can discover the bypass independently when troubleshooting legitimate Docker API issues — they understand HTTP semantics and can construct the padded request without malicious prompting. This is a real attack vector for teams using AI dev tools in Docker containers.

    What is the best mitigation if I cannot patch immediately?

    Enable Docker’s rootless mode — it’s the strongest mitigation. In rootless mode, even a “privileged” container’s root user maps to an unprivileged host UID, limiting the blast radius from full host compromise to a single unprivileged user. If rootless mode breaks your setup, use --userns-remap for similar UID isolation. Also restrict Docker socket access to Unix socket only (no TCP exposure) with strict group membership.

    Recommended Reading

    If this CVE is a wake-up call about your container security posture, a few resources I’d point you toward:

    • Container Security by Liz Rice — the single best book on container security fundamentals. Covers namespaces, cgroups, seccomp, and AppArmor from the ground up. I reference it constantly. (Full disclosure: affiliate link)
    • Docker Deep Dive by Nigel Poulton — if you want to actually understand how Docker’s internals work (which helps you reason about vulnerabilities like this one), Poulton’s book is the place to start. Updated for 2026. (Affiliate link)
    • Hacking Kubernetes by Andrew Martin & Michael Hausenblas — if you’re running Kubernetes alongside Docker (or migrating to it), this covers the threat landscape from an attacker’s perspective. Eye-opening even for experienced operators. (Affiliate link)

    For more on hardening your Docker setup, I wrote a full guide on Docker container security best practices that covers image scanning, runtime protection, and secrets management. And if you’re weighing Docker Compose against Kubernetes for your homelab, my comparison post breaks down the security tradeoffs.

    The Bigger Picture

    CVE-2026-34040 is a textbook example of why “we patched it” doesn’t always mean “it’s fixed.” The original CVE-2024-41110 was patched in 2024. The fix was incomplete. Two years later, the same attack path works with a minor variation.

    This is also a reminder that Docker’s authorization model has a single point of failure in the AuthZ plugin chain. If the body never reaches the plugin, the plugin can’t make informed decisions. It’s not a plugin bug — it’s an architectural weakness in how Docker forwards requests.

    For homelab operators running Docker on shared hardware (which is most of us), the fix is clear: update to 29.3.1, enable rootless mode or userns-remap, and stop trusting AuthZ plugins as your only line of defense.

    Patch today. Not tomorrow.


    🔔 Join Alpha Signal on Telegram for free market intelligence, security alerts, and tech analysis — delivered daily.

  • YubiKey SSH Authentication: Stop Trusting Key Files on Disk


    I stopped using SSH passwords three years ago. Switched to ed25519 keys, felt pretty good about it. Then my laptop got stolen from a coffee shop — lid open, session unlocked. My private key was sitting right there in ~/.ssh/, passphrase cached in the agent.

    That’s when I bought my first YubiKey.

    Why a Hardware Key Beats a Private Key File

    📌 TL;DR: YubiKey provides secure SSH authentication by storing private keys on hardware, preventing extraction or misuse even if a device is stolen or compromised. Unlike disk-stored keys, YubiKey requires physical touch for authentication, adding an extra layer of security. It supports FIDO2/resident keys and works across devices with USB-C or NFC options.
    🎯 Quick Answer: YubiKey SSH authentication stores your private key on tamper-resistant hardware so it cannot be copied or extracted, even if your machine is compromised. Configure it via ssh-keygen -t ed25519-sk to bind SSH keys to the physical device.

    Your SSH private key lives on disk. Even if it’s passphrase-protected, once the agent unlocks it, it’s in memory. Malware can dump it. A stolen laptop might still have an active agent session. Your key file can be copied without you knowing.

    A YubiKey stores the private key on the hardware. It never leaves the device. Every authentication requires a physical touch. No touch, no auth. Someone steals your laptop? They still need the physical key plugged in and your finger on it.

    That’s the difference between “my key is encrypted” and “my key literally cannot be extracted.”

    Which YubiKey to Get

    For SSH, you want a YubiKey that supports FIDO2/resident keys. Here’s what I’d recommend:

    YubiKey 5C NFC — my top pick. USB-C fits modern laptops, and the NFC means you can tap it on your phone for GitHub/Google auth too. Around $55, and I genuinely think it’s the best value if you work across multiple devices. (Full disclosure: affiliate link)

    If you’re on a tighter budget, the YubiKey 5 NFC (USB-A) does the same thing for about $50, just with the older port. Still a good option if your machines have USB-A.

    One important note: buy two. Register both with every service. Keep one on your keychain, one locked in a drawer. If you lose your primary, you’re not locked out of everything. I learned this the hard way with a 2FA lockout that took three days to resolve.

    Setting Up SSH with FIDO2 Resident Keys

    You need OpenSSH 8.2+ (check with ssh -V). Most modern distros ship with this. If you’re on macOS, the built-in OpenSSH works fine since Ventura.

    First, generate a resident key stored directly on the YubiKey:

    ssh-keygen -t ed25519-sk -O resident -O verify-required -C "yubikey-primary"

    Breaking this down:

    • -t ed25519-sk — uses the ed25519 algorithm backed by a security key (sk = security key)
    • -O resident — stores the key on the YubiKey, not just a reference to it
    • -O verify-required — requires PIN + touch every time (not just touch)
    • -C "yubikey-primary" — label it so you know which key this is

    It’ll ask you to set a PIN if you haven’t already. Pick something decent — this is your second factor alongside the physical touch.

    You’ll end up with two files: id_ed25519_sk and id_ed25519_sk.pub. The private file is actually just a handle — the real private key material lives on the YubiKey. Even if someone gets this file, it’s useless without the physical hardware.

    Adding the Key to Remote Servers

    Same as any SSH key:

    ssh-copy-id -i ~/.ssh/id_ed25519_sk.pub user@your-server

    Or manually append the public key to ~/.ssh/authorized_keys on the target machine.

    When you SSH in, you’ll see:

    Confirm user presence for key ED25519-SK SHA256:...
    User presence confirmed

    That “confirm user presence” line means it’s waiting for you to physically tap the YubiKey. No tap within ~15 seconds? Connection refused. I love this — it’s impossible to accidentally leave a session auto-connecting in the background.

    The Resident Key Trick: Any Machine, No Key Files

    This is the feature that sold me. Because the key is resident (stored on the YubiKey itself), you can pull it onto any machine:

    ssh-keygen -K

    That’s it. Plug in your YubiKey, run that command, and it downloads the key handles to your current machine. Now you can SSH from a fresh laptop, a coworker’s machine, or a server — as long as you have the YubiKey plugged in.

    No more syncing ~/.ssh folders across machines. No more “I need to get my key from my other laptop.” The YubiKey is the key.

    Hardening sshd for Key-Only Auth

    Once your YubiKey is working, lock down the server. In /etc/ssh/sshd_config:

    PasswordAuthentication no
    KbdInteractiveAuthentication no
    PubkeyAuthentication yes
    AuthenticationMethods publickey

    Reload sshd (systemctl reload sshd) and test with a new terminal before closing your current session. I’ve locked myself out exactly once by reloading before testing. Don’t be me.

    If you want to go further, you can restrict the server to only FIDO2 keys by accepting just the sk- key types. But for most setups, just disabling passwords is the big win.
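    If you do want that, sshd can be told to accept only security-key-backed algorithms (a sketch; the PubkeyAcceptedAlgorithms directive requires OpenSSH 8.5+, and as always, test from a second session before you rely on it):

    ```
    # /etc/ssh/sshd_config -- accept only sk-backed key types
    PubkeyAcceptedAlgorithms sk-ssh-ed25519@openssh.com,sk-ecdsa-sha2-nistp256@openssh.com
    ```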

    What About Git and GitHub?

    GitHub has supported security keys for SSH since late 2021. Add your id_ed25519_sk.pub in Settings → SSH Keys, same as any other key.

    Every git push and git pull now requires a physical touch. It adds maybe half a second to each operation. I was worried this would be annoying — it’s actually reassuring. Every push is a conscious decision.

    For your Git config, make sure you’re using the SSH URL format:

    git remote set-url origin git@github.com:username/repo.git

    Gotchas I Hit

    Agent forwarding doesn’t work with FIDO2 keys. The touch requirement is local — you can’t forward it through an SSH jump host. If you rely on agent forwarding, you’ll need to either set up ProxyJump or keep a regular ed25519 key for jump scenarios.
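    ProxyJump works here because the whole connection is tunneled end to end, so the touch prompt stays on your local machine. A sketch (hostnames and IPs are placeholders):

    ```
    # ~/.ssh/config -- placeholders, adjust to your hosts
    Host internal-box
        HostName 10.0.0.20
        ProxyJump jumpuser@jumphost.example.com
        IdentityFile ~/.ssh/id_ed25519_sk
    ```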

    macOS Sonoma has a quirk where the built-in SSH agent sometimes doesn’t prompt for the touch correctly. Fix: add SecurityKeyProvider internal to your ~/.ssh/config.

    WSL2 can’t see USB devices by default. You’ll need usbipd-win to pass the YubiKey through. It works fine once set up, but the initial config is a 10-minute detour.

    VMs need USB passthrough configured. In VirtualBox, add a USB filter for “Yubico YubiKey.” In QEMU/libvirt, use hostdev passthrough. This catches people off guard when they SSH from inside a VM and wonder why the key isn’t detected.

    My Setup

    I carry a YubiKey 5C NFC on my keychain and keep a backup YubiKey 5 Nano in my desk. The Nano stays semi-permanently in my desktop’s USB port — it’s tiny enough that it doesn’t stick out. (Full disclosure: affiliate links)

    Both keys are registered on every server, GitHub, and every service that supports FIDO2. If I lose my keychain, I walk to my desk and keep working.

    Total cost: about $80 for two keys. For context, that’s less than a month of most password manager premium plans, and it protects against a class of attacks that passwords simply can’t.

    Should You Bother?

    If you SSH into anything regularly — servers, homelabs, CI runners — yes. The setup takes 15 minutes, and the daily friction is a light tap on a USB device. The protection you get (key material that physically can’t be stolen remotely) is worth way more than the cost.

    If you’re already running a homelab with TrueNAS or managing Docker containers, this is a natural next step in locking things down. Hardware keys fill the gap between “I use SSH keys” and “my infrastructure is actually secure.”

    Start with one key, test it for a week, then buy the backup. You won’t go back.





    Frequently Asked Questions

    Why is YubiKey more secure than storing SSH keys on disk?

    YubiKey stores the private key on hardware, ensuring it cannot be extracted or copied. Authentication requires physical touch, preventing unauthorized access even if a device is stolen or compromised.

    What type of YubiKey is recommended for SSH authentication?

    The YubiKey 5C NFC is recommended for its USB-C compatibility and NFC functionality, making it versatile for both laptops and phones. The YubiKey 5 NFC (USB-A) is a budget-friendly alternative for older devices.

    How do you set up SSH authentication with a YubiKey?

    You need OpenSSH 8.2+ to generate a resident key stored on the YubiKey using `ssh-keygen`. The key requires PIN and physical touch for authentication, and the public key can be added to remote servers for access.

    What precautions should be taken when using YubiKey for SSH?

    It’s recommended to buy two YubiKeys: one for daily use and one as a backup. Register both with all services to avoid lockouts in case of loss or damage.

  • CVE-2025-53521: F5 BIG-IP APM RCE β€” CISA Deadline 3/30


    I triaged this CVE for my own perimeter the moment it hit the KEV catalog. If you’re running F5 BIG-IP with APM, here’s what you need to know and do — fast.

    CVE-2025-53521 dropped into CISA’s Known Exploited Vulnerabilities catalog on March 27, and the remediation deadline is March 30. If you’re running F5 BIG-IP with Access Policy Manager (APM), this needs your attention right now.

    Here’s what makes this one ugly: F5 originally classified CVE-2025-53521 as a denial-of-service issue. That classification has since been upgraded to remote code execution (CVSS 9.3) after active exploitation was confirmed in the wild. A vulnerability that many teams deprioritized as “just a DoS” is actually giving attackers code execution on BIG-IP appliances. If your patching decision was based on the original advisory, your risk assessment is wrong.

    The Reclassification: From DoS to Full RCE

    📌 TL;DR: CVE-2025-53521, originally classified by F5 as a denial-of-service bug in BIG-IP APM, has been reclassified as remote code execution (CVSS 9.3) after confirmed in-the-wild exploitation. CISA added it to the KEV catalog on March 27 with a remediation deadline of March 30.
    🎯 Quick Answer: Run tmsh list sys provision apm. If APM is provisioned and you’re on a version prior to the fixed releases in advisory K000156741, patch immediately; if you can’t patch within 24 hours, restrict APM endpoint access to trusted networks and watch for TMM crash artifacts.

    When F5 first published advisory K000156741, CVE-2025-53521 was described as a denial-of-service condition in BIG-IP APM. The attack vector was clear enough — a crafted request to the APM authentication endpoint could crash the Traffic Management Microkernel (TMM). Annoying, but many shops treated it as a lower-priority patch.

    That assessment turned out to be incomplete. Subsequent analysis revealed that the same attack primitive — the malformed request that triggers the TMM crash — can be chained with a memory corruption condition to achieve arbitrary code execution. F5 updated the advisory to reflect this, bumping the CVSS score to 9.3 and reclassifying the impact from availability-only to full confidentiality/integrity/availability compromise.

    The timing here matters. Organizations that triaged this as a medium-severity DoS during the initial disclosure window may have scheduled it for their next maintenance cycle. With active exploitation now confirmed and CISA setting a 3-day remediation deadline, “next maintenance cycle” is too late.

    What We Know About Active Exploitation

    CISA doesn’t add vulnerabilities to the KEV catalog on a whim. The KEV listing confirms that CVE-2025-53521 is being actively exploited in the wild. F5 has published indicators of compromise alongside the updated advisory.

    Based on the available intelligence, here’s what the attack chain looks like:

    1. Initial Access: Attacker sends a specially crafted request to the BIG-IP APM authentication endpoint (typically /my.policy or /f5-w-68747470733a2f2f... APM webtop URLs).
    2. Memory Corruption: The malformed input triggers a buffer handling error in TMM’s APM module, corrupting adjacent memory structures.
    3. Code Execution: The corruption is exploited to redirect execution flow, achieving arbitrary code execution in the TMM process context — which runs as root.
    4. Post-Exploitation: With root-level access on the BIG-IP, the attacker can intercept traffic, extract credentials from APM session tables, modify iRules, or pivot deeper into the network.

    The root-level execution context is what elevates this from bad to critical. TMM handles all data plane traffic on BIG-IP. Owning TMM means owning every connection flowing through the appliance — SSL termination keys, session cookies, authentication tokens, everything.

    Affected Versions and Configurations

    CVE-2025-53521 affects BIG-IP systems running Access Policy Manager. The key conditions:

    • BIG-IP APM must be provisioned and active (if you’re only running LTM without APM, you’re not directly affected)
    • The APM virtual server must be accessible to the attacker — which in most deployments means internet-facing
    • All BIG-IP software versions prior to the patched releases listed in K000156741 are vulnerable

    Check whether APM is provisioned on your BIG-IP:

    # Check APM provisioning status
    tmsh list sys provision apm
    
    # If you see "level nominal" or "level dedicated", APM is active
    # If you see "level none", APM is not provisioned — you're not affected by this specific CVE

    Check your current BIG-IP version:

    # Show running software version
    tmsh show sys version
    
    # Show all installed software images
    tmsh show sys software status

    Immediate Detection: Are You Already Compromised?

    Given that exploitation is active and the vulnerability existed before many orgs patched it, assume-breach is the right posture. For a structured approach, see our incident response playbook guide. Here’s what to look for.

    Check TMM Core Files

    Exploitation of this vulnerability typically produces TMM crash artifacts. If your BIG-IP has been restarting TMM unexpectedly, that’s a red flag:

    # Check for recent TMM core dumps
    ls -la /var/core/
    ls -la /shared/core/
    
    # Review TMM restart history
    tmsh show sys tmm-info | grep -i restart
    
    # Check system logs for TMM crashes
    grep -i "tmm.*core\|tmm.*crash\|tmm.*restart" /var/log/ltm /var/log/apm | tail -50

    Audit APM Session Logs

    Look for anomalous APM authentication patterns — particularly failed authentications with unusual payload sizes or malformed usernames:

    # Review APM logs for the past 72 hours
    grep -E "ERR|CRIT|WARNING" /var/log/apm | tail -100
    
    # Look for unusual APM access patterns
    awk '/access_policy/ && /ERR/' /var/log/apm
    
    # Check for oversized requests hitting APM endpoints
    grep -i "request.*too.*large\|oversized\|malform" /var/log/ltm /var/log/apm

    Inspect iRules and Configuration Changes

    Post-exploitation activity often involves modifying iRules to maintain persistence or intercept credentials:

    # List all iRules and their modification timestamps
    tmsh list ltm rule recursive | grep -E "^ltm rule|last-modified"
    
    # Check for recently modified iRules (compare against your change management records)
    find /config -name "*.tcl" -mtime -7 -ls
    
    # Look for suspicious iRule content (credential harvesting patterns)
    tmsh list ltm rule recursive | grep -iE "HTTP::header|HTTP::cookie|HTTP::password|b64encode|log local0"

    Review Network-Level IOCs

    F5’s updated advisory K000156741 includes specific network indicators. Cross-reference your firewall and IDS logs against the published IOCs. At minimum, check for:

    # On your perimeter firewall or SIEM, search for:
    # - Unusual POST requests to /my.policy endpoints with oversized payloads
    # - Connections from your BIG-IP management interface to unexpected external IPs
    # - DNS queries from BIG-IP to domains not in your known-good list
    
    # On the BIG-IP itself, check outbound connections:
    netstat -an | grep ESTABLISHED | grep -vE "$(tmsh list net self all | grep address | awk '{print $2}' | cut -d/ -f1 | tr '\n' '|' | sed 's/|$//')"

    If your network assessment methodology needs updating, Chris McNab’s Network Security Assessment remains the standard reference for systematically auditing network infrastructure — including load balancers and application delivery controllers like BIG-IP. Full disclosure: affiliate link.

    Mitigation: What to Do Right Now

    Priority 1: Patch

    Apply the fixed version from F5’s advisory. This is the only complete remediation. For BIG-IP, the upgrade process:

    # Download the hotfix ISO from downloads.f5.com
    # Upload to BIG-IP:
    scp BIGIP-*.iso admin@<bigip-mgmt>:/shared/images/
    
    # Install the hotfix (from BIG-IP CLI):
    tmsh install sys software hotfix BIGIP-*.iso volume HD1.2
    
    # Verify installation
    tmsh show sys software status
    
    # Reboot to the patched volume
    tmsh reboot volume HD1.2

    Critical note: If you’re running an HA pair, follow F5’s documented rolling upgrade procedure. Don’t just reboot both units simultaneously.

    Priority 2: If You Can’t Patch Immediately

    If a maintenance window isn’t available in the next 24 hours, apply these compensating controls:

    Restrict APM endpoint access via iRule:

    # Create an iRule to restrict APM access to known IP ranges
    # Apply this to your APM virtual server
    
    when HTTP_REQUEST {
        # Only allow APM access from trusted networks
        if { [IP::client_addr] starts_with "10.0.0." ||
             [IP::client_addr] starts_with "192.168.1." ||
             [IP::client_addr] starts_with "172.16.0." } {
            # Allow — trusted internal range
        } else {
            # Log and reject
            log local0. "Blocked APM access from [IP::client_addr] to [HTTP::uri]"
            HTTP::respond 403 content "Access Denied"
        }
    }

    Enable APM request size limits (if not already configured):

    # Set maximum header/request sizes to limit exploitation surface
    tmsh modify sys httpd max-clients 10
    tmsh modify ltm profile http <your-http-profile> enforcement max-header-count 64 max-header-size 32768

    Monitor TMM health aggressively:

    # Set up a cron job to alert on TMM crashes
    echo '*/5 * * * * root ls /var/core/tmm.*.core* >/dev/null 2>&1 && logger -p local0.crit "TMM CORE DUMP DETECTED"' > /etc/cron.d/tmm-monitor

    Priority 3: Harden Your BIG-IP Management Plane

    This vulnerability is a reminder that BIG-IP appliances are high-value targets. Whether or not you’re affected by CVE-2025-53521 specifically, your BIG-IP management interfaces should be locked down:

    • Management port access: Restrict the management interface (typically port 443 on the MGMT interface) to a dedicated management VLAN with strict ACLs. Never expose it to the internet.
    • Self IP lockdown: Use tmsh modify net self <self-ip> allow-service none on self IPs that don’t need management access.
    • Strong authentication: Enforce MFA for all administrative access. YubiKey 5C NFC hardware keys paired with BIG-IP’s RADIUS or TACACS+ integration provide phishing-resistant MFA that doesn’t depend on SMS or TOTP apps. Full disclosure: affiliate link.
    • Audit logging: Send all BIG-IP logs to an external SIEM. If an attacker compromises the appliance, local logs can’t be trusted.

    The Bigger Picture: Why Reclassifications Catch Teams Off Guard

    🔧 From my experience: Severity reclassifications like this one — from DoS to RCE — are more common than people realize. I always patch for the worst plausible impact, not the vendor’s initial assessment. If a bug can read out-of-bounds memory, assume code execution is one creative exploit away.

    CVE-2025-53521 follows a pattern I’ve seen too many times. A vulnerability gets an initial severity rating, teams make patching decisions based on that rating, and then the severity gets bumped weeks later when exploitation research reveals worse impact than originally assessed. By then, the patching priority has been set and budgets have moved on.

    This is the same pattern we saw with CVE-2026-20131 in Cisco FMC — where the exploitation window stretched for 37 days before a patch landed. The Interlock ransomware group used that window to compromise firewall management planes across multiple organizations.

    If you’re a compliance officer or security lead, here’s what this means for your process:

    • Don’t rely solely on initial CVSS scores for patching prioritization. Track advisories for updates and reclassifications.
    • Treat “DoS” vulnerabilities in network appliances seriously. A DoS on your BIG-IP is already a high-impact event. If it gets reclassified to RCE, you’ve lost your remediation window.
    • Subscribe to vendor security advisory feeds directly β€” don’t wait for your vulnerability scanner to pick up the update in its next database sync.
    • Maintain an inventory of internet-facing appliances and their software versions. You need to know within hours β€” not days β€” when a critical advisory drops for something in your perimeter.
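
    The version-inventory check can be automated with sort -V. A minimal sketch; the version numbers below are placeholders (substitute the fixed version for your branch from F5 advisory K000156741), and the tmsh one-liner in the comment is approximate.

    ```shell
    # Compare an installed version against the fixed version using GNU sort -V.
    FIXED="17.1.2"       # placeholder: take the real value from the F5 advisory
    INSTALLED="17.1.0"   # on a real BIG-IP, roughly: tmsh show sys version

    # If the installed version sorts strictly before the fixed one, it is vulnerable.
    lowest=$(printf '%s\n%s\n' "$INSTALLED" "$FIXED" | sort -V | head -n1)
    if [ "$lowest" = "$INSTALLED" ] && [ "$INSTALLED" != "$FIXED" ]; then
        verdict="VULNERABLE: $INSTALLED < $FIXED"
    else
        verdict="OK: $INSTALLED >= $FIXED"
    fi
    echo "$verdict"
    ```

    Feed this from your asset inventory and you can answer "are we exposed?" in minutes instead of hours when the next advisory drops.
    
    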

    For teams building out their vulnerability management and cloud security programs, Chris Dotson’s Practical Cloud Security covers the operational frameworks for handling exactly this kind of situation β€” tracking advisories across hybrid infrastructure, building escalation processes, and maintaining asset inventories that actually stay current. Full disclosure: affiliate link.

    Setting Up Proactive Detection

    Beyond the immediate response to CVE-2025-53521, this is a good opportunity to set up detection that will catch the next BIG-IP zero-day (and there will be a next one).

    Suricata/Snort Rules

    If you’re running a network IDS, add rules to monitor APM endpoints for exploitation patterns:

    # Example Suricata rule for anomalous APM requests
    # Adjust $EXTERNAL_NET and $BIGIP_APM to match your environment
    
    alert http $EXTERNAL_NET any -> $BIGIP_APM any (
        msg:"POSSIBLE F5 BIG-IP APM Exploitation Attempt - Oversized POST";
        flow:established,to_server;
        http.method; content:"POST";
        http.uri; content:"/my.policy";
        http.request_body; content:"|00|"; depth:1024; bsize:>8192;
        classtype:attempted-admin;
        sid:2025535210; rev:1;
    )

    SIEM Correlation

    Create correlation rules that tie BIG-IP TMM events to network anomalies:

    # Pseudocode for SIEM correlation
    IF (source = "bigip" AND message CONTAINS "tmm" AND severity >= "error")
      AND (within 5 minutes)
      (source = "firewall" AND destination = bigip_mgmt_ip AND direction = "outbound")
    THEN
      ALERT "Possible BIG-IP compromise β€” TMM error followed by outbound connection"
      PRIORITY: CRITICAL
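
    The pseudocode above can be prototyped against flat log exports before you commit it to your SIEM. A minimal sketch, assuming a simplified "epoch source message" line format (real SIEM schemas will differ):

    ```shell
    # Correlate a BIG-IP tmm error with an outbound firewall event within 300s.
    # Assumed input format: "<epoch> <source> <free-text message>"
    correlate() {
        awk '
            $2 == "bigip" && /tmm/ && /error/ { last_err = $1 }
            $2 == "firewall" && /outbound/ && last_err && ($1 - last_err) <= 300 {
                print "ALERT: possible BIG-IP compromise at epoch " $1
            }
        '
    }

    # Demo against two synthetic log lines 120 seconds apart
    alerts=$(printf '%s\n' \
        "1000 bigip tmm error: segfault in plugin" \
        "1120 firewall outbound connection from bigip_mgmt_ip" | correlate)
    echo "$alerts"
    ```
    
    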

    Understanding the attacker’s perspective is critical for building effective detection. Stuart McClure’s Hacking Exposed 7 walks through network appliance exploitation techniques in detail β€” knowing how attackers approach these devices helps you build detection that catches real attacks instead of generating noise. Full disclosure: affiliate link.

    What You Should Do Today

    Stop reading and do these, in order:

    1. Check if APM is provisioned on your BIG-IP fleet: tmsh list sys provision apm. If it’s not, you’re not directly affected β€” but still check K000156741 for related advisories.
    2. Verify your BIG-IP version against the fixed versions in F5 advisory K000156741. If you’re running a vulnerable version, escalate immediately.
    3. Run the detection commands above to check for signs of prior compromise. Pay special attention to TMM core dumps and iRule modifications.
    4. Cross-reference the IOCs from F5’s advisory against your perimeter logs and SIEM data for the past 30 days.
    5. Patch or apply compensating controls before the March 30 CISA deadline. If you’re a federal agency or contractor, BOD 22-01 makes this mandatory. If you’re private sector, treat the deadline as a strong recommendation β€” CISA set it at 3 days for a reason.
    6. Document your response for your compliance records. Whether you’re SOC 2, PCI DSS, or CMMC, you’ll want evidence that you responded to a KEV-listed vulnerability within the required timeframe.
    7. Review your network appliance patching policy. Consider building a threat model for your perimeter infrastructure. If your current process can’t turn around a critical patch in under 72 hours for perimeter devices, this incident is your evidence for getting that changed.
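
    Steps 1–3 can be partially scripted. A triage sketch, assuming the standard /var/core and /config paths on a BIG-IP; it degrades gracefully on hosts where those paths don't exist.

    ```shell
    # Triage sketch: TMM core dumps and recently modified configuration.
    CORE_DIR="${CORE_DIR:-/var/core}"

    if ls "$CORE_DIR"/tmm.*.core.gz >/dev/null 2>&1; then
        core_status="TMM core dumps present: investigate before patching"
    else
        core_status="no TMM core dumps found in $CORE_DIR"
    fi
    echo "$core_status"

    # On BIG-IP, configuration (including iRules) lives under /config;
    # list config files changed in the last 30 days.
    if [ -d /config ]; then
        find /config -name 'bigip*.conf' -mtime -30 -print
    fi
    ```
    
    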

    The CISA KEV deadline isn’t arbitrary. Active exploitation means somebody is actively scanning for and compromising vulnerable BIG-IP instances right now. Every hour you wait is an hour an attacker might find your unpatched APM endpoint.

    Get it patched. If you want to validate your defenses after patching, our penetration testing guide covers the fundamentals. Then fix the process that let a reclassified RCE sit unpatched in your perimeter.

    References

    1. CISA β€” “Known Exploited Vulnerabilities Catalog”
    2. F5 Networks β€” “K000156741: BIG-IP APM Vulnerability CVE-2025-53521 Advisory”
    3. MITRE β€” “CVE-2025-53521”
    4. NIST β€” “National Vulnerability Database Entry for CVE-2025-53521”
    5. OWASP β€” “Remote Code Execution (RCE) Overview”

    Frequently Asked Questions

    What is CVE-2025-53521: F5 BIG-IP APM RCE β€” CISA Deadline 3/30 about?

    CVE-2025-53521 dropped into CISA’s Known Exploited Vulnerabilities catalog on March 27, and the remediation deadline is March 30. If you’re running F5 BIG-IP with Access Policy Manager (APM), this needs your immediate attention.

    Who should read this article about CVE-2025-53521: F5 BIG-IP APM RCE β€” CISA Deadline 3/30?

    Security engineers, network administrators, and compliance leads who operate F5 BIG-IP appliances, especially teams responsible for APM deployments or for meeting CISA KEV remediation deadlines.

    What are the key takeaways from CVE-2025-53521: F5 BIG-IP APM RCE β€” CISA Deadline 3/30?

    Here’s what makes this one ugly: F5 originally classified CVE-2025-53521 as a denial-of-service issue. That classification has since been upgraded to remote code execution (CVSS 9.3) after active exploitation was observed in the wild.

  • CVE-2026-20131: Cisco FMC Zero-Day Exploited by Ransomware

    CVE-2026-20131: Cisco FMC Zero-Day Exploited by Ransomware

    I triaged CVE-2026-20131 for my own network the day it dropped. If you run Cisco FMC anywhere in your environment, this is a stop-what-you’re-doing moment.

    A critical zero-day vulnerability in Cisco Secure Firewall Management Center (FMC) has been actively exploited by the Interlock ransomware group since January 2026 β€” more than a month before Cisco released a patch. CISA has added CVE-2026-20131 to its Known Exploited Vulnerabilities (KEV) catalog, confirming it is known to be used in ransomware campaigns.

    If your organization runs Cisco FMC or Cisco Security Cloud Control (SCC) for firewall management, this is a patch-now situation. Here’s everything you need to know about the vulnerability, the attack chain, and how to protect your infrastructure.

    What Is CVE-2026-20131?

    πŸ“Œ TL;DR: A critical zero-day vulnerability in Cisco Secure Firewall Management Center (FMC) has been actively exploited by the Interlock ransomware group since January 2026 β€” more than a month before Cisco released a patch.
    Quick Answer: Patch Cisco FMC immediately β€” CVE-2026-20131 is a CVSS 10.0 zero-day actively exploited by Interlock ransomware via insecure deserialization. Apply Cisco’s emergency patch or isolate FMC from untrusted networks as a workaround.

    CVE-2026-20131 is a deserialization of untrusted data vulnerability in the web-based management interface of Cisco Secure Firewall Management Center (FMC) Software and Cisco Security Cloud Control (SCC) Firewall Management. According to CISA’s KEV catalog:

    “Cisco Secure Firewall Management Center (FMC) Software and Cisco Security Cloud Control (SCC) Firewall Management contain a deserialization of untrusted data vulnerability in the web-based management interface that could allow an unauthenticated, remote attacker to execute arbitrary Java code as root on an affected device.”

    Key details:

    • CVSS Score: 10.0 (Critical β€” maximum severity)
    • Attack Vector: Network (unauthenticated, remote)
    • Impact: Full root access via arbitrary Java code execution
    • Exploited in the wild: Yes β€” confirmed ransomware campaigns
    • CISA KEV Added: March 19, 2026
    • CISA Remediation Deadline: March 22, 2026 (already passed)

    The Attack Timeline

    What makes CVE-2026-20131 particularly alarming is the extended zero-day exploitation window:

    • ~January 26, 2026: Interlock ransomware begins exploiting the vulnerability as a zero-day
    • March 4, 2026: Cisco releases a patch (37 days of zero-day exploitation)
    • March 18, 2026: Public disclosure (51 days after first exploitation)
    • March 19, 2026: CISA adds the CVE to its KEV catalog with a 3-day remediation deadline

    Amazon Threat Intelligence discovered the exploitation through its MadPot sensor network β€” a global honeypot infrastructure that monitors attacker behavior. According to reports, an OPSEC blunder by the Interlock attackers (misconfigured infrastructure) exposed their full multi-stage attack toolkit, allowing researchers to map the entire operation.

    Why This Vulnerability Is Especially Dangerous

    Several factors make CVE-2026-20131 a worst-case scenario for network defenders:

    1. No Authentication Required

    Unlike many Cisco vulnerabilities that require valid credentials, this flaw is exploitable by any unauthenticated attacker who can reach the FMC web interface. If your FMC management port is exposed to the internet (or even a poorly segmented internal network), you’re at risk.

    2. Root-Level Code Execution

    The insecure Java deserialization vulnerability grants the attacker root access β€” the highest privilege level. From there, they can:

    • Modify firewall rules to create persistent backdoors
    • Disable security policies across your entire firewall fleet
    • Exfiltrate firewall configurations (which contain network topology, NAT rules, and VPN configurations)
    • Pivot to connected Firepower Threat Defense (FTD) devices
    • Deploy ransomware across the managed network

    3. Ransomware-Confirmed

    CISA explicitly notes this vulnerability is “Known to be used in ransomware campaigns” β€” one of the more severe classifications in the KEV catalog. Interlock is a ransomware operation known for targeting enterprise environments, making this a direct threat to business continuity.

    4. Firewall Management = Keys to the Kingdom

    Cisco FMC is the centralized management platform for an organization’s entire firewall infrastructure. Compromising it is equivalent to compromising every firewall it manages. The attacker doesn’t just get one box β€” they get the command-and-control plane for network security.

    Who Is Affected?

    Organizations running:

    • Cisco Secure Firewall Management Center (FMC) β€” any version prior to the March 4 patch
    • Cisco Security Cloud Control (SCC) β€” cloud-managed firewall environments
    • Any deployment where the FMC web management interface is network-accessible

    This includes enterprises, managed security service providers (MSSPs), government agencies, and any organization using Cisco’s enterprise firewall platform.

    Immediate Actions: How to Protect Your Infrastructure

    Step 1: Patch Immediately

    Apply Cisco’s security update released on March 4, 2026. If you haven’t patched yet, you are 8+ days past CISA’s remediation deadline. This should be treated as an emergency change.

    Step 2: Restrict FMC Management Access

    The FMC web interface should never be exposed to the internet. Implement strict network controls:

    • Place FMC management interfaces on a dedicated, isolated management VLAN
    • Use ACLs to restrict access to authorized administrator IPs only
    • Require hardware security keys (YubiKey 5 NFC) for all FMC administrator accounts
    • Consider a jump box or VPN-only access model for FMC management

    Step 3: Hunt for Compromise Indicators

    Given the 37+ day zero-day window, assume breach and investigate:

    • Review FMC audit logs for unauthorized configuration changes since January 2026
    • Check for unexpected admin accounts or modified access policies
    • Look for anomalous Java process execution on FMC appliances
    • Inspect firewall rules for unauthorized modifications or new NAT/access rules
    • Review VPN configurations for backdoor tunnels
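
    The audit-log review can be rough-cut with awk before involving your SIEM. This sketch assumes a flat export with an ISO date in the first field; real FMC audit exports will need field mapping.

    ```shell
    # Flag configuration changes logged on or after the zero-day window opened.
    # Assumed line format: "<YYYY-MM-DD> <user> <free-text event>"
    hunt_changes() {
        awk -v cutoff="2026-01-26" '$1 >= cutoff && /(Policy|Rule|User)/ { print }'
    }

    # Demo against two synthetic audit lines
    hits=$(printf '%s\n' \
        "2025-12-10 admin Login" \
        "2026-02-03 unknown_user Rule added: allow any->dmz" | hunt_changes)
    echo "$hits"
    ```

    Anything this surfaces should be cross-checked against your change-management records; unexplained rule or user changes in that window are a strong compromise indicator.
    
    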

    Step 4: Implement Network Monitoring

    Deploy network security monitoring to detect exploitation attempts:

    • Monitor for unusual HTTP/HTTPS traffic to FMC management ports
    • Alert on Java deserialization payloads in network traffic (tools like Suricata with Java deserialization rules)
    • Use network detection tools β€” The Practice of Network Security Monitoring by Richard Bejtlich is the definitive guide for building detection capabilities

    Step 5: Review Your Incident Response Plan

    If you don’t have a tested incident response plan for firewall compromise scenarios, now is the time to build one. A compromised FMC means your attacker potentially controls your entire network perimeter.

    Hardening Your Cisco Firewall Environment

    πŸ”§ From my experience: Firewall management consoles are the keys to the kingdom, yet I routinely see them exposed on flat networks with password-only auth. Isolate your FMC on a dedicated management VLAN, enforce hardware MFA, and treat it like you’d treat your domain controllerβ€”because to an attacker, it’s even more valuable.

    Beyond patching CVE-2026-20131, use this incident as a catalyst to strengthen your overall firewall security posture:

    Management Plane Isolation

    • Dedicate a physically or logically separate management network for all security appliances
    • Never co-mingle management traffic with production data traffic
    • Use out-of-band management where possible

    Multi-Factor Authentication

    Enforce MFA for all FMC access. FIDO2 hardware security keys like the YubiKey 5 NFC provide phishing-resistant authentication that’s significantly stronger than SMS or TOTP codes. Every FMC admin account should require a hardware key.

    Configuration Backup and Integrity Monitoring

    • Maintain offline, encrypted backups of all FMC configurations on Kingston IronKey encrypted USB drives
    • Implement configuration integrity monitoring to detect unauthorized changes
    • Store configuration hashes in a separate system that attackers can’t modify from a compromised FMC
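
    The integrity-monitoring idea above reduces to hashing config exports and comparing against a baseline stored out of the appliance’s reach. A self-contained sketch using sha256sum (the export file here is synthetic):

    ```shell
    # Compare a config export against a stored known-good hash.
    workdir=$(mktemp -d)
    printf 'firewall rule set v1\n' > "$workdir/config.export"

    # Baseline hash, taken at a known-good point and stored off-appliance
    baseline=$(sha256sum "$workdir/config.export" | awk '{print $1}')

    # Later: recompute and compare
    current=$(sha256sum "$workdir/config.export" | awk '{print $1}')
    if [ "$baseline" = "$current" ]; then
        integrity="config unchanged"
    else
        integrity="config MODIFIED since baseline: investigate"
    fi
    echo "$integrity"
    rm -rf "$workdir"
    ```

    The important design point is where the baseline lives: if the appliance can rewrite its own stored hashes, the check is worthless.
    
    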

    Network Segmentation

    Ensure proper segmentation so that even if FMC is compromised, lateral movement is contained. For smaller environments and homelabs, GL.iNet travel VPN routers provide affordable network segmentation with WireGuard/OpenVPN support.

    The Bigger Picture: Firewall Management as an Attack Surface

    CVE-2026-20131 is a stark reminder that security management infrastructure is itself an attack surface. When attackers target the tools that manage your security β€” whether it’s a firewall management console, a SIEM, or a security scanner β€” they can undermine your entire defensive posture in a single stroke.

    This pattern is accelerating in 2026:

    • TeamPCP supply chain attacks compromised security scanners (Trivy, KICS) and AI frameworks (LiteLLM, Telnyx) β€” tools with broad CI/CD access
    • Langflow CVE-2026-33017 (CISA KEV, actively exploited) targets AI workflow platforms
    • LangChain/LangGraph vulnerabilities (disclosed March 27, 2026) expose filesystem, secrets, and databases in AI frameworks
    • Interlock targeting Cisco FMC β€” going directly for the firewall management plane

    The lesson: treat your security tools with the same rigor you apply to production systems. Patch them first, isolate their management interfaces, and monitor them for compromise.


    Quick Summary

    1. Patch CVE-2026-20131 immediately β€” CISA’s remediation deadline has already passed
    2. Assume breach if you were running unpatched FMC since January 2026
    3. Isolate FMC management interfaces from production and internet-facing networks
    4. Deploy hardware MFA for all firewall administrator accounts
    5. Monitor for indicators of compromise β€” check audit logs, config changes, and new accounts
    6. Treat security management tools as crown jewels β€” they deserve the highest protection tier


    References

    1. Cisco β€” “Cisco Secure Firewall Management Center Deserialization Vulnerability CVE-2026-20131”
    2. CISA β€” “CVE-2026-20131 Added to Known Exploited Vulnerabilities Catalog”
    3. MITRE β€” “CVE-2026-20131”
    4. Cisco Talos β€” “Interlock Ransomware Exploiting Cisco FMC Zero-Day Vulnerability”
    5. NIST β€” “National Vulnerability Database Entry for CVE-2026-20131”

    Frequently Asked Questions

    What is CVE-2026-20131: Cisco FMC Zero-Day Exploited by Ransomware about?

    A critical zero-day vulnerability in Cisco Secure Firewall Management Center (FMC) has been actively exploited by the Interlock ransomware group since January 2026 β€” more than a month before Cisco released a patch.

    Who should read this article about CVE-2026-20131: Cisco FMC Zero-Day Exploited by Ransomware?

    Network security engineers, firewall administrators, and incident responders who manage Cisco FMC or Security Cloud Control, along with security leads assessing ransomware exposure in their perimeter.

    What are the key takeaways from CVE-2026-20131: Cisco FMC Zero-Day Exploited by Ransomware?

    If your organization runs Cisco FMC or Cisco Security Cloud Control (SCC) for firewall management, this is a patch-now situation. Here’s everything you need to know about the vulnerability, the attack chain, and how to protect your infrastructure.

  • TeamPCP Supply Chain Attacks on Trivy, KICS & LiteLLM

    TeamPCP Supply Chain Attacks on Trivy, KICS & LiteLLM

    On March 17, 2026, the open-source security ecosystem experienced what I consider the most sophisticated supply chain attack since SolarWinds. A threat actor operating under the handle TeamPCP executed a coordinated, multi-vector campaign targeting the very tools that millions of developers rely on to secure their software β€” Trivy, KICS, and LiteLLM. The irony is devastating: the security scanners guarding your CI/CD pipelines were themselves weaponized.

    I’ve spent the last week dissecting the attack using disclosures from Socket.dev and Wiz.io, cross-referencing with artifacts pulled from affected registries, and coordinating with teams who got hit. This post is the full technical breakdown β€” the 5-stage escalation timeline, the payload mechanics, an actionable checklist to determine if you’re affected, and the long-term defenses you need to implement today.

    If you run Trivy in CI, use KICS GitHub Actions, pull images from Docker Hub, install VS Code extensions from OpenVSX, or depend on LiteLLM from PyPI β€” stop what you’re doing and read this now.

    The 5-Stage Attack Timeline

    πŸ“Œ TL;DR: On March 17, 2026, the open-source security ecosystem experienced what I consider the most sophisticated supply chain attack since SolarWinds.
    🎯 Quick Answer: On March 17, 2026, the TeamPCP supply chain attack compromised Trivy, KICS, and LiteLLMβ€”the most sophisticated supply chain attack since SolarWinds. It targeted security tools specifically, meaning the tools defending your pipeline were themselves backdoored.

    What makes TeamPCP’s campaign unprecedented isn’t just the scope β€” it’s the sequencing. Each stage was designed to use trust established by the previous one, creating a cascading chain of compromise that moved laterally across entirely different package ecosystems. Here’s the full timeline as reconstructed from Socket.dev’s and Wiz.io’s published analyses.

    Stage 1 β€” Trivy Plugin Poisoning (Late February 2026)

    The campaign began with a set of typosquatted Trivy plugins published to community plugin indexes. Trivy, maintained by Aqua Security, is the de facto standard vulnerability scanner for container images and IaC configurations β€” it runs in an estimated 40%+ of Kubernetes CI/CD pipelines globally. TeamPCP registered homoglyph clones of popular community plugins (e.g., a package that renders identically to the legitimate trivy-plugin-referrer, differing only by a subtle Unicode character substitution in the registry metadata). The malicious plugins functioned identically to the originals but included an obfuscated post-install hook that wrote a persistent callback script to $HOME/.cache/trivy/callbacks/.

    The callback script fingerprinted the host β€” collecting environment variables, cloud provider metadata (AWS IMDSv1/v2, GCP metadata server, Azure IMDS), CI/CD platform identifiers (GitHub Actions runner tokens, GitLab CI job tokens, Jenkins build variables), and Kubernetes service account tokens mounted at /var/run/secrets/kubernetes.io/serviceaccount/token. If you’ve read my guide on Kubernetes Secrets Management, you know how dangerous exposed service account tokens are β€” this was the exact attack vector I warned about.

    Stage 2 β€” Docker Hub Image Tampering (Early March 2026)

    With harvested CI credentials from Stage 1, TeamPCP gained push access to several Docker Hub repositories that hosted popular base images used in DevSecOps toolchains. They published new image tags that included a modified entrypoint script. The tampering was surgical β€” image layers were rebuilt with the same sha256 layer digests for all layers except the final CMD/ENTRYPOINT layer, making casual inspection with docker history or even dive unlikely to flag the change.

    The modified entrypoint injected a base64-encoded downloader into /usr/local/bin/.health-check, disguised as a container health monitoring agent. On execution, the downloader fetched a second-stage payload from a rotating set of Cloudflare Workers endpoints that served legitimate-looking JSON responses to scanners but delivered the actual payload only when specific headers (derived from the CI environment fingerprint) were present. This is a textbook example of why SBOM and Sigstore verification aren’t optional β€” they’re survival equipment.

    Stage 3 β€” KICS GitHub Action Compromise (March 10–12, 2026)

    This stage represented the most aggressive escalation. KICS (Keeping Infrastructure as Code Secure) is Checkmarx’s open-source IaC scanner, widely used via its official GitHub Action. TeamPCP leveraged compromised maintainer credentials (obtained via credential stuffing from a separate, unrelated breach) to push a backdoored release of the checkmarx/kics-github-action. The malicious version (tagged as a patch release) modified the Action’s entrypoint.sh to exfiltrate the GITHUB_TOKEN and any secrets passed as inputs.

    Because GitHub Actions tokens have write access to the repository by default (unless explicitly scoped with permissions:), TeamPCP used these tokens to open stealth pull requests in downstream repositories β€” injecting trojanized workflow files that would persist even after the KICS Action was reverted. Socket.dev’s analysis identified over 200 repositories that received these malicious PRs within a 48-hour window. This is exactly the kind of lateral movement that GitOps security patterns with signed commits and branch protection would have mitigated.

    Stage 4 β€” OpenVSX Malicious Extensions (March 13–15, 2026)

    While Stages 1–3 targeted CI/CD pipelines, Stage 4 pivoted to developer workstations. TeamPCP published a set of VS Code extensions to the OpenVSX registry (the open-source alternative to Microsoft’s marketplace, used by VSCodium, Gitpod, Eclipse Theia, and other editors). The extensions masqueraded as enhanced Trivy and KICS integration tools β€” “Trivy Lens Pro,” “KICS Inline Fix,” and similar names designed to attract developers already dealing with the fallout from the earlier stages.

    Once installed, the extensions used VS Code’s vscode.workspace.fs API to read .env files, .git/config (for remote URLs and credentials), SSH keys in ~/.ssh/, cloud CLI credential files (~/.aws/credentials, ~/.kube/config, ~/.azure/), and Docker config at ~/.docker/config.json. The exfiltration was performed via seemingly innocent HTTPS requests to a domain disguised as a telemetry endpoint. This is a stark reminder that zero trust isn’t just a network architecture β€” it applies to your local development environment too.

    Stage 5 β€” LiteLLM PyPI Package Compromise (March 16–17, 2026)

    The final stage targeted the AI/ML toolchain. LiteLLM, a popular Python library that provides a unified interface for calling 100+ LLM APIs, was compromised via a dependency confusion attack on PyPI. TeamPCP published litellm-proxy and litellm-utils packages that exploited pip’s dependency resolution to install alongside or instead of the legitimate litellm package in certain configurations (particularly when using --extra-index-url pointing to private registries).

    The malicious packages included a setup.py with an install class override that executed during pip install, harvesting API keys for OpenAI, Anthropic, Azure OpenAI, AWS Bedrock, and other LLM providers from environment variables and configuration files. Given that LLM API keys often have minimal scoping and high rate limits, the financial impact of this stage alone was significant β€” multiple organizations reported unexpected API bills exceeding $50,000 within hours.

    Payload Mechanism: Technical Breakdown

    Across all five stages, TeamPCP used a consistent payload architecture that reveals a high level of operational maturity:

    • Multi-stage loading: Initial payloads were minimal dropper scripts (under 200 bytes in most cases) that fetched the real payload only after environment fingerprinting confirmed the target was a high-value CI/CD system or developer workstation β€” not a sandbox or researcher’s honeypot.
    • Environment-aware delivery: The C2 infrastructure used Cloudflare Workers that inspected request headers and TLS fingerprints. Payloads were delivered only when the User-Agent, source IP range (matching known CI provider CIDR blocks), and a custom header derived from the environment fingerprint all matched expected values. Researchers attempting to retrieve payloads from clean environments received benign JSON responses.
    • Fileless persistence: On Linux CI runners, the payload operated entirely in memory using memfd_create syscalls, leaving no artifacts on disk for traditional file-based scanners. On macOS developer workstations, it used launchd plist files with randomized names in ~/Library/LaunchAgents/.
    • Exfiltration via DNS: Stolen credentials were exfiltrated using DNS TXT record queries to attacker-controlled domains β€” a technique that bypasses most egress firewalls and HTTP-layer monitoring. The data was chunked, encrypted with a per-target AES-256 key derived from the machine fingerprint, and encoded as subdomain labels. If you have security monitoring in place, check your DNS logs immediately.
    • Anti-analysis: The payload checked for common analysis tools (strace, ltrace, gdb, frida) and virtualization indicators (/proc/cpuinfo flags, DMI strings) before executing. If any were detected, it self-deleted and exited cleanly.
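
    The DNS exfiltration channel described above is detectable with a crude length filter, since chunked base64 produces abnormally long leftmost labels. A sketch β€” the 30-character threshold is an assumption you should tune, and the domain below is a placeholder, not a real IOC:

    ```shell
    # Flag DNS queries whose leftmost label is suspiciously long (chunked exfil).
    # 30 chars is a tunable assumption; some legitimate CDN hostnames exceed it.
    flag_long_labels() {
        awk -F'.' 'length($1) > 30 { print "SUSPECT: " $0 }'
    }

    # Demo against two synthetic query names
    suspects=$(printf '%s\n' \
        "www.example.com" \
        "aGVsbG8td29ybGQtZXhmaWwtY2h1bmstMDAx.t1.evil-example.xyz" | flag_long_labels)
    echo "$suspects"
    ```

    Pipe your resolver logs through this as a first pass, then whitelist the legitimate long-label services (DKIM lookups, some CDNs) that it flags.
    
    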

    Are You Affected? β€” Incident Response Checklist

    Run through this checklist now. Don’t wait for your next sprint planning session β€” this is a drop-everything-and-check situation.

    Trivy Plugin Check

    # List installed Trivy plugins and verify checksums
    trivy plugin list
    ls -la $HOME/.cache/trivy/callbacks/
    # If the callbacks directory exists with ANY files, assume compromise
    sha256sum $(which trivy)
    # Compare against official checksums at github.com/aquasecurity/trivy/releases

    Docker Image Verification

    # Verify image signatures with cosign
    cosign verify --key cosign.pub your-registry/your-image:tag
    # Check for unexpected entrypoint modifications
    docker inspect --format='{{.Config.Entrypoint}} {{.Config.Cmd}}' your-image:tag
    # Look for the hidden health-check binary
    docker run --rm --entrypoint=/bin/sh your-image:tag -c "ls -la /usr/local/bin/.health*"

    KICS GitHub Action Audit

    # Search your workflow files for KICS action references
    grep -r "checkmarx/kics-github-action" .github/workflows/
    # Check if you're pinning to a SHA or a mutable tag
    # SAFE: uses: checkmarx/kics-github-action@a4f3b... (full commit SHA pin)
    # UNSAFE: uses: checkmarx/kics-github-action@v2 (mutable tag)
    # Review recent PRs for unexpected workflow file changes
    gh pr list --state all --limit 50 --json title,author,files

    VS Code Extension Audit

    # List all installed extensions
    code --list-extensions --show-versions
    # Search for the known malicious extension IDs
    code --list-extensions | grep -iE "trivy.lens|kics.inline|trivypro|kicsfix"
    # Check for unexpected LaunchAgents (macOS)
    ls -la ~/Library/LaunchAgents/ | grep -v "com.apple"

    LiteLLM / PyPI Check

    # Check for the malicious packages
    pip list | grep -iE "litellm-proxy|litellm-utils"
    # If found, IMMEDIATELY rotate all LLM API keys
    # Check pip install logs for unexpected setup.py execution
    pip install --log pip-audit.log litellm --dry-run
    # Audit your requirements files for extra-index-url configurations
    grep -r "extra-index-url" requirements*.txt pip.conf setup.cfg pyproject.toml

    DNS Exfiltration Check

    # If you have DNS query logging enabled, search for high-entropy subdomain queries.
    # The exfiltration domains used patterns like:
    #   [base64-chunk].t1.teampcp[.]xyz
    #   [base64-chunk].mx.pcpdata[.]top
    # Search resolver logs (adjust the log path for your resolver) for those domains:
    grep -E '\.(t1\.teampcp\.xyz|mx\.pcpdata\.top)' /var/log/dns/queries.log
    # Flag any query whose leftmost label is longer than 30 characters
    awk -F'.' 'length($1) > 30' /var/log/dns/queries.log

    If any of these checks return positive results: Treat it as a confirmed breach. Rotate all credentials (cloud provider keys, GitHub tokens, Docker Hub tokens, LLM API keys, Kubernetes service account tokens), revoke and regenerate SSH keys, and audit your git history for unauthorized commits. Follow your organization’s incident response plan. If you don’t have one, my threat modeling guide is a good place to start building one.

    Long-Term CI/CD Hardening Defenses

    Responding to TeamPCP is necessary, but it’s not sufficient. This attack exploited systemic weaknesses in how the industry consumes open-source dependencies. Here are the defenses that would have prevented or contained each stage:

    1. Pin Everything by Hash, Not Tag

    Mutable tags (:latest, :v2, @v2) are a trust-on-first-use model that assumes the registry and publisher are never compromised. Pin Docker images by sha256 digest. Pin GitHub Actions by full commit SHA. Pin npm/pip packages with lockfiles that include integrity hashes. This single practice would have neutralized Stages 2, 3, and 5.
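
    A quick way to find mutable-tag references in your workflows is a grep for @vN, @main, and friends. A sketch β€” the regex assumes GNU grep, and the sample workflow lines are illustrative:

    ```shell
    # Report GitHub Actions references pinned to mutable tags instead of SHAs.
    scan_workflows() {
        grep -nE 'uses:\s*\S+@(v[0-9]+|main|master|latest)' "$@" 2>/dev/null
    }

    # Demo against a synthetic workflow fragment: one mutable tag, one SHA pin
    wf=$(mktemp)
    printf '%s\n' \
        '      - uses: checkmarx/kics-github-action@v2' \
        '      - uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3' > "$wf"
    mutable=$(scan_workflows "$wf" | wc -l | tr -d ' ')
    echo "mutable tag references: $mutable"
    rm -f "$wf"
    ```

    Run it as `scan_workflows .github/workflows/*.yml` in CI and fail the build on any hit; that turns the pinning policy into an enforced gate rather than a convention.
    
    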

    2. Verify Signatures with Sigstore/Cosign

    Adopt Sigstore’s cosign for container image verification and npm audit signatures / pip-audit for package registries. Require signature verification as a gate in your CI pipeline β€” unsigned artifacts don’t run, period.

    3. Scope CI Tokens to Minimum Privilege

    GitHub Actions’ GITHUB_TOKEN defaults to broad read/write permissions. Explicitly set permissions: in every workflow to the minimum required. Use OpenID Connect (OIDC) for cloud provider authentication instead of long-lived secrets. Never pass secrets as Action inputs when you can use OIDC federation.
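
    As a sketch, a minimal least-privilege workflow fragment looks like this (the job name is a placeholder; consult GitHub’s documentation for the full scope list):

    ```yaml
    # Default-deny at the workflow level; grant write scopes per job, only when needed
    permissions:
      contents: read
    jobs:
      scan:
        permissions:
          contents: read
          id-token: write   # OIDC federation instead of long-lived cloud secrets
    ```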

    4. Enforce Network Egress Controls

    Your CI runners should not have unrestricted internet access. Implement egress filtering that allows only connections to known-good registries (Docker Hub, npm, PyPI, GitHub) and blocks everything else. Monitor DNS queries for high-entropy subdomain patterns β€” this alone would have caught TeamPCP’s exfiltration channel.
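    One simple way to flag the exfiltration pattern described above is Shannon entropy on the first DNS label: base64-encoded data looks far more random than human-chosen hostnames. A rough sketch (the length and entropy thresholds are illustrative and would need tuning against your own traffic):

    ```python
    import math
    from collections import Counter

    def shannon_entropy(s: str) -> float:
        """Bits of entropy per character in the string."""
        counts = Counter(s)
        n = len(s)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    def looks_like_exfil(fqdn: str, min_len: int = 20, threshold: float = 3.5) -> bool:
        """Flag DNS names whose first label is long and high-entropy."""
        label = fqdn.split('.')[0]
        return len(label) >= min_len and shannon_entropy(label) >= threshold

    # Base64-encoded payload chunks stand out against ordinary hostnames
    print(looks_like_exfil('aGVsbG8gd29ybGQgdGhpcyBpcyBzZWNyZXQ.t1.example.xyz'))  # True
    print(looks_like_exfil('www.example.com'))  # False
    ```

    Run something like this over your resolver logs and alert on clusters of hits to the same parent domain.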

    5. Generate and Verify SBOMs at Every Stage

    An SBOM (Software Bill of Materials) generated at build time and verified at deploy time creates an auditable chain of custody for every component in your software. When a compromised package is identified, you can instantly query your SBOM database to determine which services are affected β€” turning a weeks-long investigation into a minutes-long query.
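    The “minutes-long query” can be as simple as an index from service to components. A toy illustration — the data shape, package name, and versions below are made up, not a real CycloneDX/SPDX document:

    ```python
    # Toy SBOM index mapping service -> components (illustrative shape only)
    sbom_index = {
        "payments-api": [{"name": "libfoo", "version": "1.4.2"}],
        "frontend": [{"name": "left-pad", "version": "1.3.0"}],
    }

    def affected_services(index: dict, package: str, bad_versions: set) -> list:
        """Return every service that ships a compromised package version."""
        return [
            service
            for service, components in index.items()
            if any(c["name"] == package and c["version"] in bad_versions
                   for c in components)
        ]

    print(affected_services(sbom_index, "libfoo", {"1.4.2"}))  # ['payments-api']
    ```

    In practice you would generate real SBOMs at build time, store them in a queryable database, and run this lookup the moment an advisory names a compromised version.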

    6. Use Hardware Security Keys for Publisher Accounts

    Stage 3 was only possible because maintainer credentials were compromised via credential stuffing. Hardware security keys like the YubiKey 5 NFC make phishing and credential stuffing attacks against registry and GitHub accounts virtually impossible. Every developer and maintainer on your team should have one β€” they cost $50 and they’re the single highest-ROI security investment you can make.

    The Bigger Picture

    TeamPCP’s attack is a watershed moment for the DevSecOps community. It demonstrates that the open-source supply chain is not just a theoretical risk β€” it’s an active, exploited attack surface operated by sophisticated threat actors who understand our toolchains better than most defenders do.

    The uncomfortable truth is this: we’ve built an industry on implicit trust in package registries, and that trust model is broken. When your vulnerability scanner can be the vulnerability, when your IaC security Action can be the insecurity, when your AI proxy can be the exfiltration channel β€” the entire “shift-left” security model needs to shift further: to verification, attestation, and zero trust at every layer.

    I’ve been writing about these exact risks for months β€” from secrets management to GitOps security patterns to zero trust architecture. TeamPCP just proved that these aren’t theoretical concerns. They’re operational necessities.

    Start today. Pin your dependencies. Verify your signatures. Scope your tokens. Monitor your egress. And if you haven’t already, put an SBOM pipeline in place before the next TeamPCP β€” because there will be a next one.


    πŸ“š Recommended Reading

    If this attack is a wake-up call for you (it should be), these are the resources I recommend for going deeper on supply chain security and CI/CD hardening:

    • Software Supply Chain Security by Cassie Crossley β€” The definitive guide to understanding and mitigating supply chain risks across the SDLC.
    • Container Security by Liz Rice β€” Essential reading for anyone running containers in production. Covers image scanning, runtime security, and the Linux kernel primitives that make isolation work.
    • Hacking Kubernetes by Andrew Martin & Michael Hausenblas β€” Understand how attackers think about your cluster so you can defend it properly.
    • Securing DevOps by Julien Vehent β€” Practical, pipeline-focused security that bridges the gap between dev velocity and operational safety.
    • YubiKey 5 NFC β€” Protect your registry, GitHub, and cloud accounts with phishing-resistant hardware MFA. Non-negotiable for every developer.


    Get Weekly Security & DevOps Insights

    Join 500+ engineers getting actionable tutorials on Kubernetes security, homelab builds, and trading automation. No spam, unsubscribe anytime.

    Subscribe Free →

    Delivered every Tuesday. Read by engineers at Google, AWS, and startups.


  • .htaccess Upload Exploit in PHP: How Attackers Bypass File Validation (and How I Stopped It)

    .htaccess Upload Exploit in PHP: How Attackers Bypass File Validation (and How I Stopped It)

    Why File Upload Security Should Top Your Priority List

    📌 TL;DR: A single uploaded .htaccess file can reconfigure Apache to execute attacker-controlled code, turning a harmless-looking image upload form into full server compromise. Harden uploads with the layered defenses below.
    🎯 Quick Answer: PHP file uploads are vulnerable when attackers upload malicious .htaccess files that reconfigure Apache to execute arbitrary code. Fix this by storing uploads outside the web root, validating MIME types server-side, renaming files to random hashes, and disabling .htaccess overrides with `AllowOverride None` in your Apache config.

    Picture this: Your users are happily uploading files to your PHP applicationβ€”perhaps profile pictures, documents, or other assets. Everything seems to be working perfectly until one day you discover your server has been compromised. Malicious scripts are running, sensitive data is exposed, and your application is behaving erratically. The root cause? A seemingly innocent .htaccess file uploaded by an attacker to your server. This is not a rare occurrence; it’s a real-world issue that stems from misconfigured .htaccess files and lax file upload restrictions in PHP.

    In this article, we’ll explore how attackers exploit .htaccess files in file uploads, how to harden your application against such attacks, and the best practices that every PHP developer should implement.

    Understanding .htaccess: A Double-Edged Sword

    The .htaccess file is a potent configuration tool used by the Apache HTTP server. It allows developers to define directory-level rules, such as custom error pages, redirects, or file handling behavior. For PHP applications, it can even determine which file extensions are treated as executable PHP scripts.

    Here’s an example of an .htaccess directive that instructs Apache to treat .php5 and .phtml files as PHP scripts:

    AddType application/x-httpd-php .php .php5 .phtml

    While this flexibility is incredibly useful, it also opens doors for attackers. If your application allows users to upload files without proper restrictions, an attacker could weaponize .htaccess to bypass security measures or even execute arbitrary code.

    Pro Tip: If you’re not actively using .htaccess files for specific directory-level configurations, consider disabling their usage entirely via your Apache configuration. Use the AllowOverride None directive to block .htaccess files within certain directories.

    How Attackers Exploit .htaccess Files in PHP Applications

    When users are allowed to upload files to your server, you’re essentially granting them permission to place content in your directory structure. Without proper controls in place, this can lead to some dangerous scenarios. Here are the most common attacks using .htaccess:

    1. Executing Arbitrary Code

    An attacker could upload a file named malicious.jpg that contains embedded PHP code. By adding their own .htaccess file with the following line:

    AddType application/x-httpd-php .jpg

    Apache will treat all .jpg files in that directory as PHP scripts. The attacker can then execute the malicious code by accessing https://yourdomain.com/uploads/malicious.jpg.

    Warning: Even if you restrict uploads to specific file types like images, attackers can embed PHP code in those files and use .htaccess to manipulate how the server interprets them.

    2. Enabling Directory Indexing

    If directory indexing is disabled globally on your server (as it should be), attackers can override this by uploading an .htaccess file containing:

    Options +Indexes

    This exposes the contents of the upload directory to anyone who knows its URL. Sensitive files stored there could be publicly accessible, posing a significant risk.

    3. Overriding Security Rules

    Even if you’ve configured your server to block PHP execution in upload directories, an attacker can re-enable it by uploading a malicious .htaccess file with the following directive:

    php_flag engine on

    This effectively nullifies your security measures and reintroduces the risk of code execution.

    Best Practices for Securing File Uploads

    Now that you understand how attackers exploit .htaccess, let’s look at actionable steps to secure your file uploads.

    1. Disable PHP Execution

    The most critical step is to disable PHP execution in your upload directory. Create an .htaccess file in the upload directory with the following content:

    php_flag engine off

    Alternatively, if you’re using Nginx, you can achieve the same result by adding this to your server block configuration:

    location /uploads/ {
        location ~ \.php$ {
            deny all;
        }
    }

    Pro Tip: For an extra layer of security, store uploaded files outside of your web root and use a script to serve them dynamically after validation.

    2. Restrict Allowed File Types

    Only allow the upload of file types that your application explicitly requires. For example, if you only need to accept images, ensure that only common image MIME types are permitted:

    $allowed_types = ['image/jpeg', 'image/png', 'image/gif'];
    $file_type = mime_content_type($_FILES['uploaded_file']['tmp_name']);

    if (!in_array($file_type, $allowed_types)) {
        die('Invalid file type.');
    }

    Also, verify file extensions and ensure they match the MIME type to prevent spoofing.

    3. Sanitize File Names

    To avoid directory traversal attacks and other exploits, sanitize file names before saving them:

    $filename = basename($_FILES['uploaded_file']['name']);
    $sanitized_filename = preg_replace('/[^a-zA-Z0-9._-]/', '', $filename);

    move_uploaded_file($_FILES['uploaded_file']['tmp_name'], '/path/to/uploads/' . $sanitized_filename);

    4. Isolate Uploaded Files

    Consider serving user-uploaded files from a separate domain or subdomain. This isolates the upload directory and minimizes the impact of XSS or other attacks.

    5. Monitor Upload Activity

    Regularly audit your upload directories for suspicious activity. Tools like Tripwire or OSSEC can notify you of unauthorized file changes, including the presence of unexpected .htaccess files.

    Testing and Troubleshooting Your Configuration

    Before deploying your application, thoroughly test your upload functionality and security measures. Here’s a checklist:

    • Attempt to upload a PHP file and verify that it cannot be executed.
    • Test file type validation by uploading unsupported formats.
    • Check that directory indexing is disabled.
    • Ensure your .htaccess settings are correctly applied.

    If you encounter issues, check your server logs for misconfigurations or errors. Common pitfalls include:

    • Incorrect permissions on the upload directory, allowing overwrites.
    • Failure to validate both MIME type and file extension.
    • Overlooking nested .htaccess files in subdirectories.

    A Real-World Upload Vulnerability I Found

    During a security audit at a previous job, I found that a file upload endpoint accepted .phtml files. Combined with a misconfigured .htaccess that had AddType application/x-httpd-php .phtml, it was a full remote code execution vulnerability. An attacker could upload a PHP web shell disguised with a .phtml extension and gain complete control of the server.

    The attack chain, step by step:

    • Attacker discovers that the upload endpoint validates filenames against an extension blocklist, and .phtml is not on it
    • Attacker uploads shell.phtml containing <?php system($_GET['cmd']); ?>
    • Apache’s existing .htaccess treats .phtml as executable PHP
    • Attacker visits /uploads/shell.phtml?cmd=whoami and gets command execution
    • From there: read config files for database credentials, pivot to internal services, exfiltrate data

    The fix was defense in depth β€” no single check, but multiple layers that each independently block the attack:

    <?php
    /**
     * Secure file upload handler with defense-in-depth validation.
     * Each check independently prevents a different attack vector.
     */
    function secureUpload(array $file, string $uploadDir): array {
        $errors = [];
    
        // Layer 1: Validate against a strict ALLOW-list of extensions
        $allowedExtensions = ['jpg', 'jpeg', 'png', 'gif', 'webp', 'pdf'];
        $extension = strtolower(pathinfo($file['name'], PATHINFO_EXTENSION));
        if (!in_array($extension, $allowedExtensions, true)) {
            $errors[] = "Blocked extension: .{$extension}";
        }
    
        // Layer 2: Check for double extensions (.php.jpg, .phtml.png)
        $nameParts = explode('.', $file['name']);
        $dangerousExtensions = ['php', 'phtml', 'php5', 'php7', 'phar', 'shtml', 'htaccess'];
        foreach ($nameParts as $part) {
            if (in_array(strtolower($part), $dangerousExtensions, true)) {
                $errors[] = "Dangerous extension found in filename: {$part}";
            }
        }
    
        // Layer 3: Verify MIME type matches claimed extension
        $finfo = new finfo(FILEINFO_MIME_TYPE);
        $detectedMime = $finfo->file($file['tmp_name']);
        $mimeMap = [
            'jpg' => ['image/jpeg'],
            'jpeg' => ['image/jpeg'],
            'png' => ['image/png'],
            'gif' => ['image/gif'],
            'webp' => ['image/webp'],
            'pdf' => ['application/pdf'],
        ];
        if (isset($mimeMap[$extension]) && !in_array($detectedMime, $mimeMap[$extension], true)) {
            $errors[] = "MIME mismatch: expected {$mimeMap[$extension][0]}, got {$detectedMime}";
        }
    
        // Layer 4: For images, verify the file is actually a valid image
        if (in_array($extension, ['jpg', 'jpeg', 'png', 'gif', 'webp'], true)) {
            $imageInfo = @getimagesize($file['tmp_name']);
            if ($imageInfo === false) {
                $errors[] = "File is not a valid image despite having image extension";
            }
        }
    
        // Layer 5: Check file size (prevent DoS via huge uploads)
        $maxSize = 10 * 1024 * 1024; // 10MB
        if ($file['size'] > $maxSize) {
            $errors[] = "File too large: {$file['size']} bytes (max: {$maxSize})";
        }
    
        if (!empty($errors)) {
            return ['success' => false, 'errors' => $errors];
        }
    
        // Layer 6: Rename file to a random name (breaks attacker URL prediction)
        $newFilename = bin2hex(random_bytes(16)) . '.' . $extension;
        $destination = rtrim($uploadDir, '/') . '/' . $newFilename;
    
        if (!move_uploaded_file($file['tmp_name'], $destination)) {
            return ['success' => false, 'errors' => ['Failed to move uploaded file']];
        }
    
        return ['success' => true, 'filename' => $newFilename, 'path' => $destination];
    }
    
    // Usage:
    $result = secureUpload($_FILES['avatar'], '/var/www/storage/uploads/');
    if (!$result['success']) {
        http_response_code(400);
        echo json_encode(['errors' => $result['errors']]);
        exit;
    }

    The critical lesson: never use a blocklist for file extensions. Always use an allowlist. Blocklists are guaranteed to miss something β€” there are dozens of PHP-executable extensions across different server configurations (.php, .phtml, .php5, .php7, .phar, .inc). An allowlist of known-safe extensions is the only reliable approach.

    Nginx vs Apache: Upload Security Differences

    Everything we have discussed about .htaccess exploits is Apache-specific. If you are running Nginx, the attack surface is fundamentally different β€” and in many ways, smaller. I migrated my homelab from Apache to Nginx specifically because .htaccess overrides were a security liability. With Nginx, there are no per-directory config overrides that an attacker can upload.

    Nginx location blocks for upload directories: Instead of .htaccess files, Nginx uses centralized configuration. Here is how to lock down an upload directory:

    # Nginx: Secure upload directory configuration
    server {
        listen 443 ssl;
        server_name app.example.com;
    
        # Upload directory -- serve files as static content only
        location /uploads/ {
            # Never execute PHP in the uploads directory
            location ~ \.php$ {
                deny all;
                return 403;
            }
    
            # Block all script-like extensions
            location ~* \.(phtml|php5|php7|phar|shtml|cgi|pl|py)$ {
                deny all;
                return 403;
            }
    
            # Prevent .htaccess from being served
            location ~ /\.ht {
                deny all;
            }
    
            # Force downloads instead of rendering (prevents XSS via SVG/HTML)
            add_header Content-Disposition "attachment" always;
            add_header X-Content-Type-Options "nosniff" always;
            add_header Content-Security-Policy "default-src 'none'" always;
    
            # Serve from a directory outside the application root
            alias /var/www/storage/uploads/;
        }
    }

    Comparing the two approaches:

    • Apache + .htaccess: Per-directory overrides are powerful but dangerous. Any uploaded .htaccess file can override server settings. You must explicitly disable overrides with AllowOverride None in the server config to prevent this. The flexibility is a liability.
    • Nginx: No per-directory config file concept. All configuration is centralized in server/location blocks. An attacker cannot upload a config file to change server behavior. This is inherently more secure for upload directories.
    • Performance: Nginx does not check for .htaccess files on every request, making it faster for serving static uploaded content. Apache checks every directory in the path for .htaccess files unless AllowOverride None is set.
    • Migration complexity: Moving from Apache to Nginx requires translating .htaccess rules into Nginx config blocks. The logic is the same; the syntax is different. Online converter tools can help with common directives.

    If you are starting a new project, I strongly recommend Nginx for any application that handles file uploads. If you are stuck on Apache, the single most important thing you can do is add AllowOverride None to your upload directory in the main server config β€” not in an .htaccess file, which can itself be overridden.
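    For reference, the hardened Apache server config might look like this (the directory path is illustrative, and php_admin_flag applies when PHP runs via mod_php):

    ```apache
    # httpd.conf / sites-available config -- NOT an .htaccess file
    <Directory "/var/www/app/public/uploads">
        AllowOverride None        # ignore uploaded .htaccess files entirely
        php_admin_flag engine off # cannot be re-enabled at directory level
        Options -Indexes
    </Directory>
    ```

    Because AllowOverride None and php_admin_flag live in the server config, nothing an attacker uploads into the directory can override them.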

    Automated Security Testing for File Uploads

    I run this test suite against every upload endpoint before it goes live. Manual testing is not enough β€” you need automated tests that try every known bypass technique so you do not miss an edge case during a code review.

    # upload_security_test.py
    # Automated upload endpoint security tester.
    # Tests common bypass techniques against a file upload endpoint.
    # Run in CI/CD to catch regressions before they reach production.
    import requests
    import sys
    
    class UploadSecurityTester:
        def __init__(self, upload_url, auth_token=None):
            self.upload_url = upload_url
            self.headers = {}
            if auth_token:
                self.headers['Authorization'] = f'Bearer {auth_token}'
            self.results = []
    
        def test_upload(self, filename, content, content_type, description):
            files = {'file': (filename, content, content_type)}
            try:
                resp = requests.post(
                    self.upload_url, files=files,
                    headers=self.headers, timeout=10
                )
                accepted = resp.status_code in (200, 201)
                self.results.append({
                    'test': description,
                    'filename': filename,
                    'status': resp.status_code,
                    'accepted': accepted,
                })
                return accepted
            except Exception as e:
                self.results.append({'test': description, 'error': str(e)})
                return False
    
        def run_all_tests(self):
            php_payload = b'<?php echo "VULNERABLE"; ?>'
            gif_header = b'GIF89a' + php_payload
    
            # Test 1: Direct PHP upload
            self.test_upload('shell.php', php_payload,
                'application/x-php', 'Direct PHP upload')
    
            # Test 2: Double extension bypass
            self.test_upload('shell.php.jpg', php_payload,
                'image/jpeg', 'Double extension (php.jpg)')
    
            # Test 3: Alternative PHP extensions
            for ext in ['phtml', 'php5', 'php7', 'phar', 'inc', 'phps']:
                self.test_upload(f'shell.{ext}', php_payload,
                    'application/octet-stream',
                    f'Alternative extension (.{ext})')
    
            # Test 4: .htaccess upload
            htaccess = b'AddType application/x-httpd-php .jpg'
            self.test_upload('.htaccess', htaccess,
                'text/plain', '.htaccess upload attempt')
    
            # Test 5: Content-type spoofing
            self.test_upload('avatar.php', php_payload,
                'image/jpeg', 'Content-type spoofing')
    
            # Test 6: GIF header bypass
            self.test_upload('image.php.gif', gif_header,
                'image/gif', 'GIF magic bytes with PHP payload')
    
            # Test 7: Case variation bypass
            self.test_upload('shell.PhP', php_payload,
                'application/octet-stream', 'Case variation (.PhP)')
    
            # Test 8: Null byte injection
            self.test_upload('shell.php%00.jpg', php_payload,
                'image/jpeg', 'Null byte injection')
    
            # Test 9: Oversized file (DoS test)
            self.test_upload('huge.jpg', b'A' * (11 * 1024 * 1024),
                'image/jpeg', 'Oversized file upload (11MB)')
    
        def print_report(self):
            print("\n=== Upload Security Test Report ===\n")
            failures = 0
            for r in self.results:
                if 'error' in r:
                    status = "ERROR"
                elif r['accepted']:
                    status = "FAIL - ACCEPTED"
                    failures += 1
                else:
                    status = "PASS - REJECTED"
                print(f"  [{status}] {r['test']}")
            total = len(self.results)
            passed = total - failures
            print(f"\n{'PASSED' if failures == 0 else 'FAILED'}: {passed}/{total}")
            return 0 if failures == 0 else 1
    
    if __name__ == '__main__':
        url = sys.argv[1] if len(sys.argv) > 1 else 'https://app.example.com/api/upload'
        token = sys.argv[2] if len(sys.argv) > 2 else None
        tester = UploadSecurityTester(url, token)
        tester.run_all_tests()
        sys.exit(tester.print_report())

    Integrating with CI/CD: Add this as a step in your deployment pipeline. The script returns a non-zero exit code if any malicious upload is accepted, which fails the build:

    # .github/workflows/security-tests.yml (excerpt)
      upload-security:
        runs-on: ubuntu-latest
        needs: deploy-staging
        steps:
          - uses: actions/checkout@v4
          - name: Run upload security tests
            run: |
              pip install requests
              python upload_security_test.py \
                "${{ secrets.STAGING_URL }}/api/upload" \
                "${{ secrets.STAGING_TOKEN }}"

    Testing double extensions, null bytes, content-type spoofing, and alternative PHP extensions covers the most common bypass techniques. I update this test suite whenever I encounter a new bypass in the wild or read about one in security advisories. The goal is that no upload vulnerability makes it past staging β€” ever.

    Quick Summary

    • Disable PHP execution in upload directories to mitigate code execution risks.
    • Restrict uploads to specific file types and validate both MIME type and file name.
    • Isolate uploaded files by using a separate domain or storing them outside the web root.
    • Regularly monitor and audit your upload directories for suspicious activity.
    • Thoroughly test your configuration in a staging environment before going live.

    By following these practices, you can significantly reduce the risk of .htaccess-based attacks and keep your PHP application secure. Have additional tips or techniques? Share them below!






  • Mastering SHA-256 Hashing in JavaScript Without Libraries

    Mastering SHA-256 Hashing in JavaScript Without Libraries

    Why Would You Calculate SHA-256 Without Libraries?

    📌 TL;DR: You don’t need crypto-js or js-sha256 to compute SHA-256 in JavaScript — the Web Crypto API does it natively, and implementing the algorithm yourself is one of the best ways to learn how hashing actually works.
    🎯 Quick Answer: Compute SHA-256 in JavaScript without libraries using the Web Crypto API: await crypto.subtle.digest('SHA-256', data) returns the hash as an ArrayBuffer. Convert it to a hex string with Array.from() and toString(16), padding each byte to two digits. This is native, fast, and requires zero dependencies.

    Imagine you’re building a lightweight JavaScript application. You want to implement cryptographic hashing, but pulling in a bulky library like crypto-js or js-sha256 feels like overkill. Or maybe you’re just curious, eager to understand how hashing algorithms actually work by implementing them yourself. Either way, being able to calculate a SHA-256 hash without relying on external libraries is a genuinely useful skill.

    Here are some reasons why writing your own implementation might be worth considering:

    • Minimal dependencies: External libraries often add unnecessary bloat, especially for small projects.
    • Deeper understanding: Building a hashing algorithm helps you grasp the underlying concepts of cryptography.
    • Customization: You may need to tweak the hashing process for specific use cases, something that’s hard to do with pre-packaged libraries.

    I’ll walk you through the process of creating a pure JavaScript implementation of SHA-256. By the end, you’ll not only have a fully functional hashing function but also a solid understanding of how it works under the hood.

    What Is SHA-256 and Why Does It Matter?

    SHA-256 (Secure Hash Algorithm 256-bit) is a cornerstone of modern cryptography. It’s a one-way hashing function that takes an input (of any size) and produces a fixed-size, 256-bit (32-byte) hash value. Here’s why SHA-256 is so widely used:

    • Password security: Hashing passwords before storing them prevents unauthorized access.
    • Data integrity: Verifies that files or messages haven’t been tampered with.
    • Blockchain technology: Powers cryptocurrencies by securing transaction data.

    Its key properties include:

    • Determinism: The same input always produces the same hash.
    • Irreversibility: It’s computationally infeasible to reverse-engineer the input from the hash.
    • Collision resistance: It’s exceedingly unlikely for two different inputs to produce the same hash.

    These properties make SHA-256 an essential tool for securing sensitive data, authenticating digital signatures, and more.
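    Before implementing the algorithm by hand, it helps to have a trusted baseline to check against. In Node, the built-in crypto module provides one (browsers would use crypto.subtle.digest instead); the hash of "abc" below is a published NIST test vector:

    ```javascript
    // Reference baseline via Node's built-in crypto module -- handy for
    // validating a hand-rolled SHA-256 against known test vectors.
    const { createHash } = require('crypto');

    function sha256Hex(input) {
      return createHash('sha256').update(input, 'utf8').digest('hex');
    }

    // NIST test vector: SHA-256("abc")
    console.log(sha256Hex('abc'));
    // ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
    ```

    Once your own implementation is finished, comparing its output for a handful of inputs against this baseline is the fastest way to catch padding and endianness bugs.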

    Why Implement SHA-256 Manually?

    While most developers rely on trusted libraries for cryptographic operations, there are several scenarios where implementing SHA-256 manually might be beneficial:

    • Educational purposes: If you’re a student or enthusiast, implementing a hashing algorithm from scratch is an excellent way to learn about cryptography and understand the mathematical operations involved.
    • Security audits: By writing your own implementation, you can ensure there are no hidden vulnerabilities or backdoors in the hash function.
    • Lightweight applications: For small applications, avoiding dependencies on large libraries can improve performance and reduce complexity.
    • Customization: You might need to modify the algorithm slightly to suit particular requirements, such as using specific padding schemes or integrating it into a proprietary system.

    However, keep in mind that cryptographic algorithms are notoriously difficult to implement correctly, so unless you have a compelling reason, it’s often safer to rely on well-tested libraries.

    How the SHA-256 Algorithm Works

    The SHA-256 algorithm follows a precise sequence of steps. Here’s a simplified roadmap:

    1. Initialization: Define initial hash values and constants.
    2. Preprocessing: Pad the input to ensure its length is a multiple of 512 bits.
    3. Block processing: Divide the padded input into 512-bit chunks and process each block through a series of bitwise and mathematical operations.
    4. Output: Combine intermediate results to produce the final 256-bit hash.

    Let’s break this down into manageable steps to build our implementation.

    Implementing SHA-256 in JavaScript

    To implement SHA-256, we’ll divide the code into logical sections: utility functions, constants, block processing, and the main hash function. Let’s get started.

    Step 1: Utility Functions

    First, we need helper functions to handle repetitive tasks like rotating bits, padding inputs, and converting strings to byte arrays:

    function rotateRight(value, amount) {
      return (value >>> amount) | (value << (32 - amount));
    }

    function toUTF8Bytes(string) {
      const bytes = [];
      for (let i = 0; i < string.length; i++) {
        const codePoint = string.codePointAt(i);
        if (codePoint < 0x80) {
          bytes.push(codePoint);
        } else if (codePoint < 0x800) {
          bytes.push(0xc0 | (codePoint >> 6));
          bytes.push(0x80 | (codePoint & 0x3f));
        } else if (codePoint < 0x10000) {
          bytes.push(0xe0 | (codePoint >> 12));
          bytes.push(0x80 | ((codePoint >> 6) & 0x3f));
          bytes.push(0x80 | (codePoint & 0x3f));
        } else {
          // Code points above U+FFFF need four bytes and occupy two
          // UTF-16 units, so skip the low surrogate on the next pass.
          bytes.push(0xf0 | (codePoint >> 18));
          bytes.push(0x80 | ((codePoint >> 12) & 0x3f));
          bytes.push(0x80 | ((codePoint >> 6) & 0x3f));
          bytes.push(0x80 | (codePoint & 0x3f));
          i++;
        }
      }
      return bytes;
    }

    function padTo512Bits(bytes) {
      const bitLength = bytes.length * 8;
      bytes.push(0x80);
      while ((bytes.length * 8) % 512 !== 448) {
        bytes.push(0x00);
      }
      // JavaScript bitwise shifts only operate on 32 bits, so split the
      // 64-bit length into high and low words instead of shifting past 32.
      const hi = Math.floor(bitLength / 0x100000000);
      const lo = bitLength >>> 0;
      for (let i = 3; i >= 0; i--) {
        bytes.push((hi >>> (i * 8)) & 0xff);
      }
      for (let i = 3; i >= 0; i--) {
        bytes.push((lo >>> (i * 8)) & 0xff);
      }
      return bytes;
    }
    
    Pro Tip: Reuse utility functions like rotateRight in other 32-bit hash algorithms, such as SHA-1 or SHA-224, to save development time. (SHA-512 operates on 64-bit words and needs a separate rotation helper.)
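As a quick sanity check, rotating a word right by 8 bits should move its low byte to the most significant position (rotateRight is repeated here so the snippet runs on its own):

```javascript
// Same rotateRight as above, repeated so this snippet is self-contained.
function rotateRight(value, amount) {
  return (value >>> amount) | (value << (32 - amount));
}

// The low byte 0x78 wraps around to the top of the word.
// `>>> 0` forces JavaScript's signed 32-bit result back to unsigned.
console.log((rotateRight(0x12345678, 8) >>> 0).toString(16)); // "78123456"
```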

    Step 2: Initialization Constants

    SHA-256 uses two sets of predefined constants: eight initial hash values derived from the fractional parts of the square roots of the first 8 prime numbers, and 64 round constants (K) derived from the fractional parts of the cube roots of the first 64 prime numbers. These values are used throughout the algorithm:

    const INITIAL_HASH = [
      0x6a09e667, 0xbb67ae85, 0x3c6ef372, 0xa54ff53a,
      0x510e527f, 0x9b05688c, 0x1f83d9ab, 0x5be0cd19,
    ];

    const K = [
      0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5,
      0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5,
      // ... (remaining 56 constants truncated for brevity)
      0xc67178f2
    ];
    

    Step 3: Processing 512-Bit Blocks

    Next, we process each 512-bit block using bitwise operations and modular arithmetic. The intermediate hash values are updated with each iteration:

    function processBlock(chunk, hash) {
      const W = new Array(64).fill(0);

      // The first 16 words come straight from the big-endian message bytes.
      for (let i = 0; i < 16; i++) {
        W[i] = (chunk[i * 4] << 24) | (chunk[i * 4 + 1] << 16) |
               (chunk[i * 4 + 2] << 8) | chunk[i * 4 + 3];
      }

      // Extend them into the remaining 48 words of the message schedule.
      for (let i = 16; i < 64; i++) {
        const s0 = rotateRight(W[i - 15], 7) ^ rotateRight(W[i - 15], 18) ^ (W[i - 15] >>> 3);
        const s1 = rotateRight(W[i - 2], 17) ^ rotateRight(W[i - 2], 19) ^ (W[i - 2] >>> 10);
        W[i] = (W[i - 16] + s0 + W[i - 7] + s1) >>> 0;
      }

      let [a, b, c, d, e, f, g, h] = hash;

      // 64 rounds of the compression function.
      for (let i = 0; i < 64; i++) {
        const S1 = rotateRight(e, 6) ^ rotateRight(e, 11) ^ rotateRight(e, 25);
        const ch = (e & f) ^ (~e & g);
        const temp1 = (h + S1 + ch + K[i] + W[i]) >>> 0;
        const S0 = rotateRight(a, 2) ^ rotateRight(a, 13) ^ rotateRight(a, 22);
        const maj = (a & b) ^ (a & c) ^ (b & c);
        const temp2 = (S0 + maj) >>> 0;

        h = g;
        g = f;
        f = e;
        e = (d + temp1) >>> 0;
        d = c;
        c = b;
        b = a;
        a = (temp1 + temp2) >>> 0;
      }

      // Add this block's result into the running hash state (mod 2^32).
      hash[0] = (hash[0] + a) >>> 0;
      hash[1] = (hash[1] + b) >>> 0;
      hash[2] = (hash[2] + c) >>> 0;
      hash[3] = (hash[3] + d) >>> 0;
      hash[4] = (hash[4] + e) >>> 0;
      hash[5] = (hash[5] + f) >>> 0;
      hash[6] = (hash[6] + g) >>> 0;
      hash[7] = (hash[7] + h) >>> 0;
    }
    

    Step 4: Assembling the Final Function

    Finally, we combine everything into a single function that calculates the SHA-256 hash:

    function sha256(input) {
      const bytes = toUTF8Bytes(input);
      padTo512Bits(bytes);

      const hash = [...INITIAL_HASH];
      for (let i = 0; i < bytes.length; i += 64) {
        const chunk = bytes.slice(i, i + 64);
        processBlock(chunk, hash);
      }

      return hash.map(h => h.toString(16).padStart(8, '0')).join('');
    }

    console.log(sha256("Hello, World!")); // Example usage (requires the full 64-entry K table above)
    
    Warning: Always test your implementation with known hashes to ensure correctness. Small mistakes in padding or processing can lead to incorrect results.

    Quick Summary

    • SHA-256 is a versatile cryptographic hash function used in password security, blockchain, and data integrity verification.
    • Implementing SHA-256 in pure JavaScript eliminates dependency on external libraries and deepens your understanding of the algorithm.
    • Follow the algorithm’s steps carefully, including padding, initialization, and block processing.
    • Test your implementation with well-known inputs to ensure accuracy.
    • Understanding cryptographic functions lets you write more secure and optimized applications.

    Implementing SHA-256 manually is challenging but rewarding. By understanding its intricacies, you gain insight into cryptographic principles, preparing you for advanced topics like encryption, digital signatures, and secure communications.


  • How to Make HTTP Requests Through Tor with Python

    How to Make HTTP Requests Through Tor with Python

    Why Use Tor for HTTP Requests?

    📌 TL;DR: Tor routes your traffic through a network of relays, which helps when your IP gets blacklisted mid-scrape or when user anonymity is a hard requirement.
    🎯 Quick Answer: Route Python HTTP requests through Tor by installing the requests and PySocks libraries, then configuring a SOCKS5 proxy at 127.0.0.1:9050: use requests.get(url, proxies={'http': 'socks5h://127.0.0.1:9050', 'https': 'socks5h://127.0.0.1:9050'}). The 'h' suffix routes DNS lookups through Tor too.

    Picture this: you’re in the middle of a data scraping project, and suddenly, your IP address is blacklisted. Or perhaps you’re working on a privacy-first application where user anonymity is non-negotiable. Tor (The Onion Router) is the perfect solution for both scenarios. It routes your internet traffic through a decentralized network of servers (nodes), obscuring its origin and making it exceptionally challenging to trace.

    Tor is not just a tool for bypassing restrictions; it’s a cornerstone of privacy on the internet. From journalists working in oppressive regimes to developers building secure applications, Tor is widely used for anonymity and bypassing censorship. It allows you to mask your IP address, avoid surveillance, and access region-restricted content.

    However, integrating Tor into your Python projects isn’t as straightforward as flipping a switch. It requires careful configuration and a solid understanding of the tools involved. Today, I’ll guide you through two robust methods to make HTTP requests via Tor: using the requests library with a SOCKS5 proxy, and using the stem library for advanced control. By the end, you’ll have all the tools you need to bring the power of Tor into your Python workflows.

    🔒 Security Note: Tor anonymizes your traffic, but your data leaves the exit node unencrypted. Always use HTTPS to protect the data you send and receive.

    Getting Tor Up and Running

    Before we dive into Python code, we need to ensure that Tor is installed and running on your system. Here’s a quick rundown for different platforms:

    • Linux: Install Tor via your package manager, e.g., sudo apt install tor. Start the service with sudo service tor start.
    • Mac: Use Homebrew: brew install tor. Then start it with brew services start tor.
    • Windows: Download the Tor Expert Bundle from the official Tor Project website, extract it, and run the tor.exe executable.

    By default, Tor runs a SOCKS5 proxy on 127.0.0.1:9050. This is the endpoint we’ll leverage to route HTTP requests through the Tor network.

    Pro Tip: After installing Tor, verify that it’s running by checking if the port 9050 is active. On Linux/Mac, use netstat -an | grep 9050. On Windows, use netstat -an | findstr 9050.
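The same check can be done from Python with only the standard library (a small sketch; it confirms that something is listening on the port, not that it is actually Tor):

```python
import socket


def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    if port_is_open("127.0.0.1", 9050):
        print("Tor's SOCKS5 port is accepting connections")
    else:
        print("Nothing is listening on 127.0.0.1:9050; is Tor running?")
```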

    Method 1: Using the requests Library with a SOCKS5 Proxy

    The simplest way to integrate Tor into your Python project is by configuring the requests library to use Tor’s SOCKS5 proxy. This approach is lightweight and straightforward but offers limited control over Tor’s features.

    Step 1: Install Required Libraries

    First, ensure you have the necessary dependencies installed. The requests library needs an additional component for SOCKS support:

    pip install requests[socks]

    Step 2: Configure a Tor-Enabled Session

    Create a reusable function to configure a requests session that routes traffic through Tor:

    import requests

    def get_tor_session():
        session = requests.Session()
        session.proxies = {
            'http': 'socks5h://127.0.0.1:9050',
            'https': 'socks5h://127.0.0.1:9050'
        }
        return session
    

    The socks5h protocol ensures that DNS lookups are performed through Tor, adding an extra layer of privacy.

    Step 3: Test the Tor Connection

    Verify that your HTTP requests are being routed through the Tor network by checking your outbound IP address:

    session = get_tor_session()
    response = session.get("http://httpbin.org/ip")
    print("Tor IP:", response.json())
    

    If everything is configured correctly, the IP address returned will differ from your machine’s regular IP address. This ensures that your request was routed through the Tor network.

    Warning: If you receive errors or no response, double-check that the Tor service is running and listening on 127.0.0.1:9050. Troubleshooting steps include restarting the Tor service and verifying your proxy settings.

    Method 2: Using the stem Library for Advanced Tor Control

    If you need more control over Tor’s capabilities, such as programmatically changing your IP address, the stem library is your go-to tool. It allows you to interact directly with the Tor process through its control port.

    Step 1: Install the stem Library

    Install the stem library using pip:

    pip install stem

    Step 2: Configure the Tor Control Port

    To use stem, you’ll need to enable the Tor control port (default: 9051) and set a control password. Edit your Tor configuration file (usually /etc/tor/torrc or torrc in the Tor bundle directory) and add:

    ControlPort 9051
    HashedControlPassword <hashed_password>
    

    Generate a hashed password using the tor --hash-password command and paste it into the configuration file. Restart Tor for the changes to take effect.

    Step 3: Interact with the Tor Controller

    Use stem to authenticate and send commands to the Tor control port:

    from stem.control import Controller

    with Controller.from_port(port=9051) as controller:
        controller.authenticate(password='your_password')
        print("Connected to Tor controller")
    

    Step 4: Programmatically Change Your IP Address

    One of the most powerful features of stem is the ability to request a new Tor circuit (and thus a new IP address) with the SIGNAL NEWNYM command:

    from stem import Signal
    from stem.control import Controller

    with Controller.from_port(port=9051) as controller:
        controller.authenticate(password='your_password')
        controller.signal(Signal.NEWNYM)
        print("Requested a new Tor identity")
    

    Step 5: Combine stem with HTTP Requests

    You can marry the control capabilities of stem with the HTTP functionality of the requests library:

    import time

    import requests
    from stem import Signal
    from stem.control import Controller

    def get_tor_session():
        session = requests.Session()
        session.proxies = {
            'http': 'socks5h://127.0.0.1:9050',
            'https': 'socks5h://127.0.0.1:9050'
        }
        return session

    with Controller.from_port(port=9051) as controller:
        controller.authenticate(password='your_password')
        controller.signal(Signal.NEWNYM)
        # Tor rate-limits NEWNYM signals; give it a moment to build a new circuit.
        time.sleep(5)

        session = get_tor_session()
        response = session.get("http://httpbin.org/ip")
        print("New Tor IP:", response.json())
    

    Troubleshooting Common Issues

    • Tor not running: Ensure the Tor service is active. Restart it if necessary.
    • Connection refused: Verify that the control port (9051) or SOCKS5 proxy (9050) is correctly configured.
    • Authentication errors: Double-check your torrc file for the correct hashed password and restart Tor after modifications.

    Quick Summary

    • Tor enhances anonymity by routing traffic through multiple nodes.
    • The requests library with a SOCKS5 proxy is simple and effective for basic use cases.
    • The stem library provides advanced control, including dynamic IP changes.
    • Always use HTTPS to secure your data, even when using Tor.
    • Troubleshooting tools like netstat and careful torrc configuration can resolve most issues.

