Blog

  • I Built HashForge: A Privacy-First Hash Generator That Shows All Algorithms at Once

    I’ve been hashing things for years — verifying file downloads, generating checksums for deployments, creating HMAC signatures for APIs. And every single time, I end up bouncing between three or four browser tabs because no hash tool does everything I need in one place.

    So I built HashForge.

    The Problem with Existing Hash Tools

    Here’s what frustrated me about the current landscape. Most online hash generators force you to pick one algorithm at a time. Need MD5 and SHA-256 for the same input? That’s two separate page loads. Browserling’s tools, for example, have a different page for every algorithm — MD5 on one URL, SHA-256 on another, SHA-512 on yet another. You’re constantly copying, pasting, and navigating.

    Then there’s the privacy problem. Some hash generators process your input on their servers. For a tool that developers use with sensitive data — API keys, passwords, config files — that’s a non-starter. Your input should never leave your machine.

    And finally, most tools feel like they were built in 2010 and never updated. No dark mode, no mobile responsiveness, no keyboard shortcuts. They work, but they feel dated.

    What Makes HashForge Different

All algorithms at once. Type or paste text, and you instantly see MD5, SHA-1, SHA-256, SHA-384, and SHA-512 hashes side by side. No page switching, no dropdown menus. Every algorithm, every time, updated in real time as you type.

    Four modes in one tool. HashForge isn’t just a text hasher. It has four distinct modes:

    • Text mode: Real-time hashing as you type. Supports hex, Base64, and uppercase hex output.
• File mode: Drag-and-drop any file — PDFs, ISOs, executables, anything. The file never leaves your browser. There’s a progress indicator for large files, and even multi-gigabyte files are hashed entirely on your machine.
    • HMAC mode: Enter a secret key and message to generate HMAC signatures for SHA-1, SHA-256, SHA-384, and SHA-512. Essential for API development and webhook verification.
    • Verify mode: Paste two hashes and instantly compare them. Uses constant-time comparison to prevent timing attacks — the same approach used in production authentication systems.

    100% browser-side processing. Nothing — not a single byte — leaves your browser. HashForge uses the Web Crypto API for SHA algorithms and a pure JavaScript implementation for MD5 (since the Web Crypto API doesn’t support MD5). There’s no server, no analytics endpoint collecting your inputs, no “we process your data according to our privacy policy” fine print. Your data stays on your device, period.
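
For the SHA family, the hashing itself is only a few lines of Web Crypto. Here’s a minimal sketch (hashText and the hex conversion are illustrative helpers, not HashForge’s actual code):

// Minimal sketch: hashing text with the Web Crypto API.
async function hashText(text, algorithm = 'SHA-256') {
  const bytes = new TextEncoder().encode(text);          // UTF-8 encode
  const digest = await crypto.subtle.digest(algorithm, bytes);
  return [...new Uint8Array(digest)]                     // bytes → lowercase hex
    .map(b => b.toString(16).padStart(2, '0'))
    .join('');
}

hashText('hello').then(console.log);
// → 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824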

    Technical Deep Dive

    HashForge is a single HTML file — 31KB total with all CSS and JavaScript inline. Zero external dependencies. No frameworks, no build tools, no CDN requests. This means:

    • First paint under 100ms on any modern browser
    • Works offline after the first visit (it’s a PWA with a service worker)
    • No supply chain risk — there’s literally nothing to compromise

    The MD5 Challenge

    The Web Crypto API supports SHA-1, SHA-256, SHA-384, and SHA-512 natively, but not MD5. Since MD5 is still widely used for file verification (despite being cryptographically broken), I implemented it in pure JavaScript. The implementation handles the full MD5 specification — message padding, word array conversion, and all four rounds of the compression function.

    Is MD5 secure? No. Should you use it for passwords? Absolutely not. But for verifying that a file downloaded correctly? It’s fine, and millions of software projects still publish MD5 checksums alongside SHA-256 ones.

    Constant-Time Comparison

    The hash verification mode uses constant-time comparison. In a naive string comparison, the function returns as soon as it finds a mismatched character — which means comparing “abc” against “axc” is faster than comparing “abc” against “abd”. An attacker could theoretically use this timing difference to guess a hash one character at a time.

    HashForge’s comparison XORs every byte of both hashes and accumulates the result, then checks if the total is zero. The operation takes the same amount of time regardless of where (or whether) the hashes differ. This is the same pattern used in OpenSSL’s CRYPTO_memcmp and Node.js’s crypto.timingSafeEqual.
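
Here’s roughly what that looks like in JavaScript (a sketch of the XOR-and-accumulate pattern, not HashForge’s literal code):

// Constant-time comparison of two hex hash strings.
function constantTimeEqual(a, b) {
  if (a.length !== b.length) return false;   // lengths aren't secret here
  let diff = 0;
  for (let i = 0; i < a.length; i++) {
    diff |= a.charCodeAt(i) ^ b.charCodeAt(i); // record any mismatch, never exit early
  }
  return diff === 0;                          // zero only if every character matched
}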

    PWA and Offline Support

    HashForge registers a service worker that caches the page on first visit. After that, it works completely offline — no internet required. The service worker uses a network-first strategy: it tries to fetch the latest version, falls back to cache if you’re offline. This means you always get updates when connected, but never lose functionality when you’re not.
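
A minimal sketch of that network-first strategy, assuming a CACHE constant naming the cache (not HashForge’s exact worker):

self.addEventListener('fetch', e => {
  e.respondWith(
    fetch(e.request)
      .then(resp => {
        const clone = resp.clone();                            // responses are one-shot
        caches.open(CACHE).then(c => c.put(e.request, clone)); // refresh the cache
        return resp;
      })
      .catch(() => caches.match(e.request))                    // offline: serve cached copy
  );
});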

    Accessibility

    Every interactive element has proper ARIA attributes. The tab navigation follows the WAI-ARIA Tabs Pattern — arrow keys move between tabs, Home/End jump to first/last. There’s a skip-to-content link for screen reader users. All buttons have visible focus states. Keyboard shortcuts (Ctrl+1 through Ctrl+4) switch between modes.

    Real-World Use Cases

    1. Verifying software downloads. You download an ISO and the website provides a SHA-256 checksum. Drop the file into HashForge’s File mode, copy the SHA-256 output, paste it into Verify mode alongside the published checksum. Instant verification.

2. API webhook signature verification. Stripe, GitHub, and Slack all use HMAC-SHA256 to sign webhooks. When debugging webhook handlers, you can use HashForge’s HMAC mode to manually compute the expected signature and compare it against what you’re receiving. No need to write a throwaway script (see the sketch after this list).

    3. Generating content hashes for ETags. Building a static site? Hash your content to generate ETags for HTTP caching. Paste the content into Text mode, grab the SHA-256, and you have a cache key.

    4. Comparing database migration checksums. After running a migration, hash the schema dump and compare it across environments. HashForge’s Verify mode makes this a two-paste operation.

    5. Quick password hash lookups. Not for security — but when you’re debugging and need to quickly check if two plaintext values produce the same hash (checking for normalization issues, encoding problems, etc.).
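
For use case 2, the HMAC computation itself is a small amount of Web Crypto. A sketch (hmacSha256Hex is an illustrative helper, not HashForge’s code):

// Compute an HMAC-SHA256 signature as a hex string.
async function hmacSha256Hex(secret, message) {
  const enc = new TextEncoder();
  const key = await crypto.subtle.importKey(
    'raw', enc.encode(secret),
    { name: 'HMAC', hash: 'SHA-256' },
    false, ['sign']
  );
  const sig = await crypto.subtle.sign('HMAC', key, enc.encode(message));
  return [...new Uint8Array(sig)]
    .map(b => b.toString(16).padStart(2, '0'))
    .join('');
}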

    What I Didn’t Build

    I deliberately left out some features that other tools include:

    • No bcrypt/scrypt/argon2. These are password hashing algorithms, not general-purpose hash functions. They’re intentionally slow and have different APIs. Mixing them in would confuse the purpose of the tool.
    • No server-side processing. Some tools offer an “API” where you POST data and get hashes back. Why? The browser can do this natively.
    • No accounts or saved history. Hash a thing, get the result, move on. If you need to save it, copy it. Simple tools should be simple.

    Try It

    HashForge is free, open-source, and runs entirely in your browser. Try it at hashforge.orthogonal.info.

    If you find it useful, buy me a coffee — it helps me keep building privacy-first tools.

    For developers: the source is on GitHub. It’s a single HTML file, so feel free to fork it, self-host it, or tear it apart to see how it works.

    Looking for more browser-based dev tools? Check out QuickShrink (image compression), PixelStrip (EXIF removal), and TypeFast (text snippets). All free, all private, all single-file.

    Looking for a great mechanical keyboard to speed up your development workflow? I’ve been using one for years and the tactile feedback genuinely helps with coding sessions. The Keychron K2 is my daily driver — compact 75% layout, hot-swappable switches, and excellent build quality. Also worth considering: a solid USB-C hub makes the multi-monitor developer setup much cleaner.

  • I Built JSON Forge: A Privacy-First JSON Formatter That Runs Entirely in Your Browser

    Every developer works with JSON. APIs return it, configs use it, databases store it. And every developer has, at some point, pasted a giant blob of minified JSON into an online formatter and thought: “Wait, did I just send my API keys to some random server?”

    That’s why I built JSON Forge — a JSON formatter, validator, and explorer that processes everything in your browser. Zero data leaves your machine. No accounts, no cookies, no tracking pixels. Just you and your JSON.

    The Problem with Existing JSON Formatters

    I analyzed the top three JSON formatting tools on the web before building mine. Here’s what I found:

    • jsonformatter.org — Cluttered with ads, sluggish on large files, and the UI feels like it hasn’t been updated since 2015. Keyboard shortcuts? Forget about it.
    • jsonformatter.curiousconcept.com — Cleaner interface, but it sends your data to their server for processing. For formatting JSON. On a server. In 2026.
    • jsoneditoronline.org — Feature-rich but overwhelming. It wants you to create an account for basic features. The tree view is nice but the learning curve is steep.

    None of them hit the sweet spot: powerful enough for daily use, simple enough that you don’t need a tutorial, and private enough that you’d paste your production database config into it without hesitation.

    What JSON Forge Does Differently

    JSON Forge is a single HTML file. No npm, no build step, no framework, no dependencies. You can literally download it, disconnect from the internet, and it works perfectly. Here’s what makes it special:

    1. True Privacy by Architecture

This isn’t a marketing claim — it’s a technical guarantee. JSON Forge makes zero network requests for data processing. JSON.parse() and JSON.stringify() run in your browser’s JavaScript engine (V8, SpiderMonkey, or JavaScriptCore). Your data never touches a wire.

    As a Progressive Web App (PWA), you can install it and use it completely offline. The service worker caches all assets on first load.

    2. Keyboard-Driven Workflow

    I designed JSON Forge for developers who live in their keyboard. Here are the shortcuts:

    • Ctrl+Enter — Format/Beautify
    • Ctrl+Shift+M — Minify
    • Ctrl+Shift+C — Copy output to clipboard
    • Ctrl+F — Search within JSON
    • Ctrl+Shift+S — Sort keys alphabetically
    • Tab — Insert 2 spaces (proper indentation in the editor)
    • Escape — Close search/modals

    No mouse required. Paste your JSON, hit Ctrl+Enter, then Ctrl+Shift+C. Done in under a second.

    3. Dual View: Code + Tree

    Toggle between syntax-highlighted code view and an interactive tree view. The tree view is collapsible, and clicking any node reveals its JSONPath in the breadcrumb bar at the bottom. Click the path to copy it — incredibly useful when you need to reference a deeply nested field in code.

    Large nested objects (depth > 2 with more than 5 children) automatically collapse to keep the tree manageable. You can expand them with a single click.

    4. Smart Auto-Fix

    Hit the 🔧 Fix button and JSON Forge attempts to repair common mistakes:

• Trailing commas — {"a": 1,} → {"a": 1}
• Single quotes — {'name': 'test'} → {"name": "test"}
• Unquoted keys — {name: "test"} → {"name": "test"}

    These are the three most common copy-paste errors when moving JSON between JavaScript code and actual JSON. The fix is applied before parsing, so it handles the cases where JSON.parse() would normally throw.
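
A rough sketch of those three pre-parse fixes (deliberately simplified — these regexes don’t handle quotes nested inside strings, and they’re not JSON Forge’s exact implementation):

function autoFix(input) {
  return input
    .replace(/,\s*([}\]])/g, '$1')                            // trailing commas
    .replace(/'([^']*)'/g, '"$1"')                            // single → double quotes
    .replace(/([{,]\s*)([A-Za-z_$][\w$]*)\s*:/g, '$1"$2":');  // unquoted keys
}

JSON.parse(autoFix("{name: 'test',}")); // → { name: 'test' }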

    How It Works Under the Hood

    The entire app is a single HTML file (~38KB). Here’s the technical architecture:

    Syntax Highlighting

    I use regex-based highlighting on the already-formatted JSON string. Each line is processed to identify:

    • Keys (purple) — strings followed by a colon
    • String values (green) — quoted strings not followed by a colon
    • Numbers (orange) — numeric literals including scientific notation
    • Booleans (red) — true/false
    • Null (gray) — the null keyword

    For files larger than 500KB, I skip highlighting entirely and use textContent instead of innerHTML. This prevents the browser from choking on massive DOM trees. At 500KB, you’re looking at roughly 10,000+ lines — syntax highlighting becomes more of a liability than a feature.
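
Here’s the shape of that per-line pass — a sketch with illustrative class names, assuming the line has already been HTML-escaped. A single alternation keeps tokens from being re-matched inside earlier spans:

const TOKEN = /("(?:[^"\\]|\\.)*")(\s*:)?|-?\d+(?:\.\d+)?(?:[eE][+-]?\d+)?|true|false|null/g;

function highlightLine(line) {
  return line.replace(TOKEN, (match, str, colon) => {
    if (str) return `<span class="${colon ? 'key' : 'str'}">${str}</span>${colon || ''}`;
    if (match === 'true' || match === 'false') return `<span class="bool">${match}</span>`;
    if (match === 'null') return `<span class="null">${match}</span>`;
    return `<span class="num">${match}</span>`;  // numeric literal
  });
}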

    Tree View Rendering

    The tree is built recursively with document.createElement calls. Each node tracks its own collapsed state via closure variables — no external state management needed. The toggle event listeners use stopPropagation() to prevent click events from bubbling up and triggering parent node selections.

    I added an auto-collapse heuristic: nodes at depth > 2 with more than 5 children start collapsed. This keeps the initial tree render fast and prevents the user from being overwhelmed by deeply nested API responses.
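
In condensed form, the recursion looks something like this (names are illustrative, not the shipped code):

function buildNode(key, value, depth = 0) {
  const node = document.createElement('div');
  if (value !== null && typeof value === 'object') {
    const entries = Object.entries(value);
    let collapsed = depth > 2 && entries.length > 5;   // auto-collapse heuristic
    const label = document.createElement('span');
    label.textContent = `${key}: ${Array.isArray(value) ? '[…]' : '{…}'}`;
    const children = document.createElement('div');
    children.hidden = collapsed;
    label.addEventListener('click', e => {
      e.stopPropagation();                             // don't select the parent node
      collapsed = !collapsed;                          // state lives in the closure
      children.hidden = collapsed;
    });
    for (const [k, v] of entries) children.appendChild(buildNode(k, v, depth + 1));
    node.append(label, children);
  } else {
    node.textContent = `${key}: ${JSON.stringify(value)}`;
  }
  return node;
}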

    PWA / Service Worker

The service worker uses a cache-first strategy with network fallback. On install, it pre-caches the HTML, manifest, and itself. On subsequent requests, it serves from cache when possible and only hits the network for anything uncached, caching that response for next time. This means the app loads instantly even on slow connections — and works completely offline after the first visit.

    // Cache-first with network fallback
    self.addEventListener('fetch', e => {
      e.respondWith(
        caches.match(e.request).then(r =>
          r || fetch(e.request).then(resp => {
            const clone = resp.clone();
            caches.open(CACHE).then(c => c.put(e.request, clone));
            return resp;
          })
        )
      );
    });

    Resizable Panels

    The input/output panels are resizable via a draggable divider. On desktop, it’s a vertical split with horizontal dragging. On mobile (≤768px viewport), the layout switches to a vertical stack with a horizontal divider. The resize logic clamps the split between 20% and 80% to prevent either panel from becoming unusable.
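
The clamp itself is a one-liner inside the drag handler. A sketch of the desktop case, assuming divider and leftPanel element references:

divider.addEventListener('pointerdown', () => {
  const onMove = e => {
    const pct = (e.clientX / window.innerWidth) * 100;             // cursor position as %
    leftPanel.style.width = `${Math.min(80, Math.max(20, pct))}%`; // clamp to 20–80%
  };
  const onUp = () => {
    window.removeEventListener('pointermove', onMove);
    window.removeEventListener('pointerup', onUp);
  };
  window.addEventListener('pointermove', onMove);
  window.addEventListener('pointerup', onUp);
});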

    Performance Characteristics

    I tested JSON Forge with various file sizes:

    • 1KB — Parses in <1ms, renders instantly
    • 100KB — Parses in ~5ms, highlighting in ~20ms
    • 1MB — Parses in ~30ms, highlighting in ~200ms
    • 10MB — Parses in ~300ms, falls back to plain text (no highlighting)
    • 50MB — The maximum file size for drag-and-drop, parses in ~2s

    The 500KB threshold for disabling syntax highlighting was chosen empirically. Below that, the DOM manipulation for highlighting adds negligible time. Above it, we’re creating tens of thousands of <span> elements, which causes visible jank on mid-range devices.

    Design Decisions

    No dependencies. Not even for the UI. The CSS is custom-written using CSS custom properties for theming. Dark mode uses prefers-color-scheme: dark with a media query that swaps all color variables. Total CSS is ~200 lines.

    Single file. Everything — HTML, CSS, and JavaScript — lives in one file. This isn’t laziness; it’s intentional. A single file is trivially deployable, cacheable, and inspectable. You can curl it, save it, and you have the entire app.

    Debounced validation. The input textarea has a 300ms debounced validator that shows a green ✓ or red ✗ badge in the output header. This gives you instant feedback on whether your JSON is valid without the overhead of continuous parsing.
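
A sketch of that debounce, with illustrative element names:

let timer;
inputEl.addEventListener('input', () => {
  clearTimeout(timer);
  timer = setTimeout(() => {
    try {
      JSON.parse(inputEl.value);
      badgeEl.textContent = '✓';   // valid JSON
    } catch {
      badgeEl.textContent = '✗';   // parse error
    }
  }, 300);                         // wait for typing to pause
});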

    Drag and drop. Developers often have .json files on their desktop. Drop them directly onto the input area. The FileReader API reads the file as text, and auto-formatting kicks in.
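
The drop handler is equally small. A sketch (dropZone and formatInput are illustrative names):

dropZone.addEventListener('dragover', e => e.preventDefault()); // allow dropping
dropZone.addEventListener('drop', e => {
  e.preventDefault();
  const file = e.dataTransfer.files[0];
  if (!file) return;
  const reader = new FileReader();
  reader.onload = () => formatInput(reader.result);  // hand the text to the formatter
  reader.readAsText(file);
});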

    Try It

    JSON Forge is live at jsonforge.orthogonal.info. It’s also available as a PWA — hit the install button in your browser’s address bar to add it to your desktop or home screen.

    The source code is on GitHub. It’s part of our App Factory — a collection of privacy-first web tools that includes QuickShrink (image compression), PixelStrip (metadata removal), and TypeFast (typing trainer).

    If JSON Forge saves you time, consider buying me a coffee ☕ — it keeps the lights on and the tools free.

    What’s Next

    I’m considering a few enhancements for future versions:

    • JSON Schema validation — Paste a schema alongside your JSON and get inline validation errors
    • Diff mode — Compare two JSON objects side-by-side with highlighted differences
    • JMESPath / JSONPath querying — Filter and extract data from large JSON structures
    • JSON-to-TypeScript/Python/Go type generation — Auto-generate type definitions from your JSON data

    These would all remain 100% client-side. Privacy isn’t a feature — it’s the foundation.

    Happy formatting. 🔧

  • TeamPCP Supply Chain Attacks on Trivy, KICS, and LiteLLM — Full Timeline and How to Protect Your CI/CD Pipeline

    The Biggest Open Source Supply Chain Attack of 2026 Is Still Unfolding

    A threat actor calling themselves TeamPCP has launched a coordinated, multi-stage supply chain attack targeting open source security tools and developer infrastructure. Starting with Aqua Security’s Trivy vulnerability scanner, the campaign has since expanded to compromise Checkmarx’s KICS GitHub Action, OpenVSX extensions, and a trojanized release of LiteLLM on PyPI.

    If your CI/CD pipeline runs any of these tools, your secrets may already be exposed. Here is the complete timeline, technical breakdown, and the concrete steps you need to take right now.

    Why This Attack Matters

    This is not a random npm typosquatting campaign. TeamPCP is systematically targeting security scanners and CI/CD tools that sit inside enterprise pipelines with access to credentials, infrastructure secrets, and production environments.

These tools scan secrets, infrastructure, and code by design. When attackers compromise the scanners, and those scanners run inside enterprise environments, the attackers inherit that access — to banks, telecoms, and hospitals. They get the secrets and a direct view of where the weak points are.

    Complete Attack Timeline

    Stage 1: Trivy GitHub Actions Compromise (March 19-20)

    • TeamPCP compromised Aqua Security GitHub organization and modified tags in the trivy-action repository
    • Malicious commits were staged via imposter commits on forks, then tags were updated to point at the malicious code
    • The payload gathered environment variables, SSH keys, AWS credentials, and dumped CI runner process memory to carve secrets
    • Exfiltrated data was encrypted with an RSA public key and sent to attacker-controlled infrastructure

    Stage 2: Trivy Docker Hub Images (March 23)

    • Malicious Docker images 0.69.5 and 0.69.6 were pushed to Aqua Security Docker Hub
    • Root cause: incomplete secret rotation after the initial breach allowed re-entry

    Stage 3: KICS GitHub Action (March 23, 12:58-16:50 UTC)

    • Checkmarx KICS infrastructure-as-code scanner was compromised using the same technique
    • All 35 tags in the repository were updated to serve malicious code
    • The payload used a new exfiltration domain and added a Kubernetes-focused persistence mechanism
    • Compromise was achieved via the cx-plugins-releases service account

    Stage 4: OpenVSX Extensions (March 23)

    • Checkmarx OpenVSX extensions cx-dev-assist 1.7.0 and ast-results 2.53.0 were compromised
    • Any VS Code user pulling these extensions from OpenVSX was served malicious code

    Stage 5: LiteLLM on PyPI (March 24)

    • Trojanized versions 1.82.7 and 1.82.8 of the popular AI proxy library litellm were published to PyPI
    • Same exfiltration pattern but using a new domain
    • Quarantined by PyPI at 11:25 UTC, roughly 3 hours after publication

    Technical Breakdown: How the Payload Works

    The attack pattern is consistent across all targets:

    1. Initial access: Compromise a service account or maintainer token via credentials stolen in a prior stage
    2. Tag manipulation: Create imposter commits on forks, then update repository tags to point at them
    3. Secret harvesting: A setup script runs during CI, gathering environment variables, SSH keys, and cloud credentials
    4. Memory dumping: On GitHub-hosted runners, a Python script accesses process memory to dump Runner.Worker and extract secrets via regex
    5. Cloud metadata crawling: Queries AWS IMDS endpoints and Kubernetes API for service account tokens
    6. Encrypted exfiltration: All harvested data is RSA-encrypted and sent to attacker infrastructure, with GitHub repo creation as a fallback
    7. Persistence: Drops a follow-on Python payload for long-term access

    Are You Affected? How to Check

    Immediate Actions

    1. Audit your GitHub Actions workflows

    Search your repositories for any reference to aquasecurity/trivy-action, Checkmarx/kics-github-action, or Checkmarx/ast-github-action. If you were pinning to a tag rather than a commit SHA, you were vulnerable during the attack windows.

    2. Rotate ALL secrets exposed to CI

    If any of these tools ran in your pipelines during the attack windows, assume your CI/CD secrets are compromised. Rotate GitHub tokens, AWS access keys, Kubernetes service account tokens, Docker registry credentials, and any secrets passed as environment variables.

    3. Check Docker images

    If you pulled Trivy Docker images recently, verify you do not have versions 0.69.5 or 0.69.6 and remove them immediately.

    4. Check VS Code extensions

    If you use OpenVSX, check for cx-dev-assist 1.7.0 or ast-results 2.53.0 and remove them.

    5. Check Python dependencies

    If you use litellm, ensure you are not on version 1.82.7 or 1.82.8.

    Long-Term Defenses: Hardening Your Supply Chain

    Pin to Commit SHAs, Not Tags

    Tags can be repointed, and that is exactly what TeamPCP exploited. Always pin GitHub Actions to specific commit SHAs for immutable references.
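
In a workflow file, the difference is a single line. The SHA below is a placeholder for illustration, not a real trivy-action commit:

# Mutable: a tag can be repointed after the fact
uses: aquasecurity/trivy-action@0.28.0

# Immutable: pin the full commit SHA (placeholder shown)
uses: aquasecurity/trivy-action@2f77b3c0e9b2a1d4c5e6f708192a3b4c5d6e7f80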

    Implement SLSA Provenance Verification

    Use SBOM and Sigstore to verify the provenance of your dependencies. Software Bills of Materials let you track exactly what is in your supply chain, and Sigstore provides cryptographic signing to verify artifacts have not been tampered with.

    Use Allowlists for GitHub Actions

    GitHub Organizations can restrict which Actions are allowed to run. Set a strict allowlist of approved Actions and require SHA pinning for all of them.

    Network Segmentation for CI Runners

    Your CI runners should not have unfettered outbound network access. Implement Zero Trust networking for build environments. Block outbound connections except to known-good registries, monitor DNS queries for unusual domains, and use private registries instead of pulling directly from public sources.

    Short-Lived Credentials Only

    Never store long-lived secrets in CI. Use OIDC federation and short-lived tokens for cloud provider access. If a token is stolen, its blast radius is limited by its expiration time.

    Continuous Dependency Monitoring

    Do not wait for incidents to audit your dependencies. Use tools that continuously monitor for supply chain anomalies including unexpected version bumps, new maintainers, and suspicious code patterns.

    The Bigger Picture

    There is growing speculation about a possible connection between TeamPCP and the LAPSUS$ group, though this remains unconfirmed. The operational pattern is clear: compromise one tool, harvest credentials, use those credentials to compromise the next tool. It is a self-propagating worm through the open source ecosystem.

    The uncomfortable truth is that even security tools backed by well-funded commercial vendors are not immune. The lesson is not that these companies failed but that no single point of trust is sufficient.

    As threat modeling teaches us: every dependency is an attack surface. The tools meant to protect your supply chain are themselves part of the supply chain. Defense in depth is the only approach that works.

    Stay Updated

    This situation is still developing. TeamPCP has signaled they plan to continue targeting security tools. We will update this article as new information emerges.

    For daily security intelligence and breaking threat alerts, subscribe to Alpha Signal Pro for our daily newsletter covering supply chain security, market intelligence, and emerging threats.

    Last updated: March 24, 2026

  • Parsing EXIF Data From JPEG Files in the Browser With Zero Dependencies

    Last year I built PixelStrip, a browser-based tool that reads and strips EXIF metadata from photos. When I started, I assumed I’d pull in exifr or piexifjs and call it a day. Instead, I ended up writing the parser from scratch — because the JPEG binary format is surprisingly approachable once you understand four concepts. Here’s everything I learned.

    Why Parse EXIF Data in the Browser?

    Server-side EXIF parsing is trivial — exiftool handles everything. But uploading photos to a server defeats the purpose if your goal is privacy. The whole point of PixelStrip is that your photos never leave your device. That means the parser must run in JavaScript, in the browser, with no network calls.

    Libraries like exif-js (2.3MB minified, last updated 2019) and piexifjs (89KB but ships with known bugs around IFD1 parsing) exist. But for a single-file webapp where every kilobyte counts, writing a focused parser that handles exactly the tags we need — GPS, camera model, timestamps, orientation — came out smaller and faster.

    JPEG File Structure: The 60-Second Version

    A JPEG file is a sequence of markers. Each marker starts with 0xFF followed by a marker type byte. The ones that matter for EXIF:

    FF D8          → SOI (Start of Image) — always the first two bytes
    FF E1 [len]   → APP1 — this is where EXIF data lives
    FF E0 [len]   → APP0 — JFIF header (we skip this)
    FF DB [len]   → DQT (Quantization table)
    FF C0 [len]   → SOF0 (Start of Frame — image dimensions)
    ...
    FF D9          → EOI (End of Image)
    

    The key insight: EXIF data is just a TIFF file embedded inside the APP1 marker. Once you find FF E1, skip 2 bytes for the length field and 6 bytes for the string Exif\0\0, and you’re looking at a standard TIFF header.

    Step 1: Find the APP1 Marker

    Here’s how to locate it. We use a DataView over an ArrayBuffer — the browser’s native tool for reading binary data:

    function findAPP1(buffer) {
      const view = new DataView(buffer);
    
      // Verify JPEG magic bytes
      if (view.getUint16(0) !== 0xFFD8) {
        throw new Error('Not a JPEG file');
      }
    
      let offset = 2;
      while (offset < view.byteLength - 1) {
        const marker = view.getUint16(offset);
    
        if (marker === 0xFFE1) {
          // Found APP1 — return offset past the marker
          return offset + 2;
        }
    
        if ((marker & 0xFF00) !== 0xFF00) {
          break; // Not a valid marker, bail
        }
    
        // Skip to next marker: 2 bytes marker + length field value
        const segLen = view.getUint16(offset + 2);
        offset += 2 + segLen;
      }
    
      return -1; // No EXIF found
    }
    

    This runs in under 0.1ms on a 10MB file because we’re only scanning marker headers, not reading pixel data.

    Step 2: Parse the TIFF Header

    Inside APP1, after the Exif\0\0 prefix, you hit a TIFF header. The first two bytes tell you the byte order:

    • 0x4949 (“II”) → Intel byte order (little-endian) — used by most smartphones
    • 0x4D4D (“MM”) → Motorola byte order (big-endian) — used by some Nikon/Canon DSLRs

    This is the gotcha that trips up every first-time EXIF parser writer. If you hardcode endianness, your parser works on iPhone photos but breaks on Canon RAW files (or vice versa). You must pass the littleEndian flag to every DataView call:

    function parseTIFFHeader(view, tiffStart) {
      const byteOrder = view.getUint16(tiffStart);
      const littleEndian = byteOrder === 0x4949;
    
      // Verify TIFF magic number (42)
      const magic = view.getUint16(tiffStart + 2, littleEndian);
      if (magic !== 0x002A) {
        throw new Error('Invalid TIFF header');
      }
    
      // Offset to first IFD, relative to TIFF start
      const ifdOffset = view.getUint32(tiffStart + 4, littleEndian);
      return { littleEndian, firstIFD: tiffStart + ifdOffset };
    }
    

    Step 3: Walk the IFD (Image File Directory)

    An IFD is just a flat array of 12-byte entries. Each entry has:

    Bytes 0-1:  Tag ID (e.g., 0x0112 = Orientation)
    Bytes 2-3:  Data type (1=byte, 2=ASCII, 3=short, 5=rational...)
    Bytes 4-7:  Count (number of values)
    Bytes 8-11: Value (if ≤4 bytes) or offset to value (if >4 bytes)
    

    The tags we care about for privacy:

Tag ID   Name              Why It Matters
0x010F   Make              Device manufacturer
0x0110   Model             Exact phone/camera model
0x0112   Orientation       How to rotate the image
0x0132   DateTime          When photo was modified
0x8825   GPSInfoIFD        Pointer to GPS sub-IFD
0x9003   DateTimeOriginal  When photo was taken

    Here’s the IFD walker:

    function readIFD(view, ifdStart, littleEndian, tiffStart) {
      const entries = view.getUint16(ifdStart, littleEndian);
      const tags = {};
    
      for (let i = 0; i < entries; i++) {
        const entryOffset = ifdStart + 2 + (i * 12);
        const tag = view.getUint16(entryOffset, littleEndian);
        const type = view.getUint16(entryOffset + 2, littleEndian);
        const count = view.getUint32(entryOffset + 4, littleEndian);
    
        tags[tag] = readTagValue(view, entryOffset + 8,
          type, count, littleEndian, tiffStart);
      }
    
      return tags;
    }
    

    Step 4: Extract GPS Coordinates

    GPS data lives in its own sub-IFD, pointed to by tag 0x8825. The coordinates are stored as rational numbers — pairs of 32-bit integers representing numerator and denominator. Latitude 47° 36′ 22.8″ is stored as three rationals: 47/1, 36/1, 228/10.

    function readRational(view, offset, littleEndian) {
      const num = view.getUint32(offset, littleEndian);
      const den = view.getUint32(offset + 4, littleEndian);
      return den === 0 ? 0 : num / den;
    }
    
    function gpsToDecimal(degrees, minutes, seconds, ref) {
      let decimal = degrees + minutes / 60 + seconds / 3600;
      if (ref === 'S' || ref === 'W') decimal = -decimal;
      return Math.round(decimal * 1000000) / 1000000;
    }
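
For example, a latitude stored as the rationals 47/1, 36/1, 228/10 with reference 'N' decodes as gpsToDecimal(47, 36, 22.8, 'N'), which returns 47.606333.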
    

    When I tested this against 500 photos from five different phone models (iPhone 15, Pixel 8, Samsung S24, OnePlus 12, Xiaomi 14), GPS parsing succeeded on 100% of photos that had location services enabled. The coordinates matched exiftool output to 6 decimal places every time.

    Step 5: Strip It All Out

    Stripping EXIF is conceptually simpler than reading it. You have two options:

    1. Nuclear option: Remove the entire APP1 segment. Copy bytes before FF E1, skip the segment, copy everything after. Result: zero metadata, ~15KB smaller file. But you lose the Orientation tag, which means some photos display rotated.
    2. Surgical option (what PixelStrip uses): Keep the Orientation tag (0x0112), zero out everything else. This means nulling the GPS sub-IFD, blanking ASCII strings (Make, Model, DateTime), and zeroing rational values — without changing any offsets or lengths.

    The surgical approach is harder to implement but produces better results. Users don’t want their photos suddenly displaying sideways.
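
For reference, the nuclear option is only a few lines. A sketch, assuming app1Offset points at the FF E1 marker byte itself (note that findAPP1 above returns the offset past the marker):

function removeAPP1(buffer, app1Offset) {
  const view = new DataView(buffer);
  const segLen = view.getUint16(app1Offset + 2);     // length field (excludes the marker)
  const end = app1Offset + 2 + segLen;               // first byte after the segment
  const out = new Uint8Array(buffer.byteLength - (end - app1Offset));
  out.set(new Uint8Array(buffer, 0, app1Offset), 0); // bytes before FF E1
  out.set(new Uint8Array(buffer, end), app1Offset);  // bytes after the segment
  return out.buffer;
}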

    Performance: How Fast Is Pure JS Parsing?

    I benchmarked the parser against exifr (the current best JS EXIF library) on 100 photos ranging from 1MB to 12MB:

Metric             Custom Parser     exifr
Bundle size        2.8KB (minified)  44KB (minified, JPEG-only build)
Parse time (avg)   0.3ms             1.2ms
Memory allocation  ~4KB per parse    ~18KB per parse
GPS accuracy       6 decimal places  6 decimal places

    The custom parser is 4x faster because it skips tags we don’t need. exifr is a general-purpose library that parses everything — MakerNotes, XMP, IPTC — which is great if you need those, overkill if you don’t.

    The Gotchas I Hit (So You Don’t Have To)

    1. Samsung’s non-standard MakerNote offsets. Samsung phones embed a proprietary MakerNote block that uses absolute offsets instead of TIFF-relative offsets. If your IFD walker follows pointers naively, you’ll read garbage data. Solution: bound-check every offset against the APP1 segment length before dereferencing.

    2. Thumbnail images contain their own EXIF data. IFD1 (the second IFD) often contains a JPEG thumbnail — and that thumbnail can have its own APP1 with GPS data. If you strip the main EXIF but forget the thumbnail, you’ve accomplished nothing. Always scan the full APP1 for nested JPEG markers.

    3. Photos edited in Photoshop have XMP metadata too. XMP is a separate XML-based metadata format stored in a different APP1 segment (identified by the http://ns.adobe.com/xap/1.0/ prefix instead of Exif\0\0). A complete metadata stripper needs to handle both.

    Try It Yourself

    The complete parser is about 150 lines of JavaScript. If you want to see it in action — drop a photo into PixelStrip and click “Show Details” to see every EXIF tag before stripping. The EXIF data guide explains why this matters for privacy.

    If you’re building your own tools and want a solid development setup, a 16GB RAM developer laptop handles browser-based binary parsing without breaking a sweat. For heavier workloads — batch processing thousands of images — consider a 32GB desktop setup or an external SSD for fast file I/O.

    What I’d Do Differently

    If I were starting over, I’d use ReadableStream with BYOB readers instead of loading the entire file into an ArrayBuffer. For a 15MB photo, the current approach allocates 15MB of memory upfront. With streaming, you could parse the EXIF data (which lives in the first few KB) and abort the read early — important for mobile devices with tight memory budgets.

The JPEG format is more than three decades old and showing its age. But for now, it’s still 73% of all images on the web (per HTTP Archive, February 2026), and EXIF is baked into every one of them. Understanding the binary format isn’t just an academic exercise — it’s the foundation for building privacy tools that actually work.

  • I Benchmarked 5 Image Compressors With the Same 10 Photos

    I ran the same 10 images through five different online compressors and measured everything: output file size, visual quality loss, compression speed, and what happened to my data. Two of the five uploaded my photos to servers in jurisdictions I couldn’t identify. One silently downscaled my images. And the one that kept everything local — QuickShrink — actually produced competitive results.

    Here’s the full breakdown.

    The Test Setup

    I selected 10 JPEG photos covering real-world use cases developers actually deal with:

    • Product shots (3 images) — white background e-commerce photos, 3000×3000px, 4-6MB each
    • Screenshots (3 images) — IDE and terminal captures, 2560×1440px, 1-3MB each
    • Photography (2 images) — landscape shots from a Pixel 8, 4000×3000px, 5-8MB each
    • UI mockups (2 images) — Figma exports with gradients and text, 1920×1080px, 2-4MB each

    Total input: 10 files, 38.7MB combined. Target quality: 80% (the sweet spot where file size drops dramatically but human eyes can’t reliably spot the difference).

    The five compressors tested:

    1. TinyPNG — the default most developers reach for
    2. Squoosh — Google’s open-source option (squoosh.app)
    3. Compressor.io — popular alternative with multiple format support
    4. iLoveIMG — widely recommended in “best tools” roundups
    5. QuickShrink — our browser-only compressor at tools.orthogonal.info/quickshrink

    File Size Results: Who Actually Compresses Best?

    Here’s where it gets interesting. I compressed all 10 images at roughly equivalent quality settings (80% or “medium” depending on the tool’s UI), then compared output sizes:

    Average compression ratio (smaller = better):

    • TinyPNG: 72.4% reduction (38.7MB → 10.7MB)
    • Squoosh (MozJPEG): 74.1% reduction (38.7MB → 10.0MB)
    • Compressor.io: 68.9% reduction (38.7MB → 12.0MB)
    • iLoveIMG: 61.3% reduction (38.7MB → 15.0MB)*
    • QuickShrink: 70.2% reduction (38.7MB → 11.5MB)

    *iLoveIMG’s “medium” setting is more conservative than the others. At its “extreme” setting it hit 69%, but also introduced visible banding in gradient-heavy UI mockups.

Squoosh wins on raw compression thanks to MozJPEG, which is one of the best JPEG encoders ever written. But the margin over TinyPNG and QuickShrink is smaller than you’d expect — roughly four percentage points separate the top three.

    The takeaway: for most developer workflows (blog images, documentation screenshots, product photos), the difference between 70% and 74% compression is irrelevant. You’re saving maybe 200KB per image. What matters more is everything else.

    Speed: Canvas API vs Server-Side Processing

    This is where architectures diverge. TinyPNG, Compressor.io, and iLoveIMG upload your image, process it server-side, then send back the result. Squoosh and QuickShrink process everything client-side — in your browser.

    Average time per image (including upload/download where applicable):

    • TinyPNG: 3.2 seconds (upload 1.8s + processing 0.9s + download 0.5s)
    • Squoosh: 1.4 seconds (local WebAssembly processing)
    • Compressor.io: 4.1 seconds (slower uploads, larger queue)
    • iLoveIMG: 2.8 seconds (fast CDN)
    • QuickShrink: 0.8 seconds (Canvas API, no network)

    QuickShrink is fastest because the Canvas API’s toBlob() method is essentially calling the browser’s built-in JPEG encoder, which is compiled C++ running natively. There’s no WebAssembly overhead (like Squoosh) and obviously no network round-trip (like the server-based tools).

    Here’s what the core compression looks like under the hood:

    // The heart of browser-based compression
    const canvas = document.createElement('canvas');
    const ctx = canvas.getContext('2d');
    canvas.width = img.naturalWidth;
    canvas.height = img.naturalHeight;
    ctx.drawImage(img, 0, 0);
    
    // This single call does all the heavy lifting
    canvas.toBlob(
      (blob) => {
        // blob is your compressed image
        // It never left your machine
        const url = URL.createObjectURL(blob);
        downloadLink.href = url;
      },
      'image/jpeg',
      0.80 // quality: 0.0 to 1.0
    );

    That’s it. The browser’s native JPEG encoder handles quantization, chroma subsampling, Huffman coding — everything. No library, no dependency, no server. The Canvas API has been stable across all major browsers since 2015.

    The Privacy Test: Where Do Your Photos Go?

    This is the part that should bother you. I ran each tool through Chrome DevTools’ Network tab to see exactly what happens when you drop an image:

    • TinyPNG: Uploads to api.tinify.com (Netherlands). Image stored temporarily. Privacy policy says files are deleted after some hours. You’re trusting their word.
• Squoosh: 100% client-side. Zero network requests during compression. Service worker caches the app for offline use.
    • Compressor.io: Uploads to their servers. I watched a 6MB photo leave my browser. Their privacy page is one paragraph.
    • iLoveIMG: Uploads to api3.ilovepdf.com. Files “deleted after 2 hours.” Servers appear to be in Spain (EU GDPR applies, which is good).
• QuickShrink: 100% client-side. Zero network requests. Works fully offline once loaded. I tested by enabling airplane mode — still works.

    If you’re compressing screenshots that contain code, terminal output, internal dashboards, or client work — server-side compression means that data hits someone else’s infrastructure. For a personal photo, maybe you don’t care. For a screenshot of your production database? You should care a lot.

    The Hidden Gotcha: Silent Downscaling

    I noticed something odd with iLoveIMG. My 4000×3000px landscape photo came back at 2000×1500px. The file was smaller, sure — but not because of better compression. It was because they halved the dimensions without telling me.

    I double-checked: there was no “resize” option enabled. Their “compress” feature silently caps images at a certain resolution on the free tier. This is a problem if you need full-resolution output for print, retina displays, or product photography.

    None of the other four tools altered image dimensions.
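
If you want to check for this yourself, comparing dimensions before and after takes a few lines in the browser console (the blob variables are illustrative):

async function dims(blob) {
  const bmp = await createImageBitmap(blob);   // decodes just enough for dimensions
  const d = { width: bmp.width, height: bmp.height };
  bmp.close();                                 // free the decoded bitmap
  return d;
}

console.log(await dims(originalBlob));   // e.g. { width: 4000, height: 3000 }
console.log(await dims(compressedBlob)); // halved values reveal silent downscaling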

    When to Use What: My Honest Recommendation

    Use Squoosh when you need maximum compression and don’t mind a slightly more complex UI. The MozJPEG encoder is genuinely better than browser-native JPEG, and it supports WebP, AVIF, and other modern formats. It’s the technically superior tool.

    Use QuickShrink when you want the fastest possible workflow: drop image, download compressed version, done. No format decisions, no sliders, no settings panels. The Canvas API approach trades 3-4% compression efficiency for massive speed gains and zero complexity. I use it daily for blog images and documentation screenshots — exactly the use case where “good enough compression, instantly” beats “perfect compression, eventually.”

    Use TinyPNG when you’re batch-processing hundreds of images through their API and don’t have privacy constraints. Their WordPress plugin and CLI tools are well-maintained. At $0.009/image after the free 500, it’s cheap automation.

    Skip iLoveIMG unless you specifically need their PDF tools. The silent downscaling and middling compression don’t justify using a server-side tool when better client-side options exist.

    Skip Compressor.io — Squoosh does everything it does, client-side, with better compression.

    The Broader Point: Why Client-Side Tools Win

    The web platform in 2026 is absurdly capable. The Canvas API, WebAssembly, the File API, Service Workers — you can build tools that rival desktop apps without a single server-side dependency. And when your tool runs entirely in the user’s browser:

    • No hosting costs — static files on a CDN, done
    • No privacy liability — you never touch user data
    • No scaling problems — every user brings their own compute
    • Offline capable — works on planes, in cafes with bad wifi, wherever

    This is why I build browser-only tools. Not because client-side compression is always technically best — Squoosh’s MozJPEG proves server-grade encoders can run client-side too via WASM. But because the combination of speed, privacy, and simplicity makes it the right default for 90% of developer workflows.

    Try QuickShrink with your own images and see the numbers yourself. And if metadata privacy matters too, run those same photos through PixelStrip — it strips EXIF, GPS, and camera data the same way: entirely in your browser, with nothing uploaded anywhere. For managing code snippets without yet another Electron app, check out TypeFast.

    Tools for Your Developer Setup

    If you’re optimizing your development workflow, the right hardware makes a difference. A high-resolution monitor helps when comparing compression artifacts side-by-side (I use a 4K display and it’s the first upgrade I’d recommend). For photography workflows, a fast SD card reader eliminates the bottleneck of transferring images from camera to computer. And if you’re processing images in bulk for a client project, a portable SSD keeps your originals safe while you experiment with compression settings — never compress your only copy.

  • The Pomodoro Technique Actually Works — If Your Timer Has Streaks

    The Pomodoro Technique — work for 25 minutes, break for 5 — has been around since 1987. The science backs it up: time-boxing reduces procrastination and improves focus. But here’s the problem: most people try it for three days and quit. Not because the technique fails, but because a plain countdown timer gives you zero reason to come back tomorrow.

    Why Streaks Change Everything

    Duolingo built a $12 billion company on one psychological trick: the daily streak. Miss a day and your streak resets to zero. It sounds trivial. It works because loss aversion is 2x stronger than the desire for gain (Kahneman & Tversky, 1979). You don’t open Duolingo because you love Spanish — you open it because you don’t want to lose a 47-day streak.

    The same psychology applies to focus timers. A countdown from 25:00 gives you no stakes. A countdown that says “Day 23 of your focus streak” gives you skin in the game.

    How FocusForge Applies This

    FocusForge adds three layers to the basic Pomodoro timer:

    • XP — every completed session earns experience points (25 XP for a Quick session, 75 XP for a Marathon)
    • Levels — Rookie → Apprentice → Expert → Master → Legend → Immortal. Each level has its own badge.
    • Daily Streaks — complete at least one session per day to maintain your streak. Miss a day, restart from zero.

    The actual Pomodoro technique is unchanged. You still focus for 25 minutes (or 45 or 60). But now there’s a reason to do it consistently.

    👉 Try FocusForge on Google Play — free with optional $1.99 upgrade to remove ads.

    Related Reading

    Want to know more about FocusForge’s design and gamification mechanics? Read the full deep-dive: FocusForge: How Gamification Tricked Me Into Actually Using a Pomodoro Timer. FocusForge is part of our suite of 5 free browser tools that replace desktop apps — including NoiseLog, a sound meter app for documenting noise complaints.

  • What Is EXIF Data? (And Why You Should Remove It Before Sharing Photos)

    EXIF stands for Exchangeable Image File Format. It’s a standard that embeds technical metadata inside every JPEG and TIFF photo. When you share a photo, this invisible data goes with it — including your GPS location.

    What EXIF Data Contains

    EXIF was created in 1995 for digital cameras. The original intent was helpful: let photographers review their camera settings (aperture, shutter speed, ISO) after the fact. But smartphones added fields the standard’s creators never anticipated:

    • GPS coordinates — latitude, longitude, altitude
    • Phone model — exact make and model
    • Unique device ID — camera serial number that’s the same across all your photos
    • Date and time — when the photo was taken and last modified
    • Software — which app last edited the image
    • Orientation — how the phone was held

    Real Risks

    In 2012, antivirus pioneer John McAfee’s location in Guatemala was revealed through EXIF data in a photo posted by a journalist. In 2024, researchers found that 30% of photos on major online marketplaces still contained GPS coordinates, exposing sellers’ home addresses.

    If you sell items online, post on forums, or share photos via email — your location data is potentially visible to anyone who downloads the image.

    How to Check and Remove EXIF Data

    The fastest way: open PixelStrip, drop your photo, and click “Strip All Metadata.” It runs in your browser — no upload, no server, no account. You’ll see exactly what data was hiding in your photo before it’s removed.

    👉 Check your photos now

  • 5 Free Browser Tools That Replace Desktop Apps (No Install Needed)

    You don’t need to install an app for everything. These browser-based tools work instantly — no download, no account, no tracking. They run entirely on your device and work offline once loaded.

    1. Image Compression → QuickShrink

    Instead of installing Photoshop or GIMP just to resize an image, open QuickShrink. Drop an image, pick quality (80% is ideal), download. Compresses using the same Canvas API that powers web photo editors. Typical result: 4MB photo → 800KB with no visible difference.

    2. Photo Privacy → PixelStrip

    Before sharing photos on forums or marketplaces, strip the hidden metadata. PixelStrip shows you exactly what’s embedded (GPS, camera model, timestamps) and removes it all with one click. No upload to any server.

    3. Code Snippet Manager → TypeFast

    If you keep a file of frequently-used code blocks, email templates, or canned responses, TypeFast gives you a searchable list with one-click copy. Stores everything in your browser’s localStorage — no cloud sync needed.

    4. Focus Timer → FocusForge

    A Pomodoro timer that adds XP and streaks to make deep work addictive. Three modes: 25, 45, or 60 minutes. Level up from Rookie to Immortal. Available on Google Play for Android.

    5. Noise Meter → NoiseLog

    Turn your phone into a sound level meter that logs incidents and generates reports. Perfect for documenting noise complaints with timestamps and decibel readings. Available on Google Play.

    Why Browser-Based?

    • No install — works immediately in any browser
    • Private — data stays on your device
    • Fast — loads in milliseconds, not minutes
    • Cross-platform — works on Windows, Mac, Linux, iOS, Android
    • Offline — install as PWA for offline use

    Deep Dives

    Want the full story behind each tool? Read our detailed write-ups: QuickShrink: Why I Built a Browser-Based Image Compressor, PixelStrip: Your Photos Are Broadcasting Your Location, and TypeFast: The Snippet Manager for People Who Refuse to Install Another App.

    All tools are open source: github.com/dcluomax/app-factory

  • How to Remove GPS Location from Photos Before Sharing Online

    Every time you take a photo with your phone, the exact GPS coordinates are embedded in the image file. When you share that photo online — on forums, marketplaces, or messaging apps — anyone who downloads it can see exactly where you were standing. Here’s how to remove it in 3 seconds.

    Quick Fix: Strip All Metadata

    1. Open PixelStrip
    2. Drop your photo on the page
    3. See the metadata report (GPS coordinates highlighted in red)
    4. Click “Strip All Metadata & Download”

    The downloaded photo looks identical but contains zero hidden data. No GPS, no camera model, no timestamps.

    What Metadata Is Actually in Your Photos?

    EXIF metadata was designed for photographers to track camera settings. But smartphones added fields that reveal far more than you’d expect:

    • GPS Latitude/Longitude — accurate to within 3 meters on modern phones
    • Device Model — “iPhone 16 Pro” or “Samsung Galaxy S25” — narrows down who took the photo
    • Date & Time — the exact second the photo was captured
    • Camera Serial Number — a unique identifier that links photos from the same device
    • Thumbnail — a smaller version that may contain content you cropped out

    Which Platforms Strip Metadata Automatically?

    Do strip metadata: Facebook, Instagram, Twitter/X, WhatsApp (when sent as photo, not file)

    Don’t strip metadata: Email, Telegram (file mode), Discord, Forums, Craigslist, eBay, Dropbox, Google Drive shared links

    If you’re sharing via any channel that doesn’t strip metadata, do it yourself first.

    👉 Strip your photos with PixelStrip — no upload, no account, 100% in your browser.

  • How to Compress Images Without Losing Quality (Free, No Upload)

    You need to send a photo by email but it’s 8MB. You need to upload a product image but the CMS has a 2MB limit. You need to speed up your website but your hero image is 4MB. The solution is always the same: compress the image. But most tools upload your photo to a random server first.

    Here’s how to compress images without uploading them anywhere, using only your browser.

    The 30-Second Method

    1. Open QuickShrink
    2. Drag your image onto the page (or click to select)
    3. Adjust the quality slider — 80% gives great results for most photos
    4. Click Download

    That’s it. Your image never leaves your device. The compression happens entirely in your browser using the Canvas API — the same technology that powers web-based photo editors.

    What Quality Setting Should I Use?

    80% — Best for most photos. Reduces file size by 40-60% with no visible difference. This is what most professional websites use.

    60% — Good for thumbnails, social media, and email attachments. 70-80% smaller. You might notice slight softening if you zoom in, but it looks perfect at normal viewing size.

    40% — Maximum compression. 85-90% smaller. Visible artifacts on close inspection, but fine for previews and low-bandwidth situations.

100% — Maximum quality. Use this only when converting PNG to JPEG and you want to preserve as much detail as possible — though JPEG is lossy by design, so even 100% isn’t truly lossless.

    Why Not Use TinyPNG or Squoosh?

    TinyPNG uploads your image to their servers. For personal photos or client work, that’s a privacy concern. Also limited to 20 free compressions per month.

    Squoosh (by Google) is excellent and also runs client-side. But it’s heavier — loads a WASM codec and has a complex UI. If you just want to shrink a photo fast, it’s overkill.

    QuickShrink is a single HTML file, loads in under 200ms, works offline, and does one thing: make your image smaller. No account, no limits, no tracking.

    👉 Try QuickShrink — free, forever, private by design.
