Why I Stopped Uploading Files to Free Online Tools

Last month I ran Wireshark while using a popular free image compressor. I watched my 4MB photo hit their server, sit there for 11 seconds, then come back smaller. During those 11 seconds, that file — with my GPS coordinates baked into the EXIF data, my camera serial number, and a timestamp of exactly when I took it — lived on someone else’s infrastructure. I have no idea what happened to it after.

That was the last time I uploaded a file to a cloud-based “free” tool.

The Real Cost of “Free” File Processing

Most free online tools work the same way: you upload a file, their server processes it, you download the result. Simple. But here’s what’s actually happening under the hood.

Your file travels across the internet to their server. HTTPS protects it in transit, but the server has to decrypt it to process it, so the service ends up holding a plaintext copy. Their privacy policy — if they even have one — usually includes language like “we may retain uploaded files for up to 24 hours” or the more honest “we may use uploaded content to improve our services.”

I audited five popular free image compression tools last week. Three of them had privacy policies that explicitly allowed data retention. One had no privacy policy at all. The fifth deleted files “within one hour” — but there’s no way to verify that.

For a cat photo, who cares. For a client contract, a medical document, internal company screenshots, or photos with location metadata? That’s a different conversation.

Browser-Only Processing: How It Actually Works

The alternative is processing files entirely in the browser using JavaScript. No upload. No server. The file never leaves your machine.

Here’s a simplified version of how browser-based image compression works using the Canvas API:

function compressImage(file, quality = 0.7) {
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.onload = () => {
      // Draw the image onto an offscreen canvas at its original dimensions
      const canvas = document.createElement('canvas');
      canvas.width = img.width;
      canvas.height = img.height;
      const ctx = canvas.getContext('2d');
      ctx.drawImage(img, 0, 0);
      URL.revokeObjectURL(img.src); // free the temporary blob URL
      // Re-encode as JPEG; quality runs 0–1, lower means smaller files
      canvas.toBlob(resolve, 'image/jpeg', quality);
    };
    img.onerror = reject;
    img.src = URL.createObjectURL(file);
  });
}

That’s the core of it. The canvas.toBlob() call with a quality parameter between 0 and 1 handles the JPEG recompression. At quality 0.7, you typically get 60-75% file size reduction with minimal visible degradation. The entire operation happens in your browser’s memory. Open DevTools, check the Network tab — zero outbound requests.
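To see it in use, here is a minimal wiring sketch. The `#file-input` element, the output filename, and the `formatBytes` helper are illustrative assumptions, not part of any real page:

```javascript
// Sketch: hook compressImage (above) to a file input and offer the result
// as a download. '#file-input' and 'compressed.jpg' are assumptions here.
function wireCompressor() {
  const input = document.querySelector('#file-input'); // assumed element
  input.addEventListener('change', async () => {
    const file = input.files[0];
    if (!file) return;
    const blob = await compressImage(file, 0.7);
    console.log(`${formatBytes(file.size)} -> ${formatBytes(blob.size)}`);
    const a = document.createElement('a');
    a.href = URL.createObjectURL(blob); // still zero network requests
    a.download = 'compressed.jpg';
    a.click();
    URL.revokeObjectURL(a.href);
  });
}

// Small helper for the log line above: human-readable byte sizes
function formatBytes(n) {
  const units = ['B', 'KB', 'MB', 'GB'];
  let i = 0;
  while (n >= 1024 && i < units.length - 1) { n /= 1024; i++; }
  return n.toFixed(1) + units[i];
}
```

The whole round trip stays inside the page: the blob URL points at memory in your own browser, not a server.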

I built QuickShrink around this principle. It compresses images using the Canvas API with no server component at all. A 5MB JPEG typically compresses to 1.2MB in about 200ms on a modern laptop. Try doing that with a round-trip to a server.

EXIF Stripping: The Privacy Problem Most People Ignore

Every photo your phone takes embeds metadata: GPS coordinates, device model, lens info, timestamps, sometimes even your name if you’ve set it in your camera settings. I wrote about this in detail here, but the short version is: sharing a photo often means sharing your exact location.

Stripping EXIF data in the browser is straightforward. A JPEG file is a sequence of marker segments that begins right after the two-byte SOI header at offset 2, and EXIF lives in an APP1 segment near the front. You can parse the binary structure and rebuild the file without those segments:

function stripExif(arrayBuffer) {
  const view = new DataView(arrayBuffer);
  // JPEG starts with the SOI marker 0xFFD8
  if (view.getUint16(0) !== 0xFFD8) return arrayBuffer;

  let offset = 2;
  const pieces = [arrayBuffer.slice(0, 2)];

  while (offset + 4 <= view.byteLength) {
    const marker = view.getUint16(offset);
    if (marker === 0xFFDA) { // Start of scan - rest is image data
      pieces.push(arrayBuffer.slice(offset));
      break;
    }
    // Segment length includes its own two bytes, but not the marker
    const segLen = view.getUint16(offset + 2);
    // Skip APP1 (EXIF/XMP) and APP2 segments, keep everything else
    if (marker !== 0xFFE1 && marker !== 0xFFE2) {
      pieces.push(arrayBuffer.slice(offset, offset + 2 + segLen));
    }
    offset += 2 + segLen;
  }

  // Reassemble the surviving segments into one buffer
  const total = pieces.reduce((sum, p) => sum + p.byteLength, 0);
  const result = new Uint8Array(total);
  let pos = 0;
  for (const p of pieces) {
    result.set(new Uint8Array(p), pos);
    pos += p.byteLength;
  }
  return result.buffer;
}

That’s the approach PixelStrip uses. Drag a photo in, get a clean copy out. Your GPS data never touches a network cable.
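You can sanity-check the parser without a real photo. This self-contained sketch (it repeats stripExif, with an inline buffer concatenation, so it runs standalone) builds a toy JPEG containing a fake APP1 segment and confirms the segment is dropped:

```javascript
// Toy check: a minimal JPEG byte stream with a fake APP1 ("Exif") segment.
// stripExif is copied here so the sketch runs on its own.
function stripExif(arrayBuffer) {
  const view = new DataView(arrayBuffer);
  if (view.getUint16(0) !== 0xFFD8) return arrayBuffer; // not a JPEG
  let offset = 2;
  const pieces = [arrayBuffer.slice(0, 2)];
  while (offset + 4 <= view.byteLength) {
    const marker = view.getUint16(offset);
    if (marker === 0xFFDA) { // Start of scan - rest is image data
      pieces.push(arrayBuffer.slice(offset));
      break;
    }
    const segLen = view.getUint16(offset + 2); // includes its own 2 bytes
    if (marker !== 0xFFE1 && marker !== 0xFFE2) { // drop APP1/APP2
      pieces.push(arrayBuffer.slice(offset, offset + 2 + segLen));
    }
    offset += 2 + segLen;
  }
  const total = pieces.reduce((sum, p) => sum + p.byteLength, 0);
  const result = new Uint8Array(total);
  let pos = 0;
  for (const p of pieces) { result.set(new Uint8Array(p), pos); pos += p.byteLength; }
  return result.buffer;
}

// SOI, then APP1 (length 8 = 2 length bytes + "Exif\0\0"), then SOS + 2 data bytes
const toyJpeg = new Uint8Array([
  0xFF, 0xD8,
  0xFF, 0xE1, 0x00, 0x08, 0x45, 0x78, 0x69, 0x66, 0x00, 0x00,
  0xFF, 0xDA, 0x01, 0x02,
]);
const cleaned = new Uint8Array(stripExif(toyJpeg.buffer));
console.log(cleaned.length); // 6: SOI + SOS + data survive, APP1 is gone
```

Sixteen bytes in, six bytes out; the six that remain are exactly the SOI header and the scan data.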

How Browser-Only Tools Compare to Cloud Alternatives

I tested three approaches to image compression with the same 4.2MB test image (a DSLR photo, 4000×3000, JPEG):

Tool                           Output Size   Time   File Uploaded?
TinyPNG (cloud)                1.1MB         3.2s   Yes
Squoosh (browser+WASM)         0.9MB         1.8s   No
QuickShrink (browser Canvas)   1.2MB         0.3s   No

TinyPNG produces slightly smaller files because it runs proprietary, heavily tuned compression server-side. Google’s Squoosh is excellent — it compiles codecs to WebAssembly and runs them in-browser, giving the best compression ratios without any upload. QuickShrink trades some compression efficiency for speed by using the native Canvas API instead of WASM codecs.

Honest assessment: if you need maximum compression and don’t care about privacy, TinyPNG is solid. If you want the best of both worlds, Squoosh is hard to beat. QuickShrink’s advantage is speed and simplicity — it’s a single HTML file with zero dependencies, works offline, and processes images in under 300ms.

When Browser-Only Falls Short

I’m not going to pretend client-side processing is always better. It’s not.

PDF processing is still painful in the browser. Libraries like pdf.js can render PDFs, but heavy manipulation (merging, compressing, OCR) is slow and memory-hungry in JavaScript. For a 50-page PDF, a server with proper native libraries will finish in 2 seconds while your browser tab chews through it for 30.

Video transcoding is another weak spot. FFmpeg compiled to WASM exists (ffmpeg.wasm), but encoding a 1-minute 1080p video takes about 4x longer than native FFmpeg on the same hardware. For quick trims it’s fine. For batch processing, you’ll want a local install of FFmpeg.

My rule of thumb: if the file is under 20MB and the operation is image-related or text-based, browser processing wins. For anything heavier, I use local CLI tools — still no cloud upload, but with native performance.
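That rule of thumb is easy to encode. This toy dispatcher is just the heuristic above as code; the 20MB cutoff and the MIME-prefix check mirror my rule, nothing standard:

```javascript
// The article's rule of thumb as a toy dispatcher: small image/text work
// stays in the browser, everything else goes to local CLI tools.
function chooseProcessor(file) {
  const MAX_BROWSER_BYTES = 20 * 1024 * 1024; // the 20MB cutoff above
  const browserFriendly = /^(image|text)\//.test(file.type);
  return file.size <= MAX_BROWSER_BYTES && browserFriendly
    ? 'browser'
    : 'local-cli';
}

console.log(chooseProcessor({ size: 4200000, type: 'image/jpeg' }));  // 'browser'
console.log(chooseProcessor({ size: 80000000, type: 'video/mp4' }));  // 'local-cli'
```

Note that a PDF lands on the CLI side under this check, which matches the experience above: `application/pdf` is neither an image nor plain text as far as the heuristic is concerned.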

Running Your Own Tools Locally

If you’re the type who prefers CLI tools (I am, for batch work), here’s my local privacy-respecting toolkit:

  • Image compression: jpegoptim --strip-all -m75 *.jpg — strips all metadata and compresses to quality 75
  • EXIF removal: exiftool -all= photo.jpg — nuclear option, removes everything
  • PDF compression: gs -sDEVICE=pdfwrite -dPDFSETTINGS=/ebook -o out.pdf in.pdf
  • Bulk rename: rename 's/IMG_//' *.jpg — removes camera prefixes that leak device info

What I Actually Do Now

My workflow is simple: browser tools for one-off tasks, CLI for batch work, cloud for nothing.

When I need to quickly compress a screenshot before pasting it into a Slack message, I open QuickShrink and drag it in. When I’m about to share a photo publicly, I run it through PixelStrip to strip the GPS data. When I’m processing 200 photos from a trip, I use jpegoptim in a terminal.

None of these files ever touch a third-party server. That’s not paranoia — it’s just good practice. The same way you wouldn’t email a password in plaintext, you shouldn’t upload sensitive files to random websites just because they promise to delete them.
