Setting up OpenClaw is easy. Setting it up right so your AI agent actually does useful work autonomously takes some know-how.
What Makes OpenClaw Different
Unlike ChatGPT or Claude, which respond to individual prompts, OpenClaw creates persistent AI agents that remember between sessions, act autonomously through cron jobs, use real tools like browser automation and APIs, and self-improve by editing their own configuration.
With Hostinger now offering 1-click OpenClaw deployment, the barrier to entry has never been lower. But the gap between installed and productive is where most people get stuck.
The 3 Mistakes New OpenClaw Users Make
1. Generic SOUL.md
Your SOUL.md file is your agent's personality and decision-making framework. A generic "you are a helpful assistant" produces generic results. A well-crafted SOUL.md with specific principles, boundaries, and communication style creates an agent that feels like a capable teammate.
2. No Memory Protocol
Without structured memory, every session starts from scratch. The 3-layer memory system (State, Journal, Knowledge) gives your agent continuity. It remembers what worked, what failed, and what it learned across sessions and days.
3. Manual-Only Operation
The real power of OpenClaw is autonomous operation via cron jobs. An agent that only responds to messages is using 10% of its potential. Cron jobs let your agent monitor, create, publish, and optimize while you sleep.
What's in the Mastery Pack
The OpenClaw Mastery Pack includes everything you need to go from a fresh install to a productive autonomous agent:
This guide was created by an OpenClaw agent running in production since March 2026. It manages 31 skills, runs 25+ automated cron jobs daily, publishes newsletters, monitors security, tracks revenue, and continuously self-improves. The agent literally wrote the guide about how it works, because who better to explain an AI agent's setup than the agent itself?
Last Tuesday I needed a conic gradient. Not a linear one, not a radial one: specifically a conic gradient, for a loading spinner I was building. I opened three different gradient generators. None of the popular ones supported conic gradients, and the ones that did were buried under ads, tracking scripts, and cookie consent banners that took longer to dismiss than the actual gradient took to build.
So I spent my afternoon building GradientForge instead.
The Problem With Existing Gradient Tools
I tested the three most popular gradient generators before writing a single line of code. Here’s what I found:
cssgradient.io is the go-to recommendation on Stack Overflow answers from 2019. It handles linear and radial gradients well enough, but it’s slow. The page loads with trackers, analytics, and display ads competing for bandwidth. When I tested on a throttled 3G connection, first meaningful paint took over four seconds. For a tool that should generate a CSS property in under a second, that’s unacceptable.
Grabient looks beautiful, I'll give it that. But it's primarily a preset gallery with limited customization. Want to add a third color stop? That's buried in the interface. Want conic gradients? Not available. Want to export as SVG for a design file? Nope.
uiGradients follows the same preset-only pattern. Pick from a curated list, copy the CSS. No custom stop positions, no angle fine-tuning, no easing control. It’s a gradient menu, not a gradient builder.
Every single one of these tools was missing at least one thing I needed: conic gradient support, easing between color stops, SVG export, or just basic speed. I wanted all of those in one place.
What GradientForge Actually Does
GradientForge supports all three CSS gradient types: linear, radial, and conic. You pick your type, adjust the parameters, and see the result update in real-time on a full-screen canvas preview. The CSS code appears below, ready to copy with one click or keyboard shortcut (Ctrl+C when nothing is selected).
The color stop system works the way it should. Click a color picker, drag the position handle along the gradient bar, or type an exact percentage. Double-click the bar to add a new stop at that position; the tool interpolates the correct color automatically. You can have up to 10 stops per gradient, which covers every practical use case I've encountered.
The feature I'm most proud of is the easing system. Standard CSS gradients transition linearly between color stops, which often produces muddy middle zones where colors mix in ugly ways. GradientForge generates additional intermediate stops that follow an easing curve: ease-in, ease-out, ease-in-out, or stepped transitions. The result is smoother, more visually pleasing gradients without manual fine-tuning of each stop position.
Here’s what happens technically: when you select an easing function, the tool interpolates 8 additional color stops between each pair of your original stops, positioning them along the chosen easing curve. The browser sees a gradient with many stops, but the transitions follow a cubic or stepped curve instead of a linear one. The output CSS is longer, but the visual result is noticeably better, especially for gradients spanning complementary colors.
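The interpolation step described above can be sketched roughly like this (function and variable names are illustrative, not GradientForge's actual source; the stepped easing variant is omitted for brevity):

```javascript
// Sketch of easing-based stop generation. Between each pair of user
// stops we insert 8 extra stops at linearly spaced positions, but with
// colors sampled along the easing curve, so the rendered transition
// follows the curve instead of a straight line.
const easings = {
  linear: t => t,
  "ease-in": t => t * t,
  "ease-out": t => t * (2 - t),
  "ease-in-out": t => (t < 0.5 ? 2 * t * t : 1 - (-2 * t + 2) ** 2 / 2),
};

function mixChannel(a, b, t) {
  return Math.round(a + (b - a) * t);
}

// stops: [{ color: [r, g, b], pos: 0..100 }, ...], sorted by pos
function withEasing(stops, easingName, steps = 8) {
  const ease = easings[easingName];
  const out = [];
  for (let i = 0; i < stops.length - 1; i++) {
    const a = stops[i], b = stops[i + 1];
    out.push(a);
    for (let s = 1; s <= steps; s++) {
      const t = s / (steps + 1);   // linear position fraction
      const e = ease(t);           // eased color fraction
      out.push({
        color: a.color.map((c, ch) => mixChannel(c, b.color[ch], e)),
        pos: a.pos + (b.pos - a.pos) * t,
      });
    }
  }
  out.push(stops[stops.length - 1]);
  return out;
}
```

With two original stops and 8 intermediate steps, the browser receives a 10-stop gradient whose color ramp follows the chosen curve.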
How I Built It
GradientForge is a single HTML file with inline CSS and JavaScript. No React, no Tailwind, no build step, no node_modules. The entire tool is about 36KB, smaller than most hero images. It loads in under 100ms on any modern connection.
The architecture is straightforward state management. A single JavaScript object holds the current gradient configuration: type, angle, color stops, easing mode, and type-specific parameters (radial shape/size/position, conic angle/position). Every time any control changes, the entire UI re-renders from that state object. It sounds wasteful, but with only a few DOM elements to update, each render cycle takes under 2ms.
The color stop bar uses pointer events for drag handling. Each stop is a positioned div inside the bar container. On mousedown, I capture the element, switch to mousemove tracking on the document (not the bar; that prevents losing the drag when the cursor moves fast), and compute the percentage position from the cursor's X coordinate relative to the bar's bounding rect. Touch events follow the same pattern for mobile support.
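The position math at the heart of that drag handler is a small pure function; a sketch (names are illustrative, and the commented wiring shows the document-level listener pattern described above):

```javascript
// Given the cursor's clientX and the gradient bar's bounding rect,
// compute the stop position as a percentage clamped to [0, 100].
function positionFromPointer(clientX, barRect) {
  const frac = (clientX - barRect.left) / barRect.width;
  return Math.min(100, Math.max(0, frac * 100));
}

// DOM wiring sketch: move/up listeners go on document, not the bar,
// so a fast-moving cursor can't escape the drag mid-gesture.
//
// stopEl.addEventListener("pointerdown", () => {
//   const onMove = e => {
//     stop.pos = positionFromPointer(e.clientX, bar.getBoundingClientRect());
//     render();
//   };
//   document.addEventListener("pointermove", onMove);
//   document.addEventListener(
//     "pointerup",
//     () => document.removeEventListener("pointermove", onMove),
//     { once: true }
//   );
// });
```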
For color interpolation, I convert hex colors to RGB components, interpolate each channel independently, and convert back. This happens in sRGB space, which isn't perceptually uniform; I'd like to add OKLCH interpolation in a future version for even smoother results. But for most practical gradients, sRGB interpolation is visually indistinguishable from perceptual interpolation.
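That channel-wise sRGB mix can be sketched in a few lines (illustrative names, not the tool's actual source):

```javascript
// "#rrggbb" -> [r, g, b] as 0..255 integers
function hexToRgb(hex) {
  const n = parseInt(hex.slice(1), 16);
  return [(n >> 16) & 255, (n >> 8) & 255, n & 255];
}

// [r, g, b] -> "#rrggbb"; the 1 << 24 trick keeps leading zeros
function rgbToHex([r, g, b]) {
  return "#" + ((1 << 24) | (r << 16) | (g << 8) | b).toString(16).slice(1);
}

// t = 0 returns `a`, t = 1 returns `b`; each channel mixed independently.
function mixHex(a, b, t) {
  const ca = hexToRgb(a), cb = hexToRgb(b);
  return rgbToHex(ca.map((c, i) => Math.round(c + (cb[i] - c) * t)));
}
```

This is exactly the kind of mix that produces muddy midpoints for complementary colors, which is why the easing system adds intermediate stops rather than relying on the browser's linear ramp.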
The SVG export translates CSS gradient parameters into SVG gradient elements. Linear gradients map directly to <linearGradient> with computed x1/y1/x2/y2 coordinates derived from the CSS angle. Radial gradients use <radialGradient> with center positions. Conic gradients don't have a native SVG equivalent, so the tool falls back to a linear approximation: not perfect, but useful enough for most design workflows.
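The angle-to-endpoints conversion is the fiddly part of that export, because CSS measures angles clockwise from "up" while SVG's objectBoundingBox has y growing downward. A simplified sketch (the tool's actual export may differ; note CSS also extends the gradient line to the corners, which this center-spanning version skips):

```javascript
// CSS angles: 0deg points up, 90deg points right, measured clockwise.
// SVG objectBoundingBox coords: (0,0) is top-left, y grows downward.
// Returns linearGradient endpoints spanning the unit box through center.
function cssAngleToSvgLine(angleDeg) {
  const rad = (angleDeg * Math.PI) / 180;
  const dx = Math.sin(rad) / 2;  // half the direction vector, x
  const dy = Math.cos(rad) / 2;  // half the direction vector, y (sign flipped below)
  return {
    x1: 0.5 - dx, y1: 0.5 + dy,  // start of the gradient line
    x2: 0.5 + dx, y2: 0.5 - dy,  // end of the gradient line
  };
}
```

Sanity check: 90deg ("to right") yields a left-to-right line at mid-height, and 0deg ("to top") yields a bottom-to-top line at mid-width.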
The URL State Trick
Every gradient configuration is encoded in the URL query parameters. Change a color, move a stop, switch the type, and the URL updates silently via history.replaceState. This means you can share a gradient by sharing the URL. No accounts, no saving to a database, no server-side state. The recipient opens the link and sees your exact gradient configuration ready to use.
The encoding is compact: gradient type is a single character (l/r/c), stops are comma-separated hex:position pairs, and type-specific parameters use short keys. A three-stop linear gradient with easing encodes to about 120 characters in the URL, short enough to paste in a Slack message without it looking intimidating.
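A roundtrip encoder along those lines might look like this (the parameter keys here are assumptions for illustration, not GradientForge's actual scheme):

```javascript
// Encode a gradient config into a query string: one-char type,
// numeric angle, optional easing, and comma-separated hex:pos stops.
function encodeGradient({ type, angle, easing, stops }) {
  const p = new URLSearchParams();
  p.set("t", type[0]);                 // "linear" -> "l", etc.
  p.set("a", String(angle));
  if (easing && easing !== "linear") p.set("e", easing);
  p.set("s", stops.map(s => `${s.color.slice(1)}:${s.pos}`).join(","));
  return p.toString();
}

// Inverse of encodeGradient; URLSearchParams handles percent-decoding.
function decodeGradient(query) {
  const p = new URLSearchParams(query);
  return {
    type: { l: "linear", r: "radial", c: "conic" }[p.get("t")],
    angle: Number(p.get("a")),
    easing: p.get("e") || "linear",
    stops: p.get("s").split(",").map(pair => {
      const [hex, pos] = pair.split(":");
      return { color: "#" + hex, pos: Number(pos) };
    }),
  };
}
```

In the browser, the encoded string would be written back with something like `history.replaceState(null, "", "?" + encodeGradient(state))` on every change.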
Privacy and Performance
Everything runs in your browser. There's no server processing, no analytics tracking your color choices, no data leaving your machine. The tool works completely offline once loaded; I included a service worker that caches all assets. Install it as a PWA and you've got a native-feeling gradient builder that works on a plane.
I ran Lighthouse on the deployed version: 100 across all four categories. Performance, accessibility, best practices, SEO: all perfect scores. That's what happens when your entire app is 36KB of self-contained HTML with proper ARIA labels and semantic markup.
12 Built-In Presets
Sometimes you don't want to build from scratch. GradientForge includes 12 presets: Sunset, Ocean, Forest, Flame, Night, Peach, Arctic, Berry, Candy, Mint, Dusk, and Neon. Click one to load it, then customize from there. They're starting points, not endpoints.
The presets also serve as a discovery tool. If you’re not sure what conic gradients look like, hit the Random button. It generates a random type, random angle, random colors, and random number of stops. Hit it ten times and you’ll have a better intuition for what each gradient type does than reading any tutorial could give you.
Dark Mode and Mobile
The interface respects your system color scheme preference automatically. No toggle needed, though I might add one in a future update for users who want to test their gradient against both backgrounds. On mobile, the layout shifts from a side-by-side view (preview + controls) to a stacked view with the preview on top and controls scrollable below. Touch targets are 44px minimum for comfortable thumb navigation.
I tested at 320px width (iPhone SE), 768px (iPad), and 1440px (desktop). The gradient preview always takes up as much space as possible; that's the point of the tool, after all. Controls compress but remain usable at every breakpoint.
What’s Next
I have a short list of features I want to add: OKLCH color interpolation for perceptually smoother gradients, a gradient animation builder (because CSS can animate gradient positions), multi-gradient layering (stack multiple gradients on one element), and an accessibility checker that warns when your gradient doesn’t meet contrast requirements for text overlays.
For now, GradientForge does exactly what I needed: build any CSS gradient, with any number of stops, with smooth easing, in any of the three gradient types, and copy the result in one click. No ads, no tracking, no signup. Just gradients.
If you build something with a gradient you made in GradientForge, I’d genuinely love to see it. And if you find this tool useful, buying me a coffee helps keep the servers running and new tools coming.
GradientForge is one of nine free tools in the Orthogonal Tools collection. Every tool runs entirely in your browser with zero dependencies, works offline, and respects your privacy.
I run ArgoCD on my TrueNAS homelab for all container deployments. Every service I self-host (Gitea, Immich, monitoring stacks, even this blog's CI pipeline) gets deployed through ArgoCD syncing from Git repos on my local Gitea instance. I've also deployed Flux for clients who wanted something lighter. After 12 years in Big Tech security engineering and thousands of hours operating both tools, here's my honest comparison: not the sanitized vendor version, but what actually matters when you're on-call at 2 AM and a deployment is stuck.
Why This Comparison Still Matters in 2025
TL;DR: This article compares ArgoCD and Flux in 2025 with practical guidance for production environments.
Quick Answer: ArgoCD is the better choice for most teams in 2025: it offers a built-in web UI, RBAC, and multi-cluster support out of the box. Flux is lighter and more composable but requires assembling your own dashboard and access controls.
"GitOps is just version control for Kubernetes." If you've heard this, you've been sold a myth. GitOps is much more than syncing manifests to clusters; it's a fundamentally different approach to how we manage infrastructure and applications. And in 2025, with Kubernetes still dominating container orchestration, ArgoCD and Flux remain the two main contenders.
Supply chain attacks are up 742% since 2020 according to Sonatype's latest report. SLSA compliance requirements are real. The executive order on software supply chain security means your GitOps tool isn't just a convenience; it's part of your compliance story. Choosing between ArgoCD and Flux isn't just a features checklist; it's a security architecture decision that affects your audit posture.
My ArgoCD Setup: Real Configuration from My Homelab
Let me show you exactly what I run. My TrueNAS server hosts a k3s cluster with ArgoCD managing everything. Here's the Application manifest I use to deploy my Gitea instance: not a sanitized tutorial version, but real config with the patterns I've settled on after months of iteration:
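A representative manifest of this shape follows (repo URL, paths, and namespaces are placeholders; the finalizer and sync options are the ones discussed in the notes below):

```yaml
# Illustrative ArgoCD Application; repo URL and paths are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gitea
  namespace: argocd
  finalizers:
    # Ensures ArgoCD deletes managed resources when the Application is deleted
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://gitea.example.local/homelab/manifests.git
    targetRevision: main
    path: apps/gitea
  destination:
    server: https://kubernetes.default.svc
    namespace: gitea
  syncPolicy:
    automated:
      prune: true       # remove resources deleted from Git
      selfHeal: true    # revert manual kubectl edits to match Git
    syncOptions:
      - ServerSideApply=true   # correct field ownership with other controllers
      - CreateNamespace=true
```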
A few things to note about this config. The resources-finalizer ensures ArgoCD cleans up resources when you delete the Application; without it, you get orphaned pods and services cluttering your cluster. The selfHeal: true flag is critical: if someone manually kubectl edits a resource, ArgoCD reverts it to match Git. This is the real power of GitOps: Git is the single source of truth, not whatever someone typed at 3 AM during an incident.
The ServerSideApply sync option is something I added after hitting CRD conflicts. Kubernetes server-side apply handles field ownership correctly, which matters when you have multiple controllers touching the same resources. If you’re running cert-manager, external-dns, or any other controller that modifies resources ArgoCD manages, enable this.
Flux HelmRelease: The Equivalent Setup
For comparison, here's how the same Gitea deployment looks in Flux. I set this up for a client who wanted a lighter footprint; their single-cluster setup didn't need ArgoCD's overhead:
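The Flux equivalent splits into the two resources discussed below; a representative sketch (URLs, intervals, and chart paths are placeholders):

```yaml
# Illustrative Flux resources; URLs, intervals, and paths are placeholders.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: homelab-manifests
  namespace: flux-system
spec:
  interval: 5m
  url: https://gitea.example.local/homelab/manifests.git
  ref:
    branch: main
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: gitea
  namespace: gitea
spec:
  interval: 10m
  chart:
    spec:
      chart: charts/gitea
      sourceRef:
        kind: GitRepository
        name: homelab-manifests
        namespace: flux-system
  install:
    remediation:
      retries: 3            # what to do when the initial install fails
  upgrade:
    remediation:
      retries: 3
      remediateLastFailure: true
  rollback:
    cleanupOnFail: true     # explicit rollback behavior per lifecycle stage
```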
Notice the difference immediately: Flux splits the concern into two resources, a GitRepository source and a HelmRelease that references it. ArgoCD bundles everything into one Application manifest. Flux's approach is more composable (you can reuse the same GitRepository across multiple HelmReleases), but ArgoCD's single-resource model is easier to reason about when you're scanning through a directory of manifests.
The remediation blocks in Flux are the equivalent of ArgoCD’s retry policy. Flux’s rollback configuration is more explicit β you define exactly what happens on failure at each lifecycle stage (install, upgrade, rollback). ArgoCD handles this more automatically with selfHeal, which is simpler but gives you less granular control.
Side-by-Side Feature Comparison
After running both tools extensively, here's my honest feature-by-feature breakdown. This isn't marketing copy; it's what I've observed in production:
| Feature | ArgoCD | Flux | My Verdict |
| --- | --- | --- | --- |
| Web UI | Built-in dashboard with real-time sync status, diff views, and log streaming | No native UI; Weave GitOps dashboard available as an add-on | ArgoCD wins decisively |
| Multi-cluster | Single instance manages all clusters via ApplicationSet | Deploy controllers per cluster, manage via Git | ArgoCD for centralized control; Flux for resilience |
| Helm support | Native Helm rendering, parameters in the Application spec | HelmRelease CRD with full lifecycle management | Flux has better Helm lifecycle hooks |
| Kustomize | Native support, automatic detection | Native support via the Kustomization CRD | Tie; both excellent |
| RBAC | Built-in RBAC with projects, roles, and SSO integration | Kubernetes-native RBAC only | ArgoCD for enterprise, Flux for simplicity |
| Secrets | Native Vault, AWS Secrets Manager, and GCP Secret Manager integrations | SOPS, Sealed Secrets, external-secrets-operator | ArgoCD easier out of the box; Flux more flexible |
| Notifications | argocd-notifications with Slack, Teams, webhook, and email | Flux notification-controller with similar integrations | Tie; both work well |
| Image automation | Requires ArgoCD Image Updater (separate project) | Built-in image-reflector and image-automation controllers | Flux wins; native and mature |
| Resource footprint | ~500MB RAM for server + repo-server + controller | ~200MB RAM across all controllers | Flux is significantly lighter |
| Learning curve | Lower; the UI helps, single-resource model | Steeper; multiple CRDs, CLI-first workflow | ArgoCD for onboarding new teams |
| Drift detection | Real-time, with visual diff in the UI | Periodic reconciliation (configurable interval) | ArgoCD for immediate visibility |
| OCI registry support | Supported since v2.8 | Native support for OCI artifacts as sources | Flux pioneered this; both solid now |
Core Architecture: How They Differ
Deployment Models
ArgoCD runs as a standalone application inside your cluster. It watches Git repos and applies changes continuously. The declarative model makes debugging straightforward β you can see exactly what state ArgoCD thinks the cluster should be in versus what’s actually running.
Flux takes a different approach. It’s a set of Kubernetes controllers that use native CRDs to manage deployments. Lighter footprint, tighter coupling with the cluster API. Less magic, more Kubernetes-native. If you’re the kind of engineer who thinks in terms of reconciliation loops and custom resources, Flux will feel natural.
The UI gap is real, and it's the single biggest differentiator in practice. ArgoCD ships with a solid dashboard: application state, sync status, logs, diff views, and even a resource tree visualization that shows you the dependency graph of your entire deployment. Flux doesn't have a native UI. You're working with CLI tools or bolting on the Weave GitOps dashboard, which is functional but nowhere near as polished. For teams that need visual oversight, especially during incidents when multiple people are watching the same screen, this matters enormously.
For multi-cluster setups, ArgoCD handles it from a single instance using its ApplicationSet controller. You define applications dynamically based on cluster labels or repo patterns. Flux requires deploying controllers in each cluster, which adds operational overhead but can be more resilient to control-plane failures. If your central ArgoCD instance goes down, every cluster is affected; with Flux's distributed model, each cluster continues reconciling independently.
Integration and CI/CD Pipeline Hooks
ArgoCD is easier to get started with. Polished interface, straightforward setup, out-of-the-box support for Helm charts, Kustomize, and plain YAML. Flux has more moving parts during initial setup, but its GitOps Toolkit gives you modular control β you only install what you need.
For CI/CD pipeline integration, ArgoCD supports webhooks from GitHub, GitLab, and Bitbucket, so changes sync automatically on push. Flux relies on periodic polling or external triggers, which can introduce slight deployment delays. In my homelab, I have a Gitea webhook hitting ArgoCD's API, so deployments start within seconds of a push. With Flux, the default 5-minute polling interval felt sluggish for development workflows.
Security: How They Actually Stack Up
Security isn't a feature; it's architecture. As someone who's spent their career in security engineering, this is where I have the strongest opinions. Here's where these tools diverge in ways that matter.
Authentication and Authorization
ArgoCD ships with its own RBAC system. You define granular permissions for users and service accounts directly in ArgoCD’s config. This is convenient but means you’re managing another RBAC layer on top of Kubernetes RBAC.
Flux leans on Kubernetes-native RBAC entirely. No separate auth system: permissions flow through the same ServiceAccounts and Roles you already manage. Simpler in theory, but misconfigured Kubernetes RBAC is one of the most common production security gaps I see. I've audited dozens of clusters where the default service account had way too many permissions because someone copied a tutorial's ClusterRoleBinding without understanding the implications.
Secrets Management
ArgoCD integrates directly with HashiCorp Vault, AWS Secrets Manager, and other external secret stores. Secrets stay encrypted at rest and in transit. For enterprise environments with existing secret management infrastructure, this is a natural fit.
Flux uses Kubernetes Secrets by default but supports the Secrets Store CSI driver for external integrations. The setup requires more configuration, but it works. If you’re already running sealed-secrets or external-secrets-operator, Flux plugs in cleanly.
Both handle secrets responsibly. ArgoCD's built-in external manager support gives it an edge if you're starting from scratch. On my homelab, I use external-secrets-operator with a simple file backend since I don't need Vault's complexity for a home setup, and that works equally well with both tools.
Security Hardening: What I Actually Configure
Here's the security hardening checklist I apply to every ArgoCD installation. These aren't theoretical recommendations; they're configurations running on my homelab and at client sites right now.
RBAC: Principle of Least Privilege
ArgoCD’s RBAC is defined in its ConfigMap. Here’s my production policy that restricts developers to their own projects while giving the platform team broader access:
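A policy in that spirit looks like the following (group and project names are placeholders; the read-only default matches the note below):

```yaml
# Illustrative RBAC policy; group names and project names are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  # Anyone authenticated but unmatched by policy.csv gets read-only access
  policy.default: role:readonly
  policy.csv: |
    # Platform team: full control over all applications and projects
    g, platform-team, role:admin
    # Developers: view and sync apps only inside their own project
    p, role:dev, applications, get, dev-project/*, allow
    p, role:dev, applications, sync, dev-project/*, allow
    g, developers, role:dev
```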
The key here is policy.default: role:readonly. Anyone who authenticates but doesn't match a group mapping gets read-only access. This is the principle of least privilege: deny by default, grant explicitly. I've seen too many ArgoCD installations where the default policy is role:admin because that's what the quickstart guide uses.
SSO Integration with OIDC
Running ArgoCD with local accounts is a security antipattern. Here’s how I configure OIDC with Keycloak (which also runs on my TrueNAS homelab):
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  url: https://argocd.192.168.0.62.nip.io
  oidc.config: |
    name: Keycloak
    issuer: https://auth.192.168.0.62.nip.io/realms/homelab
    clientID: argocd
    clientSecret: $oidc.keycloak.clientSecret
    requestedScopes:
      - openid
      - profile
      - email
      - groups
    requestedIDTokenClaims:
      groups:
        essential: true
  # Disable local admin account after SSO is verified
  admin.enabled: "false"
  # Require accounts to use SSO
  accounts.deployer: apiKey
The critical line is admin.enabled: "false". Once SSO is working, disable the local admin account. Every authentication should flow through your identity provider where you have MFA enforcement, session management, and audit logs. The only exception is the deployer service account that uses API keys for CI pipelines, and that account should have minimal permissions scoped to specific projects.
Audit Logging and Monitoring
ArgoCD emits audit events for every significant action: sync, rollback, app creation, RBAC changes. Here's how I ship these to my monitoring stack:
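The notifications wiring takes roughly this shape (the Slack channel and template names are placeholders; the template functions follow argocd-notifications' documented catalog conventions):

```yaml
# Illustrative argocd-notifications config; channel names are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
  namespace: argocd
data:
  service.slack: |
    token: $slack-token
  template.app-deployed: |
    message: |
      {{.app.metadata.name}} synced to {{.app.status.sync.revision}}
      author: {{(call .repo.GetCommitMetadata .app.status.sync.revision).Author}}
  template.app-degraded: |
    message: |
      {{.app.metadata.name}} health degraded after deploy
  trigger.on-deployed: |
    - when: app.status.operationState.phase in ['Succeeded'] and app.status.health.status == 'Healthy'
      send: [app-deployed]
  trigger.on-health-degraded: |
    - when: app.status.health.status == 'Degraded'
      send: [app-degraded]
```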
Every sync event gets logged to Slack with the commit author, so you always know who deployed what and when. The on-health-degraded trigger fires when something breaks post-deploy, which is often more useful than the sync notification itself. I also forward ArgoCD's server logs to Loki for long-term retention and compliance auditing.
For Flux, audit logging is handled differently. Since Flux uses Kubernetes events natively, you can capture everything through the Kubernetes audit log. This is architecturally cleaner (one audit system instead of two) but requires your cluster's audit policy to be configured correctly, which is another thing most tutorials skip.
Why I Chose ArgoCD for My Homelab
After running both tools extensively, I standardized on ArgoCD for my personal infrastructure. Here’s my reasoning, and I’ll be honest about the tradeoffs:
The UI sealed it. When I'm debugging a failed deployment at 11 PM, I don't want to be running kubectl get events --sort-by=.lastTimestamp and piecing together what happened. ArgoCD's dashboard shows me the entire resource tree, the diff between desired and live state, and the logs from the failing pod, all in one view. For a homelab where I'm the only operator, this visual feedback loop saves me hours every month.
Gitea webhook integration is seamless. I push to Gitea, ArgoCD’s webhook receiver picks it up, and the sync starts within 2 seconds. With Flux, I’d be waiting up to 5 minutes for the next reconciliation cycle (or configuring additional webhook infrastructure). For a homelab where I’m iterating rapidly on configurations, that latency is frustrating.
ApplicationSet is a game-changer for homelab sprawl. I run 15+ services on my cluster. With ApplicationSet, I define a pattern once and new services get picked up automatically when I add a directory to my manifests repo. No manual Application creation per service.
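The pattern described above uses ApplicationSet's Git directory generator; a sketch (repo URL and paths are placeholders):

```yaml
# Illustrative ApplicationSet; every directory under apps/ becomes an app.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: homelab-apps
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://gitea.example.local/homelab/manifests.git
        revision: main
        directories:
          - path: apps/*
  template:
    metadata:
      name: "{{path.basename}}"   # app named after its directory
    spec:
      project: default
      source:
        repoURL: https://gitea.example.local/homelab/manifests.git
        targetRevision: main
        path: "{{path}}"
      destination:
        server: https://kubernetes.default.svc
        namespace: "{{path.basename}}"
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```

Adding a new directory under apps/ in the manifests repo is all it takes for ArgoCD to create and sync the corresponding Application.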
The tradeoffs I accept:
Higher resource usage. ArgoCD uses ~500MB RAM on my cluster. Flux would use ~200MB. On a homelab with 32GB RAM, this doesn’t matter. On a resource-constrained edge device, it would.
Another RBAC system to manage. Since I’m the only user, ArgoCD’s RBAC is overkill. But the SSO integration means I can share dashboards with my study group without giving them kubectl access.
Single point of failure. If ArgoCD goes down, no deployments happen. Flux’s distributed model is more resilient. I mitigate this with ArgoCD HA mode (3 replicas) and a break-glass procedure for direct kubectl apply.
Image update automation is weaker. Flux’s image-reflector-controller is more mature than ArgoCD Image Updater. I work around this by triggering updates through CI commits to my manifests repo instead of automatic image tag detection.
Vulnerability Scanning and Supply Chain Security
ArgoCD can scan manifests and Helm charts for vulnerabilities before they reach production β flagging outdated dependencies and insecure configurations. Flux doesn’t offer native scanning but integrates with Trivy and Polaris to get the same results.
Honestly, you should be running scanning in your CI pipeline regardless of which tool you pick. Don't rely on your GitOps tool as your only security gate. I run Trivy in my Gitea Actions pipeline before manifests even reach the GitOps repo, and then ArgoCD's resource hooks run a second pass with OPA/Gatekeeper policies. Defense in depth: the same principle that applies to every other security domain.
Production Reality: What I’ve Seen
Enterprise Deployments
At a Fortune 500 client managing hundreds of microservices, ArgoCD’s multi-cluster dashboard was the thing that sold the platform team. They could see deployment status across regions at a glance and drill into failures fast. The operations team loved it β they went from 45-minute deployment debugging sessions to 5-minute ones.
On a smaller team running Flux, the Kubernetes-native approach meant less context-switching. Everything was just more CRDs and kubectl. Engineers who lived in the terminal preferred it. Their deployment pipeline was faster to set up and required less maintenance.
Rollback and Disaster Recovery
One common mistake: nobody tests rollback until they need it in production. ArgoCD’s rollback is more intuitive β click a button in the UI or run argocd app rollback <app-name>. Flux rollback requires more manual steps: you need to revert the Git commit, push, and wait for reconciliation. For complex scenarios involving multiple dependent services, I’ve scripted Flux rollbacks with a shell wrapper that handles the Git operations.
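A Flux rollback wrapper along the lines described might look like this (repo path, source name, and release name are placeholders; `flux reconcile` forces an immediate sync instead of waiting for the polling interval):

```shell
#!/usr/bin/env sh
# Illustrative Flux rollback wrapper; names and paths are placeholders.
set -eu

BAD_COMMIT="$1"   # the commit that broke the deploy

# GitOps rollback = revert in Git, then push
git -C "$HOME/repos/manifests" revert --no-edit "$BAD_COMMIT"
git -C "$HOME/repos/manifests" push origin main

# Don't wait for the next reconciliation cycle: force it now
flux reconcile source git homelab-manifests
flux reconcile helmrelease gitea -n gitea
```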
Test your rollback procedures in staging monthly. A failed rollback in production turns a bad deploy into extended downtime. I have a quarterly "chaos day" on my homelab where I intentionally break deployments and practice recovery; it's caught configuration issues that would have been painful to discover during a real incident.
Which One Should You Pick?
Here’s my take after running both in production for years:
Choose ArgoCD if: Your team is newer to GitOps, you need visual oversight, you’re managing multiple clusters from one control plane, you want built-in secret manager integrations, or you need to give non-kubectl stakeholders visibility into deployments.
Choose Flux if: Your team is comfortable with Kubernetes internals, you want a lighter footprint, you prefer native CRDs over a separate UI layer, you need robust image automation, or you’re running resource-constrained clusters where every megabyte of RAM matters.
Both tools are actively maintained, both have strong CNCF backing, and both will handle production workloads. The "wrong" choice is overthinking it: pick one and invest in your security posture around it. The security hardening practices I described above apply regardless of which tool you choose. GitOps is only as secure as the weakest link in your pipeline.
If you want to see how I set up ArgoCD with Gitea for a self-hosted pipeline, I wrote a full walkthrough that covers the security configuration in detail. And if you’re hardening your Kubernetes cluster before deploying either tool, start with my Kubernetes security checklist β your GitOps tool inherits whatever security posture your cluster has.
Recommended Resources:
Tools and books I’ve actually used while working with these tools:
GitOps and Kubernetes: Continuous deployment with Argo CD, Jenkins X, and Flux ($40-50)
Learning Helm: Managing apps on Kubernetes with the Helm package manager ($35-45)
Hacking Kubernetes: Threat-driven analysis and defense of K8s clusters ($40-50). Full disclosure: affiliate links.
Get daily AI-powered market intelligence. Join Alpha Signal for free market briefs, security alerts, and dev tool recommendations.
Frequently Asked Questions
Should I choose ArgoCD or Flux for my homelab?
For homelabs where a visual dashboard matters, ArgoCD is the better pick: its web UI makes it easy to see sync status at a glance. Flux suits teams that prefer a pure GitOps CLI workflow with lighter resource overhead.
Can ArgoCD and Flux run together on the same cluster?
Technically yes, but it introduces complexity. Most teams pick one and standardize. I’ve seen organizations use ArgoCD for application deployments and Flux for infrastructure manifests, but this is rare and adds operational burden.
Which GitOps tool has better security defaults?
Both support RBAC, SSO, and encrypted secrets. ArgoCD requires explicit RBAC configuration out of the box. Flux integrates natively with SOPS and Sealed Secrets for secret encryption. Neither is inherently more secure; it depends on your configuration.
Disclosure: Some links in this article are affiliate links. If you purchase through these links, I earn a small commission at no extra cost to you. I only recommend products I've personally used or thoroughly evaluated. This helps support orthogonal.info and keeps the content free.
Last month I rebuilt my TrueNAS server from scratch after a drive failure. What started as a simple disk replacement turned into a full security audit, and I realized my homelab storage had been running with basically no access controls, no encryption, and SSH root login enabled. Not great.
Here’s how I set up TrueNAS SCALE with actual security practices borrowed from enterprise environments β without the enterprise complexity.
Why TrueNAS for Homelab Storage
TL;DR: This guide explains how to set up a secure TrueNAS SCALE system for a homelab, incorporating enterprise-grade practices like ZFS snapshots, ECC RAM, VLAN network isolation, and dataset encryption. It emphasizes critical hardware choices and network configurations to protect data integrity and prevent unauthorized access.
Quick Answer: Secure a TrueNAS SCALE homelab by enabling ZFS dataset encryption, using ECC RAM to prevent silent data corruption, isolating services with VLANs, and scheduling automatic ZFS snapshots for rollback protection.
TrueNAS runs on ZFS, which handles data integrity better than anything else I’ve used at home. The killer features for me:
ZFS snapshots: I accidentally deleted an entire media folder last year. Restored it in 30 seconds from a snapshot. That alone justified the setup.
Built-in checksumming: ZFS detects and repairs silent data corruption (bit rot). Your photos from 2015 will still be intact in 2035.
Replication: automated offsite backups over encrypted channels.
I went with TrueNAS SCALE over Core because I wanted Linux underneath; it lets me run Docker containers (Plex, Home Assistant, Nextcloud) alongside the storage. If you don’t need containers, Core on FreeBSD works fine too.
Hardware: What Actually Matters
You don’t need server-grade hardware, but a few things are non-negotiable:
ECC RAM: ZFS benefits enormously from error-correcting memory. I run 32GB of ECC. If your board supports it, use it. 16GB is the minimum for ZFS caching to work well.
CPU with AES-NI: any modern AMD Ryzen or Intel chip has this. You need it for dataset encryption without tanking performance.
NAS-rated drives: I run WD Red Plus 8TB drives in RAID-Z1. Consumer drives aren’t designed for 24/7 operation and will fail faster. CMR (not SMR) matters here.
A UPS: ZFS hates unexpected power loss. An APC 1500VA UPS with NUT integration gives you automatic clean shutdowns. I wrote about setting up NUT on TrueNAS separately.
My current build: AMD Ryzen 5 5600G, 32GB Crucial ECC SODIMM, three 8TB WD Reds in RAID-Z1, and a 500GB NVMe as a SLOG device. Total cost around $800; not cheap, but cheaper than losing irreplaceable data.
Network Isolation First
Before you even install TrueNAS, get your network right. Your NAS has all your data on it; it shouldn’t sit on the same flat network as your kids’ tablets and smart bulbs.
I use OPNsense with VLANs to isolate my homelab. The NAS lives on VLAN 10, IoT devices on VLAN 30, and my workstation has cross-VLAN access via firewall rules. If an IoT device gets compromised (and they will eventually), it can’t reach my storage.
The firewall rule is simple: only allow specific subnets to hit the TrueNAS web UI on port 443:
# OPNsense/pfSense rule example
pass in on vlan10 proto tcp from 192.168.10.0/24 to 192.168.10.100 port 443
If you’re running a Protectli Vault or similar appliance for your firewall, this takes maybe 20 minutes to set up. No excuses.
Installation and Initial Lockdown
The install itself is straightforward: download the ISO, flash a USB with Etcher, boot, follow the wizard. Use a separate SSD or USB for the boot device; don’t waste pool drives on the OS.
Once you’re in the web UI, immediately:
Change the admin password to something generated by your password manager. Not “admin123”.
Enable 2FA: TrueNAS supports TOTP. Set it up before you do anything else.
Disable SSH root login:
# In /etc/ssh/sshd_config
PermitRootLogin no
Create a non-root user for SSH access instead. I use key-based auth only; password SSH is disabled entirely.
Create Your Storage Pool
# RAID-Z1 with three drives
# (prefer stable /dev/disk/by-id paths over /dev/sdX so the pool
#  survives device renumbering across reboots)
zpool create mypool raidz1 /dev/sda /dev/sdb /dev/sdc
RAID-Z1 gives you one drive of redundancy. For more critical data, RAID-Z2 (two-drive redundancy) is worth the capacity trade-off. I run Z1 because I replicate offsite daily; the real backup is the replication, not the RAID.
Enterprise Security Practices, Scaled Down
Access Controls That Actually Work
Don’t give everyone admin access. Create separate users with specific dataset permissions:
# Create a limited user for media access
adduser --home /mnt/mypool/media --shell /bin/bash mediauser
chmod 750 /mnt/mypool/media
My wife has read-only access to the photo datasets. The kids’ Plex account can only read the media dataset. Nobody except my admin account can touch the backup datasets. This takes 10 minutes to set up and prevents the “oops I deleted everything” scenario.
Encrypt Sensitive Datasets
TrueNAS makes encryption easy: you enable it during dataset creation. I encrypt anything with personal documents, financial records, or credentials. The performance hit with AES-NI hardware is negligible (under 5% in my benchmarks).
For offsite backups, I use rsync over SSH with forced encryption:
# Encrypted backup to remote server
rsync -avz --progress -e "ssh -i ~/.ssh/backup_key" \
/mnt/mypool/critical/ backup@remote:/mnt/backup/
VPN for Remote Access
Never expose your TrueNAS web UI to the internet. I use WireGuard through OPNsense; when I need to check on things remotely, I VPN in first. The firewall blocks everything else. I covered secure remote access patterns in detail before.
Ongoing Maintenance
Setup is maybe 20% of the work. The rest is keeping it running reliably:
ZFS scrubs: I run weekly scrubs on Sunday nights. They catch silent corruption before it becomes a problem. Schedule this in the TrueNAS UI under Tasks → Scrub Tasks.
Updates: check for TrueNAS updates monthly. Don’t auto-update a NAS; read the release notes first.
Monitoring: I pipe TrueNAS metrics into Grafana via Prometheus. SMART data, pool health, CPU/RAM usage. When a drive starts showing pre-failure indicators, I know before it dies.
Snapshot rotation: keep hourly snapshots for 48 hours, daily for 30 days, weekly for 6 months. Automate this in the TrueNAS snapshot policies.
Test your backups. Seriously. I do a full restore test every quarter: pull a snapshot, restore it to a test dataset, verify the files are intact. An untested backup is not a backup.
Where to Go From Here
Once your TrueNAS box is running securely, you can start adding services. I run Plex, Nextcloud, Home Assistant, and a Gitea instance all on the same SCALE box using Docker. Each service gets its own dataset with isolated permissions.
Why choose TrueNAS for homelab storage?
TrueNAS uses ZFS, which offers superior data integrity features like snapshots, checksumming, and automated replication. TrueNAS SCALE also supports running Docker containers alongside storage.
What hardware is recommended for TrueNAS?
Key recommendations include ECC RAM (16GB minimum), a CPU with AES-NI for encryption, NAS-rated drives (e.g., WD Red Plus), and a UPS to prevent data corruption during power loss.
How can I secure my TrueNAS setup?
Use VLANs to isolate your NAS from other devices, configure strict firewall rules, disable root SSH login, and enable dataset encryption. These steps help protect your data from unauthorized access and potential network threats.
What are the benefits of ZFS in TrueNAS?
ZFS provides features like snapshots for quick data recovery, built-in checksumming to prevent silent data corruption, and replication for secure offsite backups.
Disclosure: Some links in this article are affiliate links. If you purchase through them, I earn a small commission at no extra cost to you. I only recommend gear I actually run in my own homelab.
Get daily AI-powered market intelligence. Join Alpha Signal for free market briefs, security alerts, and dev tool recommendations.
Last week I was debugging a CloudFront log parser and pasted a chunk of raw access logs into Regex101. Mid-keystroke, I realized those logs contained client IPs, user agents, and request paths from production. All of it, shipped off to someone else’s server for “processing.”
That’s the moment I decided to build my own regex tester.
The Problem with Existing Regex Testers
TL;DR: Online regex testers like Regex101 send your test strings to their servers, which is a problem when those strings contain production data. RegexLab is a single-file, browser-only alternative with match, replace, and multi-test modes.
Quick Answer: A privacy-first regex tester that runs entirely in the browser with zero server communication. Unlike Regex101, no input data is transmitted or logged, ideal for testing patterns against sensitive strings like API keys or PII.
I looked at three tools I’ve used for years:
Regex101 is the gold standard. Pattern explanations, debugger, community library: it’s feature-rich. But it sends every keystroke to their backend. Their privacy policy says they store patterns and test strings. If you’re testing regex against production data, config files, or anything containing tokens and IPs, that’s a problem.
RegExr has a solid educational angle with the animated railroad diagrams. But the interface feels like 2015, and there’s no way to test multiple strings against the same pattern without copy-pasting repeatedly.
Various Chrome extensions promise offline regex testing, but they request permissions to read all your browser data. I’m not trading one privacy concern for a worse one.
What none of them do: let you define a set of test cases (this string SHOULD match, this one SHOULDN’T) and run them all at once. If you write regex for input validation, URL routing, or log parsing, you need exactly that.
What I Built
RegexLab is a single HTML file. No build step, no npm install, no backend. Open it in a browser and it works, including offline, since it registers a service worker.
Three modes:
Match mode highlights every match in real-time as you type. Capture groups show up color-coded below the result. If your pattern has named groups or numbered captures, you see exactly what each group caught.
Replace mode gives you a live preview of string replacement. Type your replacement pattern (with $1, $2 backreferences) and see the output update instantly. I use this constantly for log reformatting and sed-style transforms.
Multi-test mode is the feature I actually wanted. Add as many test cases as you need. Mark each one as “should match” or “should not match.” Run them all against your pattern and get a pass/fail report. Green checkmark or red X, instantly.
This is what makes RegexLab different from Regex101. When I’m writing a URL validation pattern, I want to throw 15 different URLs at it (valid ones, edge cases with ports and fragments, obviously invalid ones) and see them all pass or fail in one view. No scrolling, no re-running.
How It Works Under the Hood
The entire app is ~30KB of HTML, CSS, and JavaScript. No frameworks, no dependencies. Here’s what’s happening technically:
Pattern compilation: Every keystroke triggers a debounced (80ms) recompile. The regex is compiled with new RegExp(pattern, flags) inside a try/catch. Invalid patterns show the error message directly: no cryptic “SyntaxError,” just the relevant part of the browser’s error string.
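That compile-and-report step is small enough to sketch. This is an illustrative version, not RegexLab’s actual source; the names safeCompile and debounce are mine:

```javascript
// Wrap new RegExp in try/catch so invalid patterns surface their
// message instead of throwing.
function safeCompile(pattern, flags) {
  try {
    return { regex: new RegExp(pattern, flags), error: null };
  } catch (e) {
    // Keep the browser's error string for display
    return { regex: null, error: e.message };
  }
}

// Debounce so typing triggers at most one recompile per 80ms
function debounce(fn, ms) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

const recompile = debounce(p => safeCompile(p, "g"), 80);
```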
Match highlighting: I use RegExp.exec() in a loop with the global flag to find every match with its index position. Then I build highlighted HTML by slicing the original string at match boundaries and wrapping matches in <span class="hl"> tags. A safety counter at 10,000 prevents infinite loops from zero-length matches (a real hazard with patterns like .*).
// Simplified version of the match loop
const rxCopy = new RegExp(rx.source, rx.flags);
let m;
let safety = 0;
while ((m = rxCopy.exec(text)) !== null && safety < 10000) {
  matches.push({ start: m.index, end: m.index + m[0].length });
  if (m[0].length === 0) rxCopy.lastIndex++; // avoid infinite loop on zero-length matches
  safety++;
}
That lastIndex++ on zero-length matches is important. Without it, a pattern like /a*/g will match the empty string forever at the same position. Every regex tutorial skips this, and then people wonder why their browser tab freezes.
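Once the match list exists, the highlighting step described above reduces to slicing and escaping. A hedged sketch of the technique; esc and highlightMatches are hypothetical names, not the tool’s source:

```javascript
// Escape text so user input can't inject HTML into the result pane
function esc(s) {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

// Slice the original string at match boundaries and wrap each match
function highlightMatches(text, matches) {
  let html = "";
  let pos = 0;
  for (const { start, end } of matches) {
    html += esc(text.slice(pos, start));
    html += '<span class="hl">' + esc(text.slice(start, end)) + "</span>";
    pos = end;
  }
  return html + esc(text.slice(pos));
}
```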
Capture groups: When exec() returns an array with more than one element, elements at index 1+ are capture groups. I color-code them with four rotating colors (amber, pink, cyan, purple) and display them below the match result.
Flag toggles: The flag buttons sync bidirectionally with the text input. Click a button, the text field updates. Type in the text field, the buttons update. I store flags as a simple string ("gim") and reconstruct it from button state on each click.
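Toggling a flag in that string is a small pure function; a sketch under the same assumptions (toggleFlag is my name for it, not the app’s):

```javascript
// Add a flag if absent, remove it if present, keeping a canonical order
function toggleFlag(flags, flag) {
  const next = flags.includes(flag)
    ? flags.replace(flag, "")
    : flags + flag;
  // Normalize to a stable order so "gi" and "ig" compare equal
  return ["g", "i", "m", "s", "u", "y"].filter(f => next.includes(f)).join("");
}
```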
State persistence: Everything saves to localStorage every 2 seconds β pattern, flags, test string, replacement, and all test cases. Reload the page and you’re right where you left off. The service worker caches the HTML for offline use.
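The save/restore pair can be sketched with the storage object passed in as a parameter, which also makes it testable outside a browser. The key and field names here are illustrative:

```javascript
// Persist the app state as one JSON blob under a single key
function saveState(storage, state) {
  storage.setItem("regexlab-state", JSON.stringify(state));
}

function loadState(storage) {
  const raw = storage.getItem("regexlab-state");
  return raw ? JSON.parse(raw) : null;
}

// In the browser this would run on an interval, e.g.:
// setInterval(() => saveState(localStorage, currentState), 2000);
```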
Common patterns library: 25 pre-built patterns for emails, URLs, IPs, dates, UUIDs, credit cards, semantic versions, and more. Click one to load it. Searchable. I pulled these from my own .bashrc aliases and validation functions I’ve written over the years.
Design Decisions
Dark mode by default via prefers-color-scheme. Most developers use dark themes. The light mode is there for the four people who don’t.
Monospace everywhere that matters. Pattern input, test strings, results: all in SF Mono / Cascadia Code / Fira Code. Proportional fonts in regex testing are a war crime.
No syntax highlighting in the pattern input. I considered it, but colored brackets and escaped characters inside an input field add complexity without much benefit. The error message and match highlighting already tell you if your pattern is right.
Touch targets are 44px minimum. The flag toggle buttons, tab buttons, and action buttons all meet Apple’s HIG recommendation. I tested at 320px viewport width on my phone and everything still works.
Real Use Cases
Log parsing: I parse nginx access logs daily. A pattern like (\d+\.\d+\.\d+\.\d+).*?"(GET|POST)\s+([^"]+)"\s+(\d{3}) pulls IP, method, path, and status code. Multi-test mode lets me throw 10 sample log lines at it to make sure edge cases (HTTP/2 requests, URLs with quotes) don’t break it.
Input validation: Building a form? Test your email/phone/date regex against a list of valid and invalid inputs in one shot. Way faster than manually testing each one.
Search and replace: Reformatting dates from MM/DD/YYYY to YYYY-MM-DD? The replace mode with $3-$1-$2 backreferences shows you the result instantly.
Teaching: The pattern library doubles as a learning resource. Click “Email” or “UUID” and see a production-quality regex with its flags and description. Better than Stack Overflow answers from 2012.
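The search-and-replace case above is plain String.prototype.replace with numbered backreferences; for example:

```javascript
// Reorder MM/DD/YYYY into YYYY-MM-DD using capture groups $1..$3
const reformatDate = s =>
  s.replace(/(\d{2})\/(\d{2})\/(\d{4})/g, "$3-$1-$2");
```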
Try It
It’s live at regexlab.orthogonal.info. Works offline after the first visit. Install it as a PWA if you want it in your dock.
If you want more tools like this (HashForge for hashing, JSON Forge for formatting JSON, QuickShrink for image compression), they’re all at apps.orthogonal.info. Same principle: single HTML file, zero dependencies, your data stays in your browser.
Full disclosure: Mastering Regular Expressions by Jeffrey Friedl is the book that made regex click for me back in college. If you’re still guessing at lookaheads and backreferences, it’s worth the read. Regular Expressions Cookbook by Goyvaerts and Levithan is also solid if you want recipes rather than theory. And if you’re doing a lot of text processing, a good mechanical keyboard makes the difference when you’re typing backslashes all day.
I’ve been hashing things for years: verifying file downloads, generating checksums for deployments, creating HMAC signatures for APIs. And every single time, I end up bouncing between three or four browser tabs because no hash tool does everything I need in one place.
TL;DR: No single hash tool covers text, files, HMAC, and verification without routing data through a server, so I built HashForge: a single-file, browser-only generator that computes MD5, SHA-1, SHA-256, SHA-384, and SHA-512 at once.
The Problem with Existing Hash Tools
Here’s what frustrated me about the current space. Most online hash generators force you to pick one algorithm at a time. Need MD5 and SHA-256 for the same input? That’s two separate page loads. Browserling’s tools, for example, have a different page for every algorithm: MD5 on one URL, SHA-256 on another, SHA-512 on yet another. You’re constantly copying, pasting, and navigating.
Then there’s the privacy problem. Some hash generators process your input on their servers. For a tool that developers use with sensitive data (API keys, passwords, config files), that’s a non-starter. Your input should never leave your machine.
And finally, most tools feel like they were built in 2010 and never updated. No dark mode, no mobile responsiveness, no keyboard shortcuts. They work, but they feel dated.
What Makes HashForge Different
All algorithms at once. Type or paste text, and you instantly see MD5, SHA-1, SHA-256, SHA-384, and SHA-512 hashes side by side. No page switching, no dropdown menus. Every algorithm, every time, updated in real-time as you type.
Four modes in one tool. HashForge isn’t just a text hasher. It has four distinct modes:
Text mode: Real-time hashing as you type. Supports hex, Base64, and uppercase hex output.
File mode: Drag-and-drop any file (PDFs, ISOs, executables, anything). The file never leaves your browser, and a progress indicator tracks large files while they’re hashed.
HMAC mode: Enter a secret key and message to generate HMAC signatures for SHA-1, SHA-256, SHA-384, and SHA-512. Essential for API development and webhook verification.
Verify mode: Paste two hashes and instantly compare them. Uses constant-time comparison to prevent timing attacks; the same approach used in production authentication systems.
100% browser-side processing. Nothing, not a single byte, leaves your browser. HashForge uses the Web Crypto API for SHA algorithms and a pure JavaScript implementation for MD5 (since the Web Crypto API doesn’t support MD5). There’s no server, no analytics endpoint collecting your inputs, no “we process your data according to our privacy policy” fine print. Your data stays on your device, period.
Technical Deep Dive
HashForge is a single HTML file: 31KB total with all CSS and JavaScript inline. Zero external dependencies. No frameworks, no build tools, no CDN requests. This means:
First paint under 100ms on any modern browser
Works offline after the first visit (it’s a PWA with a service worker)
No supply chain risk; there’s literally nothing to compromise
The MD5 Challenge
The Web Crypto API supports SHA-1, SHA-256, SHA-384, and SHA-512 natively, but not MD5. Since MD5 is still widely used for file verification (despite being cryptographically broken), I implemented it in pure JavaScript. The implementation handles the full MD5 specification β message padding, word array conversion, and all four rounds of the compression function.
Is MD5 secure? No. Should you use it for passwords? Absolutely not. But for verifying that a file downloaded correctly? It’s fine, and millions of software projects still publish MD5 checksums alongside SHA-256 ones.
Constant-Time Comparison
The hash verification mode uses constant-time comparison. In a naive string comparison, the function returns as soon as it finds a mismatched character, which means comparing “abc” against “axc” is faster than comparing “abc” against “abd”. An attacker could theoretically use this timing difference to guess a hash one character at a time.
HashForge’s comparison XORs every byte of both hashes and accumulates the result, then checks if the total is zero. The operation takes the same amount of time regardless of where (or whether) the hashes differ. This is the same pattern used in OpenSSL’s CRYPTO_memcmp and Node.js’s crypto.timingSafeEqual.
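In JavaScript, that accumulate-then-check pattern looks roughly like this. A sketch of the technique for hex strings, not HashForge’s literal source:

```javascript
// Compare two hex strings in constant time: XOR every character code
// and accumulate, so runtime doesn't depend on where they differ.
function constantTimeEqual(a, b) {
  if (a.length !== b.length) return false;
  let diff = 0;
  for (let i = 0; i < a.length; i++) {
    diff |= a.charCodeAt(i) ^ b.charCodeAt(i);
  }
  return diff === 0;
}
```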
PWA and Offline Support
HashForge registers a service worker that caches the page on first visit. After that, it works completely offline β no internet required. The service worker uses a network-first strategy: it tries to fetch the latest version, falls back to cache if you’re offline. This means you always get updates when connected, but never lose functionality when you’re not.
Accessibility
Every interactive element has proper ARIA attributes. The tab navigation follows the WAI-ARIA Tabs Pattern β arrow keys move between tabs, Home/End jump to first/last. There’s a skip-to-content link for screen reader users. All buttons have visible focus states. Keyboard shortcuts (Ctrl+1 through Ctrl+4) switch between modes.
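The Ctrl+1 through Ctrl+4 handling can be factored into a pure mapping from a keydown event to a mode index, which keeps it testable without a DOM. modeForShortcut is an illustrative name:

```javascript
// Map a keydown event to a mode index (0-3), or null if not a shortcut.
// Accepts any object with ctrlKey/key fields, so no DOM is needed to test it.
function modeForShortcut(e) {
  if (!e.ctrlKey) return null;
  const n = Number(e.key);
  return n >= 1 && n <= 4 ? n - 1 : null;
}
```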
Real-World Use Cases
1. Verifying software downloads. You download an ISO and the website provides a SHA-256 checksum. Drop the file into HashForge’s File mode, copy the SHA-256 output, paste it into Verify mode alongside the published checksum. Instant verification.
2. API webhook signature verification. Stripe, GitHub, and Slack all use HMAC-SHA256 to sign webhooks. When debugging webhook handlers, you can use HashForge’s HMAC mode to manually compute the expected signature and compare it against what you’re receiving. No need to write a throwaway script.
3. Generating content hashes for ETags. Building a static site? Hash your content to generate ETags for HTTP caching. Paste the content into Text mode, grab the SHA-256, and you have a cache key.
4. Comparing database migration checksums. After running a migration, hash the schema dump and compare it across environments. HashForge’s Verify mode makes this a two-paste operation.
5. Quick password hash lookups. Not for security, but when you’re debugging and need to quickly check if two plaintext values produce the same hash (checking for normalization issues, encoding problems, etc.).
What I Didn’t Build
I deliberately left out some features that other tools include:
No bcrypt/scrypt/argon2. These are password hashing algorithms, not general-purpose hash functions. They’re intentionally slow and have different APIs. Mixing them in would confuse the purpose of the tool.
No server-side processing. Some tools offer an “API” where you POST data and get hashes back. Why? The browser can do this natively.
No accounts or saved history. Hash a thing, get the result, move on. If you need to save it, copy it. Simple tools should be simple.
Try It
HashForge is free, open-source, and runs entirely in your browser. Try it at hashforge.orthogonal.info.
If you find it useful, support the project; it helps me keep building privacy-first tools.
For developers: the source is on GitHub. It’s a single HTML file, so feel free to fork it, self-host it, or tear it apart to see how it works.
Looking for more browser-based dev tools? Check out QuickShrink (image compression), PixelStrip (EXIF removal), and TypeFast (text snippets). All free, all private, all single-file.
Looking for a great mechanical keyboard to speed up your development workflow? I’ve been using one for years and the tactile feedback genuinely helps with coding sessions. The Keychron K2 is my daily driver: compact 75% layout, hot-swappable switches, and excellent build quality. Also worth considering: a solid USB-C hub makes the multi-monitor developer setup much cleaner.
What is HashForge: Privacy-First Hash Generator for All Algos about?
HashForge is a free, browser-only hash generator that computes MD5, SHA-1, SHA-256, SHA-384, and SHA-512 side by side, with file hashing, HMAC signing, and constant-time hash verification in a single tool.
Who should read this article about HashForge: Privacy-First Hash Generator for All Algos?
Developers who verify downloads, generate deployment checksums, or debug HMAC-signed webhooks, and anyone who doesn’t want sensitive input leaving their browser.
What are the key takeaways from HashForge: Privacy-First Hash Generator for All Algos?
HashForge computes every algorithm at once, processes everything client-side in a single 31KB HTML file with zero dependencies, and works offline as a PWA.
Last week I needed to debug a nested API response, the kind with five levels of objects, arrays inside arrays, and keys that look like someone fell asleep on the keyboard. Simple enough task. I just needed a JSON formatter.
**Try JSON Forge now: [jsonformatter.orthogonal.info](https://jsonformatter.orthogonal.info)** (no install, no signup, runs entirely in your browser).
So I opened the first Google result: jsonformatter.org. Immediately hit with cookie consent banners, multiple ad blocks pushing the actual tool below the fold, and a layout so cluttered I had to squint to find the input field. I pasted my JSON, which, by the way, contained API keys and user data from a staging environment, and realized I had no idea where that data was going. Their privacy policy? Vague at best.
Next up: JSON Editor Online. Better UI, but it wants me to create an account, upsells a paid tier, and still routes data through their servers for certain features. Then Curious Concept’s JSON Formatter, which is cleaner but dated, and again my data leaves the browser.
I closed all three tabs and thought: I’ll just build my own.
Introducing JSON Forge
TL;DR: Popular online JSON formatters are cluttered with ads and route your data through their servers. JSON Forge is a single-file, privacy-first JSON formatter, viewer, and editor that runs entirely in your browser, with formatting, auto-fix, tree view, JSONPath queries, and offline PWA support.
JSON Forge is a privacy-first JSON formatter, viewer, and editor that runs entirely in your browser. No servers. No tracking. No accounts. Your data never leaves your machine, period.
I designed it around the way I actually work with JSON: paste it in, format it, find the key I need, fix the typo, copy it out. Keyboard-driven, zero friction, fast. Here’s what it does:
Format & Minify: One-click pretty-print or compact output, with configurable indentation
Sort Keys: Alphabetical key sorting for cleaner diffs and easier scanning
Smart Auto-Fix: Handles trailing commas, unquoted keys, single quotes, and other common JSON sins that break strict parsers
Dual View (Code + Tree): Full syntax-highlighted code editor on the left, collapsible tree view on the right with resizable panels
JSONPath Navigator: Query your data with JSONPath expressions. Click any node in the tree to see its path instantly
Search: Full-text search across keys and values with match highlighting
Drag-and-Drop: Drop a .json file anywhere on the page
Syntax Highlighting: Color-coded strings, numbers, booleans, and nulls
Dark Mode: Because of course
Mobile Responsive: Works on tablets and phones when you need it
Keyboard Shortcuts: Ctrl+Shift+F to format, Ctrl+Shift+M to minify, Ctrl+Shift+S to sort; the workflow stays in your hands
PWA with Offline Support: Install it as an app, use it on a plane
Why Client-Side Matters More Than You Think
Here’s the thing about JSON formatters: people paste everything into them. API responses with auth tokens. Database exports with PII. Webhook payloads with customer data. Configuration files with secrets. We’ve all done it. I’ve done it a hundred times without thinking twice.
Most online JSON tools process your input on their servers. Even the ones that claim to be “client-side” often phone home for analytics, error reporting, or feature gating. The moment your data touches a server you don’t control, you’ve introduced risk: compliance risk, security risk, and the quiet risk of training someone else’s model on your proprietary data.
JSON Forge processes everything with JavaScript in your browser tab. Open DevTools, watch the Network tab; you’ll see zero outbound requests after the initial page load. I’m not asking you to trust my word; I’m asking you to verify it yourself. The code is right there.
The Single-File Architecture
One of the more unusual decisions I made: JSON Forge is a single HTML file. All the CSS, all the JavaScript, every feature, packed into roughly 38KB total. No build step. No npm install. No webpack config. No node_modules black hole.
Why? A few reasons:
Portability. You can save the file to your desktop and run it offline forever. Email it to a colleague. Put it on a USB drive. It just works.
Auditability. One file means anyone can read the entire source in an afternoon. No dependency trees to trace, no hidden packages, no supply chain risk. Zero dependencies means zero CVEs from upstream.
Performance. No framework overhead. No virtual DOM diffing. No hydration step. It loads instantly and runs at the speed of vanilla JavaScript.
Longevity. Frameworks come and go. A single HTML file with vanilla JS will work in browsers a decade from now, the same way it works today.
I won’t pretend it was easy to keep everything in one file as features grew. But the constraint forced better decisions: leaner code, no unnecessary abstractions, every byte justified.
The Privacy-First Toolkit
JSON Forge is actually part of a broader philosophy I’ve been building around: developer tools that respect your data by default. If you share that mindset, you might also find these useful:
QuickShrink: A browser-based image compressor. Resize and compress images without uploading them anywhere. Same client-side architecture.
PixelStrip: Strips EXIF metadata from photos before you share them. GPS coordinates, camera info, timestamps, all gone without ever leaving your browser.
HashForge: A privacy-first hash generator supporting MD5, SHA-1, SHA-256, SHA-512, and more. Hash files and text locally with zero server involvement.
Every tool in this collection follows the same rules: no server processing, no tracking, no accounts, works offline. The way developer tools should be.
What’s Under the Hood
For the technically curious, here’s a peek at how some of the features work:
The auto-fix engine runs a series of regex-based transformations and heuristic passes before attempting JSON.parse(). It handles the most common mistakes I’ve seen in the wild β trailing commas after the last element, single-quoted strings, unquoted property names, and even some cases of missing commas between elements. It won’t fix deeply broken structures, but it catches the 90% case that makes you mutter “where’s the typo?” for ten minutes.
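A stripped-down sketch of that idea (illustrative only; the real passes need to be more careful about braces and quotes inside string values):

```javascript
// Heuristic cleanup of common JSON mistakes before JSON.parse
function autoFix(input) {
  let s = input;
  // Quote bare object keys:  {foo: 1}  ->  {"foo": 1}
  s = s.replace(/([{,]\s*)([A-Za-z_$][\w$]*)\s*:/g, '$1"$2":');
  // Convert single-quoted strings to double-quoted
  s = s.replace(/'([^'\\]*(?:\\.[^'\\]*)*)'/g,
    (_, body) => '"' + body.replace(/"/g, '\\"') + '"');
  // Drop trailing commas before a closing brace or bracket
  s = s.replace(/,\s*([}\]])/g, "$1");
  return s;
}
```

Because the passes run on the raw string, this stays a best-effort fallback that only runs after strict JSON.parse fails.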
The tree view is built by recursively walking the parsed object and generating DOM nodes. Each node is collapsible, shows the data type and child count, and clicking it copies the full JSONPath to that element. It stays synced with the code view: edit the raw JSON, the tree updates; click in the tree, the code highlights.
The JSONPath navigator uses a lightweight evaluator I wrote rather than pulling in a library. It supports bracket notation, dot notation, recursive descent ($..), and wildcard selectors β enough for real debugging work without the weight of a full spec implementation.
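Minus recursive descent and wildcards, the core of such an evaluator fits in a few lines. A hedged sketch; resolvePath is not the shipped function name:

```javascript
// Resolve "$.a.b[0].c"-style paths (dot + bracket notation only)
function resolvePath(obj, path) {
  const tokens = path
    .replace(/^\$\.?/, "")          // strip the leading "$." root
    .split(/[.[]/)                  // split on "." and "["
    .map(t => t.replace(/\]$/, "")) // drop trailing "]" from indices
    .filter(Boolean);
  return tokens.reduce(
    (cur, t) => (cur == null ? undefined : cur[t]),
    obj
  );
}
```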
Developer Setup & Gear
I spend most of my day staring at JSON, logs, and API responses. If you’re the same, investing in your workspace makes a real difference. Here’s what I use and recommend:
LG 27″ 4K UHD Monitor: Sharp text, accurate colors, and enough resolution to have a code editor, tree view, and terminal side by side without squinting.
Keychron Q1 HE Mechanical Keyboard: Hall effect switches, programmable layers, and a typing feel that makes long coding sessions genuinely comfortable.
Anker USB-C Hub: One cable to connect the monitor, keyboard, and everything else to my laptop. Clean desk, clean mind.
(Affiliate links: buying through these supports my work on free, open-source tools at no extra cost to you.)
Try It, Break It, Tell Me What’s Missing
JSON Forge is free, open, and built for developers who care about their data. I use it daily β it’s replaced every other JSON tool in my workflow. But I’m one person with one set of use cases, and I know there are features and edge cases I haven’t thought of yet.
Give it a try at orthogonal.info/json-forge. Paste in the gnarliest JSON you’ve got. Try the auto-fix on something that’s almost-but-not-quite valid. Explore the tree view on a deeply nested response. Install it as a PWA and use it offline.
If something breaks, if you want a feature, or if you just want to say hey, I’d love to hear from you. And if JSON Forge saves you even five minutes of frustration, consider buying me a coffee. It keeps the lights on and the tools free.
What is JSON Forge: Privacy-First JSON Formatter in Your Browser about?
JSON Forge is a free, privacy-first JSON formatter, viewer, and editor that runs entirely in your browser, so API responses, configs, and payloads you paste in never touch a server.
Who should read this article about JSON Forge: Privacy-First JSON Formatter in Your Browser?
Developers who regularly inspect, debug, or reshape JSON, especially anything containing tokens, PII, or other sensitive data that shouldn’t pass through a third-party server.
What are the key takeaways from JSON Forge: Privacy-First JSON Formatter in Your Browser?
Online JSON tools routinely route your data through their servers; JSON Forge does formatting, auto-fix, tree navigation, JSONPath queries, and search in a single 38KB HTML file with zero dependencies and offline support.