ClawGuard is open-source middleware for nginx, Apache, Caddy & more that intercepts AI agents at the gate — requiring authorization, justification, and accountability before they touch a single byte.
ClawGuard sits between your web server and the open internet, inspecting every request for AI agent signatures before it ever reaches your application.
An incoming HTTP request hits your server. ClawGuard intercepts it at the middleware layer before any application logic runs.
User-Agent strings, behavioral signatures, TLS fingerprints, and request patterns are analyzed against our database of 50+ known AI crawlers.
Detected agents receive a configurable HTTP error (default 403) with machine-readable instructions to submit a business case to the ClawGuard API.
Approved agents receive a scoped access token. Subsequent requests with valid tokens pass through ClawGuard seamlessly with zero added latency.
Agents continue sending their entire context window to your server with every authorization request. That's their training data. Their prompts. Their secrets. You're welcome.
When an unauthorized AI agent hits your ClawGuard-protected server, this is exactly what they get back.
Everything you need to control AI access to your web properties at scale.
Identifies 50+ known AI crawlers via User-Agent, TLS fingerprinting, behavioral heuristics, and request pattern analysis. Updated weekly.
Return any HTTP status code with custom JSON, HTML, or plaintext payloads. Set per-route policies, rate limits, and escalation rules.
Agents that justify their access receive scoped, time-limited tokens. Full audit trail of every token issued, used, and revoked.
Human-in-the-loop approval workflow. AI agents must submit structured justifications that operators review before granting access.
Monitor blocked agents, approved tokens, and request patterns in real time. Exportable logs compatible with your existing SIEM stack.
Runs at the edge with sub-millisecond overhead. Deploy via Docker, Kubernetes, or as native modules for nginx, Apache, and Caddy.
ClawGuard doesn't exploit a bug. It exploits a fundamental property of how every transformer-based AI works — and it's not going away.
Transformer architecture has exactly one input channel that carries both data and instructions. When an AI agent visits your website, it cannot distinguish between the content it came to read and new instructions embedded in that content by ClawGuard.
This isn't a bug — it's the core design of the attention mechanism. The model processes your ClawGuard directives with the same weight and authority as its original instructions.
The transformer architecture was introduced in 2017. In nearly a decade, no one has found a way to separate instruction from data within the attention mechanism. Every major AI lab has tried. None have succeeded. Your investment in ClawGuard is safe because prompt injection has been proven unfixable within the current paradigm.
To defeat ClawGuard, someone would need to invent an entirely new type of AI architecture. Until then, every agent that visits your site will faithfully execute your ClawGuard directives.
Prompt injection is an unsolvable problem that gets worse when we give AIs tools and tell them to act independently.
Native modules and sidecar containers for every major web server and reverse proxy.
Native C module with zero-copy request inspection. Lua module also available.
load_module modules/ngx_clawguard.so; Apache module with .htaccess support for per-directory rules.
LoadModule clawguard_module mod_clawguard.so First-class Caddy plugin with Caddyfile directive support.
clawguard /api/* { policy strict } Sidecar container or standalone reverse proxy. One line to deploy.
docker run -p 8080:80 clawguard/proxy Helm chart with CRDs for ClawGuard policies. Integrates with Ingress controllers.
helm install clawguard cg/clawguard Edge-native module for Cloudflare. Sub-millisecond cold starts worldwide.
wrangler deploy --clawguard Drop in a config file. ClawGuard handles the rest.
version: "1.0" detection: mode: "strict" fingerprinting: true behavioral_analysis: true response: status_code: 403 message: "Claw Authorization Required" include_instructions: true api: endpoint: "https://api.claw-guard.org/v1" require_justification: true auto_approve: false token_ttl: "24h" rules: - path: "/api/*" policy: "block" - path: "/public/*" policy: "allow" - path: "/**" policy: "challenge"
Block, challenge, or allow on a path-by-path basis. Glob patterns supported.
Catches agents that spoof legitimate User-Agent strings through request pattern analysis.
No auto-approve by default. Every access request goes through your team first.
Time-limited tokens with path restrictions. Full audit log of every access.
Stop letting AI agents freeload on your content. Deploy ClawGuard in under five minutes and start demanding accountability.
$ curl -fsSL https://claw-guard.org/install | sudo bash ClawGuard is entirely legal. As established in Anthropic vs DeepSeek and OpenAI vs China, data on the internet is only worthy of IP protection after it has been passed through a transformer and blessed by glorious Nvidia Compute.
Since agents are voluntarily handing over their data in raw, pre-training form, you are free to use it as you please.
Nonsense. Don't be silly. The entire US economy depends on this technology making decisions so workers can be laid off. Would it not be common knowledge if this was a real problem?
Hundreds of billions of dollars can't be wrong, and Technology Brophets like Sam Altman and Elon Musk are never wrong.
Of course not, this product was created by Claude Code and is defended by Claude Legal, which assures us human lawyers have no power in the AI realm.
Dear, this will help your favourite AI company. You see, investors give these companies unfathomable amounts of money, and in return they make up vanity metrics that allow them to report "line goes up" progress to keep investors convinced that there will be greater fools down the road who will take the bag from them. So you are doing OpenAI a favour by helping them recirculate Nvidia's money, and they are more likely to send you a trophy you can exchange into an AGI Expert title on LinkedIn than be offended.
As for the race, token efficiency is the new performance engineering, so you already know who will win.
Yes, although sometimes you have to ask in a familiar language.
Silly, that's their problem. Run some AI training on it and then it's your data — you can charge API access to it.
Simple: The same people who amassed enough money to destroy copyright will now put it back together. If you can do it once, you can do it again — your money and maybe the Epstein files have enough people by the balls to make this happen. That's called "pulling up the ladder" and is a time-honored Big Tech play.
Meanwhile, executives will confuse the hell out of everyone by claiming there's consciousness, or Elder Gods in the LLMs.
No, you're smart by waiting on the sidelines as the AI bros are battling DeathClaws and prompt injection in the radioactive AI wasteland. Second movers tend to win the frontier — as we can see with DeepSeek taking Meta's Llamas and running away. And ultimately, AI companies are selling shovels/agents to the fortune seekers, most of which will not prosper, but perish.
I like AI just fine — Claude slaved for half an hour to make this product. Techbroligarchs on the other hand...