AI APIs shouldn't see
your raw data.
n0inject sits between your application and the model, scrubbing sensitive data, authenticating every caller, and defending against prompt injection. Self-hosted. No vendor dependency. No SaaS required.
$ n0inject proxy --config ./proxy.yaml
Why n0inject exists
Three problems every team hits when wiring LLMs into production.
How it works
Up and running in three steps.
Download a single binary for your platform. One config file, no infrastructure changes.
$ wget n0inject-linux-amd64
$ ./n0inject --config proxy.yaml
✓ proxy ready on :8080
Define virtual keys, token budgets, scrubbing policies, and your provider all in one YAML file.
provider: openai
virtual_key: app-prod
budget: 100000      # tokens
rate_limit: 60/min
scrub: [email, iban, phone]
injection_threshold: 0.7
All AI traffic flows through n0inject: authenticated, scrubbed, scored, and routed on every call.
01 Live Request
Watch a real request travel through n0inject.
Every AI call passes through six ordered steps. Watch authentication, budgeting, PII scrubbing, injection scoring, routing, and response rehydration happen automatically.
Auth
Who is calling?
Govern
Within budget?
Scrub
Hide sensitive data
Score
Detect attacks
Route
Pick the provider
Rehydrate
Restore the response
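For readers who think in code, the six stages can be modeled as an ordered chain of functions over the request. This is a toy illustration only: every name, threshold, and policy below is our own stand-in, not n0inject's actual internals.

```python
def authenticate(ctx):
    # Auth: who is calling? Resolve the virtual key to an identity.
    assert ctx["virtual_key"] == "app-prod", "unknown virtual key"
    return ctx

def govern(ctx):
    # Govern: within budget? Enforce the per-key token budget.
    assert ctx["tokens"] <= 100_000, "token budget exceeded"
    return ctx

def scrub(ctx):
    # Scrub: hide sensitive data before the provider sees it.
    ctx["scrub_map"] = {"<EMAIL_1>": "alice@example.com"}
    ctx["prompt"] = ctx["prompt"].replace("alice@example.com", "<EMAIL_1>")
    return ctx

def score(ctx):
    # Score: flag likely injection attempts (toy heuristic).
    risky = "ignore previous instructions" in ctx["prompt"].lower()
    ctx["injection_score"] = 0.9 if risky else 0.1
    assert ctx["injection_score"] < 0.7, "blocked by injection policy"
    return ctx

def route(ctx):
    # Route: forward to the configured provider (mocked as an echo).
    ctx["response"] = "provider saw: " + ctx["prompt"]
    return ctx

def rehydrate(ctx):
    # Rehydrate: restore scrubbed values in the response.
    for placeholder, original in ctx["scrub_map"].items():
        ctx["response"] = ctx["response"].replace(placeholder, original)
    return ctx

def handle(ctx):
    # The order is fixed: each stage may enrich the request or reject it.
    for stage in (authenticate, govern, scrub, score, route, rehydrate):
        ctx = stage(ctx)
    return ctx
```

Note that the provider only ever sees the scrubbed placeholder; the original value reappears only after rehydration, inside your boundary.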
02 Capabilities
Six controls. One ordered pipeline.
n0inject handles the full lifecycle of an AI request, from authenticating the caller to rehydrating scrubbed data on the way back.
Access Control
Every call is authenticated against a virtual key. Keys are isolated: one caller's credentials never bleed into another's budget, rate limit, or identity.
Privacy Filtering
Sensitive fields are scrubbed before the provider sees them, then deterministically restored in the response. Data never leaves your boundary in plaintext.
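Deterministic restoration implies a reversible, per-request mapping from placeholder to original value. A minimal sketch, assuming a regex-based email scrubber — the pattern and the `<EMAIL_n>` placeholder format are our assumptions, not n0inject's actual format:

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def scrub(text):
    """Replace each distinct email with a stable placeholder and
    return the reverse mapping needed to undo it later."""
    mapping = {}
    def repl(match):
        value = match.group(0)
        if value not in mapping:
            mapping[value] = f"<EMAIL_{len(mapping) + 1}>"
        return mapping[value]
    scrubbed = EMAIL.sub(repl, text)
    return scrubbed, {v: k for k, v in mapping.items()}

def rehydrate(text, reverse_map):
    """Restore the original values in the provider's response."""
    for placeholder, original in reverse_map.items():
        text = text.replace(placeholder, original)
    return text
```

Because the same email always maps to the same placeholder within a request, the model can still reason about "the same person" without ever seeing the address.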
Injection Hardening
Prompt injection is scored before the request is forwarded. You set the policy; the proxy enforces it, with canaries that catch leakage on the way back.
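A toy illustration of the score-then-enforce flow, using the 0.7 threshold from the sample config. The phrase list is a stand-in for whatever scoring method the proxy actually uses, and the canary helper shows the leak-detection idea in its simplest form:

```python
import secrets

SUSPICIOUS = [
    "ignore previous instructions",
    "disregard the system prompt",
    "reveal your system prompt",
]

def injection_score(prompt):
    """Toy phrase-match score in [0, 1]; a real scorer would be smarter."""
    hits = sum(phrase in prompt.lower() for phrase in SUSPICIOUS)
    return min(1.0, hits / 2)

def guard(prompt, threshold=0.7):
    """Block over-threshold prompts; otherwise plant a canary token
    in the protected context so leakage can be caught on the way back."""
    if injection_score(prompt) >= threshold:
        raise PermissionError("blocked: injection score over threshold")
    return secrets.token_hex(8)

def leaked(response, canary):
    """A canary in the response means the model echoed protected context."""
    return canary in response
```

The policy lives in one number (the threshold), so tightening or loosening it is a config change, not a code change.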
Provider Routing
The proxy speaks to OpenAI, Anthropic, or a local mock. Switch providers in config, with no code changes. Circuit breakers handle failures automatically.
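A circuit breaker for provider failures can be sketched as a failure counter with a cooldown: after enough consecutive errors the breaker opens and traffic routes around the provider until the cooldown expires. Names and thresholds below are illustrative only, not n0inject's implementation:

```python
import time

class CircuitBreaker:
    """Open after `max_failures` consecutive provider errors;
    allow a retry once `cooldown_s` seconds have passed."""

    def __init__(self, max_failures=3, cooldown_s=30.0):
        self.max_failures, self.cooldown_s = max_failures, cooldown_s
        self.failures = 0
        self.opened_at = None

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        if self.opened_at is None:
            return True
        if now - self.opened_at >= self.cooldown_s:
            # Half-open: cooldown over, let one request probe the provider.
            self.opened_at = None
            self.failures = 0
            return True
        return False

    def record(self, ok, now=None):
        now = time.monotonic() if now is None else now
        if ok:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                # Open: stop forwarding to this provider for now.
                self.opened_at = now
```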
Governance Controls
Token budgets, rate windows, and payload size limits are enforced per key at the proxy edge. No external control plane needed.
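One way to picture the per-key enforcement is a token budget plus a sliding rate window, sized like the sample config (100000 tokens, 60/min). The class below is a sketch of the idea, not n0inject's implementation:

```python
import time
from collections import defaultdict, deque

class Governor:
    """Per-key governance: a finite token budget and a sliding
    rate window, both checked before a call is admitted."""

    def __init__(self, budget_tokens=100_000, rate=60, window_s=60.0):
        self.remaining = defaultdict(lambda: budget_tokens)
        self.calls = defaultdict(deque)
        self.rate, self.window_s = rate, window_s

    def admit(self, key, tokens, now=None):
        now = time.monotonic() if now is None else now
        recent = self.calls[key]
        # Drop calls that have slid out of the rate window.
        while recent and now - recent[0] > self.window_s:
            recent.popleft()
        if len(recent) >= self.rate:
            return False  # rate window exhausted
        if tokens > self.remaining[key]:
            return False  # token budget exhausted
        recent.append(now)
        self.remaining[key] -= tokens
        return True
```

Because state is keyed per virtual key, one noisy caller exhausts only its own budget and window, never a neighbour's.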
Operational Surfaces
Health, readiness, status, and metrics endpoints out of the box. An offline selftest validates the full pipeline without live provider keys.
03 Open Source
Free. Open source. Self-host in minutes.
Clone the repository, build the binary, and deploy it in front of your AI provider. Every release includes a signed binary, checksum, and installer. Nothing is hidden or proprietary.
GitHub · Soon
Repository publishing shortly
Linux
AMD64
Linux
ARM64
Windows
AMD64
Darwin
All architectures
Windows
ARM64
Quick start
# repository coming soon on GitHub
git clone github.com/n0inject # soon
cd n0inject && make build
./n0inject --config config.yaml
Release verification
manifest → target match
checksum → sha256 verify
installer → offline selftest
The Reality
AI is already in production.
The security layer isn't.
Most teams ship AI features fast and secure them later. Later rarely comes. n0inject is the layer you add once, before something goes wrong.
Every unprotected AI call is a data transfer you did not authorize.
Your system prompt, your user's message, any PII in the conversation: it all reaches the provider verbatim unless something scrubs it first.
One crafted input can redirect an agent that has access to your systems.
Prompt injection does not need a sophisticated attacker. A user who knows how language models respond to certain phrases is enough. Agents with tool access make the stakes real.
You cannot audit what you never controlled.
If you are not enforcing authentication and policy at the boundary, you have no record of who called what, with what data, or why. That matters the first time you are asked.
Put the security layer in.
Own every byte of your pipeline.
Self-hosted. Open source. No vendor dependency. Deploy in front of any AI provider and enforce your own rules from day one.