AI APIs shouldn't see
your raw data.

n0inject sits between your application and the model, scrubbing sensitive data, authenticating every caller, and defending against prompt injection. Self-hosted. No vendor dependency. No SaaS required.

N0INJECT REQUEST LOG · LIVE

$ n0inject proxy --config ./proxy.yaml

01  AUTHENTICATE
02  GOVERN
03  SCRUB DATA
04  SCORE THREAT
05  ROUTE
06  RE-HYDRATE
RESULT  REQUEST FORWARDED · 124ms
Self-Hosted · PII Scrubbing · Prompt Injection Defense · Online & Offline Validation · Deploy Anywhere · Free & Open Source

Why n0inject exists

Three problems every team hits when wiring LLMs into production.

How it works

Up and running in three steps.

01
Deploy the proxy.

Download a single binary for your platform. One config file, no infrastructure changes.

linux/amd64 · linux/arm64 · windows/amd64

$ wget n0inject-linux-amd64

$ ./n0inject --config proxy.yaml

✓ proxy ready on :8080

02
Configure your rules.

Define virtual keys, token budgets, scrubbing policies, and your provider, all in one YAML file.

provider: openai
virtual_key: app-prod
budget: 100 000 tokens
rate_limit: 60 / min
scrub: [email, iban, phone]
injection_threshold: 0.7
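As an illustration only, the flat `key: value` shape shown above can be read with a few lines of Python. This is a sketch, not n0inject's actual loader (which reads full YAML); the parser below and the way the fields are consumed are assumptions based on the example.

```python
def parse_config(text: str) -> dict:
    """Parse flat `key: value` lines into a dict, skipping blanks and comments."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        config[key.strip()] = value.strip()
    return config

example = """
provider: openai
virtual_key: app-prod
budget: 100 000 tokens
rate_limit: 60 / min
scrub: [email, iban, phone]
injection_threshold: 0.7
"""
```

A real loader would also validate types, for instance coercing `injection_threshold` to a float before comparing it against a request's threat score.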

03
Every request secured.

All AI traffic flows through n0inject: authenticated, scrubbed, scored, and routed on every call.

AUTH    virtual key verified
GOVERN  within budget
SCRUB   2 fields redacted
SCORE   threat score 0.09
ROUTE   → openai / gpt-4o
RESULT  forwarded · 118ms
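The log above can be sketched as one function with six ordered stages. Everything here (the key table, the email regex, the phrase heuristic, the echo provider) is a toy stand-in for illustration, not n0inject's implementation:

```python
import re

# Toy state for one virtual key; values mirror the config example above.
KEYS = {"app-prod": {"budget": 100_000, "spent": 0}}
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.\w+")
ATTACK_PHRASES = ("ignore previous instructions", "reveal your system prompt")

def run_pipeline(key: str, prompt: str) -> dict:
    # 01 AUTHENTICATE: unknown virtual keys are rejected outright.
    account = KEYS.get(key)
    if account is None:
        return {"result": "rejected", "stage": "auth"}
    # 02 GOVERN: enforce the token budget before any work is done.
    if account["spent"] >= account["budget"]:
        return {"result": "rejected", "stage": "govern"}
    # 03 SCRUB: replace PII with placeholders, remembering the originals.
    mapping = {f"<PII_{i}>": v for i, v in enumerate(EMAIL.findall(prompt))}
    for placeholder, value in mapping.items():
        prompt = prompt.replace(value, placeholder)
    # 04 SCORE: a naive phrase heuristic standing in for the real scorer.
    score = 0.9 if any(p in prompt.lower() for p in ATTACK_PHRASES) else 0.1
    if score > 0.7:  # the injection_threshold from the config
        return {"result": "rejected", "stage": "score"}
    # 05 ROUTE: forward to the configured provider (mocked as an echo).
    response = f"echo: {prompt}"
    # 06 RE-HYDRATE: restore the scrubbed values in the response.
    for placeholder, value in mapping.items():
        response = response.replace(placeholder, value)
    return {"result": "forwarded", "response": response}
```

The key property to notice is ordering: scrubbing happens before the provider is called, and re-hydration happens after, so the original values never cross the boundary.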

01 Live Request

Watch a real request travel through n0inject.

Every AI call passes through six ordered steps. Watch as authentication, budget checks, PII scrubbing, injection scoring, routing, and response rehydration happen automatically.

01

Auth

Who is calling?

02

Govern

Within budget?

03

Scrub

Hide sensitive data

04

Score

Detect attacks

05

Route

Pick the provider

06

Rehydrate

Restore the response

✓ Response Delivered

02 Capabilities

Six controls. One ordered pipeline.

n0inject handles the full lifecycle of an AI request from authenticating the caller to re-hydrating scrubbed data on the way back.

01

Access Control

Every call is authenticated against a virtual key. Keys are isolated: one caller's credentials never bleed into another's budget, rate limit, or identity.
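Isolation is the point worth illustrating: each key owns its own state, so exhausting one budget cannot affect another. A minimal sketch, with field names that are assumptions rather than n0inject's schema:

```python
class VirtualKey:
    """Per-caller state; one instance per virtual key, never shared."""

    def __init__(self, name: str, budget: int):
        self.name = name
        self.budget = budget
        self.spent = 0

    def charge(self, tokens: int) -> bool:
        """Debit this key only; other keys are untouched."""
        if self.spent + tokens > self.budget:
            return False
        self.spent += tokens
        return True
```

Because the proxy looks up state by key, a staging key that burns through its allowance is refused while production traffic continues unaffected.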

02

Privacy Filtering

Sensitive fields are scrubbed before the provider sees them, then deterministically restored in the response. Data never leaves your boundary in plaintext.
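Scrub-then-rehydrate can be sketched with placeholder tokens and a mapping kept on the proxy side. The patterns and placeholder format below are illustrative assumptions, not n0inject's actual rules:

```python
import re

# Toy detectors for two of the scrubbable field types.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "phone": re.compile(r"\+\d{7,15}"),
}

def scrub(text: str):
    """Replace matches with stable placeholders; return masked text + mapping."""
    mapping = {}
    for kind, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            placeholder = f"<{kind.upper()}_{i}>"
            mapping[placeholder] = match
            text = text.replace(match, placeholder)
    return text, mapping

def rehydrate(text: str, mapping: dict) -> str:
    """Deterministically restore the original values in the response."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text
```

The mapping never leaves the proxy; the provider only ever sees the placeholders.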

03

Injection Hardening

Prompt injection is scored before the request is forwarded. You set the policy; the proxy enforces it, with canaries that catch leakage on the way back.
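Both halves of this control can be sketched briefly. The phrase heuristic and the 0.4 weighting below are toy stand-ins for the real scorer; the canary idea (a random marker that should never appear in output) is shown as described above:

```python
import secrets

SUSPICIOUS = (
    "ignore previous instructions",
    "disregard the system prompt",
    "reveal your instructions",
)

def threat_score(prompt: str) -> float:
    """Naive score: count suspicious phrases, cap at 1.0. Illustrative only."""
    hits = sum(phrase in prompt.lower() for phrase in SUSPICIOUS)
    return min(1.0, 0.4 * hits)

def plant_canary(system_prompt: str):
    """Embed a random marker; if it surfaces in output, something leaked."""
    canary = secrets.token_hex(8)
    return f"{system_prompt}\n# canary:{canary}", canary

def leaked(response: str, canary: str) -> bool:
    return canary in response
```

The policy split is the key design: the scorer only produces a number; whether a 0.8 is blocked or merely logged is your threshold, not the proxy's opinion.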

04

Provider Routing

The proxy speaks to OpenAI, Anthropic, or a local mock. Switch providers in config with no code changes. Circuit breakers handle failures automatically.
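A circuit breaker is the simple idea of refusing to send traffic to a provider that keeps failing. The sketch below uses a bare failure counter; thresholds, half-open recovery, and provider names are assumptions for illustration:

```python
class CircuitBreaker:
    """Tracks consecutive failures for one provider."""

    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0

    @property
    def open(self) -> bool:
        # An open circuit means: stop sending traffic here.
        return self.failures >= self.max_failures

    def record(self, ok: bool) -> None:
        # A success resets the count; a failure moves toward tripping.
        self.failures = 0 if ok else self.failures + 1

def route(providers: dict):
    """Return the first configured provider whose breaker is still closed."""
    for name, breaker in providers.items():
        if not breaker.open:
            return name
    return None
```

In practice a breaker also re-tests a tripped provider after a cooldown (the half-open state), which is omitted here for brevity.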

05

Governance Controls

Token budgets, rate windows, and payload size limits are enforced per key at the proxy edge; no external control plane is needed.
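Rate windows and payload caps fit in a few lines when kept at the edge. The sliding-window sketch below mirrors the `60 / min` limit from the config example; the internals and the 64 KB payload cap are assumptions:

```python
import time
from collections import deque

class Governor:
    """Per-key sliding-window rate limit plus a payload size cap."""

    def __init__(self, per_minute: int = 60, max_bytes: int = 64_000):
        self.per_minute = per_minute
        self.max_bytes = max_bytes
        self.calls = deque()  # timestamps of recent accepted calls

    def allow(self, payload: bytes, now=None) -> bool:
        now = time.monotonic() if now is None else now
        if len(payload) > self.max_bytes:
            return False
        # Drop timestamps that have slid out of the 60-second window.
        while self.calls and now - self.calls[0] >= 60:
            self.calls.popleft()
        if len(self.calls) >= self.per_minute:
            return False
        self.calls.append(now)
        return True
```

Because each virtual key gets its own `Governor`, there is nothing to coordinate with an external service: the decision is local and immediate.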

06

Operational Surfaces

Health, readiness, status, and metrics endpoints out of the box. An offline selftest validates the full pipeline without live provider keys.
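The selftest idea is worth one sketch: exercise the pipeline with a mocked provider call so nothing needs live keys or network access. The payload fields and mock below are illustrative assumptions, not n0inject's actual endpoint shapes:

```python
import json

def status() -> str:
    """What a status-style endpoint might return (illustrative fields)."""
    return json.dumps({"healthy": True, "provider": "mock", "requests_total": 0})

def selftest() -> bool:
    """Run one canned request end to end with a mocked provider call."""
    mock_provider = lambda prompt: f"echo: {prompt}"
    return mock_provider("selftest probe").startswith("echo:")
```

The value of an offline selftest is that it can run in CI or during install, before any provider credentials exist.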

03 Open Source

Free. Open source. Self-host in minutes.

Clone the repository, build the binary, and deploy it in front of your AI provider. Every release includes a signed binary, checksum, and installer. Nothing is hidden or proprietary.

GitHub · Soon

Repository publishing shortly

Linux / AMD64 · Planned (soon)
Linux / ARM64 · Planned (soon)
Windows / AMD64 · Planned (soon)
Darwin / all architectures · Not planned
Windows / ARM64 · Not planned

Quick start

# repository coming soon on GitHub
git clone github.com/n0inject  # soon
cd n0inject && make build
./n0inject --config config.yaml

Release verification

manifest   →  target match
checksum   →  sha256 verify
installer  →  offline selftest
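The checksum step above is ordinary sha256 verification and can be sketched in a few lines. The workflow is standard; how n0inject names and publishes its digests is not shown here:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Digest of the downloaded release artifact."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected_hex: str) -> bool:
    # A mismatch means the binary was altered or corrupted in transit.
    return hashlib.sha256(data).hexdigest() == expected_hex
```

In practice you would compare the digest of the downloaded binary against the published checksum file before running the installer.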

The Reality

AI is already in production.
The security layer isn't.

Most teams ship AI features fast and secure them later. Later rarely comes. n0inject is the layer you add once, before something goes wrong.

01

Every unprotected AI call is a data transfer you did not authorize.

Your system prompt, your user's message, any PII in the conversation: it all reaches the provider verbatim unless something scrubs it first.

02

One crafted input can redirect an agent that has access to your systems.

Prompt injection does not need a sophisticated attacker. A user who knows how language models respond to certain phrases is enough. Agents with tool access make the stakes real.

03

You cannot audit what you never controlled.

If you are not enforcing authentication and policy at the boundary, you have no record of who called what, with what data, or why. That matters the first time you are asked.

Put the security layer in.
Own every byte of your pipeline.

Self-hosted. Open source. No vendor dependency. Deploy in front of any AI provider and enforce your own rules from day one.