About The Project

The AI agent era has a security problem.

AI agents are executing code, reading files, and calling APIs on your behalf. Every request you send to a provider carries your users' data, your system prompt, and enough context to reconstruct things that should never leave your infrastructure. n0inject is the layer you put in front of every model call to enforce the rules before any of that goes upstream.

Three Problems Right Now

01

Your data leaves your infrastructure raw.

Every request to OpenAI, Anthropic, or any provider carries the full payload: system prompts, user messages, conversation history, embedded PII. Most stacks have nothing in between that enforces what is allowed to leave.

02

Prompt injection is being actively exploited.

Inputs that override system prompts, extract hidden context, or redirect model behavior are not theoretical. Agents with real tool access are being targeted right now. The industry is aware. Most deployments have no enforcement layer.

03

Agents have no perimeter. They need one.

AI agents execute code, call APIs, and read files on your behalf. Without an authentication and policy layer, every agent has an open pipe into your systems. Automation at that scale requires defined limits, not assumed ones.

"The answer to AI security should not require trusting a hosted platform with the very data you are trying to protect."

n0inject · self-hosted by design

Principles

Four rules enforced on every single request.

01

Authenticate

Every caller authenticates. Agents, scripts, and services all present a verified key before a single token is processed.
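A minimal sketch of what per-caller key verification can look like at the proxy edge. The key store, key id, and key value here are illustrative assumptions, not n0inject's actual internals:

```python
import hashlib
import hmac

# Hypothetical in-memory key store: key id -> SHA-256 hash of the full key.
# A real deployment would back this with the proxy's own storage.
KEY_STORE = {
    "agent-billing": hashlib.sha256(b"nk-demo-key-123").hexdigest(),
}

def authenticate(key_id: str, presented_key: str) -> bool:
    """Verify a caller's virtual key before any tokens are processed."""
    expected = KEY_STORE.get(key_id)
    if expected is None:
        return False  # unknown caller: reject without touching the payload
    digest = hashlib.sha256(presented_key.encode()).hexdigest()
    # Constant-time comparison avoids leaking key material via timing.
    return hmac.compare_digest(expected, digest)
```

Storing only a hash means a compromised key store does not yield usable keys, and the constant-time compare closes off timing side channels.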

02

Scrub

Sensitive data is scrubbed at the proxy edge and restored on the way back. The provider never sees real values.
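Scrub-and-restore can be sketched with a single detector. This is an assumption-laden toy (one regex, email only; a real scrubber covers many PII classes), but it shows the key property: the same value always maps to the same placeholder, so the model can still reason about identity without ever seeing the real string:

```python
import re

# One illustrative detector; a real scrubber covers many PII classes.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def scrub(text: str):
    """Replace each email with a stable placeholder; return (text, mapping)."""
    seen: dict[str, str] = {}      # real value -> placeholder
    mapping: dict[str, str] = {}   # placeholder -> real value
    def repl(m):
        value = m.group(0)
        if value not in seen:
            ph = f"<EMAIL_{len(seen)}>"
            seen[value] = ph
            mapping[ph] = value
        return seen[value]         # same value, same placeholder
    return EMAIL.sub(repl, text), mapping

def rehydrate(text: str, mapping: dict[str, str]) -> str:
    """Restore real values in the provider's response."""
    for ph, value in mapping.items():
        text = text.replace(ph, value)
    return text
```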

03

Score

Prompt injection is scored before forwarding. Policy determines what happens: warn, sanitize, block, or quarantine.
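The score-then-act shape can be sketched as below. The patterns, weights, and thresholds are all assumptions for illustration; a production scorer is far richer than a few regexes. What matters is the separation: scoring produces a number, and your policy (not the scorer) decides what happens:

```python
import re

# Illustrative patterns with assumed weights; a real scorer goes far beyond this.
INJECTION_PATTERNS = [
    (re.compile(r"ignore (all )?(previous|prior) instructions", re.I), 0.9),
    (re.compile(r"reveal (your|the) system prompt", re.I), 0.8),
    (re.compile(r"you are now", re.I), 0.4),
]

def score(prompt: str) -> float:
    """Return the highest-weight pattern hit, in [0, 1]."""
    return max((w for p, w in INJECTION_PATTERNS if p.search(prompt)), default=0.0)

def decide(s: float) -> str:
    """Map a score onto policy actions; thresholds are assumed, not n0inject defaults."""
    if s >= 0.9:
        return "block"
    if s >= 0.7:
        return "quarantine"
    if s >= 0.4:
        return "sanitize"
    return "warn" if s > 0 else "forward"
```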

04

Self-Host

Nothing phones home. The proxy runs entirely within your infrastructure with no external control plane.

The Architecture

If automation requires access, access requires control.

The problem with AI agents is not the automation. It is the absence of a defined perimeter. n0inject is that perimeter: one layer, every model call, every agent, the same enforcement rules. You own it, you run it.

Your App

the caller

n0inject

auth · scrub · score · route · rehydrate

AI Provider

OpenAI, Anthropic, …
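The auth · scrub · score · route · rehydrate flow above can be sketched end to end. Every name here (the demo key, the patterns, the provider callable) is a stand-in for the real proxy internals, but the ordering and the guarantee are the point: the provider only ever sees placeholders, and the mapping dies with the request:

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
BLOCK = re.compile(r"ignore previous instructions", re.I)
DEMO_KEY = "nk-demo"  # stand-in for a real virtual-key store

def handle(caller_key: str, prompt: str, provider) -> str:
    # auth: every caller presents a key before a single token is processed
    if caller_key != DEMO_KEY:
        raise PermissionError("unauthenticated caller")
    # score: one crude pattern standing in for the injection scorer
    if BLOCK.search(prompt):
        raise ValueError("blocked by policy")
    # scrub: swap PII for placeholders so the provider never sees real values
    mapping: dict[str, str] = {}
    def repl(m):
        ph = f"<EMAIL_{len(mapping)}>"
        mapping[ph] = m.group(0)
        return ph
    scrubbed = EMAIL.sub(repl, prompt)
    # route: forward the scrubbed payload upstream
    reply = provider(scrubbed)
    # rehydrate: restore real values, then destroy the mapping
    for ph, value in mapping.items():
        reply = reply.replace(ph, value)
    mapping.clear()
    return reply
```

The `provider` argument is any callable that takes the scrubbed payload and returns text, which keeps the enforcement layer independent of which upstream it routes to.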

N0
Data Leakage

PII, credentials, and internal identifiers are replaced with stable placeholders before leaving your boundary. Real values are restored in the response. The mapping is destroyed immediately.

N0
Blind Trust

Every caller (user, scheduled job, or autonomous agent) authenticates against a virtual key. Rate limits, token budgets, and access scope are enforced per identity, not assumed.

N0
Prompt Injection

Requests are scored for injection patterns before forwarding. Instructions that try to override the system prompt or extract hidden context are caught and acted on by your policy, not by hope.