Blind(L)LM™
Let any LLM operate on encrypted data
BlindInsightOrchestrator intercepts a natural-language prompt, translates it into encrypted queries or BlindML model runs, and returns aggregate results to the LLM—which never receives raw records. Keys are scoped to the minimum required for the session and revoked when it ends.
Prompts route through the encrypted query layer
The orchestrator sits between your LLM and your encrypted data store. It issues queries against ciphertext, collects aggregate results, and passes only those aggregates to the model. The model receives counts, ratios, and inference outputs—not records.
from blind_llm import BlindInsightOrchestrator
orc = BlindInsightOrchestrator(backend=proxy_backend)
r = orc.run("Cancer rate for women 60+ with dense breast tissue?")
# LLM sees only aggregate counts — no raw records
print(r.answer)
Minimum key set. Session-scoped revocation.
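The aggregate-only guarantee can be sketched in plain Python. This is a hypothetical illustration, not the `blind_llm` implementation: `AggregateResult` and `aggregate_only` are made-up names, and the cohort below is stand-in data for decrypted rows that, in the real system, never leave the encrypted query layer.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AggregateResult:
    count: int   # matching records
    ratio: float # share of the cohort

def aggregate_only(records, predicate):
    """Compute aggregates over records; the rows themselves stay behind the boundary."""
    matched = sum(1 for r in records if predicate(r))
    total = len(records)
    return AggregateResult(count=matched, ratio=matched / total if total else 0.0)

# Stand-in cohort (illustrative data, not real records)
cohort = [
    {"age": 64, "dense": True,  "dx": True},
    {"age": 61, "dense": True,  "dx": False},
    {"age": 70, "dense": False, "dx": True},
]
res = aggregate_only(cohort, lambda r: r["age"] >= 60 and r["dense"] and r["dx"])
print(res.count, round(res.ratio, 3))  # only these aggregates cross to the LLM
```

The point of the shape: the function's return type carries no record fields, so even a misbehaving caller cannot forward raw rows to the model.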
The orchestrator assigns the minimum set of decryption keys the agent needs for the current prompt. At session end, access is revoked—not just the bearer token, but the specific decryption capability granted to that session.
- Minimum key set: scoped per session, not per user
- Session revocation: decryption capability revoked at session end
- Aggregate-only responses: LLM receives counts and ratios, not records
- Any LLM backend: OpenAI, Anthropic, Gemini, Llama, Mistral, Cohere, DeepSeek
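Session-scoped key grants can be sketched with a simple in-memory manager. This is an assumption-laden illustration, not the `blind_llm` API: `SessionKeyStore` and its methods are hypothetical names showing the shape of grant-minimally-then-revoke.

```python
import secrets

class SessionKeyStore:
    """Illustrative key manager: grants the minimum key set per session."""

    def __init__(self):
        self._grants = {}  # session id -> set of granted key ids

    def open_session(self, required_keys):
        """Grant only the keys the current prompt needs, under a fresh session id."""
        sid = secrets.token_hex(8)
        self._grants[sid] = set(required_keys)
        return sid

    def can_decrypt(self, sid, key_id):
        return key_id in self._grants.get(sid, set())

    def close_session(self, sid):
        """Revoke the decryption capability itself, not just a bearer token."""
        self._grants.pop(sid, None)

store = SessionKeyStore()
sid = store.open_session({"oncology.age", "oncology.density"})
assert store.can_decrypt(sid, "oncology.age")
assert not store.can_decrypt(sid, "billing.ssn")   # never granted
store.close_session(sid)
assert not store.can_decrypt(sid, "oncology.age")  # revoked at session end
```

The design choice the sketch captures: revocation deletes the grant, so a leaked session id is useless after the session closes.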
Start building on your schema.
Get started at the Build tier, $9/month.