Public Roadmap
Full transparency into what we've shipped and what's coming next. We build in the open.
Shipped
CORS-Ready API
Call Behest directly from your browser. No backend proxy needed.
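A browser-side call could look something like the sketch below. The endpoint path, payload shape, and auth header are illustrative assumptions, not the documented API:

```ts
// Sketch only: endpoint, payload shape, and header names are assumptions.
// Because the API is CORS-ready, this runs straight from a web page.
const res = await fetch("https://api.behest.example/v1/chat", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer <project-api-key>",
  },
  body: JSON.stringify({ message: "Hello!" }),
});
console.log(await res.json());
```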
Auth & Tenant Isolation
Multi-tenant authentication with per-project API keys and JWT support.
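As a sketch of how a request might carry both scopes, assuming the per-project key identifies the tenant and an end-user JWT identifies the user (the header names are assumptions, not a documented contract):

```ts
// Hypothetical header layout: the project key scopes the tenant,
// the JWT scopes the end user.
function behestHeaders(projectApiKey: string, endUserJwt: string): HeadersInit {
  return {
    "Content-Type": "application/json",
    Authorization: `Bearer ${projectApiKey}`, // per-project API key (tenant scope)
    "X-User-Token": endUserJwt,               // assumed header carrying the user's JWT
  };
}
```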
Three-Tier Rate Limiting
Per-IP, per-project, and per-user rate limiting. No code required.
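Assuming throttled requests come back as standard HTTP 429 responses with a Retry-After header (an assumption, not a documented guarantee), a simple client-side backoff might look like this:

```ts
// Retry on 429, waiting the number of seconds the server suggests.
async function fetchWithRetry(
  url: string,
  init: RequestInit,
  attempts = 3,
): Promise<Response> {
  for (let i = 0; i < attempts; i++) {
    const res = await fetch(url, init);
    if (res.status !== 429) return res;
    if (i === attempts - 1) break;
    const seconds = Number(res.headers.get("Retry-After")) || 1;
    await new Promise((resolve) => setTimeout(resolve, seconds * 1000));
  }
  throw new Error("rate limited: retries exhausted");
}
```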
PII Shield
Automatic PII detection and protection before data reaches the LLM.
Sentinel — Prompt Injection Defense
Block jailbreak attempts and prompt injection attacks automatically.
Conversation Memory
Session-based conversation memory. Users pick up where they left off.
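One way this might look from the client, assuming the request body accepts a session identifier (the `session_id` field name is an assumption):

```ts
// Reusing the same session identifier across requests is what lets a
// user resume a conversation. Field and endpoint names are assumptions.
await fetch("https://api.behest.example/v1/chat", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer <project-api-key>",
  },
  body: JSON.stringify({
    message: "Where were we?",
    session_id: "user-42-main-thread", // stable per conversation
  }),
});
```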
System Prompt Management
Configure your AI's personality and behavior per project.
Token Budgets
Automatic cost control with per-user and per-project daily limits.
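Client code presumably needs to handle the moment a budget runs out. The sketch below assumes an over-budget request fails with an HTTP error status; the exact status code and response shape are assumptions:

```ts
// Hypothetical handling of a budget-exhausted response.
const res = await fetch("https://api.behest.example/v1/chat", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer <project-api-key>",
  },
  body: JSON.stringify({ message: "Hello!" }),
});
if (res.status === 402 || res.status === 429) {
  // Assumed status codes; surface a friendly message instead of a raw error.
  console.warn("Daily token budget reached. Try again tomorrow.");
}
```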
Kill Switches
Instant emergency shutdown at global, tenant, or project level.
Full Observability Stack
Traces, metrics, and logs — correlated automatically.
Usage Analytics & Spend Tracking
Know your AI spend. Per-model, per-user analytics in real time.
Self-Hosted Deployment
Deploy in your cloud. Your data never leaves your infrastructure.
Planned
Smart LLM Routing
Route requests to the optimal model based on cost, latency, and capability.
Semantic Cache
Serve semantically similar queries from cache for faster responses and lower costs.
Built-in RAG
Ground your AI in your documents with built-in retrieval.
Usage Tiers & Token Economics
Set up tiered pricing for your end users. Monetize your AI features.
BYO LLM API Keys
Bring your own API keys for OpenAI, Anthropic, Mistral, or any other LLM provider.
Have a feature request?
Let us know what you need →