Call Your LLM from the Browser
Behest handles CORS natively. No backend proxy, no server code, no infrastructure to maintain. Call AI from React, Vue, Svelte, or vanilla JS.
The Problem
LLM provider APIs (OpenAI, Anthropic, Google) do not support CORS. If you try to call them from browser JavaScript, the browser blocks the request:
```
Access to fetch at 'https://api.openai.com/v1/chat/completions'
from origin 'http://localhost:3000' has been blocked by CORS policy:
No 'Access-Control-Allow-Origin' header is present on the requested resource.
```

The traditional solution is to build a backend proxy server — a Node.js, Python, or Go service that sits between your frontend and the LLM. This means more code, more infrastructure, more latency, and more maintenance. For many teams, the proxy server becomes more complex than the app itself.
How Behest Solves CORS
Behest includes per-project CORS configuration. Set your allowed origins in the dashboard, and Behest handles preflight responses automatically. Your frontend calls Behest directly — no proxy needed.
Per-Project Origins
Configure allowed origins per project. Dev and production environments can use different origin lists.
Preflight Handling
Behest responds to OPTIONS preflight requests automatically with the correct CORS headers.
Credentials Support
Authorization headers work out of the box. No special configuration needed for authenticated requests.
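Conceptually, the preflight exchange works like this: the browser sends an OPTIONS request with an `Origin` header, and the gateway echoes that origin back only if it is on the project's allowlist. The sketch below illustrates the shape of that exchange; it is not Behest's internal code, and the header values are illustrative.

```javascript
// Illustrative sketch of CORS preflight handling (not Behest internals):
// echo the Origin back only when it appears in the project's allowlist.
function preflightHeaders(origin, allowedOrigins) {
  if (!allowedOrigins.includes(origin)) return null; // preflight rejected
  return {
    "Access-Control-Allow-Origin": origin,
    "Access-Control-Allow-Methods": "POST, OPTIONS",
    "Access-Control-Allow-Headers": "Authorization, Content-Type",
    "Access-Control-Max-Age": "86400", // let the browser cache the preflight
  };
}
```

With headers like these in the OPTIONS response, the browser proceeds with the real POST; a `null` (missing) `Access-Control-Allow-Origin` is what produces the error shown earlier.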
Configure CORS Origins
- Open the Behest dashboard
- Navigate to your project settings
- Add your allowed origins (e.g., http://localhost:3000, https://myapp.com)
- Save — changes take effect immediately
Tip: Add http://localhost:3000 for local development and your production domain for deployment. You can add multiple origins per project.
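Keep in mind that an origin is the exact combination of scheme, host, and port, so `http://localhost:3000` and `http://localhost:5173` are different origins, while paths and query strings are ignored. The standard `URL` API shows what the browser actually compares:

```javascript
// An origin is scheme + host + port; the path and query string are ignored.
const origin = (url) => new URL(url).origin;

console.log(origin("http://localhost:3000/chat?x=1")); // "http://localhost:3000"
console.log(origin("https://myapp.com/app"));          // "https://myapp.com"
```

If your dev server runs on a different port, add that port's origin too.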
React Example — Direct Browser Call
This React component calls Behest directly from the browser. No backend server, no API route, no proxy:
```jsx
import { useState } from "react";

function AiChat() {
  const [response, setResponse] = useState("");
  const [loading, setLoading] = useState(false);

  async function askAi(question) {
    setLoading(true);
    try {
      const res = await fetch(
        "https://your-project.behest.app/v1/chat/completions",
        {
          method: "POST",
          headers: {
            "Authorization": "Bearer your-api-key",
            "Content-Type": "application/json",
          },
          body: JSON.stringify({
            model: "gemini-2.5-flash",
            messages: [{ role: "user", content: question }],
          }),
        }
      );
      const data = await res.json();
      setResponse(data.choices[0].message.content);
    } catch (error) {
      console.error("Request failed:", error);
    } finally {
      setLoading(false);
    }
  }

  return (
    <div>
      <button onClick={() => askAi("Explain CORS in one sentence")}>
        {loading ? "Thinking..." : "Ask AI"}
      </button>
      {response && <p>{response}</p>}
    </div>
  );
}
```

Without Behest vs. With Behest
Without Behest
- Build a backend server (Node.js, Python, Go)
- Add an API route to proxy LLM calls
- Handle authentication on the server
- Configure CORS on your own server
- Deploy and maintain the backend
- Add PII protection yourself
- Build rate limiting yourself
- Add conversation memory yourself
With Behest
- Set your origins in the dashboard
- Call Behest from your frontend
- Done. Ship your app.
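The "call Behest from your frontend" step is a single `fetch`. As a framework-free sketch (the project URL, API key, and model name are placeholders, matching the React example above):

```javascript
// Build a Behest chat completion request. The URL, API key,
// and model name are placeholders for your own project values.
function buildChatRequest(question) {
  return {
    url: "https://your-project.behest.app/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        "Authorization": "Bearer your-api-key",
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: "gemini-2.5-flash",
        messages: [{ role: "user", content: question }],
      }),
    },
  };
}

// Usage from any browser context:
//   const { url, options } = buildChatRequest("Hello");
//   const data = await (await fetch(url, options)).json();
```

Because the request shape is plain JSON over HTTP, the same helper works unchanged in React, Vue, Svelte, or vanilla JS.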
Security Considerations
- Origin validation: Only requests from your configured origins are allowed. All others are rejected.
- API key in the browser: Your Behest API key is scoped to a single project with configured rate limits, token budgets, and kill switches. Even if exposed, the damage surface is limited by your project configuration.
- Rate limiting: Three-tier rate limiting (per-IP, per-project, per-user) prevents abuse even if your API key is compromised.
- Token budgets: Daily token budgets prevent runaway costs regardless of how the API is called.
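The three-tier rate limiting described above can be sketched as independent counters, any one of which can reject a request. This is a simplified illustration of the concept, not Behest's implementation, and the limits are made-up numbers:

```javascript
// Simplified sketch of three-tier rate limiting: a request must pass
// per-IP, per-project, and per-user counters (limits are illustrative).
function makeRateLimiter(limits) {
  const counts = { ip: new Map(), project: new Map(), user: new Map() };
  return function allow(request) {
    for (const tier of ["ip", "project", "user"]) {
      const key = request[tier];
      const n = (counts[tier].get(key) ?? 0) + 1;
      if (n > limits[tier]) return false; // any tier over its limit rejects
      counts[tier].set(key, n);
    }
    return true;
  };
}
```

A stolen API key is throttled by the per-project and per-IP tiers even before the daily token budget kicks in, which is why exposure in the browser has a bounded blast radius.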