# Migration Guide — Switching to Behest
⚠️ This page documents the v1.x migration pattern (raw `BehestClient` + `BEHEST_API_KEY`). The new per-framework migration guides cover the v1.5 mint flow and explain why `BEHEST_KEY` is not a drop-in replacement for your `OPENAI_API_KEY` on `/v1/chat/completions`:

- Migrating from OpenAI ← the one most readers want
- Migrating from OpenRouter

The claims below that "what changes is just the Base URL" are only true for server-to-server code using the v1.x `BehestClient`. For v1.5 flows or per-end-user JWTs, follow the guides above.
Behest is API-compatible with the OpenAI API format. The migration principle is: what changes is how you authenticate and where you send requests; what stays the same is every API call, every response type, every streaming pattern, and every existing TypeScript type.
## What Changes, What Stays the Same
| Aspect | Changes? | Notes |
|---|---|---|
| Base URL | Yes | Point to `https://api.behest.app/v1` |
| API key | Yes | Use your Behest API key, not your OpenAI key |
| SDK constructor | Yes | Swap the client class (TypeScript) or `base_url` (Python) |
| All API calls | No | `chat.completions.create()` works unchanged |
| All response types | No | Same JSON structure, same TypeScript types |
| Streaming | No | `stream: true` and SSE work identically |
| Model names | No | Use the same model IDs (`gpt-4o`, `claude-sonnet-4-20250514`, etc.) |
| Function calling | No | Tools and function calling work unchanged |
| Temperature, max_tokens, etc. | No | All standard parameters pass through |
| Error format | No | OpenAI-format error responses |
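The "streaming works identically" row is easy to verify in isolation. The sketch below assumes Behest streams OpenAI-shaped chat-completion chunks, as the table states; `collectStream` and the mock chunk generator are illustrative helpers, not part of any SDK:

```typescript
// Accumulate streamed chat-completion chunks into the full reply text.
// Works for any OpenAI-style chunk stream, which Behest mirrors.
type ChatChunk = { choices: { delta: { content?: string } }[] };

async function collectStream(stream: AsyncIterable<ChatChunk>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta.content ?? "";
  }
  return text;
}

// With Behest this would be (hypothetical usage):
//   const stream = await client.chat.completions.create({
//     model: "gpt-4o", messages, stream: true,
//   });
//   const reply = await collectStream(stream);

// Demo with a mock stream of OpenAI-shaped chunks:
async function* mockStream(): AsyncGenerator<ChatChunk> {
  for (const piece of ["4", " is", " the answer"]) {
    yield { choices: [{ delta: { content: piece } }] };
  }
}

collectStream(mockStream()).then((text) => console.log(text));
// Prints: 4 is the answer
```

The same loop you already use with the OpenAI SDK should carry over unchanged.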
## From OpenAI TypeScript SDK
Before:

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What is 2 + 2?" },
  ],
});

console.log(response.choices[0].message.content);
```

After:
```typescript
import { BehestClient } from "@behest/client-ts"; // changed

const client = new BehestClient({
  apiKey: process.env.BEHEST_API_KEY, // changed
});

// Everything below is unchanged
const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What is 2 + 2?" },
  ],
});

console.log(response.choices[0].message.content);
```

Install the SDK:

```shell
npm install @behest/client-ts@beta openai
```

## From OpenAI Python SDK
Before:

```python
from openai import OpenAI

client = OpenAI(api_key="sk-...")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```

After:
```python
from openai import OpenAI

client = OpenAI(
    api_key="bh_live_...",                 # changed
    base_url="https://api.behest.app/v1",  # added
)

# Everything below is unchanged
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```

No new packages to install — `openai` is already a dependency.
## From Direct OpenAI REST API Calls
Before:

```shell
curl -X POST https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
```

After:
```shell
curl -X POST https://{slug}.behest.app/v1/chat/completions \
  -H "Authorization: Bearer $BEHEST_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
```

Two values change: the URL and the API key. The request body and response format are identical.
## From Another AI Gateway (Portkey, Helicone)
The migration from Portkey or Helicone is nearly identical to migrating from OpenAI directly. Both gateways use the same OpenAI-compatible API format.
From Portkey:

```typescript
// Before (Portkey)
import Portkey from "portkey-ai";
const client = new Portkey({ apiKey: "...", virtualKey: "..." });

// After (Behest)
import { BehestClient } from "@behest/client-ts";
const client = new BehestClient({ apiKey: process.env.BEHEST_API_KEY });
```

From Helicone (OpenAI with custom `baseURL`):
```typescript
// Before (Helicone)
import OpenAI from "openai";
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://oai.helicone.ai/v1",
  defaultHeaders: { "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}` },
});

// After (Behest)
import { BehestClient } from "@behest/client-ts";
const client = new BehestClient({ apiKey: process.env.BEHEST_API_KEY });
```

Remove any gateway-specific headers — Behest uses your Behest API key as the sole authentication credential.
## Step-by-Step Migration Process
### Step 1: Create a Behest account and project
- Sign up at behest.ai using Google, GitHub, or email magic link.
- Create a project — a name is the only required input.
- Copy your project's API key. It is shown once — save it securely.
### Step 2: Configure your provider keys (BYOK)
If you use models other than the platform default (`gemini-2.5-flash`), add your provider API keys in the dashboard under Settings > Providers.

Supported providers:

- OpenAI — for `gpt-4o`, `gpt-4o-mini`, `o1`, `o3`
- Anthropic — for `claude-*` models
- Google — for `gemini-*` models
- Mistral — for `mistral-*` and `codestral-*` models
- Cohere — for `command-r-*` models
- OpenRouter — for any model via OpenRouter routing
### Step 3: Update your client
Follow one of the "before/after" examples above for your language. Update your `.env`:

```shell
# Remove (or keep for direct OpenAI usage elsewhere)
# OPENAI_API_KEY=sk-...

# Add
BEHEST_API_KEY=bh_live_...
```

### Step 4: Verify the migration
Run your existing test suite or use a smoke test:

```typescript
// smoke-test.ts
import { BehestClient } from "@behest/client-ts";

const client = new BehestClient({ apiKey: process.env.BEHEST_API_KEY });

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: 'Respond with "migration successful".' }],
  max_tokens: 10,
});

console.log(response.choices[0].message.content);
// Expected output: migration successful
```

### Step 5: Deploy incrementally (optional but recommended)
For production traffic, route a percentage through Behest before fully cutting over:

```typescript
const usesBehest = Math.random() < 0.1; // 10% of requests through Behest
const client = usesBehest
  ? new BehestClient({ apiKey: process.env.BEHEST_API_KEY })
  : new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const response = await client.chat.completions.create({ ... });
```

Monitor error rates and latency, then increase the percentage as confidence builds.
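`Math.random()` re-rolls on every request, so a single user can bounce between backends mid-session. If you want sticky rollout, hash a stable user ID into a bucket instead. A sketch only — the hash and the env-var names are illustrative, not Behest features:

```typescript
// Deterministic percentage rollout: hash a stable user ID into [0, 100)
// so the same user always hits the same backend during the migration.
function rolloutBucket(userId: string): number {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return hash % 100;
}

function usesBehest(userId: string, percent: number): boolean {
  return rolloutBucket(userId) < percent;
}

// usage (hypothetical):
// const client = usesBehest(user.id, 10)
//   ? new BehestClient({ apiKey: process.env.BEHEST_API_KEY })
//   : new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

console.log(usesBehest("user-42", 100)); // → true  (everyone at 100%)
console.log(usesBehest("user-42", 0));   // → false (no one at 0%)
```

Raising `percent` only ever moves users from the old backend to Behest, never back and forth.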
## BYOK Setup (Bring Your Own Keys)
Behest's BYOK system lets you use your existing provider API keys. Behest decrypts your key at request time, sends it to the provider, and returns the response. The plaintext key is never logged or persisted beyond the duration of one request.
### Via the dashboard
- In the dashboard, navigate to Settings > Providers.
- Click Add Provider Key for your provider (OpenAI, Anthropic, Google, etc.).
- Enter your provider API key and click Save. Behest validates the key format before storing it.
- In your project's Model Settings, select the model you want to use.
- Click Deploy to publish the configuration. Changes take effect within 30 seconds.
### Via the API
```shell
# Save a provider key for your tenant
curl -X PUT https://api.behest.app/v1/tenants/$TENANT_ID/providers/openai \
  -H "Authorization: Bearer $DASHBOARD_JWT" \
  -H "Content-Type: application/json" \
  -d '{"api_key": "sk-..."}'

# Update your project to use GPT-4o
curl -X PUT https://api.behest.app/v1/projects/$PROJECT_ID/settings \
  -H "Authorization: Bearer $DASHBOARD_JWT" \
  -H "Content-Type: application/json" \
  -d '{"provider_model": "gpt-4o"}'

# Deploy to activate
curl -X POST https://api.behest.app/v1/projects/$PROJECT_ID/settings/deploy \
  -H "Authorization: Bearer $DASHBOARD_JWT"
```

## Common Migration Gotchas
"Model not found" after switching
Behest needs a provider key for non-default models. If you switch from `gpt-4o` (OpenAI) to Behest without adding your OpenAI key, the request fails with:

```json
{
  "error": "Configure your openai API key in Provider Settings to use gpt-4o"
}
```

Fix: add your OpenAI key in Settings > Providers, then select `gpt-4o` in your project's model settings and deploy.
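If your migration script should recognize this failure automatically, you can match on the error body. A sketch only — it keys off the exact message wording shown above, which may change:

```typescript
// Detect Behest's "provider key not configured" error and extract the
// provider name. The message format is an assumption based on the
// example error above; treat the exact wording as unstable.
type BehestError = { error?: string };

function missingProviderKey(body: BehestError): string | null {
  const match = body.error?.match(
    /Configure your (\w+) API key in Provider Settings/
  );
  return match ? match[1] : null;
}

const body = {
  error: "Configure your openai API key in Provider Settings to use gpt-4o",
};
console.log(missingProviderKey(body));                  // → "openai"
console.log(missingProviderKey({ error: "rate limited" })); // → null
```

When it returns a provider name, log a pointer to Settings > Providers instead of retrying.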
### Model parameter is ignored
If you have a provider model configured in project settings, Behest uses that model and ignores the `model` field in your request, unless the requested model belongs to the same provider. This is by design: it prevents accidental cross-provider credential misrouting. To use a different model, configure it in project settings and deploy. For multi-model routing across providers, use separate projects.
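One way to implement "separate projects for multi-model routing" is a small prefix map from model ID to the env var holding that project's key. The one-project-per-provider layout and the variable names here are assumptions for illustration:

```typescript
// Pick the Behest project (one per provider) for a given model ID.
// Hypothetical env-var names; create one Behest project per provider.
const projectEnvVars: [prefix: string, envVar: string][] = [
  ["gpt-", "BEHEST_KEY_OPENAI_PROJECT"],
  ["o1", "BEHEST_KEY_OPENAI_PROJECT"],
  ["claude-", "BEHEST_KEY_ANTHROPIC_PROJECT"],
  ["gemini-", "BEHEST_KEY_GOOGLE_PROJECT"],
];

function projectEnvVar(model: string): string {
  const entry = projectEnvVars.find(([prefix]) => model.startsWith(prefix));
  if (!entry) throw new Error(`No Behest project configured for ${model}`);
  return entry[1];
}

// usage (hypothetical):
// const client = new BehestClient({
//   apiKey: process.env[projectEnvVar("gpt-4o")],
// });

console.log(projectEnvVar("gpt-4o"));                   // → BEHEST_KEY_OPENAI_PROJECT
console.log(projectEnvVar("claude-sonnet-4-20250514")); // → BEHEST_KEY_ANTHROPIC_PROJECT
```

Each project's settings then pin the exact model, so the per-request `model` field never fights the configuration.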
### CORS errors when calling from the browser
Behest restricts CORS by default. After project creation, the dashboard's origin is automatically allowed. To allow additional origins:
```shell
curl -X PUT https://api.behest.app/v1/projects/$PROJECT_ID/settings \
  -H "Authorization: Bearer $DASHBOARD_JWT" \
  -H "Content-Type: application/json" \
  -d '{"cors_origins": ["https://your-app.com", "https://staging.your-app.com"]}'
```

Deploy the settings after updating.
### Streaming works differently in some frameworks
Behest returns the same SSE format as OpenAI. If streaming breaks, check that your proxy or middleware is not buffering the response. In Node.js, ensure the `Transfer-Encoding: chunked` header passes through.
### Free tier limits
The free tier allows 3 projects and has rate limits on requests per minute. If you hit a `429` during testing, check the `Retry-After` response header. Upgrade to Pro at behest.ai/billing to increase limits.
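When honoring `Retry-After`, note that the header can be either delta-seconds or an HTTP-date (per RFC 9110). A small helper that copes with both — illustrative, not part of any Behest SDK:

```typescript
// Convert a Retry-After header value (delta-seconds or HTTP-date)
// into a wait time in milliseconds. Returns 0 for unparseable input
// or dates in the past.
function retryAfterMs(header: string, now: Date = new Date()): number {
  const seconds = Number(header);
  if (Number.isFinite(seconds)) return Math.max(0, seconds * 1000);
  const when = Date.parse(header); // HTTP-date form
  return Number.isNaN(when) ? 0 : Math.max(0, when - now.getTime());
}

console.log(retryAfterMs("30")); // → 30000
// HTTP-date form: retryAfterMs("Wed, 01 Jan 2025 00:00:00 GMT")
// returns the ms until that instant, or 0 if it is already past.
```

Sleep for the returned duration, then retry the request once before surfacing the error.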