# TypeScript SDK — @behest/client-ts
> ⚠️ This page documents the legacy v1.x `BehestClient` API. For new integrations, use the v1.5 `Behest` class — it adds `auth.mint()`, dual-mode operation (API-key + local-signing), typed errors, threads, usage, and proper TTL/session handling. Start here instead:
>
> - Next.js App Router quickstart · React + Vite · Node + Express
> - Auth modes · Error handling · Multi-conversation chat
>
> The `BehestClient`-based examples below still work (v1.x is supported through v2.0), but you'll see deprecation warnings and miss out on the new API surface.
The official Behest TypeScript/JavaScript SDK. It extends the OpenAI Node.js SDK with Behest authentication and header injection, so any code written against the OpenAI SDK works with Behest once you swap the client constructor.
## Installation
```bash
npm install @behest/client-ts openai
```

The OpenAI SDK is a peer dependency. Install both together so your lockfile pins compatible versions. The Behest SDK requires Node.js 18+.
## Basic Usage
```ts
import { BehestClient } from "@behest/client-ts";

const client = new BehestClient({
  apiKey: process.env.BEHEST_API_KEY,
});

const response = await client.chat.completions.create({
  model: "gemini-2.5-flash",
  messages: [{ role: "user", content: "Say hello in three languages." }],
});

console.log(response.choices[0].message.content);
```

The `apiKey` is the Behest API key you created in the dashboard. It is passed as the `Authorization: Bearer` header on every request. All Behest-specific headers (tenant, project) are injected automatically.
## Constructor Options
```ts
const client = new BehestClient({
  // Required
  apiKey: string,                           // Your Behest API key

  // Optional
  projectSlug?: string,                     // Route requests to a specific project by slug
  baseURL?: string,                         // Override the API gateway URL (default: https://api.behest.app/v1)
  maxRetries?: number,                      // Number of retries on transient failures (default: 2)
  timeout?: number,                         // Request timeout in milliseconds (default: 60000)
  defaultHeaders?: Record<string, string>,  // Extra headers sent on every request
});
```

Environment variable pattern (recommended for production):
```ts
// .env
// BEHEST_API_KEY=bh_live_...

const client = new BehestClient({
  apiKey: process.env.BEHEST_API_KEY,
});
```
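Putting the options together, a client pinned to one project with a tighter timeout might look like the sketch below. The slug is a placeholder, and the early throw is just one way to fail fast on a missing key:

```ts
import { BehestClient } from "@behest/client-ts";

// Fail fast if the key is missing rather than sending unauthenticated requests.
const apiKey = process.env.BEHEST_API_KEY;
if (!apiKey) throw new Error("BEHEST_API_KEY is not set");

const searchClient = new BehestClient({
  apiKey,
  projectSlug: "support-search", // placeholder — use your own project's slug
  timeout: 15_000,               // fail interactive requests faster than the 60 s default
  maxRetries: 1,
});
```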
## Chat Completions

### Non-streaming
```ts
const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a concise assistant." },
    { role: "user", content: "What is the capital of France?" },
  ],
  temperature: 0.7,
  max_tokens: 200,
});

const text = response.choices[0].message.content;
console.log(text);

// Usage statistics
console.log(response.usage?.total_tokens);
```

### Streaming
```ts
const stream = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Write a haiku about autumn." }],
  stream: true,
});

for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content;
  if (delta) process.stdout.write(delta);
}
console.log(); // newline after stream ends
```
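Because the client extends the OpenAI SDK, the per-request options argument should accept an `AbortSignal` the same way the OpenAI SDK does. A sketch, assuming that behavior carries over, that cancels a stream after a deadline:

```ts
// Sketch: cancel a stream after 10 s via AbortSignal (assumes the OpenAI
// SDK's per-request `signal` option carries over to BehestClient).
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 10_000);

try {
  const stream = await client.chat.completions.create(
    {
      model: "gpt-4o",
      messages: [{ role: "user", content: "Tell me a long story." }],
      stream: true,
    },
    { signal: controller.signal },
  );
  // An abort surfaces as an error thrown out of this loop.
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
  }
} finally {
  clearTimeout(timer);
}
```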
### Streaming in a Next.js API route

```ts
// src/app/api/chat/route.ts
import { BehestClient } from "@behest/client-ts";
import { NextRequest } from "next/server";

const client = new BehestClient({ apiKey: process.env.BEHEST_API_KEY });

export async function POST(req: NextRequest) {
  const { message } = await req.json();

  const stream = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: message }],
    stream: true,
  });

  // Pipe the OpenAI SSE stream to the browser as a ReadableStream
  const encoder = new TextEncoder();
  const readable = new ReadableStream({
    async start(controller) {
      for await (const chunk of stream) {
        const delta = chunk.choices[0]?.delta?.content ?? "";
        controller.enqueue(encoder.encode(`data: ${delta}\n\n`));
      }
      controller.close();
    },
  });

  return new Response(readable, {
    headers: { "Content-Type": "text/event-stream" },
  });
}
```
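On the browser side, the route's output can be read with `fetch` and a stream reader. A minimal sketch that matches the `data: ...\n\n` framing the route above emits (deltas containing newlines would need sturdier framing, such as JSON-encoding each event):

```ts
// Sketch: consume the route above from the browser, invoking a callback per delta.
async function streamChat(message: string, onDelta: (text: string) => void) {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message }),
  });
  if (!res.body) throw new Error("No response body");

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // Each event is framed as "data: <text>\n\n"; strip the framing.
    for (const line of decoder.decode(value, { stream: true }).split("\n")) {
      if (line.startsWith("data: ")) onDelta(line.slice("data: ".length));
    }
  }
}
```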
## Error Handling

The SDK surfaces errors as typed classes. All Behest errors extend `OpenAI.APIError`, so your existing OpenAI error handling continues to work. Behest-specific error classes give you richer information without requiring manual status-code inspection.
```ts
import { BehestClient } from "@behest/client-ts";
import OpenAI from "openai";

const client = new BehestClient({ apiKey: process.env.BEHEST_API_KEY });

try {
  const response = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello" }],
  });
  console.log(response.choices[0].message.content);
} catch (error) {
  if (error instanceof OpenAI.APIError) {
    console.error(`Status: ${error.status}`);
    console.error(`Code: ${error.code}`);
    console.error(`Msg: ${error.message}`);

    if (error.status === 401) {
      // Invalid or expired API key — re-check your BEHEST_API_KEY env var
    } else if (error.status === 429) {
      // Rate limited or token budget exceeded
      const retryAfter = error.headers?.["retry-after"];
      console.log(`Retry after: ${retryAfter}s`);
    } else if (error.status === 403) {
      // PII Shield or Sentinel blocked the request
      console.log("Blocked by guardrail:", error.code);
    } else if (error.status === 502) {
      // Upstream LLM provider error — not a Behest issue
    }
  }
}
```

### Error code reference
| Status | Code | Cause |
|---|---|---|
| 401 | BEHEST_AUTH_MISSING | No Authorization header |
| 401 | BEHEST_AUTH_INVALID | Invalid API key |
| 401 | BEHEST_AUTH_EXPIRED | JWT has expired |
| 404 | BEHEST_PROJECT_NOT_FOUND | Unknown project slug |
| 404 | BEHEST_MODEL_NOT_FOUND | Model not available for the project |
| 403 | BEHEST_PII_BLOCKED | PII Shield blocked a BLOCK-action entity |
| 403 | BEHEST_CONTENT_BLOCKED | Sentinel blocked a jailbreak or blocklist match |
| 429 | BEHEST_RATE_LIMIT | RPM limit exceeded |
| 429 | BEHEST_BUDGET_EXCEEDED | Token budget exhausted |
| 502 | BEHEST_PROVIDER_ERROR | Upstream LLM provider returned an error |
| 408 | BEHEST_PROVIDER_TIMEOUT | Provider did not respond in time |
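One way to use these codes is to classify a failure before deciding whether to retry at the application level. A sketch; the set of retryable codes below is an inference from the table, not an official list:

```ts
import OpenAI from "openai";

// Codes that plausibly succeed on retry, per the table above (an assumption).
const RETRYABLE_CODES = new Set([
  "BEHEST_RATE_LIMIT",
  "BEHEST_PROVIDER_ERROR",
  "BEHEST_PROVIDER_TIMEOUT",
]);

function isRetryable(error: unknown): boolean {
  return (
    error instanceof OpenAI.APIError &&
    typeof error.code === "string" &&
    RETRYABLE_CODES.has(error.code)
  );
}
```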
## TypeScript Types
The SDK re-exports all OpenAI types; since `openai` is already installed as a peer dependency, the standard chat-completion types can also be imported from it directly, as in the example below.
```ts
import { BehestClient } from "@behest/client-ts";
import type { ChatCompletionMessageParam } from "openai/resources/chat/completions";

// Type-safe message array
const messages: ChatCompletionMessageParam[] = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "What is 2 + 2?" },
];

const client = new BehestClient({ apiKey: process.env.BEHEST_API_KEY });

const response = await client.chat.completions.create({
  model: "gemini-2.5-flash",
  messages,
});

// Fully typed — IDE autocomplete works on all response fields
const choice = response.choices[0];
const content: string | null = choice.message.content;
const finishReason: string | null = choice.finish_reason;
```
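Because these are the standard OpenAI types, they compose naturally into your own helpers. For example, a thin typed wrapper (illustrative, not part of the SDK):

```ts
import { BehestClient } from "@behest/client-ts";
import type { ChatCompletionMessageParam } from "openai/resources/chat/completions";

const client = new BehestClient({ apiKey: process.env.BEHEST_API_KEY });

// Illustrative helper: send typed messages, return the assistant's text.
async function complete(messages: ChatCompletionMessageParam[]): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gemini-2.5-flash",
    messages,
  });
  return response.choices[0].message.content ?? "";
}
```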
## Migration from OpenAI SDK

This is the minimal change to switch existing OpenAI code to Behest:
Before (OpenAI):

```ts
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
});
```

After (Behest):
```ts
import { BehestClient } from "@behest/client-ts"; // changed

const client = new BehestClient({
  apiKey: process.env.BEHEST_API_KEY, // changed
});

// Everything below is unchanged
const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
});
```

Two lines change. All your existing API calls, response parsing, and TypeScript types continue to work without modification. See the full migration guide for multi-provider and advanced scenarios.
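Once you're on Behest, switching providers doesn't require another client or SDK. Routing happens in the gateway, so it's just a different `model` string, provided the matching BYOK key is configured (see Supported Models below). Continuing with the client from the snippet above:

```ts
// Same client, different providers: only the model string changes.
const fromOpenAI = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
});

const fromAnthropic = await client.chat.completions.create({
  model: "claude-sonnet-4-20250514",
  messages: [{ role: "user", content: "Hello" }],
});
```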
## Advanced: Custom Headers
Use `defaultHeaders` to send additional headers on every request, or pass them per-call via the second argument.

```ts
// Client-level: headers on every request
const client = new BehestClient({
  apiKey: process.env.BEHEST_API_KEY,
  defaultHeaders: {
    "X-End-User-Id": currentUser.id, // track per-user usage in analytics
  },
});

// Per-call: headers on one request only
const response = await client.chat.completions.create(
  { model: "gpt-4o", messages: [...] },
  { headers: { "X-Idempotency-Key": requestId } },
);
```

## Advanced: Timeout Configuration
```ts
const client = new BehestClient({
  apiKey: process.env.BEHEST_API_KEY,
  timeout: 30_000, // 30 seconds (default: 60 seconds)
  maxRetries: 3,   // retries on 429, 408, 502, 503 (default: 2)
});
```

Retries use exponential backoff. For 429 responses, the SDK respects the `Retry-After` response header when present.
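Since the client extends the OpenAI SDK, per-request overrides of `timeout` and `maxRetries` via the second argument should also work. A sketch, assuming the OpenAI SDK's request-options behavior carries over:

```ts
// Sketch: loosen the timeout and disable retries for one slow request only
// (assumes the OpenAI SDK's per-request options carry over to BehestClient).
const response = await client.chat.completions.create(
  {
    model: "gpt-4o",
    messages: [{ role: "user", content: "Summarize this 200-page report..." }],
  },
  { timeout: 120_000, maxRetries: 0 },
);
```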
## Supported Models

The available models depend on which provider keys are configured for your project (BYOK). The platform default (no BYOK required) is `gemini-2.5-flash`.
| Model ID | Provider | Notes |
|---|---|---|
| gemini-2.5-flash | Google | Platform default — no BYOK required |
| gemini-2.5-pro | Google | Requires Google BYOK |
| gpt-4o | OpenAI | Requires OpenAI BYOK |
| gpt-4o-mini | OpenAI | Requires OpenAI BYOK |
| claude-sonnet-4-20250514 | Anthropic | Requires Anthropic BYOK |
| claude-opus-4-20250514 | Anthropic | Requires Anthropic BYOK |
| mistral-large-latest | Mistral | Requires Mistral BYOK |
Configure BYOK keys in your project's Provider Settings in the dashboard.
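If a project may not have every BYOK key configured, one defensive pattern is to fall back to the platform default when a model isn't available, using the `BEHEST_MODEL_NOT_FOUND` code from the error table above. A sketch:

```ts
import OpenAI from "openai";

// Sketch: prefer a BYOK model, fall back to the platform default when the
// project doesn't have it. `client` is a BehestClient as constructed earlier.
async function completeWithFallback(prompt: string) {
  const messages = [{ role: "user" as const, content: prompt }];
  try {
    return await client.chat.completions.create({ model: "gpt-4o", messages });
  } catch (error) {
    if (error instanceof OpenAI.APIError && error.code === "BEHEST_MODEL_NOT_FOUND") {
      return await client.chat.completions.create({ model: "gemini-2.5-flash", messages });
    }
    throw error;
  }
}
```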
## Version Compatibility
| @behest/client-ts | Node.js | openai peer dep |
|---|---|---|
| 1.x | 18, 20, 22 | ^1.0.0 |
The SDK declares `openai` as a peer dependency — install it explicitly alongside `@behest/client-ts`.