
    TypeScript SDK — @behest/client-ts

⚠️ This page documents the legacy v1.x BehestClient API. For new integrations, use the v1.5 Behest class, which adds auth.mint(), dual-mode operation (API key and local signing), typed errors, threads, usage reporting, and proper TTL/session handling.

The BehestClient-based examples below still work (v1.x is supported through v2.0), but you will see deprecation warnings and won't have access to the newer API surface.

The official Behest TypeScript/JavaScript SDK. It extends the OpenAI Node.js SDK with Behest authentication and header injection, so any code that works with the OpenAI SDK works with Behest after changing only the client constructor.


    Installation

    bash
npm install @behest/client-ts openai

    The OpenAI SDK is a peer dependency. Install both together so your lockfile pins compatible versions. The Behest SDK requires Node.js 18+.


    Basic Usage

    typescript
    import { BehestClient } from "@behest/client-ts";
     
    const client = new BehestClient({
      apiKey: process.env.BEHEST_API_KEY,
    });
     
    const response = await client.chat.completions.create({
      model: "gemini-2.5-flash",
      messages: [{ role: "user", content: "Say hello in three languages." }],
    });
     
    console.log(response.choices[0].message.content);

The apiKey is the Behest API key you created in the dashboard. It is sent as the Authorization: Bearer header on every request, and all Behest-specific headers (tenant, project) are injected automatically.


    Constructor Options

    typescript
    const client = new BehestClient({
      // Required
      apiKey: string,               // Your Behest API key
     
      // Optional
      projectSlug?: string,         // Route requests to a specific project by slug
      baseURL?: string,             // Override the API gateway URL (default: https://api.behest.app/v1)
      maxRetries?: number,          // Number of retries on transient failures (default: 2)
      timeout?: number,             // Request timeout in milliseconds (default: 60000)
      defaultHeaders?: Record<string, string>, // Extra headers sent on every request
    });
In practice, load the key from the environment:

typescript
    // .env
    // BEHEST_API_KEY=bh_live_...
     
    const client = new BehestClient({
      apiKey: process.env.BEHEST_API_KEY,
    });

    Chat Completions

    Non-streaming

    typescript
    const response = await client.chat.completions.create({
      model: "gpt-4o",
      messages: [
        { role: "system", content: "You are a concise assistant." },
        { role: "user", content: "What is the capital of France?" },
      ],
      temperature: 0.7,
      max_tokens: 200,
    });
     
    const text = response.choices[0].message.content;
    console.log(text);
     
    // Usage statistics
    console.log(response.usage?.total_tokens);

    Streaming

    typescript
    const stream = await client.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: "Write a haiku about autumn." }],
      stream: true,
    });
     
    for await (const chunk of stream) {
      const delta = chunk.choices[0]?.delta?.content;
      if (delta) process.stdout.write(delta);
    }
    console.log(); // newline after stream ends
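If you need the full completion as a single string (for logging or persistence), the loop above can be wrapped in a small accumulator. This is an illustrative helper, not part of the SDK; it relies only on the chunk shape read in the loop above.

```typescript
// Minimal chunk shape — only the field the streaming loop actually reads.
type Chunk = { choices: Array<{ delta?: { content?: string | null } }> };

// Drain a stream of chunks into one string.
async function collectStream(stream: AsyncIterable<Chunk>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? "";
  }
  return text;
}
```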

    Streaming in a Next.js API route

    typescript
    // src/app/api/chat/route.ts
    import { BehestClient } from "@behest/client-ts";
    import { NextRequest } from "next/server";
     
    const client = new BehestClient({ apiKey: process.env.BEHEST_API_KEY });
     
    export async function POST(req: NextRequest) {
      const { message } = await req.json();
     
      const stream = await client.chat.completions.create({
        model: "gpt-4o",
        messages: [{ role: "user", content: message }],
        stream: true,
      });
     
      // Pipe the OpenAI SSE stream to the browser as a ReadableStream
      const encoder = new TextEncoder();
      const readable = new ReadableStream({
        async start(controller) {
          for await (const chunk of stream) {
        const delta = chunk.choices[0]?.delta?.content ?? "";
        // Caveat: deltas containing blank lines would break this naive SSE
        // framing; production code should JSON-encode each payload.
        controller.enqueue(encoder.encode(`data: ${delta}\n\n`));
          }
          controller.close();
        },
      });
     
      return new Response(readable, {
        headers: { "Content-Type": "text/event-stream" },
      });
    }
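On the client side, the frames the route emits can be reassembled into text. The helper below is a sketch matching the exact `data: <delta>\n\n` framing used above (it assumes no multi-line deltas); a production client would typically use EventSource or an SSE parsing library instead.

```typescript
// Strip the SSE framing produced by the route above and recover the raw text.
function parseSseFrames(raw: string): string {
  return raw
    .split("\n\n")                                // one frame per blank line
    .filter((frame) => frame.startsWith("data: ")) // keep only data frames
    .map((frame) => frame.slice("data: ".length))  // drop the field name
    .join("");
}
```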

    Error Handling

    The SDK surfaces errors as typed classes. All Behest errors extend OpenAI.APIError, so your existing OpenAI error handling continues to work. Behest-specific error classes give you richer information without requiring manual status code inspection.

    typescript
    import { BehestClient } from "@behest/client-ts";
    import OpenAI from "openai";
     
    const client = new BehestClient({ apiKey: process.env.BEHEST_API_KEY });
     
    try {
      const response = await client.chat.completions.create({
        model: "gpt-4o",
        messages: [{ role: "user", content: "Hello" }],
      });
      console.log(response.choices[0].message.content);
    } catch (error) {
      if (error instanceof OpenAI.APIError) {
        console.error(`Status: ${error.status}`);
        console.error(`Code:   ${error.code}`);
        console.error(`Msg:    ${error.message}`);
     
        if (error.status === 401) {
          // Invalid or expired API key — re-check your BEHEST_API_KEY env var
        } else if (error.status === 429) {
          // Rate limited or token budget exceeded
          const retryAfter = error.headers?.["retry-after"];
          console.log(`Retry after: ${retryAfter}s`);
        } else if (error.status === 403) {
          // PII Shield or Sentinel blocked the request
          console.log("Blocked by guardrail:", error.code);
        } else if (error.status === 502) {
          // Upstream LLM provider error — not a Behest issue
        }
      }
    }

    Error code reference

| Status | Code | Cause |
| --- | --- | --- |
| 401 | BEHEST_AUTH_MISSING | No Authorization header |
| 401 | BEHEST_AUTH_INVALID | Invalid API key |
| 401 | BEHEST_AUTH_EXPIRED | JWT has expired |
| 403 | BEHEST_PII_BLOCKED | PII Shield blocked a BLOCK-action entity |
| 403 | BEHEST_CONTENT_BLOCKED | Sentinel blocked a jailbreak or blocklist match |
| 404 | BEHEST_PROJECT_NOT_FOUND | Unknown project slug |
| 404 | BEHEST_MODEL_NOT_FOUND | Model not available for the project |
| 408 | BEHEST_PROVIDER_TIMEOUT | Provider did not respond in time |
| 429 | BEHEST_RATE_LIMIT | RPM limit exceeded |
| 429 | BEHEST_BUDGET_EXCEEDED | Token budget exhausted |
| 502 | BEHEST_PROVIDER_ERROR | Upstream LLM provider returned an error |
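The table lends itself to a simple client-side policy: transient rate-limit and provider errors are worth retrying, auth and routing errors need a configuration fix, and guardrail blocks or exhausted budgets should not be retried blindly. A sketch of that mapping (illustrative, not part of the SDK):

```typescript
type ErrorAction = "retry" | "fix-config" | "give-up";

// Classify a Behest error code (from the reference table above) into an action.
function classifyBehestError(code: string): ErrorAction {
  switch (code) {
    case "BEHEST_RATE_LIMIT":
    case "BEHEST_PROVIDER_ERROR":
    case "BEHEST_PROVIDER_TIMEOUT":
      return "retry"; // transient — back off and retry
    case "BEHEST_AUTH_MISSING":
    case "BEHEST_AUTH_INVALID":
    case "BEHEST_AUTH_EXPIRED":
    case "BEHEST_PROJECT_NOT_FOUND":
    case "BEHEST_MODEL_NOT_FOUND":
      return "fix-config"; // correct credentials or project settings
    default:
      return "give-up"; // guardrail blocks, exhausted budgets
  }
}
```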

    TypeScript Types

Because the client extends the OpenAI SDK, all of OpenAI's TypeScript types apply unchanged. Import standard chat completion types from the openai peer dependency:

    typescript
    import { BehestClient } from "@behest/client-ts";
    import type { ChatCompletionMessageParam } from "openai/resources/chat/completions";
     
    // Type-safe message array
    const messages: ChatCompletionMessageParam[] = [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "What is 2 + 2?" },
    ];
     
    const client = new BehestClient({ apiKey: process.env.BEHEST_API_KEY });
     
    const response = await client.chat.completions.create({
      model: "gemini-2.5-flash",
      messages,
    });
     
    // Fully typed — IDE autocomplete works on all response fields
    const choice = response.choices[0];
    const content: string | null = choice.message.content;
    const finishReason: string | null = choice.finish_reason;
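The choices[0] access pattern above can be centralized in a small typed helper. The ChatLike shape below is a hypothetical, trimmed-down stand-in for the full OpenAI response type, kept to just the fields read here:

```typescript
// Minimal response shape — only what the helper needs.
type ChatLike = {
  choices: Array<{ message: { content: string | null } }>;
};

// Return the first choice's text, or "" when there is no content.
function firstText(response: ChatLike): string {
  return response.choices[0]?.message.content ?? "";
}
```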

    Migration from OpenAI SDK

    This is the minimal change to switch existing OpenAI code to Behest:

    Before (OpenAI):

    typescript
    import OpenAI from "openai";
     
    const client = new OpenAI({
      apiKey: process.env.OPENAI_API_KEY,
    });
     
    const response = await client.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: "Hello" }],
    });

    After (Behest):

    typescript
import { BehestClient } from "@behest/client-ts"; // changed

const client = new BehestClient({ // changed
  apiKey: process.env.BEHEST_API_KEY, // changed
});

// Everything below is unchanged
const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
});

Three lines change: the import, the constructor, and the environment variable. All your existing API calls, response parsing, and TypeScript types continue to work without modification. See the full migration guide for multi-provider and advanced scenarios.


    Advanced: Custom Headers

    Use defaultHeaders to send additional headers on every request, or pass them per-call via the second argument.

    typescript
    // Client-level: headers on every request
    const client = new BehestClient({
      apiKey: process.env.BEHEST_API_KEY,
      defaultHeaders: {
        'X-End-User-Id': currentUser.id,  // track per-user usage in analytics
      },
    });
     
    // Per-call: headers on one request only
    const response = await client.chat.completions.create(
      { model: 'gpt-4o', messages: [...] },
      { headers: { 'X-Idempotency-Key': requestId } },
    );
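When both levels set the same header, the per-call value takes precedence, mirroring how the underlying OpenAI SDK merges request options (which the Behest client inherits). A sketch of that merge semantics; note that real HTTP headers are case-insensitive, while this plain-object version is not:

```typescript
// Per-call headers override client-level defaults on key conflicts
// (illustrative of the precedence, not the SDK's actual internals).
function mergeHeaders(
  defaults: Record<string, string>,
  perCall: Record<string, string> = {},
): Record<string, string> {
  return { ...defaults, ...perCall };
}
```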

    Advanced: Timeout Configuration

    typescript
    const client = new BehestClient({
      apiKey: process.env.BEHEST_API_KEY,
      timeout: 30_000, // 30 seconds (default: 60 seconds)
      maxRetries: 3, // retries on 429, 408, 502, 503 (default: 2)
    });

    Retries use exponential backoff. For 429 responses, the SDK respects the Retry-After response header when present.
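The retry schedule can be pictured with a small sketch. The base delay and cap below are assumptions for illustration only; the SDK's actual values are internal:

```typescript
// Compute the delay before retry `attempt` (0-based). A Retry-After value,
// when the server sent one, takes precedence over the exponential schedule.
function backoffDelayMs(attempt: number, retryAfterSec?: number): number {
  if (retryAfterSec !== undefined) return retryAfterSec * 1000; // honor Retry-After
  const base = 500; // assumed base delay
  return Math.min(base * 2 ** attempt, 8000); // exponential growth, capped
}
```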


    Supported Models

    The available models depend on which provider keys are configured for your project (BYOK). The platform default (no BYOK required) is gemini-2.5-flash.

| Model ID | Provider | Notes |
| --- | --- | --- |
| gemini-2.5-flash | Google | Platform default; no BYOK required |
| gemini-2.5-pro | Google | Requires Google BYOK |
| gpt-4o | OpenAI | Requires OpenAI BYOK |
| gpt-4o-mini | OpenAI | Requires OpenAI BYOK |
| claude-sonnet-4-20250514 | Anthropic | Requires Anthropic BYOK |
| claude-opus-4-20250514 | Anthropic | Requires Anthropic BYOK |
| mistral-large-latest | Mistral | Requires Mistral BYOK |

    Configure BYOK keys in your project's Provider Settings in the dashboard.
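For pre-flight validation, e.g. failing fast when a model's provider has no BYOK key configured, the table can be mirrored as a lookup. Illustrative only; your project's Provider Settings remain authoritative:

```typescript
// Mirror of the model table above: provider plus whether BYOK is required.
const MODELS: Record<string, { provider: string; byok: boolean }> = {
  "gemini-2.5-flash": { provider: "Google", byok: false },
  "gemini-2.5-pro": { provider: "Google", byok: true },
  "gpt-4o": { provider: "OpenAI", byok: true },
  "gpt-4o-mini": { provider: "OpenAI", byok: true },
  "claude-sonnet-4-20250514": { provider: "Anthropic", byok: true },
  "claude-opus-4-20250514": { provider: "Anthropic", byok: true },
  "mistral-large-latest": { provider: "Mistral", byok: true },
};

function requiresByok(model: string): boolean {
  return MODELS[model]?.byok ?? true; // unknown models: assume BYOK needed
}
```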


    Version Compatibility

| @behest/client-ts | Node.js | openai peer dep |
| --- | --- | --- |
| 1.x | 18, 20, 22 | ^4.0.0 |

    The SDK declares openai as a peer dependency — install it explicitly alongside @behest/client-ts.
