ChatJS uses a gateway to route all AI requests (model listing, chat completions, and image generation) through a single backend. Set gateway in chat.config.ts to choose which backend your app talks to.

Available Gateways

| Gateway | Models | Auth | Image Generation | Best For |
| --- | --- | --- | --- | --- |
| Vercel AI Gateway (default) | 120+ | AI_GATEWAY_API_KEY or auto OIDC | Dedicated image models | Vercel deployments |
| OpenRouter | Hundreds | OPENROUTER_API_KEY | Via multimodal models | Broadest model access |
| OpenAI | OpenAI only | OPENAI_API_KEY | gpt-image-1 and others | Direct OpenAI access |
| OpenAI Compatible | Varies | OPENAI_COMPATIBLE_API_KEY (optional) | Provider-dependent | Ollama, LM Studio, vLLM, Azure |
Need a provider not listed here? See Custom Gateway.

Choosing a Gateway

  • Vercel AI Gateway is the default. It aggregates 120+ models from multiple providers behind a single key. Zero-config on Vercel deployments.
  • OpenRouter gives access to hundreds of models with per-token pricing. Good when you want the widest selection or models not on Vercel’s gateway.
  • OpenAI connects directly to the OpenAI API. Use this when you only need OpenAI models and want type-safe model IDs.
  • OpenAI Compatible works with any endpoint that follows the OpenAI API format (local servers, self-hosted inference, or cloud services).

Configuration

Set the gateway in chat.config.ts:
chat.config.ts
```typescript
const config: ConfigInput = {
  ai: {
    gateway: "openrouter", // "vercel" | "openrouter" | "openai" | "openai-compatible"
    // ...
  },
};
```
The gateway field lives inside the ai key and defaults to "vercel" when omitted.
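Each gateway authenticates with the environment variable listed in the table above. As a minimal illustration (the file name and placeholder value are assumptions, not ChatJS requirements — use whatever env mechanism your deployment provides):

```shell
# Example .env for the "openrouter" gateway.
# Replace the placeholder with your real key.
OPENROUTER_API_KEY=your-key-here
```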

How Model Fetching Works

Every gateway implements fetchModels(), which returns the list of available models. The app calls it at runtime and caches the result for 1 hour. When no API key is available, the app falls back to a static snapshot in models.generated.ts. Refresh this snapshot periodically:
```shell
bun fetch:models
```
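The fetch-then-cache-then-fallback flow can be sketched as follows. This is a minimal illustration, not ChatJS's actual implementation: the identifiers (fetchModels, FALLBACK_MODELS, CACHE_TTL_MS, fetchFromGateway) and the Model shape are assumptions for the sketch.

```typescript
type Model = { id: string; name: string };

const CACHE_TTL_MS = 60 * 60 * 1000; // results are cached for 1 hour

// Stand-in for the static snapshot in models.generated.ts
const FALLBACK_MODELS: Model[] = [{ id: "example/model", name: "Example" }];

let cache: { models: Model[]; fetchedAt: number } | null = null;

async function fetchModels(apiKey: string | undefined): Promise<Model[]> {
  // Serve from the in-memory cache while it is still fresh
  if (cache && Date.now() - cache.fetchedAt < CACHE_TTL_MS) {
    return cache.models;
  }
  // No API key available: fall back to the static snapshot
  if (!apiKey) return FALLBACK_MODELS;
  // Live call to the configured gateway, then cache the result
  const models = await fetchFromGateway(apiKey);
  cache = { models, fetchedAt: Date.now() };
  return models;
}

// Placeholder for the real gateway request
async function fetchFromGateway(apiKey: string): Promise<Model[]> {
  return [{ id: "live/model", name: "Live" }];
}
```

The cache is keyed only by time here; the real app also has to invalidate it when the configured gateway changes.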

Snapshot Gateway Validation

The snapshot file records which gateway generated it via a generatedForGateway export. When you run bun check-env, it compares this value against config.ai.gateway and warns if they don’t match:
```text
⚠️  models.generated.ts was built for "vercel" but config uses "openrouter".
    Run `bun fetch:models` to update the fallback snapshot.
```
This is caught at two levels:
  • Build time: bun check-env prints the warning shown above.
  • Runtime: each gateway checks generatedForGateway before using the fallback. If it doesn’t match, the fallback is skipped and an empty model list is returned instead of stale model IDs.
After changing gateway in chat.config.ts, always run bun fetch:models to keep the fallback snapshot in sync.
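The runtime guard described above can be sketched like this. The exported names mirror the pattern in the text (generatedForGateway from the snapshot file), but the function name and model shape are illustrative assumptions, not ChatJS's exact code.

```typescript
type Model = { id: string };

// What models.generated.ts would export: the snapshot itself,
// plus the gateway it was generated for.
const generatedForGateway = "vercel";
const FALLBACK_MODELS: Model[] = [{ id: "openai/gpt-4o" }];

function fallbackModels(configuredGateway: string): Model[] {
  // A snapshot built for a different gateway may contain model IDs
  // that don't exist there, so skip it and return an empty list
  // instead of serving stale IDs.
  if (generatedForGateway !== configuredGateway) {
    console.warn(
      `models.generated.ts was built for "${generatedForGateway}" ` +
        `but config uses "${configuredGateway}".`
    );
    return [];
  }
  return FALLBACK_MODELS;
}
```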
See Auto-Updating Models for the full implementation pattern.