The OpenAI Compatible gateway connects ChatJS to any endpoint that follows the OpenAI API format. This is the fastest way to connect to local inference servers and alternative cloud providers.

Compatible Providers

This gateway works with:
  • Ollama — local models on your machine
  • LM Studio — local model management with a GUI
  • vLLM — high-throughput self-hosted inference
  • Azure OpenAI — OpenAI models on Azure
  • Any other endpoint that implements /v1/chat/completions and /v1/models
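Because all of these providers accept the same request shape, switching between them is just a matter of changing the base URL. As a rough sketch, a request to `POST {BASE_URL}/chat/completions` looks like this (field names follow the OpenAI chat completions format; the model name is an Ollama example, not something ChatJS requires):

```typescript
// Minimal shape of an OpenAI-compatible chat completion request.
// "llama3.2" is an example model ID; substitute your provider's model.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatCompletionRequest {
  model: string;
  messages: ChatMessage[];
  stream?: boolean; // most compatible servers also support streaming responses
}

const request: ChatCompletionRequest = {
  model: "llama3.2",
  messages: [{ role: "user", content: "Hello!" }],
};
```

Any server that understands this shape, and answers `GET {BASE_URL}/models`, should work with the gateway.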

Setup

  1. Set the base URL and (optionally) an API key in .env.local:
OPENAI_COMPATIBLE_BASE_URL=http://localhost:11434/v1  # Ollama example
OPENAI_COMPATIBLE_API_KEY=                             # Optional, depends on provider
  2. Set the gateway in your config:
chat.config.ts
const config: ConfigInput = {
  ai: {
    gateway: "openai-compatible",
    workflows: {
      chat: "llama3.2",
      title: "llama3.2",
      // ...
    },
    // ...
  },
};

Authentication

  • OPENAI_COMPATIBLE_BASE_URL — Required. The base URL of the API (e.g., http://localhost:11434/v1)
  • OPENAI_COMPATIBLE_API_KEY — Optional. API key if the provider requires authentication

Available Models

The available models depend entirely on your provider. Models are fetched at runtime from {OPENAI_COMPATIBLE_BASE_URL}/models and cached for 1 hour.
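The fetch-and-cache behavior described above can be sketched roughly as follows (the function names and cache shape are illustrative, not ChatJS internals):

```typescript
// Illustrative sketch of runtime model discovery with a 1-hour cache.
// `listModels` and the cache shape are hypothetical, not ChatJS internals.
type ModelList = { id: string }[];

const CACHE_TTL_MS = 60 * 60 * 1000; // 1 hour, per the docs above
let cache: { models: ModelList; fetchedAt: number } | null = null;

async function listModels(
  fetchModels: () => Promise<ModelList>, // e.g. a fetch of {BASE_URL}/models
  now: () => number = Date.now,
): Promise<ModelList> {
  if (cache && now() - cache.fetchedAt < CACHE_TTL_MS) {
    return cache.models; // serve from cache within the TTL
  }
  const models = await fetchModels();
  cache = { models, fetchedAt: now() };
  return models;
}
```

In practice the fetcher would call `{OPENAI_COMPATIBLE_BASE_URL}/models`, sending the API key as a Bearer token when one is configured.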

Ollama

After installing Ollama, pull models and they appear automatically:
ollama pull llama3.2
ollama pull codellama

LM Studio

Download models through the LM Studio UI, then start the local server. Models appear at http://localhost:1234/v1/models.
OPENAI_COMPATIBLE_BASE_URL=http://localhost:1234/v1

Image Generation

Image generation support depends on the provider. If the endpoint supports the OpenAI image generation API format, createImageModel() will work. Otherwise, multimodal language models with image output capabilities can still generate images inline.
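If your provider does implement the OpenAI image generation format, the request it expects at `POST {BASE_URL}/images/generations` looks roughly like this (field names follow the OpenAI images format; the model name and values are examples, and provider support for each field varies):

```typescript
// Shape of a request in the OpenAI image generation API format.
// "sdxl" is a hypothetical model ID; use whatever your provider exposes.
interface ImageGenerationRequest {
  model: string;
  prompt: string;
  n?: number;    // number of images to generate
  size?: string; // e.g. "1024x1024"; supported sizes vary by provider
}

const imageRequest: ImageGenerationRequest = {
  model: "sdxl",
  prompt: "A watercolor fox",
  n: 1,
  size: "1024x1024",
};
```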