## Compatible Providers

This gateway works with:

- Ollama — local models on your machine
- LM Studio — local model management with a GUI
- vLLM — high-throughput self-hosted inference
- Azure OpenAI — OpenAI models on Azure
- Any other endpoint that implements `/v1/chat/completions` and `/v1/models`
## Setup
- Set the base URL and (optionally) an API key in `.env.local`:
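For example, pointing the gateway at a local Ollama server (the values below are illustrative; the variable names are those from the Authentication table):

```
OPENAI_COMPATIBLE_BASE_URL=http://localhost:11434/v1
OPENAI_COMPATIBLE_API_KEY=sk-optional-key
```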
- Set the gateway in your config, `chat.config.ts`:
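The exact shape of `chat.config.ts` depends on your project; as a hypothetical sketch (the gateway identifier and config shape here are illustrative, not a documented API):

```typescript
// Hypothetical chat.config.ts — consult your project's config reference
// for the actual schema; "openai-compatible" is an assumed identifier.
export default {
  gateway: "openai-compatible",
};
```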
## Authentication
| Variable | Description |
|---|---|
| `OPENAI_COMPATIBLE_BASE_URL` | Required. The base URL of the API (e.g., `http://localhost:11434/v1`) |
| `OPENAI_COMPATIBLE_API_KEY` | Optional. API key if the provider requires authentication |
## Available Models
Depends entirely on your provider. Models are fetched at runtime from `{OPENAI_COMPATIBLE_BASE_URL}/models` and cached for 1 hour.
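The fetch-and-cache behavior can be sketched as follows. This is a minimal illustration, not the gateway's actual internals; the function and variable names are invented, but the response shape follows the standard OpenAI `/v1/models` format (`{ data: [{ id }] }`):

```typescript
// Sketch: runtime model discovery with a 1-hour in-memory cache.
type ModelList = { data: { id: string }[] };

const CACHE_TTL_MS = 60 * 60 * 1000; // 1 hour

let cache: { models: string[]; fetchedAt: number } | null = null;

async function listModels(
  baseUrl: string,
  fetchImpl: (url: string) => Promise<ModelList>, // injectable for testing
  now: () => number = Date.now
): Promise<string[]> {
  // Serve from cache while it is fresh.
  if (cache && now() - cache.fetchedAt < CACHE_TTL_MS) {
    return cache.models;
  }
  // Otherwise hit {OPENAI_COMPATIBLE_BASE_URL}/models and refresh the cache.
  const res = await fetchImpl(`${baseUrl}/models`);
  cache = { models: res.data.map((m) => m.id), fetchedAt: now() };
  return cache.models;
}
```

Injecting the fetch function keeps the sketch testable without a live provider; in practice the gateway would use its own HTTP client.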
### Ollama

After installing Ollama, pull models and they appear automatically:
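For example, with the Ollama CLI (the model name is illustrative):

```
ollama pull llama3.2   # download a model
ollama list            # pulled models are served via Ollama's OpenAI-compatible API
```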
### LM Studio

Download models through the LM Studio UI, then start the local server. Models appear at `http://localhost:1234/v1/models`.
## Image Generation
Image generation support depends on the provider. If the endpoint supports the OpenAI image generation API format, `createImageModel()` will work. Otherwise, multimodal language models with image output capabilities can still generate images inline.
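For providers that do implement the OpenAI image format, the request is a `POST` to `{OPENAI_COMPATIBLE_BASE_URL}/images/generations`. A minimal sketch of building such a request (the helper and model name are illustrative, not part of this gateway's API):

```typescript
// Sketch: constructing an OpenAI-format image generation request
// for a compatible provider. Use a model your provider actually serves.
function buildImageRequest(baseUrl: string, prompt: string, model: string) {
  return {
    url: `${baseUrl}/images/generations`,
    method: "POST" as const,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt, n: 1 }),
  };
}
```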
## Related
- Gateways Overview for the full gateway comparison
- Custom Gateway if you need more control than this generic gateway provides