This page covers Anthropic-compatible providers. For Codex/OpenAI connections and multi‑connection setup, see LLM Connections.
## Supported Providers
| Provider | Base URL | API Key Required | Notes |
|---|---|---|---|
| Anthropic | https://api.anthropic.com | Yes | Default provider. No model override needed. |
| OpenRouter | https://openrouter.ai/api | Yes | Access multiple AI providers through one API. |
| Vercel AI Gateway | https://ai-gateway.vercel.sh | Yes | Unified gateway for AI model routing. |
| Ollama | http://localhost:11434 | No | Run models locally. Requires Ollama 0.14+. |
| Custom | Any URL | Depends on provider | Any Anthropic-compatible endpoint. |
## Setting Up a Provider

### During First Launch
- In the setup wizard, select API Key
- Enter your API key
- Select a Base URL preset from the dropdown (Anthropic, OpenRouter, Vercel AI Gateway, or Custom)
- Optionally specify a Model name (required for non-Anthropic providers)
- The connection is tested automatically before saving
### In Settings

- Open Settings (gear icon or `Cmd+,`)
- Click on the API Connection section
- Change your API key, base URL, or model as needed
## Model Names
For Anthropic, no model override is needed: Craft Agents uses its built-in model routing (Sonnet, Opus, Haiku) automatically. For OpenRouter and Vercel AI Gateway, models use the `provider/model-name` format, e.g. `anthropic/claude-sonnet-4`.
When the Model field is left empty for non-Anthropic providers, Craft Agents defaults to Anthropic model name formatting. This works for providers that support Anthropic model names natively but may not work for all providers.
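As an illustrative sketch (the payload shape follows the Anthropic Messages API; the model identifiers are examples, not an exhaustive list), the only part of a request that changes between providers is the `model` field:

```python
def messages_payload(model: str, prompt: str) -> dict:
    """Minimal Anthropic Messages request body; only the model field varies by provider."""
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }

# Example identifiers (illustrative):
openrouter = messages_payload("anthropic/claude-sonnet-4", "Hello")  # provider/model-name format
ollama = messages_payload("llama3.2", "Hello")                       # local model name
```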
## Provider Details

### OpenRouter
OpenRouter gives you access to hundreds of AI models through a single API key. It handles billing, rate limiting, and fallbacks across providers.

- Get your API key at openrouter.ai/keys
- Select the OpenRouter preset in the Base URL dropdown
- Set your model (e.g. `anthropic/claude-sonnet-4`)
### Ollama (Local Models)
Ollama runs open-source models locally on your machine. No API key is required, and data never leaves your computer. Requirements:

- Ollama 0.14 or newer (for Anthropic-compatible API format)
- A model pulled locally
- Select the Custom preset in the Base URL dropdown
- Enter `http://localhost:11434` as the URL
- Leave the API key empty
- Set the model name (e.g. `llama3.2`)
Ollama requires version 0.14+ for compatibility with Craft Agents. Earlier versions do not support the Anthropic Messages API format. Update with `ollama update` if needed.

### Vercel AI Gateway
Vercel AI Gateway provides a unified endpoint for routing requests to multiple AI providers with built-in observability and caching.

- Get your API key from your Vercel dashboard
- Select the Vercel AI Gateway preset
- Set your model using the `provider/model-name` format
### Custom Endpoint

For any API that implements the Anthropic Messages format:

- Select the Custom preset
- Enter the full base URL of your endpoint
- Enter your API key (if required)
- Specify the model name your endpoint expects
Your endpoint must implement the Anthropic Messages API at the `/v1/messages` endpoint.
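A minimal sketch of the kind of request a custom endpoint must accept, using only the Python standard library (the base URL, API key, and model name are placeholders, not real values):

```python
import json
import urllib.request

BASE_URL = "https://example.com"  # placeholder: your endpoint's base URL
API_KEY = "sk-placeholder"        # placeholder: omit the header if your endpoint is keyless

body = {
    "model": "my-model",  # whatever identifier your endpoint expects
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "ping"}],
}
req = urllib.request.Request(
    f"{BASE_URL}/v1/messages",
    data=json.dumps(body).encode(),
    headers={
        "content-type": "application/json",
        "x-api-key": API_KEY,               # Anthropic-style auth header
        "anthropic-version": "2023-06-01",  # version header used by the Messages API
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send the request
```

If your endpoint returns a valid Messages API response to a request like this, it should work as a Custom provider.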
## How It Works

When you configure a non-default provider, Craft Agents stores:

- The API key in the encrypted credentials file (`~/.craft-agent/credentials.enc`)
- The base URL and default model in the LLM connection configuration

The base URL is passed as the `ANTHROPIC_BASE_URL` environment variable to the underlying Claude Code SDK.
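The effect of that environment-variable handoff can be sketched as follows (a simplification, assuming the SDK falls back to the public Anthropic endpoint when no override is set):

```python
import os

def resolve_base_url() -> str:
    # Use the override if present; otherwise fall back to the public Anthropic API.
    return os.environ.get("ANTHROPIC_BASE_URL", "https://api.anthropic.com")

os.environ["ANTHROPIC_BASE_URL"] = "http://localhost:11434"  # e.g. a local Ollama server
print(resolve_base_url())
```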
## Troubleshooting

### Connection test fails
Verify:
- The base URL is correct and accessible from your machine
- Your API key is valid and has sufficient permissions
- The endpoint supports the Anthropic Messages API format (`/v1/messages`)
### Model not found errors
Check that the model name matches exactly what your provider expects:
- OpenRouter/Vercel: Use the `provider/model-name` format (e.g. `anthropic/claude-sonnet-4`)
- Ollama: Use the local model name (e.g. `llama3.2`)
- Custom: Check your provider’s documentation for valid model identifiers
### Authentication errors
- Ensure your API key is correct and hasn’t expired
- For Ollama: no API key should be set (leave it empty)
- Check if your key has available credits/quota
### Ollama not connecting
- Verify Ollama is running: `ollama list`
- Check you’re on version 0.14+: `ollama --version`
- Ensure the model is pulled: `ollama pull llama3.2`
- Verify the URL is `http://localhost:11434` (note: HTTP, not HTTPS)
### Rate limiting
If you hit rate limits, check your provider’s usage limits and consider upgrading your plan or using a different provider.