Mirror of https://github.com/instructkr/claw-code.git (synced 2026-04-27 22:24:58 +08:00)
# Supported Providers
claw-code currently supports the following LLM providers. This is a snapshot of the current code state and may change. The canonical source of truth is MODEL_REGISTRY and provider routing logic in rust/crates/api/src/providers/mod.rs.
Note: A declarative providers/models/websearch config in `settings.json` is tracked as pinpoint #285 and is not yet implemented. Until then, provider/model selection is determined by:

- The model name prefix (e.g., `claude-`, `grok-`, `openai/`, `qwen/`, `kimi-`)
- Environment variables (e.g., `ANTHROPIC_API_KEY`, `XAI_API_KEY`, `DASHSCOPE_API_KEY`, `OPENAI_API_KEY`)
- Hard-coded heuristics in `MODEL_REGISTRY` and `detect_provider_kind()`
## Anthropic

- Status: Primary supported provider
- Models:
  - `claude-opus-4-6` (alias: `opus`) — 200K context, 32K max output
  - `claude-sonnet-4-6` (alias: `sonnet`) — 200K context, 64K max output
  - `claude-haiku-4-5-20251213` (alias: `haiku`) — 200K context, 64K max output
- Auth: `ANTHROPIC_API_KEY` env var, or OAuth bearer via `claw login` (`ANTHROPIC_AUTH_TOKEN`)
- Base URL: `https://api.anthropic.com` (override: `ANTHROPIC_BASE_URL`)
- Known issues: Subject to upstream stream-init failures (see #290, #291)
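The documented credential precedence (static API key first, then the OAuth bearer from `claw login`) can be sketched as below. This is an illustrative sketch, not the project's actual implementation; the type and function names are hypothetical.

```rust
/// Hypothetical sketch of Anthropic credential resolution, assuming the
/// precedence documented above: ANTHROPIC_API_KEY wins, then the OAuth
/// bearer token (ANTHROPIC_AUTH_TOKEN) written by `claw login`.
#[derive(Debug, PartialEq)]
pub enum AnthropicAuth {
    ApiKey(String),      // from ANTHROPIC_API_KEY
    OAuthBearer(String), // from ANTHROPIC_AUTH_TOKEN
}

/// Pure resolver; a caller would pass `std::env::var(..).ok()` values.
pub fn resolve_anthropic_auth(
    api_key: Option<&str>,
    oauth_token: Option<&str>,
) -> Option<AnthropicAuth> {
    match api_key {
        // A non-empty API key takes precedence.
        Some(k) if !k.is_empty() => Some(AnthropicAuth::ApiKey(k.to_string())),
        _ => match oauth_token {
            Some(t) if !t.is_empty() => Some(AnthropicAuth::OAuthBearer(t.to_string())),
            _ => None,
        },
    }
}
```

Keeping the resolver pure (taking `Option<&str>` instead of reading the environment directly) makes the precedence easy to unit-test.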
## xAI (Grok)

- Status: Supported via OpenAI-compatible client
- Models:
  - `grok-3` (aliases: `grok`, `grok-3`) — 131K context, 64K max output
  - `grok-3-mini` (aliases: `grok-mini`, `grok-3-mini`) — 131K context, 64K max output
  - `grok-2` — context/output limits not yet registered in token metadata
- Auth: `XAI_API_KEY`
- Base URL: `https://api.x.ai/v1` (override: `XAI_BASE_URL`)
- Known issues: None currently tracked
## Alibaba DashScope (Qwen / Kimi)

- Status: Supported via OpenAI-compatible client pointed at the DashScope compatible-mode endpoint
- Models:
  - `qwen/*` and `qwen-*` prefixes — route to DashScope (e.g., `qwen-plus`, `qwen-max`, `qwen-turbo`, `qwen/qwen3-coder`)
  - `kimi-k2.5` (alias: `kimi`) — 256K context, 16K max output
  - `kimi-k1.5` — 256K context, 16K max output
  - `kimi/*` and `kimi-*` prefixes — route to DashScope
- Auth: `DASHSCOPE_API_KEY`
- Base URL: `https://dashscope.aliyuncs.com/compatible-mode/v1` (override: `DASHSCOPE_BASE_URL`)
- Known issues: None currently tracked
## OpenAI / OpenAI-Compatible Endpoints

- Status: Supported via OpenAI-compatible client; also covers local providers (Ollama, LM Studio, vLLM, OpenRouter)
- Models: `openai/` prefix (e.g., `openai/gpt-4.1-mini`) or bare `gpt-*` prefix
- Auth: `OPENAI_API_KEY`
- Base URL: `https://api.openai.com/v1` (override: `OPENAI_BASE_URL` — also used for local providers)
- Local provider routing: When `OPENAI_BASE_URL` is set and `OPENAI_API_KEY` is present, unknown model names (e.g., `qwen2.5-coder:7b`) also route here
- Known issues: Declarative per-model config tracked in #285
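Each OpenAI-compatible provider pairs a hard-coded default base URL with an env-var override (`OPENAI_BASE_URL`, `XAI_BASE_URL`, `DASHSCOPE_BASE_URL`). A minimal sketch of that resolution, with a hypothetical helper name, assuming a non-empty override replaces the default wholesale:

```rust
/// Sketch of base-URL resolution for the OpenAI-compatible client.
/// `override_url` would come from e.g. std::env::var("OPENAI_BASE_URL").ok();
/// pointing it at a local endpoint is what enables Ollama / LM Studio /
/// vLLM routing described above.
pub fn resolve_base_url(override_url: Option<&str>, default_url: &str) -> String {
    match override_url {
        // A non-empty override replaces the default; trim a trailing
        // slash so path joining stays consistent.
        Some(url) if !url.trim().is_empty() => {
            url.trim().trim_end_matches('/').to_string()
        }
        _ => default_url.to_string(),
    }
}
```

For example, `resolve_base_url(Some("http://localhost:11434/v1"), "https://api.openai.com/v1")` would direct requests to a local Ollama server.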
## Web Search

- Status: Hard-coded heuristics; declarative `websearch` config tracked in #285
## Provider Selection Order

`detect_provider_kind()` resolves the provider by falling through the checks below in order; prefix matching runs first, and the environment-based checks apply only when the model name has no recognized prefix:

1. Model prefix match (`claude-` → Anthropic, `grok-` → xAI, `openai/` or `gpt-` → OpenAI, `qwen/` or `qwen-` → DashScope, `kimi/` or `kimi-` → DashScope)
2. `OPENAI_BASE_URL` + `OPENAI_API_KEY` set → OpenAI-compat
3. Anthropic credentials found → Anthropic
4. `OPENAI_API_KEY` found → OpenAI
5. `XAI_API_KEY` found → xAI
6. `OPENAI_BASE_URL` set (no key) → OpenAI-compat (for keyless local providers)
7. Default fallback → Anthropic
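The fallthrough order above can be sketched as a single function. This is an illustrative reconstruction of the documented behavior, not the code in `rust/crates/api/src/providers/mod.rs`; the `Env` struct is a hypothetical stand-in for the credential state read from the environment.

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
pub enum ProviderKind {
    Anthropic,
    XAi,
    OpenAi,
    DashScope,
    OpenAiCompat,
}

/// Hypothetical snapshot of which credentials/overrides are present.
pub struct Env {
    pub openai_base_url: bool,
    pub openai_api_key: bool,
    pub anthropic_creds: bool,
    pub xai_api_key: bool,
}

/// Sketch of the documented fallthrough; the real detect_provider_kind()
/// may differ in detail.
pub fn detect_provider_kind(model: &str, env: &Env) -> ProviderKind {
    // 1. Model prefix match
    if model.starts_with("claude-") {
        return ProviderKind::Anthropic;
    }
    if model.starts_with("grok-") {
        return ProviderKind::XAi;
    }
    if model.starts_with("openai/") || model.starts_with("gpt-") {
        return ProviderKind::OpenAi;
    }
    if model.starts_with("qwen/") || model.starts_with("qwen-")
        || model.starts_with("kimi/") || model.starts_with("kimi-")
    {
        return ProviderKind::DashScope;
    }
    // 2. Explicit OpenAI-compatible endpoint with a key
    if env.openai_base_url && env.openai_api_key {
        return ProviderKind::OpenAiCompat;
    }
    // 3-5. Credential-based fallbacks
    if env.anthropic_creds {
        return ProviderKind::Anthropic;
    }
    if env.openai_api_key {
        return ProviderKind::OpenAi;
    }
    if env.xai_api_key {
        return ProviderKind::XAi;
    }
    // 6. Keyless local providers
    if env.openai_base_url {
        return ProviderKind::OpenAiCompat;
    }
    // 7. Default fallback
    ProviderKind::Anthropic
}
```

Note that with this ordering, an unknown model name like `qwen2.5-coder:7b` (no slash, no trailing dash in the prefix) skips step 1 and lands on the environment checks, which is exactly the local-provider routing described in the OpenAI section.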
## Reporting Provider Issues

For provider-specific bugs (e.g., a 500 `empty_stream` from upstream), see TROUBLESHOOTING.md for mitigation steps.
To report a missing provider feature, file a pinpoint via ISSUE_TEMPLATE/pinpoint.md.
## Related Pinpoints
- #245 — Provider declarative config
- #246 — Backend swap
- #285 — Provider/model/websearch source of truth
- #290 — Stream-init failure envelope
- #291 — Repeat-failure circuit-breaker