LLM Providers

Supported providers, default models, and operational behavior.

DragonClaw supports two API patterns:

  • OpenAI-compatible chat endpoints

  • Anthropic Messages API
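The two patterns differ in endpoint path and request shape. The sketch below builds the request bodies each API expects; it follows the public OpenAI and Anthropic HTTP APIs, and the helper names are illustrative, not DragonClaw internals:

```python
def openai_chat_payload(model: str, prompt: str) -> dict:
    """Body for POST {base_url}/v1/chat/completions (OpenAI-compatible)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def anthropic_messages_payload(model: str, prompt: str) -> dict:
    """Body for POST https://api.anthropic.com/v1/messages.

    The Anthropic API requires max_tokens, and authenticates with an
    x-api-key header plus anthropic-version, not an OpenAI-style
    Bearer token.
    """
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }
```

Because most Chinese and local providers expose the OpenAI-compatible shape, only Anthropic needs the second code path.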

Supported providers

  • DeepSeek — deepseek-chat

  • Qwen — qwen-max

  • Kimi — moonshot-v1-128k

  • GLM — glm-4-flash

  • OpenAI — gpt-4o

  • Anthropic — Claude models

  • OpenRouter — any routed model

  • Local — Ollama, vLLM, LM Studio, and similar servers

Why Chinese providers fit DragonClaw

  • Lower token cost

  • Better Chinese fluency

  • Lower latency in Asia-Pacific

  • Fewer geoblocking issues

Local model support

Point DragonClaw at any OpenAI-compatible endpoint, such as one served by Ollama, vLLM, or LM Studio.
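Local servers reuse the OpenAI-compatible request shape; only the base URL changes (Ollama defaults to http://localhost:11434/v1, vLLM to http://localhost:8000/v1, LM Studio to http://localhost:1234/v1). A minimal stdlib sketch, assuming those default ports, that builds such a request without sending it:

```python
import json
import urllib.request


def build_local_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat request for a local server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Example: an Ollama server on its default port (model name illustrative).
req = build_local_request("http://localhost:11434/v1", "llama3.1", "hello")
```

Check your server's docs for the actual port and model names; the request shape itself is the same across all three.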

Reliability features

All LLM calls include:

  • 60-second timeout

  • 3 retries with backoff on retryable failures

  • Structured provider-aware errors

  • Logging for provider, model, latency, and response size
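The timeout-and-retry policy above can be sketched as follows; the RetryableError class and the shape of the call being wrapped are illustrative, not DragonClaw's actual types:

```python
import time

TIMEOUT_S = 60      # per-attempt timeout
MAX_RETRIES = 3     # retries after the initial attempt


class RetryableError(Exception):
    """Timeouts, 429s, and 5xx responses fall in this bucket."""


def call_with_retries(call, base_delay: float = 1.0):
    """Run call(timeout=...) with up to 3 retries and exponential backoff."""
    for attempt in range(MAX_RETRIES + 1):
        try:
            return call(timeout=TIMEOUT_S)
        except RetryableError:
            if attempt == MAX_RETRIES:
                raise  # exhausted: surface a structured provider error
            time.sleep(base_delay * 2 ** attempt)  # 1 s, 2 s, 4 s
```

Non-retryable failures (bad API keys, malformed requests) should raise immediately rather than pass through this loop.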
