Model Failover

TL;DR: Configure a fallback model in ~/.openclaw/openclaw.json to automatically switch providers on errors or rate limits.

Applies to: Windows, macOS, Linux
Audience: Developers, ops, enterprise
Last updated: 2026-02-23


Table of contents

  1. Why Use Failover?
  2. How It Works
  3. Configure Failover
  4. Testing Failover
  5. FAQ

Why Use Failover?

Failover provides:

  - Continuity when the primary provider returns errors (HTTP 429/5xx) or rate-limits you
  - Automatic switchback to the primary once it recovers
  - A fallback path from cloud models to local models

How It Works

When the primary model fails:

  1. OpenClaw detects the error (HTTP 429/5xx, auth failure, etc.)
  2. It retries with the next model in the list
  3. Once the primary recovers, it switches back
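
The switching logic above can be sketched in Python. This is illustrative only, not OpenClaw's implementation; the error type, function names, and fake responses are assumptions:

```python
# Sketch of the failover loop: try each model in order, return the first success.
class ProviderError(Exception):
    """Stands in for an HTTP 429/5xx or auth failure."""

def call_with_failover(prompt, models, call):
    """Return (model, reply) from the first model that succeeds."""
    last_err = None
    for model in models:
        try:
            return model, call(model, prompt)
        except ProviderError as err:
            last_err = err  # error detected: fall through to the next model
    raise last_err  # every model failed: surface the last error

# Example: the primary fails, so the first fallback answers.
def fake_call(model, prompt):
    if model == "anthropic/claude-opus-4-6":
        raise ProviderError("HTTP 429")
    return f"{model}: ok"

models = ["anthropic/claude-opus-4-6", "openai/gpt-4o", "local/llama3-70b"]
used, reply = call_with_failover("hello", models, fake_call)
```

In a real deployment the primary is retried on later requests, which is what produces the switchback described in step 3.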

Configure Failover

Edit ~/.openclaw/openclaw.json:

{
  "agent": {
    "model": "anthropic/claude-opus-4-6",
    "modelFailover": [
      "openai/gpt-4o",
      "local/llama3-70b"
    ]
  }
}

You can also set per-provider failover:

{
  "models": {
    "anthropic": {
      "model": "claude-opus-4-6",
      "failover": ["openai/gpt-4o"]
    },
    "openai": {
      "model": "gpt-4o",
      "failover": ["local/llama3-70b"]
    }
  }
}
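
With per-provider failover, the chains link together: Anthropic falls back to OpenAI, which in turn falls back to the local model. A rough sketch of how such a map could be expanded into one ordered try-list (the helper name is hypothetical; this is not OpenClaw's resolver):

```python
# Follow failover links from a starting provider, guarding against cycles.
def resolve_chain(models_cfg, provider):
    """Expand a per-provider failover map into an ordered list of models."""
    order, seen = [], set()
    current = f"{provider}/{models_cfg[provider]['model']}"
    while current and current not in seen:
        seen.add(current)
        order.append(current)
        prov = current.split("/", 1)[0]
        failover = models_cfg.get(prov, {}).get("failover", [])
        current = failover[0] if failover else None
    return order

# Mirrors the JSON config above.
models_cfg = {
    "anthropic": {"model": "claude-opus-4-6", "failover": ["openai/gpt-4o"]},
    "openai": {"model": "gpt-4o", "failover": ["local/llama3-70b"]},
}
chain = resolve_chain(models_cfg, "anthropic")
```

Starting from "anthropic" this yields the three-model chain, which matches the behavior of the flat modelFailover list in the first example.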

Testing Failover

To test:

  1. Temporarily break the primary (for example, revoke its API key)
  2. Run an agent command
  3. Watch the logs for “Falling back to…”
  4. Restore the primary and verify that OpenClaw switches back

FAQ

Q: Does failover work with local models?
A: Yes. You can fail over from cloud to local, or between local models.

Q: Can I weight models?
A: Not directly. Combine failover with custom routing logic in a skill if you need weighted selection.
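
As a minimal sketch of such custom routing logic, a skill could pick a model by weight before each request. Everything here is hypothetical; OpenClaw has no built-in weighting:

```python
import random

def pick_model(weighted, rng=random):
    """Pick one model name according to its weight."""
    models, weights = zip(*weighted.items())
    return rng.choices(models, weights=weights, k=1)[0]

# Assumed weights: route ~80% of requests to the primary.
weighted = {"anthropic/claude-opus-4-6": 0.8, "openai/gpt-4o": 0.2}
model = pick_model(weighted, random.Random(0))
```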

Q: What if all models fail?
A: OpenClaw logs the error and stops. Configure monitoring so you are alerted when this happens.

Q: How do I know which model was used?
A: Check the dashboard session logs or CLI output.

Recommended next

Workspace Explained