OpenClaw
OpenClaw is a self-hosted AI assistant that connects chat apps and agent workflows to LLM providers. The pragmatic Isartor setup is to register Isartor as a custom OpenAI-compatible provider in OpenClaw and let OpenClaw use that provider as its primary model path.
This is similar in spirit to the LiteLLM integration docs, but with one important difference:
- LiteLLM is a multi-model gateway and catalog
- Isartor is a prompt firewall / gateway that currently exposes the upstream model you configured in Isartor itself
So the best OpenClaw UX is: configure the model in Isartor first, then let `isartor connect openclaw` mirror that model into OpenClaw's provider config.
Pragmatic setup
```shell
# 1. Configure Isartor's upstream provider/model
isartor set-key -p groq
isartor check

# 2. Start Isartor
isartor up --detach

# 3. Make sure OpenClaw is onboarded
openclaw onboard --install-daemon

# 4. Register Isartor as an OpenClaw provider
isartor connect openclaw

# 5. Verify OpenClaw sees the provider/model and auth
openclaw models status --agent main --probe

# 6. Smoke test a prompt
openclaw agent --agent main -m "Hello from OpenClaw through Isartor"
```
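If you want to sanity-check the gateway without OpenClaw in the loop, you can hit Isartor's OpenAI-compatible endpoint directly. This is a sketch, assuming the default `http://localhost:8080` listen address and the `isartor-local` key shown in the example provider block later on this page; adjust both to your setup.

```shell
# Build the request target for Isartor's OpenAI-compatible surface.
# Assumed defaults; change these to match your Isartor config.
ISARTOR_BASE_URL="http://localhost:8080/v1"
ISARTOR_API_KEY="isartor-local"

ENDPOINT="$ISARTOR_BASE_URL/chat/completions"
echo "$ENDPOINT"

# Uncomment to run against a live gateway:
# curl -sS "$ENDPOINT" \
#   -H "Authorization: Bearer $ISARTOR_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d '{"model":"openai/gpt-oss-120b","messages":[{"role":"user","content":"ping"}]}'
```

A successful chat-completion response here means OpenClaw's provider config will work against the same URL and key.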
What `isartor connect openclaw` does
It writes or updates your OpenClaw config (default: `~/.openclaw/openclaw.json`) with:
- `models.providers.isartor`: a single managed model entry matching Isartor's current upstream model
- `agents.defaults.model.primary = "isartor/<your-model>"`: the main/default agent model override when one is present
- a refresh of stale per-agent `models.json` registries so OpenClaw regenerates them with the latest `baseUrl` and `apiKey`
Example generated provider block:
```json5
models: {
  providers: {
    isartor: {
      baseUrl: "http://localhost:8080/v1",
      apiKey: "isartor-local",
      api: "openai-completions",
      models: [
        {
          id: "openai/gpt-oss-120b",
          name: "Isartor (openai/gpt-oss-120b)"
        }
      ]
    }
  }
}
```
And the default model becomes:
```json5
agents: {
  defaults: {
    model: {
      primary: "isartor/openai/gpt-oss-120b"
    }
  }
}
```
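The two blocks are consistent when the default `primary` ref is exactly `isartor/` plus the provider's model id. A quick way to check this is sketched below against a mock config file shaped like the examples above (real OpenClaw JSON quotes its keys; `jq` would be the nicer tool, but plain `sed` keeps the sketch dependency-free):

```shell
# Mock config mirroring the generated blocks above (hypothetical path).
cat > /tmp/openclaw-sample.json <<'EOF'
{
  "models": {
    "providers": {
      "isartor": {
        "baseUrl": "http://localhost:8080/v1",
        "models": [ { "id": "openai/gpt-oss-120b" } ]
      }
    }
  },
  "agents": { "defaults": { "model": { "primary": "isartor/openai/gpt-oss-120b" } } }
}
EOF

# Extract the managed model id and the default model ref.
MODEL_ID=$(sed -n 's/.*"id": *"\([^"]*\)".*/\1/p' /tmp/openclaw-sample.json)
PRIMARY=$(sed -n 's/.*"primary": *"\([^"]*\)".*/\1/p' /tmp/openclaw-sample.json)

# The default ref should be "isartor/" + the managed model id.
[ "$PRIMARY" = "isartor/$MODEL_ID" ] && echo "config consistent"
```

If the two drift apart (for example after changing Isartor's model), rerunning the connector realigns them.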
Base URL and auth path
OpenClaw must talk to Isartor's OpenAI-compatible `/v1` surface.
- Correct base URL: `http://localhost:8080/v1`
- Wrong base URL: `http://localhost:8080`
Why this matters:
- OpenClaw appends `/chat/completions` for OpenAI-compatible custom providers
- Isartor exposes that route as `/v1/chat/completions`
- using the root gateway URL can produce `404` errors such as `gateway unknown L0 via chat/completions`
`isartor connect openclaw` writes the `/v1` path for you, so prefer the connector over hand-editing the provider block.
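The failure mode is plain string concatenation, which a two-line sketch makes obvious (URLs are the defaults assumed throughout this page):

```shell
# OpenClaw appends this suffix to the configured base URL.
APPEND="/chat/completions"

GOOD_BASE="http://localhost:8080/v1"
BAD_BASE="http://localhost:8080"

echo "${GOOD_BASE}${APPEND}"   # resolves to Isartor's real route
echo "${BAD_BASE}${APPEND}"    # misses the /v1 prefix, so Isartor returns 404
```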
Reconnecting after changing the gateway API key
OpenClaw stores custom-provider state in two places:
- `~/.openclaw/openclaw.json`
- per-agent `models.json` registries under `~/.openclaw/agents/<agentId>/agent/`
Those per-agent registries can keep an old `apiKey` or `baseUrl` even after `openclaw.json` changes, which is why you can still see `401` errors after fixing the key in the top-level config.
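You can spot this drift by comparing the `apiKey` in the top-level config with the one cached in a per-agent registry. The sketch below uses mock files under `/tmp` standing in for the real paths (`~/.openclaw/openclaw.json` and `~/.openclaw/agents/<agentId>/agent/models.json`):

```shell
# Mock layout mirroring OpenClaw's two state locations.
mkdir -p /tmp/openclaw-demo/agents/main/agent

# Top-level config holds the new key...
printf '{ "apiKey": "new-key" }\n' > /tmp/openclaw-demo/openclaw.json
# ...but the per-agent registry still caches the old one.
printf '{ "apiKey": "old-key" }\n' > /tmp/openclaw-demo/agents/main/agent/models.json

TOP_KEY=$(sed -n 's/.*"apiKey": *"\([^"]*\)".*/\1/p' /tmp/openclaw-demo/openclaw.json)
AGENT_KEY=$(sed -n 's/.*"apiKey": *"\([^"]*\)".*/\1/p' /tmp/openclaw-demo/agents/main/agent/models.json)

# A mismatch here is exactly the state that produces 401s.
if [ "$TOP_KEY" != "$AGENT_KEY" ]; then
  echo "stale per-agent registry: $AGENT_KEY != $TOP_KEY"
fi
```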
The supported fix is simply:
```shell
isartor connect openclaw --gateway-api-key <your-key>
openclaw models status --agent main --probe
openclaw agent --agent main -m "Hello from OpenClaw through Isartor"
```
The connector now refreshes `openclaw.json`, updates the main/default agent model override, and removes stale per-agent `models.json` files so OpenClaw regenerates them with the new auth.
Why this is the best fit
The upstream LiteLLM/OpenClaw docs assume the gateway can expose a multi-model catalog and route among many providers behind one endpoint.
Isartor is different today:
- OpenClaw talks to Isartor over the OpenAI-compatible `/v1/chat/completions` surface
- Isartor forwards using its configured upstream provider/model
- OpenClaw model refs should therefore mirror the model currently configured in Isartor
That means:
- if you change Isartor's provider/model later, rerun `isartor connect openclaw`
- if you change Isartor's gateway API key later, rerun `isartor connect openclaw --gateway-api-key ...`
- do not expect `isartor/openai/...` and `isartor/anthropic/...` fallbacks to behave like LiteLLM provider switching unless Isartor itself grows multi-provider routing later
Options
| Flag | Default | Description |
|---|---|---|
| `--model` | Isartor's configured upstream model | Override the single model ID exposed to OpenClaw |
| `--config-path` | auto-detected | Path to `openclaw.json` |
| `--gateway-api-key` | (none) | Gateway key if auth is enabled |
Files written
- `~/.openclaw/openclaw.json`: managed OpenClaw provider config
- `~/.openclaw/agents/<agentId>/agent/models.json`: regenerated by OpenClaw after Isartor clears stale custom-provider caches
- `openclaw.json.isartor-backup`: backup, written when a prior config existed
Disconnecting
```shell
isartor connect openclaw --disconnect
```
If a backup exists, Isartor restores it. Otherwise it removes only the managed `models.providers.isartor` entry and related `isartor/...` default-model references.
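After disconnecting, no `isartor` provider entry or `isartor/...` model ref should remain in the config. A sketch of that check against a mock restored config (hypothetical path and placeholder model name; the real file is `~/.openclaw/openclaw.json`):

```shell
# Mock of a config after a clean disconnect: provider removed,
# default model restored to some non-Isartor placeholder ref.
cat > /tmp/openclaw-restored.json <<'EOF'
{
  "models": { "providers": {} },
  "agents": { "defaults": { "model": { "primary": "example/upstream-model" } } }
}
EOF

# Look for any leftover managed entries or default-model refs.
RESULT=$(grep -qE '"isartor"|isartor/' /tmp/openclaw-restored.json && echo present || echo clean)
echo "$RESULT"
```

Seeing `present` here after a disconnect would suggest a hand-edited config the connector could not fully unwind.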
Recommended user workflow
For day-to-day use:
- Pick your upstream provider with `isartor set-key`
- Validate with `isartor check`
- Keep Isartor running with `isartor up --detach`
- Let OpenClaw use `isartor/<configured-model>` as its primary model
- Use `openclaw models status --agent main --probe` whenever you want to confirm what OpenClaw currently sees
If you later switch Isartor from, for example, Groq to OpenAI or Azure:
```shell
isartor set-key -p openai
isartor check
isartor connect openclaw
```
That refreshes OpenClaw's provider model to match the new Isartor config.
What Isartor does for OpenClaw
| Benefit | How |
|---|---|
| Cache repeated agent prompts | OpenClaw often repeats the same context and system framing. L1a exact cache resolves those instantly. |
| Catch paraphrases | L1b semantic cache resolves similar follow-ups locally when safe. |
| Compress repeated instructions | L2.5 trims repeated context before cloud fallback. |
| Keep one stable gateway URL | OpenClaw only needs `isartor/<model>` while Isartor owns the upstream provider configuration. |
| Observability | `isartor stats --by-tool` lets you track OpenClaw cache hits, latency, and savings. |
Troubleshooting
| Symptom | Cause | Fix |
|---|---|---|
| OpenClaw cannot reach the provider | Isartor not running | Run `isartor up --detach` first |
| OpenClaw onboarding/custom provider returns 404 | Base URL points at `http://localhost:8080` instead of `http://localhost:8080/v1` | Use `isartor connect openclaw` or update the custom provider base URL to end with `/v1` |
| OpenClaw still shows the old model | Isartor model changed after initial connect | Re-run `isartor connect openclaw` |
| Auth errors (401) after reconnecting | OpenClaw is still using stale per-agent provider state | Re-run `isartor connect openclaw --gateway-api-key <key>` so Isartor refreshes `openclaw.json` and clears stale per-agent `models.json` registries |
| "Model is not allowed" | OpenClaw allowlist still excludes the managed model | Re-run `isartor connect openclaw` so the managed model is re-added to the allowlist |