OpenClaw

OpenClaw is a self-hosted AI assistant that connects chat apps and agent workflows to LLM providers. The pragmatic Isartor setup is to register Isartor as a custom OpenAI-compatible provider in OpenClaw and let OpenClaw use that provider as its primary model path.

This is similar in spirit to the LiteLLM integration docs, but with one important difference:

  • LiteLLM is a multi-model gateway and catalog
  • Isartor is a prompt firewall / gateway that currently exposes the upstream model you configured in Isartor itself

So the best OpenClaw UX is: configure the model in Isartor first, then let isartor connect openclaw mirror that model into OpenClaw's provider config.

Pragmatic setup

# 1. Configure Isartor's upstream provider/model
isartor set-key -p groq
isartor check

# 2. Start Isartor
isartor up --detach

# 3. Make sure OpenClaw is onboarded
openclaw onboard --install-daemon

# 4. Register Isartor as an OpenClaw provider
isartor connect openclaw

# 5. Verify OpenClaw sees the provider/model and auth
openclaw models status --agent main --probe

# 6. Smoke test a prompt
openclaw agent --agent main -m "Hello from OpenClaw through Isartor"

What isartor connect openclaw does

It writes or updates your OpenClaw config (default: ~/.openclaw/openclaw.json) with:

  1. models.providers.isartor
  2. a single managed model entry matching Isartor's current upstream model
  3. agents.defaults.model.primary = "isartor/<your-model>"
  4. the main / default agent model override when one is present
  5. a refresh of stale per-agent models.json registries so OpenClaw regenerates them with the latest baseUrl and apiKey
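The merge the connector performs can be sketched roughly as follows. This is a minimal illustration under the assumptions stated in the list above, not the actual implementation; the function name, defaults, and helper structure are all hypothetical.

```python
import json
from pathlib import Path


def connect_openclaw(config_path: Path, model_id: str,
                     base_url: str = "http://localhost:8080/v1",
                     api_key: str = "isartor-local") -> dict:
    """Sketch: merge an Isartor provider block into an OpenClaw config."""
    config = json.loads(config_path.read_text()) if config_path.exists() else {}

    # 1-2. Register the provider with a single managed model entry
    #      mirroring Isartor's current upstream model.
    providers = config.setdefault("models", {}).setdefault("providers", {})
    providers["isartor"] = {
        "baseUrl": base_url,
        "apiKey": api_key,
        "api": "openai-completions",
        "models": [{"id": model_id, "name": f"Isartor ({model_id})"}],
    }

    # 3. Point the default agent model at the managed provider.
    defaults = config.setdefault("agents", {}).setdefault("defaults", {})
    defaults.setdefault("model", {})["primary"] = f"isartor/{model_id}"

    config_path.write_text(json.dumps(config, indent=2))
    return config
```

The key property is that the connector is idempotent: rerunning it after Isartor's upstream model changes simply overwrites the managed provider entry and default-model reference.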

Example generated provider block:

models: {
  providers: {
    isartor: {
      baseUrl: "http://localhost:8080/v1",
      apiKey: "isartor-local",
      api: "openai-completions",
      models: [
        {
          id: "openai/gpt-oss-120b",
          name: "Isartor (openai/gpt-oss-120b)"
        }
      ]
    }
  }
}

And the default model becomes:

agents: {
  defaults: {
    model: {
      primary: "isartor/openai/gpt-oss-120b"
    }
  }
}

Base URL and auth path

OpenClaw must talk to Isartor's OpenAI-compatible /v1 surface.

  • Correct base URL: http://localhost:8080/v1
  • Wrong base URL: http://localhost:8080

Why this matters:

  • OpenClaw appends /chat/completions for OpenAI-compatible custom providers
  • Isartor exposes that route as /v1/chat/completions
  • using the root gateway URL can produce 404 errors such as gateway unknown L0 via chat/completions

isartor connect openclaw writes the /v1 path for you, so prefer the connector over hand-editing the provider block.
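The failure mode is easy to see with plain string concatenation. This is an illustration of the path-joining behavior described above, not OpenClaw's actual code:

```python
def chat_completions_url(base_url: str) -> str:
    """OpenAI-compatible clients append /chat/completions to the base URL."""
    return base_url.rstrip("/") + "/chat/completions"


# Correct: the /v1 surface Isartor actually serves.
chat_completions_url("http://localhost:8080/v1")
# -> "http://localhost:8080/v1/chat/completions"

# Wrong: the root gateway URL misses the /v1 prefix, so Isartor returns 404.
chat_completions_url("http://localhost:8080")
# -> "http://localhost:8080/chat/completions"
```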

Reconnecting after changing the gateway API key

OpenClaw stores custom-provider state in two places:

  1. ~/.openclaw/openclaw.json
  2. per-agent models.json registries under ~/.openclaw/agents/<agentId>/agent/

Those per-agent registries can keep an old apiKey or baseUrl even after openclaw.json changes. That is why you can still see 401 errors after fixing the key in the top-level config.

The supported fix is simply:

isartor connect openclaw --gateway-api-key <your-key>
openclaw models status --agent main --probe
openclaw agent --agent main -m "Hello from OpenClaw through Isartor"

The connector now refreshes openclaw.json, updates the main / default agent model override, and removes stale per-agent models.json files so OpenClaw regenerates them with the new auth.
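Clearing the stale registries amounts to deleting each per-agent models.json so OpenClaw regenerates it on next use with the current baseUrl and apiKey. A rough sketch, using the directory layout described above (the function name is illustrative):

```python
from pathlib import Path


def clear_stale_registries(openclaw_home: Path) -> list[Path]:
    """Sketch: delete per-agent models.json registries so OpenClaw
    regenerates them from the current openclaw.json."""
    removed = []
    # Matches ~/.openclaw/agents/<agentId>/agent/models.json
    for registry in openclaw_home.glob("agents/*/agent/models.json"):
        registry.unlink()
        removed.append(registry)
    return removed
```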

Why this is the best fit

The upstream LiteLLM/OpenClaw docs assume the gateway can expose a multi-model catalog and route among many providers behind one endpoint.

Isartor is different today:

  • OpenClaw talks to Isartor over the OpenAI-compatible /v1/chat/completions surface
  • Isartor forwards using its configured upstream provider/model
  • OpenClaw model refs should therefore mirror the model currently configured in Isartor

That means:

  • if you change Isartor's provider/model later, rerun isartor connect openclaw
  • if you change Isartor's gateway API key later, rerun isartor connect openclaw --gateway-api-key ...
  • do not expect isartor/openai/... and isartor/anthropic/... fallbacks to behave like LiteLLM provider switching unless Isartor itself grows multi-provider routing later

Options

  Flag                 Default                               Description
  --model              Isartor's configured upstream model   Override the single model ID exposed to OpenClaw
  --config-path        auto-detected                         Path to openclaw.json
  --gateway-api-key    (none)                                Gateway key if auth is enabled

Files written

  • ~/.openclaw/openclaw.json — managed OpenClaw provider config
  • ~/.openclaw/agents/<agentId>/agent/models.json — regenerated by OpenClaw after Isartor clears stale custom-provider caches
  • openclaw.json.isartor-backup — backup, when a prior config existed

Disconnecting

isartor connect openclaw --disconnect

If a backup exists, Isartor restores it. Otherwise it removes only the managed models.providers.isartor entry and related isartor/... default-model references.
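That restore-or-strip behavior can be sketched like this; it is an assumed reading of the description above, with an illustrative function name:

```python
import json
from pathlib import Path


def disconnect_openclaw(config_path: Path) -> None:
    """Sketch: restore the pre-connect backup if one exists; otherwise
    strip only the managed isartor entries."""
    backup = config_path.with_name(config_path.name + ".isartor-backup")
    if backup.exists():
        config_path.write_text(backup.read_text())
        return

    config = json.loads(config_path.read_text())
    # Remove the managed provider entry.
    config.get("models", {}).get("providers", {}).pop("isartor", None)
    # Remove isartor/... default-model references, leaving others intact.
    model = config.get("agents", {}).get("defaults", {}).get("model", {})
    if str(model.get("primary", "")).startswith("isartor/"):
        model.pop("primary", None)
    config_path.write_text(json.dumps(config, indent=2))
```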

For day-to-day use:

  1. Pick your upstream provider with isartor set-key
  2. Validate with isartor check
  3. Keep Isartor running with isartor up --detach
  4. Let OpenClaw use isartor/<configured-model> as its primary model
  5. Use openclaw models status --agent main --probe whenever you want to confirm what OpenClaw currently sees

If you later switch Isartor from, for example, Groq to OpenAI or Azure:

isartor set-key -p openai
isartor check
isartor connect openclaw

That refreshes OpenClaw's provider model to match the new Isartor config.

What Isartor does for OpenClaw

  Benefit                          How
  Cache repeated agent prompts     OpenClaw often repeats the same context and system framing; the L1a exact cache resolves those instantly.
  Catch paraphrases                The L1b semantic cache resolves similar follow-ups locally when safe.
  Compress repeated instructions   L2.5 trims repeated context before cloud fallback.
  Keep one stable gateway URL      OpenClaw only needs isartor/<model> while Isartor owns the upstream provider configuration.
  Observability                    isartor stats --by-tool tracks OpenClaw cache hits, latency, and savings.

Troubleshooting

  Symptom: OpenClaw cannot reach the provider
  Cause:   Isartor is not running
  Fix:     Run isartor up --detach first

  Symptom: OpenClaw onboarding or the custom provider returns 404
  Cause:   The base URL points at http://localhost:8080 instead of http://localhost:8080/v1
  Fix:     Use isartor connect openclaw, or update the custom provider base URL to end with /v1

  Symptom: OpenClaw still shows the old model
  Cause:   The Isartor model changed after the initial connect
  Fix:     Re-run isartor connect openclaw

  Symptom: Auth errors (401) after reconnecting
  Cause:   OpenClaw is still using stale per-agent provider state
  Fix:     Re-run isartor connect openclaw --gateway-api-key <key> so Isartor refreshes openclaw.json and clears stale per-agent models.json registries

  Symptom: "Model is not allowed"
  Cause:   The OpenClaw allowlist still excludes the managed model
  Fix:     Re-run isartor connect openclaw so the managed model is re-added to the allowlist