Tutorials · Apr 25, 2026 · 6 min read

Point your coding agent at qlaud — Claude Code, Codex, Aider, Cline, Cursor

Every coding-agent CLI works with qlaud through a base-URL swap. Same agent, your spend cap, full per-tool token visibility — at a fraction of the direct-provider cost. Verified configs for the five tools real developers use.

qlaud team · Engineering

Every coding agent worth using — Claude Code, Codex, Aider, Cline, Cursor — supports a base-URL override. Point them at qlaud and you get a spend cap, per-tool token usage, and the option to swap models without touching the agent. The configs below are verified against each tool's current docs (April 2026). Pick the agent you use, copy two lines.

Claude Code

Two environment variables. Claude Code reads ANTHROPIC_BASE_URL and ANTHROPIC_API_KEY on startup; with these set it hits qlaud instead of api.anthropic.com. We expose /v1/messages as native passthrough, so cache_control, image content blocks, thinking blocks, and the anthropic-beta: context-management-… header all flow through unchanged.

export ANTHROPIC_BASE_URL=https://api.qlaud.ai
export ANTHROPIC_API_KEY=qlk_live_<your_key>
claude

Want to try a non-Anthropic model from inside Claude Code without leaving the CLI? Pass it as the model: any provider-native id our catalog supports (gpt-5.4, kimi-k2.6, MiniMax-M2, deepseek-chat) routes correctly even from the Anthropic surface — we translate transparently.
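A sketch of that swap, assuming Claude Code picks the model up from the ANTHROPIC_MODEL environment variable (kimi-k2.6 stands in for whichever catalog id you want to try):

```shell
# Same qlaud base URL and key as above -- only the model changes.
export ANTHROPIC_BASE_URL=https://api.qlaud.ai
export ANTHROPIC_API_KEY='qlk_live_<your_key>'
export ANTHROPIC_MODEL=kimi-k2.6   # any provider-native id from the catalog
claude
```

Unset ANTHROPIC_MODEL (or restart the shell) to fall back to Claude Code's default model.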

OpenAI Codex CLI

Codex configures providers in ~/.codex/config.toml as named blocks. Add a qlaud provider that reads its API key from an env var, then start Codex with --model qlaud/gpt-5.4 (or any model id from the catalog).

# ~/.codex/config.toml
[model_providers.qlaud]
name = "qlaud"
base_url = "https://api.qlaud.ai/v1"
env_key = "QLAUD_API_KEY"
wire_api = "chat"

# then in your shell:
export QLAUD_API_KEY=qlk_live_<your_key>
codex --model qlaud/gpt-5.4

Aider

Aider routes through LiteLLM under the hood, which means you address OpenAI-compatible endpoints with the openai/ prefix on the model id. Set the OpenAI base URL + key env vars, then pass the prefixed model.

export OPENAI_API_BASE=https://api.qlaud.ai/v1
export OPENAI_API_KEY=qlk_live_<your_key>
aider --model openai/gpt-5.4

# any catalog model — provider-native id with the openai/ prefix
aider --model openai/claude-sonnet-4-6
aider --model openai/kimi-k2.6
aider --model openai/deepseek-chat

Cline (VS Code extension)

Cline configures providers through its in-extension settings panel only — there's no settings.json incantation. Open the Cline sidebar, click the gear icon, and fill in:

  • API Provider: OpenAI Compatible
  • Base URL: https://api.qlaud.ai/v1
  • API Key: qlk_live_… (your qlaud key)
  • Model ID: any provider-native id from the catalog — gpt-5.4, claude-sonnet-4-6, kimi-k2.6, etc.

Cline's OpenAI-compatible mode handles tool calls, streaming, and cost display correctly — we surface usage on every response, so Cline's built-in cost meter shows the right numbers.

Cursor

Cursor exposes a base-URL override under Settings → Models → Override OpenAI Base URL.

  • Override OpenAI Base URL: https://api.qlaud.ai/v1
  • OpenAI API Key: qlk_live_… (your qlaud key)
  • Add a custom model id matching any catalog entry — gpt-5.4, kimi-k2.6, MiniMax-M2.

Cursor's agent mode (Cmd-K, Composer) will use the overridden base URL for every model you mark as "custom". Built-in Cursor models (cursor-small, etc.) continue to route through Cursor's own infra — only your custom-added models hit qlaud.

What you get from the swap

  • One spend cap, all agents. The same qlk_live_… key works in Claude Code, Codex, Aider, Cline, and Cursor — and the max_spend_usd on that key applies across all of them combined.
  • Per-key, per-model usage. GET /v1/keys/:keyId/usage returns every request that key has made, broken down by model and tool. Cheaper than building per-tool observability, more granular than provider dashboards.
  • Model swaps with no SDK change. Try kimi-k2.6 in your Aider session today, switch to claude-sonnet-4-6 tomorrow — same key, same env vars, just change the model string.
  • Open-model pricing on agentic loops. kimi-k2.6, MiniMax-M2, deepseek-chat through qlaud cost a fraction of closed-frontier models for tasks they're competitive on. Run a throwaway refactor on the cheap one; switch up only when the task warrants it.
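The per-key breakdown lends itself to quick shell post-processing. A minimal sketch, using a canned CSV in place of a live GET /v1/keys/:keyId/usage response — the column layout (model, input tokens, output tokens) is an assumption for illustration, not qlaud's documented schema:

```shell
# Canned sample standing in for a flattened usage response.
cat > usage.csv <<'EOF'
gpt-5.4,1200,300
kimi-k2.6,5000,900
gpt-5.4,800,200
EOF

# Sum input and output tokens per model.
awk -F',' '{ in_t[$1] += $2; out_t[$1] += $3 }
  END { for (m in in_t) printf "%s,%d,%d\n", m, in_t[m], out_t[m] }' usage.csv | sort
```

This prints one totals row per model: gpt-5.4,2000,500 and kimi-k2.6,5000,900.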

Get started

Sign up for qlaud, mint a key with a sane spend cap, and paste two lines into your agent's config. The first $5 will keep your agent humming for days.

#Claude Code  #Codex  #Aider  #Cline  #Cursor  #CLI  #agentic coding

Frequently asked questions

Does qlaud work with Claude Code?

Yes — point ANTHROPIC_BASE_URL at https://api.qlaud.ai and ANTHROPIC_API_KEY at your qlaud key. We expose the Anthropic Messages API as native passthrough, so cache_control, image blocks, thinking blocks, and tool_use are preserved exactly. Claude Code can't tell the difference from api.anthropic.com.

Will I lose context-management or tool features?

No. We forward every Anthropic header (anthropic-beta, anthropic-version) and preserve the full request body. Claude Code's context-management and tool-call features depend on those headers, and they pass through verbatim.
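As a request template (not runnable as-is — it assumes the ANTHROPIC_API_KEY from the setup above and a live key), the passthrough is literally the same call you would make to api.anthropic.com, with only the host changed:

```shell
# Identical headers and body; qlaud forwards them verbatim.
curl https://api.qlaud.ai/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-sonnet-4-6",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "ping"}]
  }'
```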

Why route my coding agent through qlaud at all?

Three reasons. (1) Hard spend cap per key — your agent can't run away with your wallet. (2) Per-key usage breakdown — see exactly which model and which session burned tokens. (3) Multi-provider access from one key — switch from Claude to GPT-5.4 to Kimi K2.6 to DeepSeek V3 by changing only the model string, no SDK or env changes.

Can I use qlaud for my team and bill each developer separately?

Yes — mint one qlaud key per developer with their own cap, and pull /v1/usage at month-end for a per-developer breakdown. The same pattern works for billing your end-users if you're shipping a product that wraps a coding agent.
