
Using prxy.monster with Continue.dev

Continue.dev is an open-source coding assistant for VS Code and JetBrains. Provider configuration lives in ~/.continue/config.json (or config.yaml in newer versions). Each provider entry accepts an apiBase field; point it at prxy.monster and chat, autocomplete, and edit requests all route through the proxy.

Configure

Edit ~/.continue/config.json (or config.yaml):

{
  "models": [
    {
      "title": "Claude (via prxy.monster)",
      "provider": "anthropic",
      "model": "claude-sonnet-4-6",
      "apiKey": "prxy_live_xxxxxxxxxxxxxxxxxxxxxxxx",
      "apiBase": "https://api.prxy.monster"
    },
    {
      "title": "GPT-4o (via prxy.monster)",
      "provider": "openai",
      "model": "gpt-4o",
      "apiKey": "prxy_live_xxxxxxxxxxxxxxxxxxxxxxxx",
      "apiBase": "https://api.prxy.monster/v1"
    }
  ]
}

For YAML config (~/.continue/config.yaml):

models:
  - title: Claude (via prxy.monster)
    provider: anthropic
    model: claude-sonnet-4-6
    apiKey: prxy_live_xxxxxxxxxxxxxxxxxxxxxxxx
    apiBase: https://api.prxy.monster
  - title: GPT-4o (via prxy.monster)
    provider: openai
    model: gpt-4o
    apiKey: prxy_live_xxxxxxxxxxxxxxxxxxxxxxxx
    apiBase: https://api.prxy.monster/v1

Reload Continue (in VS Code: Command Palette → “Continue: Reload”). The new model entries appear in the model dropdown.

Continue’s config format has shifted between versions (JSON → YAML, schema additions). Verify the exact field names against the Continue docs for your installed version; apiBase has been stable across recent releases.

Code change

None — Continue is a VS Code / JetBrains extension; you don’t modify its source.

Verify

curl https://api.prxy.monster/health

Open Continue’s chat panel and send any message; a successful response confirms requests are routing through prxy.monster.
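To confirm routing end-to-end without opening the editor, you can send a one-off chat completion through the proxy from the shell. This is a sketch: the /v1/chat/completions path is an assumption based on the OpenAI-style apiBase used above, and the key is a placeholder — substitute your own.

```shell
# Sketch: verify proxy routing with a minimal request.
# Assumes an OpenAI-compatible /v1/chat/completions endpoint
# (matching the apiBase of the GPT-4o entry); replace the
# placeholder key with a real one.
curl -s https://api.prxy.monster/v1/chat/completions \
  -H "Authorization: Bearer prxy_live_xxxxxxxxxxxxxxxxxxxxxxxx" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "ping"}],
    "max_tokens": 5
  }'
```

A JSON response body (rather than a connection error) confirms the proxy is reachable and forwarding to the provider.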

What you get

  • Pattern memory — repeated coding problem types (“how do I write a Zod schema for…”, “convert this to async”) get learned and re-injected.
  • Semantic cache — similar code questions across projects return cached answers.
  • Cost guards — hard daily cap on your prxy + provider spend.
  • Infinite context — long Continue chat sessions no longer run into the provider’s context-window limit.

Autocomplete model

Continue uses a separate model entry for tab autocomplete. You can route that through prxy.monster too — but autocomplete is latency-sensitive (target: under 200ms). prxy.monster adds 30-60ms per call, so for autocomplete specifically, weigh the latency hit.

If you do route autocomplete through:

tabAutocompleteModel:
  title: Claude Haiku autocomplete
  provider: anthropic
  model: claude-haiku-4-5
  apiKey: prxy_live_xxx
  apiBase: https://api.prxy.monster

We recommend disabling mcp-optimizer and patterns for autocomplete (skip them in your PRXY_PIPE) to keep latency tight.

For Continue chat (default):

PRXY_PIPE=semantic-cache,patterns,ipc,cost-guard

For Continue autocomplete (latency-sensitive):

PRXY_PIPE=exact-cache,cost-guard

You can run two prxy.monster API keys — one for chat with the full pipeline, one for autocomplete with a lean pipeline.
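One way to wire the two keys into Continue, assuming each key was created with its own pipeline (the key names here are illustrative placeholders, not real key formats):

```yaml
models:
  - title: Claude chat (full pipeline key)
    provider: anthropic
    model: claude-sonnet-4-6
    apiKey: prxy_live_chat_key_xxx        # key configured with semantic-cache,patterns,ipc,cost-guard
    apiBase: https://api.prxy.monster

tabAutocompleteModel:
  title: Claude Haiku autocomplete (lean pipeline key)
  provider: anthropic
  model: claude-haiku-4-5
  apiKey: prxy_live_autocomplete_key_xxx  # key configured with exact-cache,cost-guard
  apiBase: https://api.prxy.monster
```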

Common issues

  • “Continue can’t find my model” — check the JSON / YAML is valid. Continue silently swallows malformed config sections.
  • Custom prompts — Continue’s slash commands (/edit, /comment, etc.) all route through the configured model. They get the same caching / pattern benefits.
  • Local LLMs through Continue — if you use Continue with Ollama locally, you can also point Ollama at the prxy.monster local edition for caching. See Local quickstart.
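Since Continue silently drops malformed config sections, it’s worth validating the JSON before reloading. A minimal sketch using python3 -m json.tool, shown against a temp file so it’s self-contained — in practice, point CONFIG at ~/.continue/config.json:

```shell
# Sketch: catch JSON syntax errors before Continue silently drops
# the section. In practice, set CONFIG to ~/.continue/config.json.
CONFIG="$(mktemp)"
cat > "$CONFIG" <<'EOF'
{
  "models": [
    {
      "title": "Claude (via prxy.monster)",
      "provider": "anthropic",
      "model": "claude-sonnet-4-6",
      "apiKey": "prxy_live_xxxxxxxxxxxxxxxxxxxxxxxx",
      "apiBase": "https://api.prxy.monster"
    }
  ]
}
EOF

# json.tool exits non-zero on invalid JSON.
if python3 -m json.tool "$CONFIG" > /dev/null 2>&1; then
  RESULT="valid"
else
  RESULT="invalid"
fi
echo "config.json: $RESULT"
rm -f "$CONFIG"
```

Run this after every config edit; an "invalid" result means Continue would have ignored the broken section without warning.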

Full example

Drop-in ~/.continue/config.json snippet: see the JSON above. No external example repo needed — the config is the integration.
