
Using prxy.monster with Google GenAI

Google ships two SDK families: the older @google/generative-ai and the newer unified @google/genai. Both can target Google’s own OpenAI-compatible endpoint, which means they can also target prxy.monster’s OpenAI-compatible endpoint by changing one URL.

For native Gemini features that don’t fit the OpenAI shape (Live API, multimodal vision streaming, file uploads), use Google’s SDK directly against Google. For text + tool-use chat, route through prxy.monster.

Google’s GenAI SDKs ship breaking changes frequently. Verify the exact constructor option name in the @google/genai docs for your installed version. The pattern below works for the current OpenAI-compat path; depending on your version, the option may be named httpOptions.baseUrl, apiEndpoint, or similar.

Install

npm install @google/genai
# or: pip install google-genai

Configure

prxy.monster proxies Gemini via the OpenAI-compatible shape — your client sends OpenAI-format requests with model: "gemini-2.0-flash" (or another Gemini model) and we translate.

The simplest path is to use the OpenAI SDK pointed at prxy.monster — see OpenAI SDK guide — and set model: "gemini-2.0-flash":

export OPENAI_BASE_URL=https://api.prxy.monster/v1
export OPENAI_API_KEY=prxy_live_xxxxxxxxxxxxxxxxxxxxxxxx
import OpenAI from 'openai';

const client = new OpenAI();

const r = await client.chat.completions.create({
  model: 'gemini-2.0-flash',
  messages: [{ role: 'user', content: 'hi' }],
});

prxy.monster routes the request to Google’s Gemini API on the back end.
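Under the hood this is an ordinary OpenAI-format HTTP call. As a minimal, dependency-free sketch (assuming Node 18+ for built-in fetch; the /v1/chat/completions path follows from the OpenAI-compatible base URL above):

```typescript
// The same chat request without the SDK, via Node 18+'s built-in fetch.
// Assumes OPENAI_API_KEY holds your prxy_live_* key.
const body = {
  model: 'gemini-2.0-flash',
  messages: [{ role: 'user', content: 'hi' }],
};

async function chat(): Promise<string> {
  const res = await fetch('https://api.prxy.monster/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(body),
  });
  // OpenAI-compatible responses put the text at choices[0].message.content.
  const json = await res.json();
  return json.choices[0].message.content;
}
```

Anything that speaks the OpenAI wire format, not just the official SDKs, can be pointed at prxy.monster this way.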

Using @google/genai directly

If you want to keep using the Google SDK, point its OpenAI-compatible mode at prxy.monster:

import { GoogleGenAI } from '@google/genai';

// Verify the exact option name in your @google/genai version.
// The current pattern uses httpOptions.baseUrl for OpenAI-compat routing.
const ai = new GoogleGenAI({
  apiKey: process.env.OPENAI_API_KEY, // your prxy key
  httpOptions: {
    baseUrl: 'https://api.prxy.monster/v1',
  },
});

const r = await ai.models.generateContent({
  model: 'gemini-2.0-flash',
  contents: 'hi',
});

Verify

curl https://api.prxy.monster/health
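If you want the same check from a deploy script or CI step, a small sketch (one assumption here: any 2xx status is treated as healthy, since the response body format isn't documented on this page):

```typescript
// Assumption: a 2xx status from /health means the proxy is up.
function isHealthy(status: number): boolean {
  return status >= 200 && status < 300;
}

async function checkHealth(): Promise<boolean> {
  try {
    const res = await fetch('https://api.prxy.monster/health');
    return isHealthy(res.status);
  } catch {
    // Network errors (DNS, timeout, refused) also count as unhealthy.
    return false;
  }
}
```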

What you get

  • The same semantic cache, pattern memory, and cost guards that apply to OpenAI and Anthropic calls also apply to Gemini calls.
  • Cross-model fallback (when the router ships in v1.1) — fall back to Claude or GPT if Gemini is rate-limited.
Gemini calls run through the same pipeline stages:

PRXY_PIPE=semantic-cache,patterns,ipc,cost-guard

(mcp-optimizer is a no-op for Gemini today — it triggers on Anthropic/OpenAI tool definitions.)

Common issues

  • Native Gemini features (Live API, file API, native multimodal streaming) — these don’t fit the OpenAI shape. Use Google’s SDK directly against Google for those endpoints.
  • Embedding models — text-embedding-004 and other Gemini embedding models route through prxy.monster too.
  • System instructions — pass them as role: "system" in the messages array. prxy.monster’s translator hoists them correctly to Gemini’s systemInstruction field.
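To make the last point concrete, here is a sketch of building a messages array with a system instruction in the OpenAI shape (withSystem is a hypothetical helper, not part of any SDK); per the note above, prxy.monster's translator moves the system entry into Gemini's systemInstruction field:

```typescript
type Msg = { role: 'system' | 'user' | 'assistant'; content: string };

// Hypothetical helper: the system instruction is just the first message
// with role "system". No Gemini-specific field is needed on the client.
function withSystem(instruction: string, userText: string): Msg[] {
  return [
    { role: 'system', content: instruction },
    { role: 'user', content: userText },
  ];
}

const messages = withSystem('Answer in French.', 'Name the capital of Japan.');
```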

Full example

Plain Node script (uses the OpenAI SDK pointed at prxy.monster, calling Gemini): adapted from examples/openai-quickstart — change model: 'gpt-4o-mini' to model: 'gemini-2.0-flash'.
