
Using prxy.monster with Vercel AI Elements

Vercel AI Elements is a set of React UI primitives (Conversation, Message, PromptInput, etc.) built on top of the Vercel AI SDK. Because the UI calls the SDK, and the SDK calls prxy.monster, the integration is handled entirely at the SDK layer — no UI changes are needed.

Install

npm install ai @ai-sdk/anthropic @ai-sdk/react @vercel/ai-elements

Configure

Set the standard env vars on the server side (your Next.js route handler / API route):

export ANTHROPIC_BASE_URL=https://api.prxy.monster
export ANTHROPIC_API_KEY=prxy_live_xxxxxxxxxxxxxxxxxxxxxxxx
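If either variable is unset, requests may silently go to the provider's default endpoint instead of prxy.monster. A fail-fast check at server startup can catch this early — a minimal sketch (the helper name is illustrative, not part of prxy.monster or the AI SDK):

```typescript
// Illustrative sketch: fail fast if the prxy.monster env vars are missing.
// The variable names are the standard Anthropic ones documented above.
export function assertPrxyEnv(env: Record<string, string | undefined>) {
  const baseUrl = env.ANTHROPIC_BASE_URL;
  const apiKey = env.ANTHROPIC_API_KEY;
  if (!baseUrl || !apiKey) {
    throw new Error(
      'Set ANTHROPIC_BASE_URL and ANTHROPIC_API_KEY before starting the server'
    );
  }
  return { baseUrl, apiKey };
}

// Usage in a route handler or instrumentation file:
// assertPrxyEnv(process.env);
```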

Code change

None. Your /api/chat route uses streamText (or similar) from the AI SDK. The SDK reads the env vars and routes through prxy.monster. The UI components on the client render whatever streams back — they don’t know or care where it came from.

// app/api/chat/route.ts
import { anthropic } from '@ai-sdk/anthropic';
import { streamText, convertToModelMessages } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({
    model: anthropic('claude-sonnet-4-6'), // → prxy.monster
    messages: convertToModelMessages(messages),
  });
  return result.toUIMessageStreamResponse();
}
// app/page.tsx — UI is unchanged from the AI Elements docs
'use client';
import { useChat } from '@ai-sdk/react';
import { Conversation, Message, PromptInput } from '@vercel/ai-elements';

export default function Chat() {
  const { messages, sendMessage } = useChat();
  return (
    <Conversation>
      {messages.map((m) => (
        <Message key={m.id} from={m.role}>
          {m.parts.map((p, i) =>
            p.type === 'text' ? <span key={i}>{p.text}</span> : null
          )}
        </Message>
      ))}
      <PromptInput onSubmit={(text) => sendMessage({ text })} />
    </Conversation>
  );
}

Verify

If your chat page works, routing is confirmed. To prove patterns/cache are firing:

curl https://api.prxy.monster/v1/pipeline \
  -H "Authorization: Bearer prxy_live_xxxxxxxxxxxxxxxxxxxxxxxx"

Returns the active module pipeline.
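The same check can be done from server-side code. A minimal sketch that builds the request the curl above sends — the endpoint and Bearer auth scheme come from this page, but the response shape is not specified here, so it is left untyped:

```typescript
// Illustrative sketch: build the pipeline-check request from the docs above.
// The helper name is hypothetical; only the URL and auth header are from the docs.
export function pipelineRequest(apiKey: string): {
  url: string;
  headers: Record<string, string>;
} {
  return {
    url: 'https://api.prxy.monster/v1/pipeline',
    headers: { Authorization: `Bearer ${apiKey}` },
  };
}

// Server-side usage (response shape intentionally left as unknown):
// const { url, headers } = pipelineRequest(process.env.ANTHROPIC_API_KEY!);
// const pipeline: unknown = await fetch(url, { headers }).then((r) => r.json());
```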

What you get

  • Streaming UX preserved — the AI Elements components handle text-delta and tool-call chunks; prxy.monster passes them through unmodified.
  • Cache hits look identical — replayed as synthetic SSE so the UI shows them streaming in just like a fresh response (just much faster).
  • Pattern memory — successful conversations contribute to your prxy.monster pattern store automatically.

For chat UIs:

PRXY_PIPE=mcp-optimizer,semantic-cache,patterns,ipc

ipc is especially valuable for AI Elements chats — long conversations stop hitting the context wall, and the UI keeps rendering indefinitely.
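PRXY_PIPE is a comma-separated, ordered module list (per the example above). A small sketch that splits it for inspection — the parsing (split on commas, trim whitespace) is an assumption beyond what this page states:

```typescript
// Illustrative sketch: split a PRXY_PIPE value into an ordered module list.
// Assumes comma separation as shown in the docs; whitespace trimming is a guess.
export function parsePipe(pipe: string): string[] {
  return pipe
    .split(',')
    .map((m) => m.trim())
    .filter(Boolean);
}

const modules = parsePipe('mcp-optimizer,semantic-cache,patterns,ipc');
// modules → ['mcp-optimizer', 'semantic-cache', 'patterns', 'ipc']
```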

Common issues

  • Markdown / code-block rendering — the Response component from AI Elements parses markdown. Cache hits return the same markdown. Nothing changes.
  • Tool / function-call rendering — ToolMessage / Reasoning components get the same chunks they would from a direct provider call.
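The part-type filtering that page.tsx above does in JSX can be expressed without JSX, which makes the "cached and fresh responses render identically" point concrete — the same function sees the same parts either way. A sketch (the Part type is a simplified stand-in, not the AI SDK's full UIMessage part union):

```typescript
// Illustrative sketch: collect the text parts of a message, skipping other
// part types untouched — mirroring how the page.tsx example filters on
// p.type === 'text'. The Part type here is deliberately simplified.
type Part = { type: string; text?: string };

export function textOf(parts: Part[]): string {
  return parts
    .filter(
      (p): p is Part & { text: string } =>
        p.type === 'text' && typeof p.text === 'string'
    )
    .map((p) => p.text)
    .join('');
}
```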

Full example

examples/nextjs-vercel-ai uses the AI SDK directly. Wrapping it with AI Elements components is purely additive — same backend wiring.

Verify with the AI Elements docs for your installed version. Component names occasionally evolve, but the SDK-layer integration pattern is stable.
