
Storage adapters

Modules access persistent state through a single StorageAdapter interface. The interface is implemented twice — once for cloud, once for local. Modules don’t know which one they’re using.

The interface

export interface StorageAdapter {
  kind: 'cloud' | 'local';
  kv: KvStore;
  db: Database;
  blob: BlobStore;
  health(): Promise<HealthStatus>;
}

Three sub-interfaces:

kv — key-value with TTL

interface KvStore {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds?: number): Promise<void>;
  setNx(key: string, value: string, ttlSeconds: number): Promise<boolean>;
  del(key: string): Promise<void>;
  ttl(key: string): Promise<number>;
}

Used for: rate limits, hot caches, distributed locks (setNx), cost counters.

Backend   Implementation
Cloud     Upstash Redis (REST)
Local     In-memory Map<string, { value, expiresAt }> with a 30s sweeper
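The distributed-lock use case above can be sketched end to end. `MemoryKv` mirrors the local backend's `Map`-based store (minus the sweeper), and `withLock` is a hypothetical helper, not part of the documented interface:

```typescript
// Minimal in-memory setNx, mirroring the local KvStore backend (sketch only).
class MemoryKv {
  private store = new Map<string, { value: string; expiresAt: number }>();
  async setNx(key: string, value: string, ttlSeconds: number): Promise<boolean> {
    const e = this.store.get(key);
    if (e && e.expiresAt > Date.now()) return false; // key exists and is live
    this.store.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
    return true;
  }
  async del(key: string): Promise<void> {
    this.store.delete(key);
  }
}

// Take the lock, run the critical section, release on exit.
// Returns null when another holder already has the lock.
async function withLock<T>(
  kv: MemoryKv,
  key: string,
  ttlSeconds: number,
  fn: () => Promise<T>,
): Promise<T | null> {
  if (!(await kv.setNx(key, 'locked', ttlSeconds))) return null;
  try {
    return await fn();
  } finally {
    await kv.del(key);
  }
}
```

The TTL matters: if the holder crashes, the lock expires on its own instead of deadlocking other workers.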
db — relational + vector search

interface Database {
  from(table: string): QueryBuilder;
  raw(sql: string, params?: unknown[]): Promise<unknown[]>;
}

The QueryBuilder mimics Supabase’s chainable API — .select().eq().order().limit() etc. — and adds one extension: vectorSearch(col, embedding, opts) for nearest-neighbor lookups.

Backend   Implementation
Cloud     Postgres (Neon) + pgvector (vector(1536), ivfflat index)
Local     better-sqlite3 + sqlite-vec when available, JSON-cosine fallback otherwise
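The chainable surface can be illustrated with a toy builder that only records the chain; the real adapter translates it to SQL on whichever backend is active. Everything below is a sketch, not the actual implementation:

```typescript
// Toy QueryBuilder that records chained calls, showing the Supabase-style
// surface plus the vectorSearch extension described above.
type Step = { op: string; args: unknown[] };

class QueryBuilder {
  readonly steps: Step[] = [];
  private push(op: string, ...args: unknown[]): this {
    this.steps.push({ op, args });
    return this; // returning `this` is what makes the API chainable
  }
  select(cols = '*') { return this.push('select', cols); }
  eq(col: string, val: unknown) { return this.push('eq', col, val); }
  order(col: string, opts?: { ascending?: boolean }) { return this.push('order', col, opts); }
  limit(n: number) { return this.push('limit', n); }
  vectorSearch(col: string, embedding: number[], opts?: { limit?: number }) {
    return this.push('vectorSearch', col, embedding.length, opts);
  }
}

// A module fetching the nearest cached entries for a tenant might chain:
const q = new QueryBuilder()
  .select('id, body')
  .eq('tenant_id', 't_123')
  .vectorSearch('embedding', new Array(1536).fill(0), { limit: 5 });
```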

blob — large content

interface BlobStore {
  put(key: string, content: Buffer | string): Promise<void>;
  get(key: string): Promise<Buffer | null>;
  delete(key: string): Promise<void>;
  list(prefix: string): Promise<string[]>;
}

Used for: compressed conversation archives, evicted message bodies, attachments.

Backend          Implementation
Cloud (default)  Cloudflare R2 (zero egress fees)
Cloud (alt)      AWS S3 — opt in via BLOB_BACKEND=s3
Local            fs/promises writing to ~/.prxy/blob/{key}
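The local backend is small enough to sketch in full. The docs map keys to files under `~/.prxy/blob/{key}`; here the root directory is a constructor parameter so the example stays self-contained (class name and details are illustrative):

```typescript
import { promises as fs } from 'node:fs';
import * as path from 'node:path';

// Sketch of the local filesystem BlobStore backend: each key becomes a file
// under `root`, with parent directories created on demand.
class FsBlobStore {
  constructor(private root: string) {}
  private fileFor(key: string): string {
    return path.join(this.root, key);
  }
  async put(key: string, content: Buffer | string): Promise<void> {
    const file = this.fileFor(key);
    await fs.mkdir(path.dirname(file), { recursive: true });
    await fs.writeFile(file, content);
  }
  async get(key: string): Promise<Buffer | null> {
    try {
      return await fs.readFile(this.fileFor(key));
    } catch (e: any) {
      if (e.code === 'ENOENT') return null; // missing blob reads as null
      throw e;
    }
  }
  async delete(key: string): Promise<void> {
    await fs.rm(this.fileFor(key), { force: true }); // force: no error if absent
  }
  async list(prefix: string): Promise<string[]> {
    const names = (await fs.readdir(this.root, { recursive: true })) as string[];
    return names.filter((n) => n.startsWith(prefix));
  }
}
```

Returning `null` for a missing key (rather than throwing) keeps the local backend consistent with the cloud implementations of the same contract.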

The BLOB_BACKEND env var picks between R2 (default) and AWS S3. R2 stays the SaaS default because it has zero egress fees — that’s the win the cloud product depends on. S3 is opt-in for AWS-heavy customers and for the AWS self-deploy template, where keeping all storage in one provider is the natural choice. Both implementations share the same BlobStore contract so modules don’t change.

Configuration is straightforward:

# R2 (default)
BLOB_BACKEND=r2
R2_ACCOUNT_ID=...
R2_ACCESS_KEY_ID=...
R2_SECRET_ACCESS_KEY=...
R2_BUCKET=prxy-evictions

# S3 (opt-in)
BLOB_BACKEND=s3
AWS_REGION=us-east-1
S3_BUCKET=prxy-evictions
# AWS credentials via SDK default chain — env vars, shared config, IAM role.

Vector search compatibility

The vector dimensionality is set to 1536, which works with:

  • OpenAI text-embedding-3-small (1536)
  • Voyage voyage-3-lite (1024 — pgvector pads)
  • Voyage voyage-3 (1024 — pgvector pads)

Local sqlite-vec supports the same dim. If sqlite-vec isn’t compiled (Alpine, some ARM hosts, Windows), the adapter falls back to a pure-JS cosine scan over a JSON-encoded embedding column. Quality is identical, latency is worse (linear scan vs index).
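The fallback path is straightforward to sketch: embeddings stored as JSON text, cosine similarity computed in JS, rows sorted by score. Function and field names here are illustrative:

```typescript
// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Linear scan over JSON-encoded embedding columns — O(rows), unlike the
// indexed sqlite-vec path, but the ranking it produces is the same.
function cosineScan(
  rows: { id: string; embedding: string }[],
  query: number[],
  limit: number,
): { id: string; score: number }[] {
  return rows
    .map((r) => ({ id: r.id, score: cosine(JSON.parse(r.embedding), query) }))
    .sort((x, y) => y.score - x.score) // highest similarity first
    .slice(0, limit);
}
```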

Falling back gracefully

When the storage backend is unavailable:

  • kv.get() returns null instead of throwing.
  • kv.set() swallows the error and logs.
  • db queries return { data: null, error } and the calling module treats it as a no-op.

This is by design — a Redis outage shouldn’t take the whole gateway offline. A semantic cache miss is acceptable. A 503 to the client is not.
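The fail-open pattern for `kv.get()` can be sketched as a wrapper; `logError` stands in for the gateway's logger (a hypothetical name):

```typescript
// Stand-in for the gateway's logger.
const logError = (msg: string, err: unknown): void => console.error(msg, err);

// Wrap a KV read so a backend outage degrades to a cache miss (null)
// instead of propagating an error up to the request path.
async function safeGet(
  get: (key: string) => Promise<string | null>,
  key: string,
): Promise<string | null> {
  try {
    return await get(key);
  } catch (err) {
    logError(`kv.get failed for ${key}; treating as miss`, err);
    return null; // a miss, not a 503
  }
}
```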

Lifecycle

// At gateway boot:
const storage = await initStorage();
// → CloudAdapter if LOCAL_MODE !== 'true', else LocalAdapter
// Runs migrations, connects to backends, validates health.

// During a request:
const ctx = { storage, request, ... };
await module.pre(ctx);
// Module reads/writes through ctx.storage.

// At gateway shutdown:
await storage.shutdown();
// Closes connections, flushes the local sqlite write-ahead log, etc.

init() and shutdown() are methods on the concrete classes, not part of the StorageAdapter interface. Modules only ever see the interface type, so they can't accidentally call lifecycle methods.
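A minimal sketch of that encapsulation, with simplified types (the real initStorage also selects the cloud adapter and runs migrations, per the snippet above):

```typescript
// Modules are handed the interface type, which has no lifecycle methods.
interface StorageAdapter {
  kind: 'cloud' | 'local';
}

class LocalAdapter implements StorageAdapter {
  kind = 'local' as const;
  initialized = false;
  async init(): Promise<void> { this.initialized = true; }      // boot only
  async shutdown(): Promise<void> { this.initialized = false; } // shutdown only
}

// The boot path constructs the concrete class and runs init(), but returns
// the interface type — so `(await initStorage()).init()` is a compile error.
async function initStorage(): Promise<StorageAdapter> {
  const adapter = new LocalAdapter();
  await adapter.init();
  return adapter;
}
```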
