# LLM Bridge

The Universal Translation Layer for Large Language Model APIs
LLM Bridge is a TypeScript library that provides seamless translation between different LLM provider APIs (OpenAI Chat Completions, OpenAI Responses API, Anthropic Claude, Google Gemini) with zero data loss, so the original request can always be reconstructed exactly.
When building Infinite Chat API, we needed a proxy that supports multiple LLM providers. That turned out to be a hard problem, as we wrote in this blog post: The API layer for using intelligence is completely broken.
The particular challenges lie in:
- Manipulating and creating proxies for different LLM providers
- Multi-modality
- Tool call chains
- Error handling
LLM Bridge provides a universal format that acts as a common language between all major LLM providers, enabling:
- ✅ **Perfect Translation** - Convert between OpenAI, Anthropic, Google, and OpenAI Responses API formats
- ✅ **Zero Data Loss** - Every field is preserved with `_original` reconstruction
- ✅ **Streaming Support** - Parse and emit SSE streams across all providers
- ✅ **Extended Thinking** - Anthropic thinking blocks, Google thought parts, OpenAI reasoning
- ✅ **Structured Output** - JSON schema response formats across all providers
- ✅ **Multimodal Support** - Images, documents, and rich content across providers
- ✅ **Tool Calling** - Function calling translation between different formats
- ✅ **Error Handling** - Unified error types with provider-specific translation
- ✅ **Type Safety** - Full TypeScript support with strict typing
## Installation

```bash
npm install llm-bridge
```

## Quick Start

```typescript
import { toUniversal, fromUniversal, translateBetweenProviders } from 'llm-bridge'

// Convert an OpenAI request to the universal format
const openaiRequest = {
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful assistant" },
    { role: "user", content: "Hello!" }
  ],
  temperature: 0.7,
  max_tokens: 1000
}

const universal = toUniversal("openai", openaiRequest)
console.log(universal.provider) // "openai"
console.log(universal.model)    // "gpt-4o"
console.log(universal.system)   // "You are a helpful assistant"

// Convert the universal format back to any provider
const anthropicRequest = fromUniversal("anthropic", universal)
const googleRequest = fromUniversal("google", universal)

// Or translate directly between providers
const anthropicRequest2 = translateBetweenProviders("openai", "anthropic", openaiRequest)
```

### OpenAI Responses API

```typescript
// Convert OpenAI Responses API format
const responsesRequest = {
  model: "gpt-4o",
  input: [
    { role: "user", content: "What is the weather?" }
  ],
  tools: [
    {
      type: "function",
      name: "get_weather",
      parameters: { type: "object", properties: { location: { type: "string" } } }
    }
  ],
  max_output_tokens: 1000
}

const universal = toUniversal("openai-responses", responsesRequest)
const anthropicRequest = fromUniversal("anthropic", universal)
```

### Streaming

```typescript
import { parseOpenAIStream, emitAnthropicStream } from 'llm-bridge'

// Parse an OpenAI SSE stream into universal events
const universalEvents = parseOpenAIStream(openaiSSEStream)

// Re-emit as Anthropic SSE format
const anthropicStream = emitAnthropicStream(universalEvents)

// Or use the handler for full stream translation
import { handleUniversalStreamRequest } from 'llm-bridge'

const outputStream = handleUniversalStreamRequest(
  inputStream,
  "openai",              // source provider
  "anthropic",           // target provider
  async (event) => event // optional transform
)
```

### Extended Thinking

```typescript
// Anthropic extended thinking
const anthropicRequest = {
  model: "claude-sonnet-4-20250514",
  max_tokens: 16000,
  thinking: { type: "enabled", budget_tokens: 10000 },
  messages: [{ role: "user", content: "Solve this complex problem..." }]
}

const universal = toUniversal("anthropic", anthropicRequest)
console.log(universal.thinking) // { enabled: true, budget_tokens: 10000 }

// Convert to Google format with thinking
const googleRequest = fromUniversal("google", universal)
// Includes thinkingConfig: { thinkingBudget: 10000 }
```

### Structured Output

```typescript
// OpenAI structured output
const openaiRequest = {
  model: "gpt-4o",
  messages: [{ role: "user", content: "Extract the name and age" }],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "person",
      schema: { type: "object", properties: { name: { type: "string" }, age: { type: "number" } } }
    }
  }
}

const universal = toUniversal("openai", openaiRequest)
// universal.structured_output contains the normalized schema

// Convert to Google format
const googleRequest = fromUniversal("google", universal)
// Includes generationConfig: { responseMimeType: "application/json", responseSchema: {...} }
```

### Round-Trip Conversion

```typescript
// Round-trip conversion with zero data loss
const original = { /* your OpenAI request */ }
const universal = toUniversal("openai", original)
const reconstructed = fromUniversal("openai", universal)

// The reconstructed request is deep-equal to the original
// (object identity is not preserved, so === would compare references)
console.log(JSON.stringify(reconstructed) === JSON.stringify(original)) // true
```

## How It Works

LLM Bridge converts between provider-specific formats through a universal intermediate format:
```
OpenAI Chat       ↔   Universal   ↔   Anthropic
                          ↕
OpenAI Responses  ↔   Universal   ↔   Google
```
## Supported Providers

| Provider | Format | Features |
|---|---|---|
| OpenAI Chat Completions | `openai` | Messages, tools, developer role, reasoning_effort, structured output |
| OpenAI Responses API | `openai-responses` | Input items, reasoning config, built-in tools (web_search, file_search) |
| Anthropic Claude | `anthropic` | Messages, tools, extended thinking, cache_control, URL images |
| Google Gemini | `google` | Contents, function declarations, thinkingConfig, structured output |
## Streaming Support

Parse and emit Server-Sent Events (SSE) streams for all providers:

- **Parsers**: Convert provider SSE streams → universal stream events
- **Emitters**: Convert universal stream events → provider SSE streams
- **Handler**: Full stream translation pipeline with optional transforms
## Multimodal Content

Handle images and documents seamlessly across providers:

```typescript
// OpenAI request with an image
const openaiMultimodal = {
  model: "gpt-4o",
  messages: [{
    role: "user",
    content: [
      { type: "text", text: "What's in this image?" },
      { type: "image_url", image_url: { url: "data:image/jpeg;base64,..." } }
    ]
  }]
}

// Translate to Anthropic (base64 source) or Google (inlineData)
const anthropicRequest = translateBetweenProviders("openai", "anthropic", openaiMultimodal)
const googleRequest = translateBetweenProviders("openai", "google", openaiMultimodal)
```

## Tool Calling

Seamlessly translate tool calls between different provider formats:
```typescript
const openaiWithTools = {
  model: "gpt-4o",
  messages: [
    {
      role: "assistant",
      tool_calls: [{
        id: "call_123",
        type: "function",
        function: { name: "get_weather", arguments: '{"location": "SF"}' }
      }]
    },
    {
      role: "tool",
      content: '{"temperature": 72}',
      tool_call_id: "call_123"
    }
  ],
  tools: [{
    type: "function",
    function: {
      name: "get_weather",
      description: "Get weather info",
      parameters: { type: "object", properties: { location: { type: "string" } } }
    }
  }]
}

// Translate to Google Gemini (functionCall/functionResponse)
const geminiRequest = translateBetweenProviders("openai", "google", openaiWithTools)

// Translate to Anthropic (tool_use/tool_result blocks)
const anthropicRequest = translateBetweenProviders("openai", "anthropic", openaiWithTools)
```

## Error Handling

Unified error handling with provider-specific error translation:
```typescript
import { buildUniversalError, translateError } from 'llm-bridge'

const error = buildUniversalError("rate_limit_error", "Rate limit exceeded", "openai", { retryAfter: 60 })
const anthropicError = translateError(error.universal, "anthropic")
const googleError = translateError(error.universal, "google")
```

## Provider Detection

Automatically detect which provider format you're working with:
```typescript
import { detectProvider } from 'llm-bridge'

detectProvider("https://api.openai.com/v1/chat/completions", body)    // "openai"
detectProvider("https://api.anthropic.com/v1/messages", body)         // "anthropic"
detectProvider("https://generativelanguage.googleapis.com/...", body) // "google"
detectProvider("https://api.openai.com/v1/responses", body)           // "openai-responses"
```

## API Reference

### Core Functions

- `toUniversal(provider, body)` - Convert provider format to universal
- `fromUniversal(provider, universal)` - Convert universal to provider format
- `translateBetweenProviders(from, to, body)` - Direct provider-to-provider translation
- `detectProvider(url, body)` - Auto-detect provider from URL and request format
### Streaming

- `parseOpenAIStream(stream)` - Parse OpenAI Chat Completions SSE stream
- `parseAnthropicStream(stream)` - Parse Anthropic Messages SSE stream
- `parseGoogleStream(stream)` - Parse Google Gemini SSE stream
- `parseOpenAIResponsesStream(stream)` - Parse OpenAI Responses API SSE stream
- `emitOpenAIStream(events)` - Emit OpenAI SSE format
- `emitAnthropicStream(events)` - Emit Anthropic SSE format
- `emitGoogleStream(events)` - Emit Google SSE format
- `handleUniversalStreamRequest(stream, source, target, transform?)` - Full stream translation pipeline
### Utilities

- `getModelDetails(model)` - Get model information and capabilities
- `getModelCosts(model)` - Get pricing information for a model
- `countUniversalTokens(universal)` - Estimate token usage
- `createObservabilityData(universal)` - Generate telemetry data
### Error Handling

- `buildUniversalError(type, message, provider, options)` - Create a universal error
- `translateError(error, targetProvider)` - Translate an error between providers
- `parseProviderError(error, provider)` - Parse provider-specific errors
## Testing

Run the comprehensive test suite:

```bash
npm test
```

Our test suite includes:

- ✅ 355+ passing tests
- ✅ Provider format conversion (OpenAI, Anthropic, Google, OpenAI Responses)
- ✅ Cross-provider round-trip translation
- ✅ Streaming parser and emitter tests
- ✅ Extended thinking and structured output
- ✅ Multimodal content handling
- ✅ Tool-calling lifecycle across all providers
- ✅ Error handling and validation
- ✅ Edge cases and malformed input
- ✅ Type safety verification
## License

MIT License - see LICENSE file for details.
- Documentation
- Report Issues
- Discussions
Made with ❤️ by team supermemory