# OpenAI SDK Migration Guide

Arbitex Gateway implements the OpenAI-compatible API. If you are using the OpenAI Python SDK, the Node.js SDK, or any OpenAI-compatible client, you can route traffic through Arbitex with a single configuration change: point `base_url` at the Gateway and use an Arbitex `api_key`. Everything else — request format, response format, streaming, function calling, tool use — works without modification.


```python
# Before: direct to OpenAI
from openai import OpenAI

client = OpenAI(
    api_key="sk-...",
)
```

```python
# After: route through Arbitex
from openai import OpenAI

client = OpenAI(
    api_key="arb_live_your-arbitex-key",
    base_url="https://api.arbitex.ai/v1",
)
```

That is the complete migration. All other code is unchanged.

```js
// Before: direct to OpenAI
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
```

```js
// After: route through Arbitex
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.ARBITEX_API_KEY,
  baseURL: 'https://api.arbitex.ai/v1',
});
```
```sh
# Before
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [...]}'

# After
curl https://api.arbitex.ai/v1/chat/completions \
  -H "Authorization: Bearer $ARBITEX_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [...]}'
```

The recommended approach is to configure the base URL and API key via environment variables. This lets you switch between OpenAI and Arbitex without code changes:

```sh
# .env (production)
OPENAI_BASE_URL=https://api.arbitex.ai/v1
OPENAI_API_KEY=arb_live_your-arbitex-key
```

```python
# Python — reads OPENAI_BASE_URL and OPENAI_API_KEY automatically
from openai import OpenAI

client = OpenAI()  # base_url and api_key from environment
```

```js
// Node.js — reads OPENAI_BASE_URL and OPENAI_API_KEY automatically
import OpenAI from 'openai';

const client = new OpenAI();
```

The OpenAI SDKs read the `OPENAI_BASE_URL` environment variable when `base_url` (Python) or `baseURL` (Node.js) is not set in code. Switching between environments (OpenAI direct vs. Arbitex) therefore requires only an environment variable change.


API keys are issued from the Arbitex Cloud portal at cloud.arbitex.ai. Keys follow the format:

- `arb_live_*` — Production keys (live traffic, policy enforcement active)
- `arb_test_*` — Test keys (sandbox environment)

Keys are scoped to your organization. The key’s role determines what configuration changes the bearer can make.
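
The two prefixes above can be checked client-side before a key is used, for example to keep live keys out of CI. The helper below is purely illustrative and not part of any Arbitex SDK; only the `arb_live_` / `arb_test_` prefixes come from this guide:

```python
# Illustrative only: distinguishes the two Arbitex key formats.
# The prefixes come from the docs above; this helper is hypothetical
# and not part of any Arbitex SDK.
def key_environment(key: str) -> str:
    """Return which environment an Arbitex key targets."""
    if key.startswith("arb_live_"):
        return "production"
    if key.startswith("arb_test_"):
        return "sandbox"
    raise ValueError("not a recognized Arbitex API key format")
```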


Arbitex Gateway proxies requests to 9 LLM providers. When you send an OpenAI-compatible request with an Arbitex key, the Gateway routes it to the appropriate provider based on the `model` field.

| Model prefix | Provider |
| --- | --- |
| `gpt-*`, `o1-*`, `o3-*` | OpenAI |
| `claude-*` | Anthropic |
| `gemini-*` | Google |
| `azure/*` | Azure OpenAI |
| `bedrock/*` | AWS Bedrock |
| `groq/*` | Groq |
| `mistral/*` | Mistral |
| `cohere/*` | Cohere |
| `ollama/*` | Ollama (self-hosted) |

Your requests to Arbitex are in OpenAI format regardless of which underlying model you’re targeting. Arbitex handles format translation.
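
As a local illustration of the prefix routing in the table above, the mapping can be sketched as a small lookup. The real routing happens inside the Gateway; this snippet is not part of any SDK:

```python
# Local sketch of the Gateway's prefix-based routing table.
# Routing actually happens server-side; this is illustration only.
ROUTES = {
    "gpt-": "OpenAI", "o1-": "OpenAI", "o3-": "OpenAI",
    "claude-": "Anthropic", "gemini-": "Google",
    "azure/": "Azure OpenAI", "bedrock/": "AWS Bedrock",
    "groq/": "Groq", "mistral/": "Mistral",
    "cohere/": "Cohere", "ollama/": "Ollama (self-hosted)",
}

def route(model: str) -> str:
    """Return the provider a model name would be routed to."""
    for prefix, provider in ROUTES.items():
        if model.startswith(prefix):
            return provider
    raise ValueError(f"no provider for model {model!r}")
```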

All standard OpenAI chat completion parameters pass through:

| Parameter | Supported |
| --- | --- |
| `model` | Yes — routed to provider |
| `messages` | Yes |
| `temperature`, `top_p`, `max_tokens` | Yes |
| `stream` | Yes — see Streaming below |
| `tools` / `tool_choice` | Yes — see Function calling below |
| `functions` (legacy) | Yes — translated to `tools` |
| `response_format` | Yes |
| `logprobs`, `top_logprobs` | Provider-dependent |
| `n` (multiple completions) | Yes |
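
A request exercising several of the pass-through parameters might look like the sketch below. The field names are standard OpenAI chat-completions fields; the values are only examples:

```python
import json

# Example request body using several pass-through parameters from the
# table above. Field names are standard chat-completions fields; the
# values are illustrative.
body = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Reply with a JSON object."}],
    "temperature": 0.2,
    "max_tokens": 256,
    "response_format": {"type": "json_object"},
    "n": 1,
}

payload = json.dumps(body)  # what goes over the wire to the Gateway
```

In practice you would pass these as keyword arguments to `client.chat.completions.create(...)` rather than serializing the body by hand.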

Streaming works without changes. If your code uses stream=True (Python) or stream: true (Node.js / HTTP), the Gateway passes streaming responses through using server-sent events (SSE).

```python
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="", flush=True)
```

DLP scanning and policy enforcement occur before the streaming response begins. If a policy rule blocks the request, you receive an error before the stream opens — there is no partial stream followed by a block.

Function calling and tool use work without changes:

```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Boston?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"}
                },
                "required": ["location"]
            }
        }
    }]
)
```

The Arbitex Gateway passes tool definitions to the underlying model provider and returns tool call responses in OpenAI format.
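
Because tool calls come back in the standard OpenAI shape, dispatching them is plain JSON handling. The message below is a made-up example of that shape; only the field names (`tool_calls`, `function.name`, `function.arguments`) are meaningful:

```python
import json

# A made-up assistant message in the standard OpenAI tool-call shape.
message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_abc123",
        "type": "function",
        "function": {
            "name": "get_weather",
            "arguments": '{"location": "Boston"}',
        },
    }],
}

results = {}
for call in message["tool_calls"]:
    if call["function"]["name"] == "get_weather":
        # arguments arrive as a JSON string, not a parsed object
        args = json.loads(call["function"]["arguments"])
        # dispatch to your own implementation here; this stands in for it
        results[call["id"]] = f"weather for {args['location']}"
```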


When traffic passes through Arbitex, the request and response are inspected by the 3-tier DLP pipeline and the Policy Engine before delivery.

If a policy rule blocks a request, you receive a 400 response with a structured error:

```json
{
  "error": {
    "message": "Request blocked by policy",
    "type": "policy_violation",
    "code": "policy_block",
    "param": null
  }
}
```

The OpenAI Python SDK raises this as an `openai.BadRequestError`. Handle it the same way you would handle any other API error.

```python
import openai

try:
    response = client.chat.completions.create(...)
except openai.BadRequestError as e:
    if e.code == "policy_block":
        # Handle policy enforcement
        pass
```

No changes to your error handling are required — policy blocks surface through the same error path as any other 400.


All requests through Arbitex are logged in the tamper-evident audit trail. Each request produces an audit entry with:

- Timestamp and request ID
- Organization and user ID
- Model requested and provider routed to
- Enforcement action (if a policy rule matched)
- DLP findings (category, not content)

Your organization’s admin can view logs in the Cloud portal or export them to your SIEM via the SIEM integration.