# OpenAI SDK Migration Guide

Arbitex Gateway implements the OpenAI-compatible API. If you are using the OpenAI Python SDK, the Node.js SDK, or any OpenAI-compatible client, you can route traffic through Arbitex with a single configuration change: swap `base_url` and `api_key`. Everything else — request format, response format, streaming, function calling, tool use — works without modification.
## The change

### Python SDK

```python
# Before: direct to OpenAI
from openai import OpenAI

client = OpenAI(
    api_key="sk-...",
)
```

```python
# After: route through Arbitex
from openai import OpenAI

client = OpenAI(
    api_key="arb_live_your-arbitex-key",
    base_url="https://api.arbitex.ai/v1",
)
```

That is the complete migration. All other code is unchanged.
### Node.js / TypeScript SDK

```ts
// Before: direct to OpenAI
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
```

```ts
// After: route through Arbitex
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.ARBITEX_API_KEY,
  baseURL: 'https://api.arbitex.ai/v1',
});
```

### HTTP / curl

```sh
# Before
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [...]}'
```

```sh
# After
curl https://api.arbitex.ai/v1/chat/completions \
  -H "Authorization: Bearer $ARBITEX_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [...]}'
```

## Environment variable pattern
The recommended approach is to configure the base URL and API key via environment variables. This lets you switch between OpenAI and Arbitex without code changes:

```sh
# .env (production)
OPENAI_BASE_URL=https://api.arbitex.ai/v1
OPENAI_API_KEY=arb_live_your-arbitex-key
```

```python
# Python — reads OPENAI_BASE_URL and OPENAI_API_KEY automatically
from openai import OpenAI

client = OpenAI()  # base_url and api_key from environment
```

```ts
// Node.js — reads OPENAI_BASE_URL and OPENAI_API_KEY automatically
import OpenAI from 'openai';

const client = new OpenAI();
```

The OpenAI SDK checks the `OPENAI_BASE_URL` environment variable when `base_url` (Python) or `baseURL` (Node.js) is not set in code. Switching between environments (OpenAI direct vs. Arbitex) therefore requires only an environment variable change.
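If you prefer explicit configuration over the SDK's automatic environment lookup, you can resolve the variables yourself before constructing the client. `resolve_endpoint` below is a hypothetical helper, not part of either SDK; it simply encodes the fallback behavior described above:

```python
def resolve_endpoint(env: dict) -> tuple[str, str]:
    """Return (base_url, api_key) from an environment-style mapping.

    Falls back to OpenAI's public endpoint when OPENAI_BASE_URL is unset.
    """
    base_url = env.get("OPENAI_BASE_URL", "https://api.openai.com/v1")
    return base_url, env["OPENAI_API_KEY"]

# Pointing at Arbitex is purely an environment change:
arbitex_env = {
    "OPENAI_BASE_URL": "https://api.arbitex.ai/v1",
    "OPENAI_API_KEY": "arb_live_your-arbitex-key",
}
```

Pass the resolved pair to `OpenAI(api_key=..., base_url=...)` as in the examples above.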
## Get your API key

API keys are issued from the Arbitex Cloud portal at cloud.arbitex.ai. Keys follow the format:

- `arb_live_*` — Production keys (live traffic, policy enforcement active)
- `arb_test_*` — Test keys (sandbox environment)
Keys are scoped to your organization. The key’s role determines what configuration changes the bearer can make.
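As a deployment sanity check, the two documented prefixes can be distinguished programmatically. `key_environment` is a hypothetical helper written for this guide, not part of any Arbitex SDK; it only encodes the `arb_live_*` / `arb_test_*` convention above:

```python
def key_environment(api_key: str) -> str:
    """Classify an Arbitex key by its documented prefix."""
    if api_key.startswith("arb_live_"):
        return "production"
    if api_key.startswith("arb_test_"):
        return "sandbox"
    raise ValueError("not a recognized Arbitex key prefix")
```

A check like this can guard against accidentally shipping a sandbox key to production.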
## Compatibility

Arbitex Gateway proxies to 9 LLM providers. When you send an OpenAI-compatible request with an Arbitex key, the Gateway routes it to the appropriate provider based on the `model` field.
| Model prefix | Provider |
|---|---|
| `gpt-*`, `o1-*`, `o3-*` | OpenAI |
| `claude-*` | Anthropic |
| `gemini-*` | Google |
| `azure/*` | Azure OpenAI |
| `bedrock/*` | AWS Bedrock |
| `groq/*` | Groq |
| `mistral/*` | Mistral |
| `cohere/*` | Cohere |
| `ollama/*` | Ollama (self-hosted) |
Your requests to Arbitex are in OpenAI format regardless of which underlying model you’re targeting. Arbitex handles format translation.
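The prefix rules above can be mirrored client-side, which is handy for logging or cost attribution before a request is sent. This is an illustrative sketch of the documented routing table, not the Gateway's actual routing code:

```python
import fnmatch

# Prefix rules from the table above, checked in order.
ROUTES = [
    ("gpt-*", "OpenAI"), ("o1-*", "OpenAI"), ("o3-*", "OpenAI"),
    ("claude-*", "Anthropic"),
    ("gemini-*", "Google"),
    ("azure/*", "Azure OpenAI"),
    ("bedrock/*", "AWS Bedrock"),
    ("groq/*", "Groq"),
    ("mistral/*", "Mistral"),
    ("cohere/*", "Cohere"),
    ("ollama/*", "Ollama (self-hosted)"),
]

def provider_for(model: str) -> str:
    """Return the provider a model name would route to, per the table."""
    for pattern, provider in ROUTES:
        if fnmatch.fnmatch(model, pattern):
            return provider
    raise ValueError(f"no route for model {model!r}")
```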
## Request compatibility

All standard OpenAI chat completion parameters pass through:
| Parameter | Supported |
|---|---|
| `model` | Yes — routed to provider |
| `messages` | Yes |
| `temperature`, `top_p`, `max_tokens` | Yes |
| `stream` | Yes — see Streaming below |
| `tools` / `tool_choice` | Yes — see Function calling below |
| `functions` (legacy) | Yes — translated to `tools` |
| `response_format` | Yes |
| `logprobs`, `top_logprobs` | Provider-dependent |
| `n` (multiple completions) | Yes |
## Streaming

Streaming works without changes. If your code uses `stream=True` (Python) or `stream: true` (Node.js / HTTP), the Gateway passes streaming responses through as server-sent events (SSE).
```python
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="", flush=True)
```

DLP scanning and policy enforcement occur before the streaming response begins. If a policy rule blocks the request, you receive an error before the stream opens — there is no partial stream followed by a block.
## Function calling and tool use

Function calling and tool use work without changes:
```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Boston?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"}
                },
                "required": ["location"]
            }
        }
    }],
)
```

The Arbitex Gateway passes tool definitions to the underlying model provider and returns tool call responses in OpenAI format.
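When the model decides to call the tool, the tool calls arrive on the response message in standard OpenAI format, so the usual handling loop applies. A minimal sketch follows; the `AVAILABLE_TOOLS` dispatch table and the `get_weather` implementation are placeholders for this guide, not part of any SDK:

```python
import json

def get_weather(location: str) -> str:
    # Placeholder implementation for the example.
    return f"Sunny in {location}"

AVAILABLE_TOOLS = {"get_weather": get_weather}

def run_tool_calls(message) -> list[dict]:
    """Execute each tool call and build the follow-up 'tool' messages."""
    results = []
    for call in message.tool_calls or []:
        fn = AVAILABLE_TOOLS[call.function.name]
        args = json.loads(call.function.arguments)
        results.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": fn(**args),
        })
    return results
```

Append the returned messages to the conversation and call `create` again, exactly as you would when talking to OpenAI directly.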
## Policy enforcement

When traffic passes through Arbitex, the request and response are inspected by the 3-tier DLP pipeline and the Policy Engine before delivery.

If a policy rule blocks a request, you receive a 400 response with a structured error:
```json
{
  "error": {
    "message": "Request blocked by policy",
    "type": "policy_violation",
    "code": "policy_block",
    "param": null
  }
}
```

The OpenAI SDK raises this as an `openai.BadRequestError`. Handle it the same way you would handle any API error.
```python
try:
    response = client.chat.completions.create(...)
except openai.BadRequestError as e:
    if e.code == "policy_block":
        # Handle policy enforcement
        pass
```

No changes to your error handling are required — policy blocks surface through the same error path as any other 400.
## Audit trail

All requests through Arbitex are logged in the tamper-evident audit trail. Each request produces an audit entry with:
- Timestamp and request ID
- Organization and user ID
- Model requested and provider routed to
- Enforcement action (if a policy rule matched)
- DLP findings (category, not content)
Your organization’s admin can view logs in the Cloud portal or export them to your SIEM via SIEM integration.
## See also

- Quickstart — Full setup from scratch, including API key creation
- Chat completions API reference — Complete API reference for the `/v1/chat/completions` endpoint
- Policy Engine overview — How requests are evaluated against policy rules
- DLP overview — What is detected in transit