
Quickstart

Arbitex Gateway is a proxy layer that sits between your application and any AI model provider. Every request you send passes through the gateway’s routing, data loss prevention (DLP) inspection, and policy evaluation pipeline before reaching the model — and every interaction is recorded in a tamper-evident audit log. You configure which providers are available, which content policies apply, and how traffic fails over when a provider is unavailable.

Before you begin:

  • An Arbitex Gateway API key — retrieve it from Settings > API Keys in the admin portal
  • At least one configured provider credential — the gateway routes traffic to AI providers on your behalf; it needs your provider API keys stored in Settings > Providers
  • Supported providers: Anthropic, OpenAI, Google Gemini, Azure OpenAI, AWS Bedrock, Groq, Mistral, Cohere, Ollama, or any OpenAI-compatible endpoint

Step 1: Point your application at the gateway

All requests go through the gateway endpoint instead of directly to the model provider. Replace your existing provider base URL with the Arbitex Gateway URL:

ARBITEX_BASE_URL=https://gateway.arbitex.ai/v1
ARBITEX_API_KEY=your-arbitex-api-key

If you are using an OpenAI-compatible SDK, set base_url to the gateway endpoint and replace the provider API key with your Arbitex API key. The gateway translates requests to the correct provider format transparently.
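The base-URL and key swap can be sketched with nothing but the standard library. A minimal sketch, assuming the two ARBITEX_* environment variables from the example above are set (the defaults below are the placeholder values from that example):

```python
import os

# Read the gateway settings configured earlier; fall back to the
# placeholder values from the example above if the variables are unset.
base_url = os.environ.get("ARBITEX_BASE_URL", "https://gateway.arbitex.ai/v1")
api_key = os.environ.get("ARBITEX_API_KEY", "your-arbitex-api-key")

# Any OpenAI-compatible client is pointed at the gateway the same way:
# the provider base URL becomes the gateway URL, and the provider API key
# becomes the Arbitex API key in the Authorization header.
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}

print(base_url)
print(headers["Authorization"])
```

The same two values are all an OpenAI-compatible SDK needs, as the next step shows.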

Step 2: Send a request

The gateway accepts requests in the standard OpenAI chat completions format. Specify the provider and model using the model field with the format provider/model-id.

Python (using the OpenAI SDK):

from openai import OpenAI

client = OpenAI(
    base_url="https://gateway.arbitex.ai/v1",
    api_key="your-arbitex-api-key",
)

response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4-20250514",
    messages=[
        {"role": "user", "content": "Summarize the key provisions of SOX Section 404."}
    ],
)

print(response.choices[0].message.content)

curl:

curl https://gateway.arbitex.ai/v1/chat/completions \
  -H "Authorization: Bearer your-arbitex-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "anthropic/claude-sonnet-4-20250514",
    "messages": [
      {"role": "user", "content": "Summarize the key provisions of SOX Section 404."}
    ]
  }'

The response is in the standard OpenAI chat completions format. Your application code requires no other changes.
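Because routing is driven entirely by the model field, switching providers is a one-line change in your request. A minimal sketch of the provider/model-id convention (the split_model helper and the openai/gpt-4o example are illustrative only; the gateway does this parsing server-side):

```python
def split_model(model: str) -> tuple[str, str]:
    """Split a gateway model string of the form 'provider/model-id'.

    Illustrative helper: the gateway performs this parsing itself;
    your application only sets the model field.
    """
    provider, _, model_id = model.partition("/")
    return provider, model_id

# The same request can target any configured provider by changing
# only the model string, for example:
print(split_model("anthropic/claude-sonnet-4-20250514"))
print(split_model("openai/gpt-4o"))  # hypothetical second provider/model
```

No other field in the request changes when you retarget a provider; credentials stay in the gateway's Settings > Providers store.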

Step 3: Verify the request in the audit log

Every request generates an audit log entry. Confirm the request was processed by retrieving the audit log:

curl "https://gateway.arbitex.ai/api/admin/orgs/{org_id}/audit-log?limit=1" \
  -H "Authorization: Bearer your-arbitex-api-key"

{
  "entries": [
    {
      "request_id": "req_01HZ8X9K2P3QR4ST5UV6WX7YZ",
      "timestamp": "2026-03-08T14:32:01.847Z",
      "user_id": "usr_alex_johnson",
      "model": "claude-sonnet-4-20250514",
      "provider": "anthropic",
      "routing_mode": "Single",
      "outcome": "ALLOW",
      "dlp_findings": [],
      "credint_hit": false,
      "siem_forwarded": true
    }
  ]
}

The outcome field shows the Policy Engine’s enforcement decision for the request. dlp_findings lists any sensitive data detected by the 3-tier DLP pipeline. A value of "ALLOW" with an empty dlp_findings array means the request passed through the gateway without triggering any policy rules.
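This check is easy to script against the entry fields shown above. A minimal sketch, assuming the audit-log JSON has already been fetched (the request_passed_clean helper is illustrative, not part of any Arbitex SDK):

```python
def request_passed_clean(entry: dict) -> bool:
    """Return True if the Policy Engine allowed the request and the
    DLP pipeline reported no sensitive-data findings."""
    return entry.get("outcome") == "ALLOW" and not entry.get("dlp_findings")

# Fields from the sample audit-log entry above.
entry = {
    "request_id": "req_01HZ8X9K2P3QR4ST5UV6WX7YZ",
    "outcome": "ALLOW",
    "dlp_findings": [],
}

print(request_passed_clean(entry))  # True
```

Any entry with a non-ALLOW outcome or a non-empty dlp_findings array warrants a closer look in the admin portal.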

Next steps

  • Routing — how to configure providers, failover chains, and routing modes
  • DLP Overview — how the 3-tier DLP pipeline inspects requests and responses
  • Policy Engine overview — how enforcement rules are organized and evaluated
  • Audit Log — what gets logged, how to verify log integrity, and how to connect your SIEM