
Deployment

Arbitex Gateway supports three deployment topologies. Each provides the same enforcement capabilities — DLP inspection, Policy Engine evaluation, audit logging, and multi-provider routing — with different data residency and operational characteristics.

```mermaid
flowchart LR
    subgraph SaaS["SaaS (Cloud-Managed)"]
        C1["Client Apps"] --> AG1["Arbitex Gateway\n(Arbitex Cloud)"]
        AG1 --> P1["LLM Providers"]
    end
    subgraph Hybrid["Hybrid Outpost"]
        C2["Client Apps"] --> OP["Arbitex Outpost\n(Your VPC)"]
        OP --> P2["LLM Providers"]
        OP -. "mTLS\npolicy sync\naudit sync" .-> CP["Arbitex Cloud\n(Control Plane)"]
    end
    subgraph SelfHosted["Self-Hosted"]
        C3["Client Apps"] --> AG3["Arbitex Platform\n(Your Infrastructure)"]
        AG3 --> P3["LLM Providers"]
    end
```

|  | SaaS | Hybrid Outpost | Self-Hosted |
|---|---|---|---|
| Data path | Prompts and responses traverse Arbitex Cloud | Prompts and responses stay in your VPC | All data on your infrastructure |
| Control plane | Arbitex-operated | Arbitex-operated (Cloud API) | You operate |
| DLP engine | Runs in Arbitex Cloud | Runs locally in your VPC | Runs on your infrastructure |
| Policy management | Cloud Portal UI + API | Synced from Cloud every 60s via mTLS | Admin API on your platform instance |
| Audit log | Arbitex Cloud (90-day buffer) + your SIEM | Local buffer, batch-forwarded to Cloud every 30s | Local PostgreSQL + your SIEM |
| User management | Cloud Portal; Entra ID SCIM | Managed via Cloud Portal | Local platform; Entra ID SCIM |
| Offline resilience | N/A | Cached policy bundle allows enforcement during Cloud outage | Always-on (no external dependencies) |
| Plan tiers | Developer Free, Developer Pro, Team, Enterprise | Enterprise Outpost | N/A |

SaaS (Cloud-Managed)

The default deployment. Your applications send requests to the Arbitex Gateway endpoint in Arbitex Cloud. The gateway handles routing, DLP, policy enforcement, and audit logging. You manage configuration through the Cloud Portal or the admin API.

  1. Your application sends POST /v1/chat/completions to https://gateway.arbitex.ai/v1
  2. The gateway authenticates the request, runs DLP inspection, evaluates the policy chain, and routes to the provider
  3. The response passes through output scanning (if configured) and is returned to your application
  4. An audit log entry is written atomically
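The steps above map to a single HTTPS call. A minimal sketch of assembling that call follows; the Bearer auth scheme and payload fields are assumptions based on the OpenAI-compatible endpoint, so check the Cloud Portal for your exact key format:

```python
import json

GATEWAY_URL = "https://gateway.arbitex.ai/v1/chat/completions"

def build_chat_request(api_key: str, model: str, messages: list) -> tuple:
    """Assemble the POST /v1/chat/completions request for the gateway.
    The gateway authenticates it, runs DLP inspection, evaluates the
    policy chain, and routes to the configured provider."""
    headers = {
        "Authorization": f"Bearer {api_key}",   # assumed auth scheme
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "messages": messages}).encode()
    return GATEWAY_URL, headers, body

url, headers, body = build_chat_request(
    "YOUR_API_KEY", "gpt-4o", [{"role": "user", "content": "Hello"}]
)
# Send with any HTTP client (urllib.request, httpx, an OpenAI SDK pointed
# at the gateway base URL, etc.); the response arrives after output scanning.
```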
Network requirements:

| Direction | Destination | Port | Protocol |
|---|---|---|---|
| Outbound from your apps | gateway.arbitex.ai | 443 | HTTPS |
| Outbound from your apps | api.arbitex.ai (admin API) | 443 | HTTPS |

No inbound connections to your infrastructure are required.


Hybrid Outpost

The Hybrid Outpost runs the data plane inside your VPC. Prompts and responses never leave your network — only metadata (audit entries, policy sync requests) crosses the boundary to Arbitex Cloud via mTLS. This topology is for organizations with data residency requirements that prevent AI traffic from traversing third-party infrastructure.

  1. Your application sends requests to the Outpost endpoint inside your VPC (default port 8300)
  2. The Outpost runs DLP inspection and policy evaluation locally using cached policy bundles
  3. The Outpost forwards the request directly to the LLM provider from your network
  4. Audit metadata is written to a local buffer (JSONL) and batch-forwarded to Arbitex Cloud every 30 seconds
  5. Policy bundles are synced from Arbitex Cloud every 60 seconds via mTLS (ETag-based; no transfer on no change)
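The ETag handshake in step 5 can be sketched as follows. Here `fetch` is a hypothetical stand-in for the mTLS request to Arbitex Cloud, returning an HTTP status, an ETag, and the bundle body:

```python
def sync_policy_bundle(fetch, cache: dict) -> bool:
    """One policy-sync cycle: send the cached ETag (If-None-Match style);
    on 304 keep the cached bundle, on 200 store the new one.
    Returns True when a new bundle was stored."""
    status, etag, bundle = fetch(cache.get("etag"))
    if status == 304:            # unchanged: nothing was transferred
        return False
    cache["etag"] = etag         # new bundle; a real Outpost would also
    cache["bundle"] = bundle     # persist this to its cache volume
    return True
```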

Each Outpost deployment consists of the following services:

| Component | Docker service | Port | Purpose |
|---|---|---|---|
| Proxy / API gateway | backend | 8100 (admin API), 8300 (proxy) | Core request pipeline: authentication, DLP orchestration, Policy Engine, provider proxying, audit logging |
| Admin UI | frontend | 3100 | React/Nginx admin portal — proxies /api/ to the backend. Exposes policy management, SIEM config, group and quota management |
| DLP NER engine | ner-gpu | 8200 | GPU-accelerated zero-shot NER microservice (GLiNER model). Detects unstructured PII: names, addresses, medical terms. Optional — Outpost functions without it at reduced detection coverage |
| DLP contextual validator | deberta-validator | 8201 | DeBERTa NLI microservice. Second-pass contextual validation to suppress false positives from Tier 1/2. Requires GPU. Disabled by default (DLP_DEBERTA_ENABLED=false) |
| Database | postgres | 5432 | PostgreSQL 16 — stores org config, policy packs, audit log, group memberships, SIEM connector state. Isolated to backend-net |
| Cache | redis | 6379 | Rate limiting hot path (token bucket via INCR/EXPIRE). Syncs to PostgreSQL every 60 seconds. Isolated to backend-net |
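The Redis hot path in the cache row can be sketched as a counter with a TTL window. This is a simplified illustration of the INCR/EXPIRE pattern, not the production limiter (the periodic PostgreSQL sync is omitted, and a fake Redis stands in for the real client):

```python
import time

class FakeRedis:
    """Minimal in-memory stand-in for the two Redis commands used below."""
    def __init__(self):
        self.counts, self.expiry = {}, {}
    def incr(self, key):
        if key in self.expiry and time.monotonic() >= self.expiry[key]:
            self.counts.pop(key, None)     # window elapsed: reset counter
            self.expiry.pop(key, None)
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key]
    def expire(self, key, ttl):
        self.expiry[key] = time.monotonic() + ttl

def allow_request(redis, key: str, limit: int, window_s: int) -> bool:
    """Counter-with-TTL limiter: the first request in a window creates the
    counter and sets its TTL; requests beyond `limit` are rejected until
    the key expires."""
    count = redis.incr(key)
    if count == 1:
        redis.expire(key, window_s)   # start the window on the first hit
    return count <= limit
```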
```mermaid
flowchart TD
    subgraph VPC["Your VPC — backend-net (internal)"]
        PG["PostgreSQL :5432"]
        RD["Redis :6379"]
        NER["NER GPU :8200"]
        DEB["DeBERTa :8201"]
        BE["Backend (proxy + API)\n:8100 admin / :8300 proxy"]
    end
    subgraph VPC2["Your VPC — frontend-net"]
        FE["Admin UI :3100"]
    end
    Apps["Client Apps"] -->|":8300 HTTP/S"| BE
    AdminBrowser["Admin Browser"] -->|":3100 HTTPS"| FE
    FE -->|"/api/ reverse proxy"| BE
    BE --> PG
    BE --> RD
    BE -->|"POST /detect"| NER
    BE -->|"POST /validate"| DEB
    BE -->|"direct from your network"| LLM["LLM Providers\napi.openai.com\napi.anthropic.com"]
    BE -."mTLS outbound only\nPolicy sync every 60s\nAudit sync every 30s".-> Cloud["Arbitex Cloud\napi.arbitex.ai"]
    BE -."mTLS outbound only\nCert renewal".-> CloudCerts["Arbitex Cloud\ncloud.arbitex.ai"]
```

backend-net is an internal Docker network — PostgreSQL and Redis are not reachable from outside the Docker environment. frontend-net connects only the backend and frontend; the frontend cannot reach the database directly.

Minimum requirements for a production Outpost without GPU microservices (Tier 1 DLP only):

| Component | Memory | CPU | Storage |
|---|---|---|---|
| postgres | 512 MB | 1 core | 20 GB+ (audit log retention) |
| redis | 256 MB | 0.5 core | |
| backend | 1 GB | 2 cores | |
| frontend | 256 MB | 0.5 core | |
| Total (no GPU) | ~2 GB | ~4 cores | 20 GB+ |

With GPU microservices enabled (Tier 2 + Tier 3 DLP):

| Component | Memory | CPU | GPU |
|---|---|---|---|
| ner-gpu | 4 GB | 1 core | 1× NVIDIA GPU (any CUDA-capable) |
| deberta-validator | 4 GB | 1 core | 1× NVIDIA GPU (can share with NER) |
| Additional (GPU) | +8 GB | +2 cores | 1–2× GPU |

NER and DeBERTa can share a single GPU if VRAM > 8 GB. Both services require CUDA and NVIDIA Container Toolkit. GPU microservices have a cold start time of approximately 60 seconds for model loading.
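Because of the ~60-second cold start, deployment scripts should gate traffic on a readiness check rather than assuming the services are up. A generic polling sketch (the probe itself, e.g. an HTTP GET against :8200, is left abstract and passed in as a callable):

```python
import time

def wait_until_ready(probe, timeout_s: float = 120.0, interval_s: float = 2.0) -> bool:
    """Poll a readiness probe until it reports ready or the timeout
    elapses. With a ~60 s model cold start, early probe failures are
    expected, so a timeout comfortably above 60 s is advisable."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval_s)
    return False
```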

Docker Compose — for VMs and bare-metal deployments. The docker-compose.yml in the Outpost repository starts the full stack. For GPU microservices, ensure the NVIDIA Container Toolkit is installed and docker compose up has GPU access.

Kubernetes / Helm — the charts/arbitex-outpost/ Helm chart provides production-ready deployment with configurable replica count, PodDisruptionBudget, and optional HorizontalPodAutoscaler. GPU nodes must expose the nvidia.com/gpu resource.

Network requirements:

| Direction | Destination | Port | Protocol | Purpose |
|---|---|---|---|---|
| Inbound (VPC internal) | backend | 8300 | HTTP/HTTPS | Client AI requests |
| Inbound (VPC internal) | frontend | 3100 | HTTPS | Admin portal |
| Outbound | api.arbitex.ai | 443 | HTTPS + mTLS | Policy sync, audit sync |
| Outbound | cloud.arbitex.ai | 443 | HTTPS + mTLS | Certificate renewal |
| Outbound | LLM providers | 443 | HTTPS | Model proxying (direct from your network) |

The Outpost initiates all connections to Arbitex Cloud — no inbound connections from Arbitex to your VPC are required.

Certificate management and rotation lifecycle


Each Outpost receives a unique mTLS certificate bundle at registration time. Certificates authenticate the Outpost to Arbitex Cloud for policy sync, audit sync, and cert renewal.

Certificate files on the Outpost:

| Path | Purpose | Mode |
|---|---|---|
| /app/certs/outpost.pem | Outpost leaf certificate (public) | 0644 |
| /app/certs/outpost.key | Outpost private key | 0400 |
| /app/certs/ca.pem | Arbitex Platform CA certificate — used to verify Cloud server identity | 0644 |

Rotation lifecycle:

```mermaid
sequenceDiagram
    participant O as Outpost
    participant C as Arbitex Cloud
    Note over O: D-30: cert renewal window opens
    O->>C: POST /v1/internal/cert-renew (mTLS with current cert)
    C->>C: Issue new 90-day leaf cert signed by Platform CA
    C-->>O: new_cert.pem + private_key (encrypted)
    O->>O: Write new cert to /app/certs/ (atomic replace)
    O->>O: Reload TLS context (no restart required)
    Note over O: Old cert remains valid for 30 days overlap
    Note over O: D-0: old cert expires
    O->>O: Remove old cert, new cert is sole identity
```

Key points:

  • Validity: 90 days per certificate
  • Renewal window: Opens 30 days before expiry. The Outpost checks for renewal eligibility on every policy sync cycle.
  • Overlap period: Old and new certs are both valid for 30 days after renewal, preventing hard cutover issues during gradual rollouts.
  • Atomic replacement: The cert file is written atomically (temp file + rename) to prevent partial reads during rotation.
  • No restart required: The backend TLS context reloads the new cert on the next connection establishment without a service restart.
  • If renewal fails: The Outpost logs a warning and retries on the next sync cycle. If the cert expires before renewal succeeds, mTLS connections to Arbitex Cloud fail — policy sync and audit sync stop. Local policy enforcement continues using the cached bundle; audit events buffer locally and replay when connectivity is restored.
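The atomic-replacement step can be sketched with the standard temp-file-plus-rename pattern (a generic illustration, not the Outpost's actual code):

```python
import os
import tempfile

def install_cert(cert_path: str, new_cert_pem: bytes, mode: int = 0o644) -> None:
    """Atomically replace a cert file: write the new PEM to a temp file in
    the same directory, fsync it, then rename over the old path so a
    concurrent reader never observes a partially written certificate."""
    directory = os.path.dirname(cert_path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        os.write(fd, new_cert_pem)
        os.fsync(fd)               # flush to disk before the rename
    finally:
        os.close(fd)
    os.chmod(tmp_path, mode)
    os.replace(tmp_path, cert_path)   # atomic rename on POSIX
```

The temp file must live in the same directory as the target so the rename stays within one filesystem, which is what makes `os.replace` atomic.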

Certificate renewal alert: Monitor the cert expiry date using the Outpost health endpoint (GET /health returns cert_expires_at). Configure an alert when cert_expires_at is within 7 days.
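The alert check itself is a date comparison on the health endpoint's cert_expires_at field, assumed here to be an ISO-8601 timestamp:

```python
from datetime import datetime, timedelta, timezone

def cert_alert_due(cert_expires_at: str, now=None, threshold_days: int = 7) -> bool:
    """Return True when the certificate reported by GET /health expires
    within the alert threshold (default 7 days)."""
    if now is None:
        now = datetime.now(timezone.utc)
    expires = datetime.fromisoformat(cert_expires_at)
    return expires - now < timedelta(days=threshold_days)
```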

If the Outpost loses connectivity to Arbitex Cloud:

  • Policy enforcement continues using the last cached policy bundle. The cache is persisted to a PVC (Kubernetes) or a named Docker volume. The Outpost serves requests indefinitely from cache.
  • Audit entries buffer locally in JSONL format on a mounted volume. When connectivity is restored, the buffer is forwarded to Cloud in sequential order, preserving the audit chain HMAC integrity.
  • Certificate renewal fails while offline. Plan for connectivity before the 90-day cert expiry. Extended offline operation (> 60 days) risks cert expiry.
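Why sequential replay matters for the audit chain can be seen in a chained-HMAC sketch: each entry's MAC covers the previous entry's MAC, so dropping or reordering buffered entries breaks verification. Field names and the chaining layout here are illustrative assumptions, not the wire format:

```python
import hashlib
import hmac
import json

def chain_audit_entries(key: bytes, entries: list) -> list:
    """Illustrative HMAC chain: MAC each entry's payload together with the
    previous entry's MAC, linking the buffered entries into a sequence
    that only verifies when replayed whole and in order."""
    prev_mac = b"\x00" * 32                      # genesis value for the chain
    chained = []
    for entry in entries:
        payload = json.dumps(entry, sort_keys=True).encode()
        mac = hmac.new(key, prev_mac + payload, hashlib.sha256).hexdigest()
        chained.append({**entry, "hmac": mac})
        prev_mac = bytes.fromhex(mac)
    return chained
```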
Outpost configuration variables:

| Variable | Default | Description |
|---|---|---|
| OUTPOST_ID | (none) | UUID assigned at registration |
| PLATFORM_MANAGEMENT_URL | https://api.arbitex.ai | Platform management plane endpoint |
| POLICY_SYNC_INTERVAL | 60 | Seconds between policy sync checks |
| AUDIT_SYNC_INTERVAL | 30 | Seconds between audit batch uploads |
| DLP_ENABLED | true | Enable/disable the local DLP pipeline |
| DLP_NER_BACKEND | default | NER backend: default (CPU Presidio), microservice (GPU :8200) |
| DLP_DEBERTA_ENABLED | false | Enable Tier 3 contextual validation (requires GPU microservice) |
| AUDIT_HMAC_KEY | (none) | Base64-encoded HMAC key (32+ bytes) for audit chain integrity |
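A sketch of reading these variables with their documented defaults (the snake_case keys in the returned dict are illustrative, not an Outpost API):

```python
import os

def load_outpost_config(env=os.environ) -> dict:
    """Read the Outpost settings above, applying the documented defaults.
    Intervals are in seconds; OUTPOST_ID and AUDIT_HMAC_KEY have no
    default and must be supplied at registration/deploy time."""
    return {
        "outpost_id": env.get("OUTPOST_ID"),
        "management_url": env.get("PLATFORM_MANAGEMENT_URL", "https://api.arbitex.ai"),
        "policy_sync_interval": int(env.get("POLICY_SYNC_INTERVAL", "60")),
        "audit_sync_interval": int(env.get("AUDIT_SYNC_INTERVAL", "30")),
        "dlp_enabled": env.get("DLP_ENABLED", "true").lower() == "true",
        "dlp_ner_backend": env.get("DLP_NER_BACKEND", "default"),
        "dlp_deberta_enabled": env.get("DLP_DEBERTA_ENABLED", "false").lower() == "true",
        "audit_hmac_key": env.get("AUDIT_HMAC_KEY"),
    }
```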

Self-Hosted

The full Arbitex platform deployed on your infrastructure. You operate the control plane, data plane, and all supporting services. No connection to Arbitex Cloud is required.

| Service | Port | Description |
|---|---|---|
| PostgreSQL | 5432 | Primary database |
| Redis | 6379 | Rate limiting, JWT revocation cache |
| Backend (FastAPI) | 8000 | Gateway API, admin API, DLP, policy engine |
| Frontend (React) | 3000 | Admin portal |
| NER microservice | 8200 | GPU-accelerated named entity recognition (optional) |
| DeBERTa validator | 8201 | GPU-accelerated contextual validation (optional) |

The Docker Compose configuration uses two networks:

  • backend-net (internal): PostgreSQL, Redis, Backend, NER, DeBERTa — not reachable from outside Docker
  • frontend-net: Backend, Frontend — the frontend reverse-proxies /api/ to the backend

Tier 2 (NER) and Tier 3 (DeBERTa) DLP microservices require NVIDIA GPU access. If GPUs are not available, the DLP pipeline runs Tier 1 (pattern matching) only. See DLP Overview for tier capabilities.


Choosing a topology

| Requirement | Recommended topology |
|---|---|
| Fastest deployment, no infrastructure to manage | SaaS |
| Data residency — prompts must not leave your network | Hybrid Outpost |
| Full operational control, no external dependencies | Self-hosted |
| Compliance frameworks requiring data sovereignty | Hybrid Outpost |
| Air-gapped or disconnected environments | Self-hosted |

Next steps

  • Quickstart — Send your first request through the gateway
  • Routing — Provider configuration, fallback chains, and health monitoring
  • Audit Log — How audit entries are stored, verified, and forwarded to SIEM
  • DLP Pipeline technical reference — Tier 1/2/3 detector details, entity taxonomy, threshold tuning
  • SIEM Integration — Forwarding Outpost audit events to Splunk or Sentinel