Outpost deployment guide

This guide is the single reference for all Outpost deployment modes: Docker Compose, Kubernetes via Helm, and air-gapped installation. Choose the section that matches your environment.

For a full technical component and architecture reference, see Outpost deployment architecture.


  • Docker Engine 24+ or Kubernetes 1.27+ with Helm 3
  • Outbound HTTPS access from the host to api.arbitex.ai (port 443) — required for policy sync, audit sync, and cert renewal
  • Outbound HTTPS access to your AI provider APIs (OpenAI, Anthropic, etc.) — required for request forwarding
  • Admin access to the Arbitex Cloud portal

Every Outpost instance must be registered in the Cloud portal before it can connect to the management plane.

  1. Log into the Cloud portal as an org admin.
  2. Navigate to Settings > Outposts > Register Outpost.
  3. Enter a name (e.g., production-vpc-us-east) and click Register.
  4. The portal generates the following values. Download them immediately — they are not shown again:
    • OUTPOST_ID — the UUID assigned to this outpost
    • Certificate bundle — a .zip containing outpost.pem, outpost.key, and ca.pem
  5. Extract the certificate bundle to a certs/ directory on the deployment host.

Create a .env file with the required configuration. Start from .env.example in the repository:

Terminal window
cp .env.example .env

Set the required variables:

Terminal window
# Required — from the portal registration
OUTPOST_ID=<uuid from portal>
ORG_ID=<your org UUID>
PLATFORM_MANAGEMENT_URL=https://api.arbitex.ai
# Required — HMAC key for audit log integrity
# Generate a strong random key:
# python3 -c "import secrets; print(secrets.token_hex(32))"
AUDIT_HMAC_KEY=<64-char hex string>
# mTLS certificates (from the certificate bundle)
OUTPOST_CERT_PATH=certs/outpost.pem
OUTPOST_KEY_PATH=certs/outpost.key
OUTPOST_CA_PATH=certs/ca.pem

Recommended additional variables for production:

Terminal window
# Policy bundle HMAC verification (recommended)
POLICY_HMAC_KEY=<64-char hex string>
# Provider key encryption (recommended if storing provider API keys in config)
PROVIDER_KEY_ENCRYPTION_KEY=<Fernet key; see below>

Generate a Fernet key for provider key encryption:

Terminal window
python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"
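Before committing a generated key to .env, it can be sanity-checked with a round trip through the same cryptography library the command above uses. This is an illustrative check only; the key and plaintext below are placeholders:

```python
# Illustrative only: round-trip a placeholder secret with Fernet to confirm
# a freshly generated PROVIDER_KEY_ENCRYPTION_KEY is usable.
from cryptography.fernet import Fernet

key = Fernet.generate_key()           # the value you would put in PROVIDER_KEY_ENCRYPTION_KEY
f = Fernet(key)

plaintext = b"sk-example-provider-key"  # placeholder, not a real provider key
token = f.encrypt(plaintext)
assert f.decrypt(token) == plaintext    # round trip succeeds, so the key is valid
print("Fernet key OK")
```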

DLP scanning variables:

Variable | Default | Description
DLP_ENABLED | true | Enable DLP scanning
DLP_NER_ENABLED | true | Enable Tier 2 NER scanning
DLP_NER_DEVICE | auto | Device selection: auto, cpu, cuda
DLP_DEBERTA_ENABLED | false | Enable Tier 3 DeBERTa (requires model file)

For Tier 3 DeBERTa, download the ONNX model file and set DEBERTA_MODEL_PATH to its path on the host.


The Outpost writes to two persistent directories. Create them before starting the container:

Terminal window
mkdir -p policy_cache audit_buffer

These directories must survive container restarts. In Docker Compose they are bind-mounted as volumes. In Kubernetes they are PersistentVolumeClaims.

The audit_buffer directory holds the local HMAC-chained audit log. Do not delete it while the Outpost is running — unsynced events are stored here.
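The chaining idea — each entry's HMAC covers the previous entry's HMAC, so deletions and reordering are detectable — can be sketched in a few lines. This is a schematic illustration only; the Outpost's actual on-disk format is not documented here and may differ:

```python
# Schematic sketch of an HMAC-chained log: each entry's tag covers the
# previous tag, so deleting, reordering, or editing entries breaks
# verification. Conceptual only; not the Outpost's actual file format.
import hmac, hashlib, json

def append(chain, key: bytes, event: dict) -> None:
    prev = chain[-1]["tag"] if chain else "0" * 64   # genesis sentinel
    msg = prev.encode() + json.dumps(event, sort_keys=True).encode()
    chain.append({"event": event,
                  "tag": hmac.new(key, msg, hashlib.sha256).hexdigest()})

def verify(chain, key: bytes) -> bool:
    prev = "0" * 64
    for entry in chain:
        msg = prev.encode() + json.dumps(entry["event"], sort_keys=True).encode()
        want = hmac.new(key, msg, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(want, entry["tag"]):
            return False
        prev = entry["tag"]
    return True

key = bytes.fromhex("ab" * 32)   # stands in for AUDIT_HMAC_KEY
chain = []
append(chain, key, {"action": "allow", "model": "gpt-4o-mini"})
append(chain, key, {"action": "block", "rule": "pii"})
assert verify(chain, key)
chain[0]["event"]["action"] = "block"   # tamper with an earlier entry...
assert not verify(chain, key)           # ...and the whole chain fails to verify
```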


Use the docker-compose.outpost.yml file in the repository:

Terminal window
docker compose -f docker-compose.outpost.yml up -d

The Compose file configures:

  • Port 8300 — AI proxy (exposed to your application network)
  • Port 8301 — Admin API (bound to 127.0.0.1 only)
  • Volume mounts for policy_cache, audit_buffer, and certs
  • env_file: .env to pass your configuration

Verify the container is running:

Terminal window
docker compose -f docker-compose.outpost.yml ps

Add the Arbitex Helm repository:

Terminal window
helm repo add arbitex https://charts.arbitex.ai
helm repo update

Create a Kubernetes Secret with the mTLS certificates:

Terminal window
kubectl create secret generic arbitex-outpost-certs \
  --from-file=outpost.pem=certs/outpost.pem \
  --from-file=outpost.key=certs/outpost.key \
  --from-file=ca.pem=certs/ca.pem \
  -n arbitex

Create a values.yaml override file:

outpost:
  id: "<OUTPOST_ID>"
  orgId: "<ORG_ID>"
  platformManagementUrl: "https://api.arbitex.ai"
secrets:
  auditHmacKey: "<AUDIT_HMAC_KEY>"
  policyHmacKey: "<POLICY_HMAC_KEY>"
  certSecretName: "arbitex-outpost-certs"
# Resource profile — standard CPU deployment
resources:
  requests:
    memory: "512Mi"
    cpu: "250m"
  limits:
    memory: "2Gi"
    cpu: "2"
replicaCount: 2

If CredInt is enabled (CREDINT_ENABLED=true, the default), the memory limit must be at least 2Gi to accommodate the ~470 MB bloom filter.

Install the chart:

Terminal window
helm install arbitex-outpost arbitex/arbitex-outpost \
  -n arbitex \
  --create-namespace \
  -f values.yaml

Terminal window
# Liveness — process is running
curl http://localhost:8300/healthz
# Readiness — policy bundle is loaded
curl http://localhost:8300/readyz

The readiness endpoint returns 200 only after the Outpost has loaded its first policy bundle. This may take up to 30 seconds on first start while the initial sync completes.
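A deployment script can block on readiness with a simple poll loop. The sketch below is stdlib-only and, for the sake of a self-contained demo, polls a throwaway local HTTP server standing in for the Outpost (in practice you would pass http://localhost:8300/readyz):

```python
# Poll a readiness endpoint until it returns HTTP 200 or a timeout expires.
# The demo server below is a stand-in for the Outpost's /readyz endpoint.
import http.server, threading, time, urllib.error, urllib.request

def wait_for_ready(url: str, timeout: float = 60.0, interval: float = 2.0) -> bool:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass   # not up yet; keep polling
        time.sleep(interval)
    return False

class _OK(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):   # silence per-request logging
        pass

srv = http.server.HTTPServer(("127.0.0.1", 0), _OK)
threading.Thread(target=srv.serve_forever, daemon=True).start()
ready = wait_for_ready(f"http://127.0.0.1:{srv.server_port}/readyz",
                       timeout=10, interval=0.1)
srv.shutdown()
print("ready:", ready)
```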

If /readyz returns 503 after 60 seconds, check the container logs:

Terminal window
docker compose -f docker-compose.outpost.yml logs --tail=50 outpost

Look for:

  • "Connected to management plane" — mTLS handshake succeeded
  • "Policy bundle loaded" — first sync complete
  • Any ERROR or CRITICAL lines

The Outpost polls the management plane every 60 seconds for policy bundle updates. On the first start, it fetches the bundle immediately. Verify in the Cloud portal:

  1. Navigate to Settings > Outposts.
  2. Find your outpost. The Last seen column should show a recent timestamp.
  3. The Policy version column shows the bundle version currently loaded.

If the outpost does not appear as active within 2 minutes, check:

  • OUTPOST_ID and ORG_ID match the values from the portal registration
  • The host can reach api.arbitex.ai on port 443 (curl -v https://api.arbitex.ai/healthz)
  • The mTLS certificates are valid and match the OUTPOST_ID registered in the portal

With the Outpost running, send a test request through the proxy:

Terminal window
curl -s -X POST http://localhost:8300/v1/chat/completions \
  -H "Authorization: Bearer <your-api-key>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'

A successful response confirms:

  • The proxy is running and accepting requests
  • Provider credentials are configured
  • The DLP pipeline is not blocking the request

Network requirements:

Direction | Protocol | Destination | Port | Purpose
Inbound | TCP | Application network | 8300 | AI proxy
Outbound | HTTPS | api.arbitex.ai | 443 | Management plane (policy sync, audit sync, heartbeat, cert renewal)
Outbound | HTTPS | Provider API endpoints | 443 | Request forwarding

Port 8301 (admin API) is bound to 127.0.0.1 — it is not accessible from the network and requires kubectl port-forward or ssh tunneling for remote access.

If your environment routes outbound traffic through a proxy:

Terminal window
HTTPS_PROXY=https://proxy.internal:3128
NO_PROXY=localhost,127.0.0.1

Set NO_PROXY to exclude local addresses so the admin API health checks are not routed through the proxy.


Certificates are issued by the Cloud portal during registration. They are valid for 90 days. The Outpost’s CertRotationClient begins renewal automatically when the certificate is within 30 days of expiry.

Rotation is fully automatic. The CertRotationClient runs hourly:

  1. Checks the certificate expiry date
  2. If within 30 days, POSTs a renewal request to the management plane
  3. Writes the new certificate to staging paths (.new suffix)
  4. Verifies the staged certificate before swapping
  5. Swaps atomically (os.replace) — no downtime or restart required

If rotation fails (network error, management plane unreachable), it retries on the next hourly cycle. The existing certificate remains in use until a successful rotation completes.
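The stage-verify-swap pattern above can be sketched with os.replace. The validity check here is a trivial stand-in (a real implementation would parse and validate the X.509 certificate), and the function name is illustrative:

```python
# Sketch of the stage-then-swap pattern: write the renewed cert to a ".new"
# staging path, verify it, then swap atomically with os.replace so readers
# never observe a partially written file. The "verification" is a stand-in.
import os, tempfile

def swap_in_new_cert(cert_path: str, new_pem: bytes) -> None:
    staging = cert_path + ".new"
    with open(staging, "wb") as f:
        f.write(new_pem)
        f.flush()
        os.fsync(f.fileno())              # staged bytes durably on disk
    if b"BEGIN CERTIFICATE" not in new_pem:   # stand-in verification step
        os.unlink(staging)
        raise ValueError("staged certificate failed verification")
    os.replace(staging, cert_path)        # atomic: readers see old or new, never both

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "outpost.pem")
    with open(path, "wb") as f:
        f.write(b"-----BEGIN CERTIFICATE-----\nold\n-----END CERTIFICATE-----\n")
    swap_in_new_cert(path, b"-----BEGIN CERTIFICATE-----\nnew\n-----END CERTIFICATE-----\n")
    with open(path, "rb") as f:
        contents = f.read()
    staging_left_behind = os.path.exists(path + ".new")
assert b"new" in contents and not staging_left_behind
```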

If you need to replace the certificate manually (e.g., after a key compromise):

  1. Issue a new certificate bundle from the Cloud portal (Settings > Outposts > [Outpost] > Reissue Certificate).
  2. Copy the new outpost.pem and outpost.key to the certs/ directory.
  3. The Outpost detects the new certificate on the next hourly check and validates it. No restart is required.

To upgrade to a new Outpost version:

Terminal window
# Pull the new image
docker compose -f docker-compose.outpost.yml pull
# Restart with the new image
docker compose -f docker-compose.outpost.yml up -d

Docker Compose replaces the running container. The policy_cache and audit_buffer volumes are preserved across the upgrade.

Terminal window
helm upgrade arbitex-outpost arbitex/arbitex-outpost \
  -n arbitex \
  -f values.yaml \
  --set image.tag=<new-version>

The Helm chart is configured with a rolling update strategy and PodDisruptionBudget: minAvailable: 1 — at least one replica remains available during the upgrade.


The air-gap bundle is a self-contained tarball for environments with no outbound internet access. It includes the Docker image, install script, default policy bundle, and all required dependencies.

Download the air-gap bundle from the Cloud portal (Settings > Outposts > [Outpost] > Download air-gap bundle), or request it from your Customer Success contact. The bundle is named:

arbitex-outpost-airgap-<version>.tar.gz

A companion SHA256 checksum file is provided alongside the tarball.

On a connected machine, verify the bundle before transferring to the air-gapped host:

Terminal window
sha256sum -c arbitex-outpost-airgap-<version>.tar.gz.sha256
# arbitex-outpost-airgap-<version>.tar.gz: OK

Use a secure transfer method appropriate for your environment — USB drive, secure file transfer appliance, or approved courier media. Do not transfer via untrusted networks.

Terminal window
tar -xzf arbitex-outpost-airgap-<version>.tar.gz
cd airgap-<version>/
bash install.sh

The installer performs these steps automatically:

  1. Validates SHA256 checksums on the included image tarball
  2. Loads the Docker image into the local daemon with docker load
  3. Prompts for outpost ID, platform URL, and HMAC keys
  4. Writes a .env file with your configuration
  5. Optionally installs a systemd service for automatic startup on host reboot
  6. Starts the outpost and verifies the health check

Prerequisites the installer checks:

  • Docker Engine 24+ with Compose V2
  • sha256sum utility
  • Sufficient disk space (≥ 4 GB for the image)

To run the installer non-interactively (for automated provisioning), pre-set the required variables:

Terminal window
OUTPOST_ID="<uuid-from-cert-bundle>" \
PLATFORM_MANAGEMENT_URL="https://management-plane.internal" \
AUDIT_HMAC_KEY="$(openssl rand -hex 32)" \
POLICY_HMAC_KEY="$(openssl rand -hex 32)" \
bash install.sh

If you accepted the systemd service option during install, the outpost is managed as arbitex-outpost.service:

Terminal window
systemctl status arbitex-outpost
systemctl restart arbitex-outpost
journalctl -u arbitex-outpost -f

To install the systemd service manually after the fact:

Terminal window
# In the extracted airgap directory
bash install.sh --systemd-only

The container image includes a minimal fallback GeoIP database. For accurate IP geolocation enrichment, mount a current MaxMind GeoLite2-City.mmdb:

Terminal window
# In .env
GEOIP_MMDB_PATH=/app/geoip/GeoLite2-City.mmdb
GEOIP_DOWNLOAD_ON_START=false # prevent startup download attempts

Mount the file:

# In docker-compose.outpost.yml override
volumes:
  - /path/to/GeoLite2-City.mmdb:/app/geoip/GeoLite2-City.mmdb:ro

In fully disconnected environments, the management plane sync is unavailable. The outpost will:

  • Serve the last cached policy bundle from the policy_cache/ volume
  • Continue enforcing the cached policy indefinitely (no automatic expiry in air-gap mode)
  • Log sync failures at the warning level but continue operating

To push a new policy bundle manually via the admin API:

Terminal window
# Push a signed bundle from the air-gap package
curl -X POST http://localhost:8301/admin/api/policy/push \
  -H "X-API-Key: <OUTPOST_API_KEY>" \
  -H "Content-Type: application/json" \
  -d @default-policy-bundle.json

The default-policy-bundle.json included in the air-gap package is signed with the Platform’s signing key. Set POLICY_BUNDLE_VERIFY=false only if you are loading unsigned bundles during initial air-gap testing.
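When POLICY_BUNDLE_VERIFY is enabled, the outpost checks the bundle signature before loading it. The sketch below illustrates HMAC verification with a constant-time comparison; the payload/hmac layout and field names are assumptions for illustration, not the real bundle schema:

```python
# Illustrative HMAC check for a policy bundle. The "payload"/"hmac" layout
# below is an assumed schema used only to demonstrate constant-time
# signature verification; the real bundle format may differ.
import hmac, hashlib, json

def verify_bundle(bundle: dict, key: bytes) -> bool:
    payload = json.dumps(bundle["payload"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, bundle["hmac"])

key = bytes.fromhex("cd" * 32)   # stands in for POLICY_HMAC_KEY
payload = {"version": 7, "rules": [{"id": "pii-block", "action": "block"}]}
bundle = {
    "payload": payload,
    "hmac": hmac.new(key, json.dumps(payload, sort_keys=True).encode(),
                     hashlib.sha256).hexdigest(),
}
assert verify_bundle(bundle, key)
bundle["payload"]["rules"][0]["action"] = "allow"   # tamper with a rule...
assert not verify_bundle(bundle, key)               # ...and verification fails
```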

For Kubernetes air-gap deployments, load the image into your private registry:

Terminal window
# Load the image
docker load < outpost-image-<version>.tar.gz
# Tag for your registry
docker tag arbitex/outpost:<version> registry.internal/arbitex/outpost:<version>
# Push to internal registry
docker push registry.internal/arbitex/outpost:<version>

Then deploy with the Helm chart, pointing to your internal registry:

values.yaml
image:
  repository: registry.internal/arbitex/outpost
  tag: "<version>"
  pullPolicy: IfNotPresent
imagePullSecrets:
  - name: registry-credentials

All configuration is via environment variables. The outpost validates required values at startup and exits with a descriptive error if any required variable is missing or empty.

Core identity (required):

Variable | Required | Default | Description
OUTPOST_ID | yes | (none) | UUID from the cert bundle, issued by the Cloud portal
ORG_ID | yes | (none) | Organisation UUID (from Settings → General)
PLATFORM_MANAGEMENT_URL | yes | (none) | Management plane URL (e.g. https://api.arbitex.ai)

mTLS certificates:

Variable | Default | Description
OUTPOST_CERT_PATH | certs/outpost.pem | Client certificate path
OUTPOST_KEY_PATH | certs/outpost.key | Private key path
OUTPOST_CA_PATH | certs/ca.pem | Platform CA certificate path

Security keys:

Variable | Required | Description
AUDIT_HMAC_KEY | yes | HMAC-SHA256 key for audit log chain integrity. Generate: openssl rand -hex 32
POLICY_HMAC_KEY | yes* | Key for policy bundle HMAC verification. *Optional when INSECURE_SKIP_HMAC=true
OUTPOST_API_KEY | recommended | Admin interface authentication key
PROVIDER_KEY_ENCRYPTION_KEY | recommended | Fernet key for encrypting stored provider API keys. Generate: python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"

DLP:

Variable | Default | Description
DLP_ENABLED | true | Enable the full DLP pipeline
DLP_NER_ENABLED | true | Enable Tier 2 NER (Presidio/spaCy)
DLP_NER_MODEL | en_core_web_sm | spaCy model name
DLP_NER_DEVICE | auto | auto, cpu, or cuda
DLP_DEBERTA_ENABLED | false | Enable Tier 3 DeBERTa contextual detection
DEBERTA_MODEL_PATH | (none) | Path to DeBERTa ONNX model file (model.onnx)

Sync and buffering:

Variable | Default | Description
AUDIT_SYNC_INTERVAL_SECONDS | 30 | Interval between audit sync attempts to Platform
MAX_AUDIT_BUFFER_ENTRIES | 100000 | Ring buffer cap — oldest entries pruned when exceeded
POLICY_SYNC_INTERVAL | 60 | Policy re-sync interval in seconds

GeoIP:

Variable | Default | Description
GEOIP_MMDB_PATH | (none) | Path to MaxMind GeoIP2 City MMDB
GEOIP_MMDB_FALLBACK_PATH | /app/geoip/GeoLite2-City.mmdb | Bundled fallback DB path (baked into image)
GEOIP_DOWNLOAD_ON_START | false | Download MaxMind DB at startup if not present
GEOIP_DOWNLOAD_URL | (none) | URL for MaxMind DB download (requires license key in URL)
GEOIP_ANON_DB_PATH | (none) | Path to MaxMind Anonymous IP DB

Budget enforcement:

Variable | Default | Description
BUDGET_ENFORCEMENT_ENABLED | true | Enable local budget cap enforcement

SIEM direct push:

Variable | Default | Description
SIEM_DIRECT_ENABLED | false | Enable direct push to SIEM (Splunk/Sentinel/Elastic)
SIEM_DIRECT_TYPE | splunk_hec | splunk_hec, azure_sentinel, or elastic
SIEM_DIRECT_URL | (none) | SIEM endpoint URL
SIEM_DIRECT_TOKEN | (none) | SIEM authentication token
SIEM_DIRECT_BUFFER_CAPACITY | 10000 | Event buffer capacity before flush
SIEM_DIRECT_DEAD_LETTER_PATH | (none) | Path for dead-letter queue storage

OAuth scope enforcement:

Variable | Default | Description
OAUTH_SCOPE_ENFORCEMENT | false | Enable path-based OAuth scope enforcement (opt-in)
OAUTH_JWT_PUBLIC_KEY | (none) | PEM public key for JWT signature verification
OAUTH_JWKS_URL | (none) | JWKS endpoint URL (alternative to static key)
OAUTH_JWKS_CACHE_TTL | 3600 | JWKS cache TTL in seconds

Prompt hold:

Variable | Default | Description
PROMPT_HOLD_TIMEOUT_SECONDS | 300 | Seconds a PROMPT-action request is held pending admin decision

Request limits:

Variable | Default | Description
MAX_REQUEST_BODY_MB | 10 | Maximum request body size in MB
INSECURE_ALLOW_HTTP | false | Allow HTTP upstream connections — dev only

Verification flags:

Variable | Default | Description
INSECURE_SKIP_HMAC | false | Skip HMAC key requirement at startup
POLICY_BUNDLE_VERIFY | true | Verify policy bundle HMAC signatures

Outpost not connecting to management plane


Symptoms: /readyz returns 503, Cloud portal shows outpost as offline, logs show connection errors.

Check:

  1. Outbound connectivity: curl -v https://api.arbitex.ai/healthz
  2. Certificate validity: openssl verify -CAfile certs/ca.pem certs/outpost.pem
  3. OUTPOST_ID matches the UUID from the portal registration exactly

Audit events not syncing

Symptoms: pending_audit_events in the heartbeat payload is growing, Cloud portal shows audit events delayed.

Check:

  1. Management plane connectivity (same as above)
  2. audit_buffer/ directory is writable by the container
  3. AUDIT_HMAC_KEY is set and non-empty

DLP rules not taking effect

Symptoms: Requests with PII content are not being blocked or redacted as expected.

Check:

  1. DLP_ENABLED=true (default)
  2. Policy bundle is loaded (check /readyz)
  3. The relevant DLP rules are active in the Cloud portal (Admin > DLP Rules)
  4. Test the DLP pipeline directly using the DLP rule testing tools