Air-gap deployment guide

This guide covers deploying Arbitex Outpost on hosts with no internet access. The air-gap workflow has two phases: building the offline package on a machine that has internet access, then installing it on the air-gapped target host.


Package builder (internet-connected machine)

Run scripts/make-airgap.sh on a machine with:

  • Docker Engine 24 or later with Compose V2 (docker compose version)
  • Access to the arbitex-outpost repository
  • (Optional) A MaxMind GeoLite2-City MMDB file at geoip/GeoLite2-City.mmdb — required for GeoIP-based routing and compliance features

The air-gapped target host needs:

  • Docker Engine 24 or later with Compose V2
  • A user account in the docker group, or root
  • systemd (optional — for automatic startup on boot)
  • Sufficient disk space: the package tarball is approximately 5–6 GB depending on configuration
  • Sufficient RAM: 2 GB minimum; 4 GB recommended for Tier 3 (DeBERTa) DLP

Before the Outpost can connect to the management plane, you need three certificate files obtained from the Arbitex Cloud portal when you register the outpost:

| File | Purpose |
| --- | --- |
| outpost.pem | Outpost mTLS client certificate |
| outpost.key | Private key for the client certificate |
| ca.pem | Arbitex CA certificate for server verification |

Register the outpost at cloud.arbitex.ai and download the certificate bundle before starting the installation.


On the internet-connected build machine, from the arbitex-outpost project root:

Terminal window
bash scripts/make-airgap.sh

To build a specific version:

Terminal window
bash scripts/make-airgap.sh 1.2.0

If no version is provided, the script uses the current git tag if one exists on HEAD, or falls back to the current date (YYYYMMDD).
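The fallback logic can be sketched as a small shell function; resolve_version is an illustrative name of ours, and the script's internals may differ:

```shell
# Sketch of the version-selection logic described above.
resolve_version() {
  # Explicit argument wins; otherwise the tag on HEAD; otherwise today's date.
  local v="${1:-}"
  if [ -z "$v" ]; then
    v="$(git describe --tags --exact-match 2>/dev/null || date +%Y%m%d)"
  fi
  printf '%s\n' "$v"
}
```

So `resolve_version 1.2.0` prints `1.2.0`, while `resolve_version` with no argument in an untagged checkout falls back to a YYYYMMDD date.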

The script runs the following steps automatically:

  1. Verifies Docker Engine 24+ and Compose V2 are available
  2. Builds the CPU Docker image: arbitex/outpost:<VERSION>
  3. Builds the GPU Docker image: arbitex/outpost:<VERSION>-gpu (--build-arg INFERENCE_MODE=gpu)
  4. Saves both images to a single compressed tarball: dist/outpost-image-<VERSION>.tar.gz
  5. Stages the following files into a packaging directory:
    • docker-compose.outpost.yml
    • .env.example
    • airgap-install.sh (renamed to install.sh in the package)
    • charts/arbitex-outpost/ (full Helm chart, if present)
    • The image tarball
    • default-policy-bundle.json (bootstrap bundle — an empty placeholder so the outpost can start before reaching the management plane)
    • GeoLite2-City.mmdb (if present at geoip/GeoLite2-City.mmdb)
  6. Generates an image.sha256 checksum file for the image tarball
  7. Archives everything into the final package: dist/arbitex-outpost-airgap-<VERSION>.tar.gz
  8. Writes a SHA-256 checksum file: dist/arbitex-outpost-airgap-<VERSION>.tar.gz.sha256
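Steps 6 and 8 both produce `sha256sum -c`-compatible checksum files. A minimal sketch of that step (the helper name is ours, not from the script):

```shell
# Write <artifact>.sha256 next to an artifact, recording only the basename
# so that `sha256sum -c` works from the artifact's own directory.
write_checksum() {
  local artifact="$1"
  ( cd "$(dirname "$artifact")" && sha256sum "$(basename "$artifact")" ) \
    > "${artifact}.sha256"
}
```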

If geoip/GeoLite2-City.mmdb is present when the script runs, it is included in the package. GeoIP-based routing and compliance features (geographic restriction rules, anonymous IP detection) require this file.

If the MMDB is not included, GeoIP features are disabled on the air-gapped host. GeoIP can be added later by placing the MMDB at /opt/arbitex-outpost/geoip/GeoLite2-City.mmdb and setting MAXMIND_DB_PATH=/app/geoip/GeoLite2-City.mmdb in .env, then restarting the outpost.
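Those later-enablement steps can be scripted. The set_env_var helper below is ours, not part of the installer; it updates or appends a KEY=value line in .env:

```shell
# Update an existing KEY=... line in an env file, or append it if absent.
set_env_var() {
  local file="$1" key="$2" value="$3"
  if grep -q "^${key}=" "$file" 2>/dev/null; then
    sed -i "s|^${key}=.*|${key}=${value}|" "$file"
  else
    printf '%s=%s\n' "$key" "$value" >> "$file"
  fi
}

# Usage on the air-gapped host:
#   cp GeoLite2-City.mmdb /opt/arbitex-outpost/geoip/
#   set_env_var /opt/arbitex-outpost/.env MAXMIND_DB_PATH /app/geoip/GeoLite2-City.mmdb
#   docker compose -f docker-compose.outpost.yml restart
```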

| File | Description |
| --- | --- |
| dist/arbitex-outpost-airgap-<VERSION>.tar.gz | Full air-gap package — transfer this to the target host |
| dist/arbitex-outpost-airgap-<VERSION>.tar.gz.sha256 | SHA-256 checksum — transfer alongside the tarball |

Transfer the tarball and checksum file to the air-gapped target host using any available method (SCP, removable media, internal file transfer):

Terminal window
scp dist/arbitex-outpost-airgap-<VERSION>.tar.gz \
dist/arbitex-outpost-airgap-<VERSION>.tar.gz.sha256 \
user@target-host:/tmp/

On the target host, verify the package integrity before proceeding:

Terminal window
cd /tmp
sha256sum -c arbitex-outpost-airgap-<VERSION>.tar.gz.sha256

Expected output: arbitex-outpost-airgap-<VERSION>.tar.gz: OK

If verification fails, the package is corrupt or was tampered with during transfer. Do not proceed.

Extract and run the installer:

Terminal window
tar -xzf arbitex-outpost-airgap-<VERSION>.tar.gz
cd airgap-<VERSION>/
bash install.sh

The installer (install.sh) runs interactively and prompts for configuration. It proceeds through the steps below.

First, the installer re-verifies the Docker image tarball checksum against the embedded image.sha256 file. If verification fails, installation stops.

Loading Docker image...
[arbitex] Loading image from: outpost-image-<VERSION>.tar.gz

Both the CPU and GPU images are loaded into the local Docker daemon from the tarball (docker load). No network access is required.

This step may take 2–5 minutes depending on disk and CPU speed.

The installer prompts for the following values. All values can be pre-set as environment variables before running install.sh to enable unattended installation.

| Prompt | Environment variable | Required | Notes |
| --- | --- | --- | --- |
| Outpost ID | OUTPOST_ID | Yes | UUID from cloud.arbitex.ai/outposts |
| Platform URL | PLATFORM_MANAGEMENT_URL | No | Defaults to https://api.arbitex.ai |
| Audit HMAC key | AUDIT_HMAC_KEY | Yes | Long random string; generate with openssl rand -hex 32 |
| Emergency admin key | OUTPOST_EMERGENCY_ADMIN_KEY | No | Provides local admin access when the management plane is unreachable; leave blank to disable |
| GPU mode | GPU_MODE | No | Enter y to use the GPU image; defaults to CPU |

The audit HMAC key seeds the tamper-evident chain on the local audit log. It must be retained — audit log verification requires the original key.
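To illustrate what "tamper-evident chain" means here, a toy HMAC chain in shell (our sketch; the outpost's real record format is not documented on this page): each entry's MAC covers the previous entry's MAC, so editing any line breaks verification of every later line.

```shell
# Append an entry whose HMAC covers the previous entry's HMAC.
# Toy format: one "<mac> <entry>" per line; the genesis previous-MAC is all zeros.
chain_append() {
  local key="$1" logfile="$2" entry="$3"
  local prev mac
  prev="$(printf '%064d' 0)"                                # genesis value
  [ -s "$logfile" ] && prev="$(tail -n1 "$logfile" | cut -d' ' -f1)"
  mac="$(printf '%s|%s' "$prev" "$entry" \
    | openssl dgst -sha256 -hmac "$key" | awk '{print $NF}')"
  printf '%s %s\n' "$mac" "$entry" >> "$logfile"
}
```

Verifying the chain means replaying it from the genesis value with the original key, which is why the key must be retained.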

Default install directory: /opt/arbitex-outpost

Override with the INSTALL_DIR environment variable before running the installer.

The installer places the included default-policy-bundle.json at policy_cache/policy_bundle.json. This is an empty bootstrap bundle:

{
  "version": "bootstrap-offline",
  "providers": {},
  "routing_rules": {},
  "dlp_rules": [],
  ...
}

On first boot, the outpost loads this bundle so it can start without network access. Once the outpost reaches the management plane (even briefly), the bootstrap bundle is replaced with the live policy bundle. Until then, the outpost routes requests using the empty bootstrap bundle — which means no DLP rules and no provider routing are active. Add the mTLS certificates before allowing user traffic.
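One way to check from the host whether the outpost has ever synced is a grep heuristic of ours, keyed on the version field shown above:

```shell
# Returns success while the cached bundle is still the bootstrap placeholder.
is_bootstrap_bundle() {
  grep -q '"version": *"bootstrap-offline"' "$1"
}

# Usage: is_bootstrap_bundle /opt/arbitex-outpost/policy_cache/policy_bundle.json \
#   && echo "WARNING: still on bootstrap bundle - no DLP rules active"
```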

If GeoLite2-City.mmdb was included in the package, the installer copies it to ${INSTALL_DIR}/geoip/ and sets MAXMIND_DB_PATH=/app/geoip/GeoLite2-City.mmdb in .env.

If the MMDB is absent, a warning is printed and GeoIP features are disabled.

The installer creates three required directories under the install path:

| Directory | Purpose |
| --- | --- |
| certs/ | mTLS certificate files |
| audit_buffer/ | Local HMAC-chained audit log |
| policy_cache/ | Cached policy bundle |

These directories are created empty. The outpost will not connect to the management plane until you place certificate files in certs/.
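Equivalently, as a sketch (the installer's own logic may differ):

```shell
# Create the three runtime directories under a given install path.
create_runtime_dirs() {
  local base="$1"
  mkdir -p "${base}/certs" "${base}/audit_buffer" "${base}/policy_cache"
}

# Usage: create_runtime_dirs /opt/arbitex-outpost
```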

If systemd is running on the host, the installer offers to install a systemd service for automatic startup on boot. This requires root.

Service details:

| Property | Value |
| --- | --- |
| Service name | arbitex-outpost |
| Service file | /etc/systemd/system/arbitex-outpost.service |
| Working directory | ${INSTALL_DIR} |
| Start command | docker compose -f docker-compose.outpost.yml up |
| Stop command | docker compose -f docker-compose.outpost.yml down |
| Restart policy | Always, with 10-second delay |
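Based on that table, the generated unit file plausibly looks like this. This is a sketch, not the installer's verbatim output:

```ini
# Sketch of /etc/systemd/system/arbitex-outpost.service (assumed contents)
[Unit]
Description=Arbitex Outpost
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/opt/arbitex-outpost
ExecStart=/usr/bin/docker compose -f docker-compose.outpost.yml up
ExecStop=/usr/bin/docker compose -f docker-compose.outpost.yml down
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```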

To skip systemd installation, answer N at the prompt. The installer will start the outpost directly with docker compose up -d.


Before the outpost can reach the management plane, place the certificate files downloaded from the cloud portal:

Terminal window
cp outpost.pem /opt/arbitex-outpost/certs/
cp outpost.key /opt/arbitex-outpost/certs/
cp ca.pem /opt/arbitex-outpost/certs/

The outpost reads these paths from environment variables (defaults shown):

| Variable | Default |
| --- | --- |
| OUTPOST_CERT_PATH | certs/outpost.pem |
| OUTPOST_KEY_PATH | certs/outpost.key |
| OUTPOST_CA_PATH | certs/ca.pem |
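A quick pre-restart check that all three files are in place (check_certs is our helper, not part of the outpost tooling):

```shell
# Fail, naming the first missing file, if any certificate is absent.
check_certs() {
  local dir="$1" f
  for f in outpost.pem outpost.key ca.pem; do
    [ -f "${dir}/${f}" ] || { echo "missing: ${dir}/${f}" >&2; return 1; }
  done
}

# Usage: check_certs /opt/arbitex-outpost/certs && sudo systemctl restart arbitex-outpost
```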

If you installed the systemd service, restart it after placing the certificates:

Terminal window
sudo systemctl restart arbitex-outpost

Otherwise restart with Docker Compose:

Terminal window
cd /opt/arbitex-outpost
docker compose -f docker-compose.outpost.yml restart

Check that the outpost is running and healthy:

Terminal window
# Health probe (liveness)
curl http://localhost:8300/healthz
# Readiness probe (fails until policy bundle is loaded)
curl http://localhost:8300/readyz

Both should return HTTP 200 once the outpost has loaded its policy bundle. If /readyz returns 503, the outpost is running but has not yet loaded a policy bundle — check that the policy cache bootstrap file is present and that the outpost can reach the management plane.
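For scripts (e.g. a post-install smoke test), the readiness poll can be wrapped with a bounded retry; wait_ready is our sketch, not shipped tooling:

```shell
# Poll a URL until it returns HTTP 2xx, up to a given number of attempts.
wait_ready() {
  local url="$1" attempts="${2:-30}"
  local i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -fsS -o /dev/null "$url"; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Usage: wait_ready http://localhost:8300/readyz 60 && echo "outpost ready"
```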

Test the AI proxy endpoint:

Terminal window
curl http://localhost:8300/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model":"gpt-4o-mini","messages":[{"role":"user","content":"ping"}]}'

The package includes both a CPU image and a GPU image. Mode is selected at configuration time during installation. To change mode after installation:

  1. Edit /opt/arbitex-outpost/.env to update the image tag referenced in docker-compose.outpost.yml
  2. Restart the outpost

CPU mode (default): Uses arbitex/outpost:<VERSION>. No GPU required. DeBERTa Tier 3 DLP runs on CPU — enable with DLP_DEBERTA_ENABLED=true and DEBERTA_MODEL_PATH=<path>. CPU DeBERTa inference adds 50–500 ms per request depending on text length.

GPU mode: Uses arbitex/outpost:<VERSION>-gpu. Requires:

  • NVIDIA GPU with CUDA support
  • NVIDIA Container Toolkit installed (formerly packaged as nvidia-docker2)
  • docker-compose.outpost.yml configured to pass the GPU runtime

GPU mode reduces DeBERTa inference to 10–50 ms and enables higher request throughput.
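"Configured to pass the GPU runtime" typically means a Compose device reservation. A sketch follows; the service name `outpost` is an assumption, and the rest of the file is elided:

```yaml
# Sketch of the GPU section of docker-compose.outpost.yml
services:
  outpost:
    image: arbitex/outpost:<VERSION>-gpu
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```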

See Outpost deployment architecture for DLP resource requirements by mode.


Pre-set environment variables to skip all prompts:

Terminal window
export OUTPOST_ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
export PLATFORM_MANAGEMENT_URL="https://api.arbitex.ai"
export AUDIT_HMAC_KEY="$(openssl rand -hex 32)"
export OUTPOST_EMERGENCY_ADMIN_KEY="$(openssl rand -hex 24)"
export GPU_MODE="N"
export INSTALL_DIR="/opt/arbitex-outpost"
bash install.sh