# Kubernetes deployment guide

This guide covers deploying Arbitex to Kubernetes using the official Helm charts: the Platform (API + frontend + DLP microservices) and the Outpost (hybrid data plane).
## Architecture overview

A full Arbitex deployment consists of two independent Helm releases:

```
┌────────────────────────────────────────────────────────┐
│ Kubernetes cluster                                     │
│                                                        │
│ namespace: arbitex                                     │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐    │
│ │   api    │ │ frontend │ │ ner-gpu  │ │ deberta  │    │
│ │  :8000   │ │  :8080   │ │  :8200   │ │  :8201   │    │
│ └──────────┘ └──────────┘ └──────────┘ └──────────┘    │
│      │            │          GPU pool     GPU pool     │
│                                                        │
│      NGINX Ingress (port 443)                          │
│              ↓                                         │
│        api.arbitex.ai                                  │
└────────────────────────────────────────────────────────┘

┌────────────────────────────────────────────────────────┐
│ namespace: arbitex-outpost (customer cluster)          │
│ ┌────────────────────────────────────────────────┐     │
│ │ arbitex-outpost (replicas: 2)                  │     │
│ │ proxy :8300    admin :8301 (pod-local only)    │     │
│ │ policy cache PVC + audit buffer PVC            │     │
│ └────────────────────────────────────────────────┘     │
│              │ mTLS                                    │
│              ↓                                         │
│ api.arbitex.ai (management plane sync)                 │
└────────────────────────────────────────────────────────┘
```

The Platform chart deploys four workloads: API, frontend, NER GPU microservice, and DeBERTa validator. The Outpost chart deploys a single proxy workload with HA (two replicas by default).
## Part 1: Platform deployment

### Prerequisites

- Kubernetes 1.25+
- Helm 3.12+
- `kubectl` configured for your target cluster
- NGINX ingress controller installed (`helm install ingress-nginx ingress-nginx/ingress-nginx`)
- PostgreSQL 14+ database accessible from the cluster
- Redis 7+ accessible from the cluster
- GPU nodes with the NVIDIA device plugin for DLP microservices (see GPU node pools)
- Container registry access to `arbitexacr.azurecr.io` (imagePullSecret or registry credential)
- TLS certificate for `api.arbitex.ai` stored as a Kubernetes secret
### Helm chart location

```
arbitex-platform/deploy/helm/arbitex-platform/
├── Chart.yaml
├── values.yaml           # base values — all environments
├── values-dev.yaml       # development overrides
├── values-staging.yaml   # staging overrides
├── values-prod.yaml      # production overrides
└── templates/
    ├── deployment-api.yaml
    ├── deployment-frontend.yaml
    ├── deployment-ner-gpu.yaml
    ├── deployment-deberta.yaml
    ├── job-alembic-migrate.yaml
    ├── ingress.yaml
    └── ...
```
### Install — development

Development disables GPU services and uses relaxed Pod Security Standards:

```shell
helm upgrade --install arbitex-dev ./deploy/helm/arbitex-platform \
  -f values.yaml \
  -f values-dev.yaml \
  --set api.env.DATABASE_URL="postgresql+asyncpg://arbitex:arbitex@postgres-dev:5432/arbitex_dev" \
  --set api.env.REDIS_URL="redis://redis-dev:6379/0" \
  --set api.env.SECRET_KEY="change-me-for-dev-only" \
  --namespace arbitex-dev \
  --create-namespace
```

Dev overrides:

- `namespace.podSecurityStandard: baseline` (allows debug sidecars)
- `nerGpu.enabled: false` — no GPU node required
- `deberta.enabled: false`
- `ingress.hosts[0].host: api.arbitex.local`
- `ingress.tls: []` (no TLS in dev)
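One way to sanity-check the dev overrides without touching the cluster is to render the manifests locally and confirm no GPU resources remain. This is a sketch: it assumes the chart templates render with only these values set (additional required values may need placeholder `--set` flags):

```shell
# Render the dev manifests and count GPU resource references
helm template arbitex-dev ./deploy/helm/arbitex-platform \
  -f values.yaml \
  -f values-dev.yaml \
  --set api.env.SECRET_KEY="dev" | grep -c "nvidia.com/gpu"
# A count of 0 confirms nerGpu and deberta were skipped
```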
Install — staging
Section titled “Install — staging”helm upgrade --install arbitex-staging ./deploy/helm/arbitex-platform \ -f values.yaml \ -f values-staging.yaml \ --set api.env.DATABASE_URL="postgresql+asyncpg://..." \ --set api.env.REDIS_URL="redis://..." \ --set api.env.SECRET_KEY="..." \ --namespace arbitex-staging \ --create-namespaceStaging overrides:
- GPU services enabled (
nerGpu.enabled: true,deberta.enabled: true) ingress.hosts[0].host: api-staging.arbitex.ai- TLS from secret
arbitex-staging-tls
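The `arbitex-staging-tls` secret must exist before the ingress can terminate TLS. A minimal sketch, assuming the issued certificate and key live in local files (the file paths are placeholders):

```shell
# Certificate/key paths are hypothetical — use your issued cert for api-staging.arbitex.ai
kubectl create secret tls arbitex-staging-tls \
  --cert=./certs/api-staging.arbitex.ai.crt \
  --key=./certs/api-staging.arbitex.ai.key \
  --namespace arbitex-staging
```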
### Install — production

Production uses sealed secrets or External Secrets Operator for credentials — do not pass them via `--set` in production pipelines:

```shell
helm upgrade --install arbitex ./deploy/helm/arbitex-platform \
  -f values.yaml \
  -f values-prod.yaml \
  --namespace arbitex \
  --create-namespace
```

After a CI build, pin the image digests to the promoted artifacts:

```shell
helm upgrade arbitex ./deploy/helm/arbitex-platform \
  -f values.yaml \
  -f values-prod.yaml \
  --set api.image.digest="sha256:<api-digest>" \
  --set frontend.image.digest="sha256:<frontend-digest>" \
  --set nerGpu.image.digest="sha256:<ner-digest>" \
  --set deberta.image.digest="sha256:<deberta-digest>" \
  --namespace arbitex
```
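How a CI pipeline obtains each digest depends on your tooling, but the string handling is the same: a pushed-image reference carries the digest after the `@`. A small sketch (the image reference below is a placeholder value):

```shell
# Extract the digest portion from a full image reference
FULL_REF="arbitexacr.azurecr.io/platform-api@sha256:0123abcd"  # placeholder
API_DIGEST="${FULL_REF#*@}"   # strip everything up to and including "@"
echo "$API_DIGEST"            # -> sha256:0123abcd
```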
### Configuration reference

The base `values.yaml` documents all supported keys. Key sections:

#### API service (`api.*`)

| Key | Default | Description |
|---|---|---|
| `api.replicas` | `1` | Pod replica count |
| `api.image.repository` | `arbitexacr.azurecr.io/platform-api` | Container image |
| `api.image.tag` | `latest` | Image tag |
| `api.image.digest` | `""` | SHA256 digest — overrides tag when set |
| `api.resources.requests.cpu` | `250m` | CPU request |
| `api.resources.requests.memory` | `512Mi` | Memory request |
| `api.resources.limits.cpu` | `1000m` | CPU limit |
| `api.resources.limits.memory` | `1Gi` | Memory limit |
| `api.env.DATABASE_URL` | `""` | Async PostgreSQL connection string (required) |
| `api.env.REDIS_URL` | `""` | Redis connection string (required) |
| `api.env.SECRET_KEY` | `""` | JWT signing secret (required) |
| `api.env.ENVIRONMENT` | `production` | Environment label |
| `api.env.NER_GPU_URL` | auto | NER GPU service URL (auto-resolved from chart name) |
| `api.env.DEBERTA_URL` | auto | DeBERTa service URL (auto-resolved from chart name) |
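The digest-overrides-tag behavior implies the image reference is assembled roughly like this in the deployment template. This is a sketch of the likely logic, not the chart's actual template:

```yaml
# Hypothetical excerpt from templates/deployment-api.yaml
image: "{{ .Values.api.image.repository }}{{ if .Values.api.image.digest }}@{{ .Values.api.image.digest }}{{ else }}:{{ .Values.api.image.tag }}{{ end }}"
```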
#### NER GPU microservice (`nerGpu.*`)

| Key | Default | Description |
|---|---|---|
| `nerGpu.enabled` | `true` | Enable NER GPU DLP microservice |
| `nerGpu.replicas` | `1` | Replica count |
| `nerGpu.resources.requests.nvidia.com/gpu` | `1` | GPU request |
| `nerGpu.nodeSelector.accelerator` | `nvidia` | Node selector for GPU pool |
| `nerGpu.tolerations` | `nvidia.com/gpu:NoSchedule` | Toleration for GPU taint |
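Put together, the GPU scheduling keys above correspond to a values stanza along these lines. The key layout follows the table; the expanded toleration form is an assumption:

```yaml
nerGpu:
  enabled: true
  replicas: 1
  resources:
    requests:
      nvidia.com/gpu: 1
    limits:
      nvidia.com/gpu: 1   # Kubernetes requires limits = requests for extended resources
  nodeSelector:
    accelerator: nvidia
  tolerations:
    - key: nvidia.com/gpu   # assumed expansion of "nvidia.com/gpu:NoSchedule"
      operator: Exists
      effect: NoSchedule
```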
#### DeBERTa validator (`deberta.*`)

Same structure as `nerGpu.*`. Runs the DeBERTa v3 Tier 3 DLP model on GPU.
#### Ingress (`ingress.*`)

| Key | Default | Description |
|---|---|---|
| `ingress.className` | `nginx` | Ingress class name |
| `ingress.hosts` | `api.arbitex.ai` | Hostname and path routing |
| `ingress.tls[0].secretName` | `arbitex-tls` | Kubernetes TLS secret name |
| `ingress.annotations` | ssl-redirect, proxy-body-size 50m | NGINX annotations |
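The annotation defaults presumably map to the standard NGINX ingress controller keys, roughly as below (the exact keys the chart emits are an assumption):

```yaml
ingress:
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
```

The 50m body-size limit matters for DLP workloads: NGINX's default of 1m would reject larger payloads before they reach the API.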
#### Alembic migration (`alembic.*`)

| Key | Default | Description |
|---|---|---|
| `alembic.runAsInitContainer` | `true` | Run DB migration as an init container before the API starts |
| `alembic.ttlSecondsAfterFinished` | `300` | Job TTL after completion |
### GPU node pools

The NER GPU and DeBERTa services require nodes with NVIDIA GPUs. Configure your cluster:

#### AKS (Azure Kubernetes Service)

```shell
# Create a GPU node pool
az aks nodepool add \
  --resource-group myRG \
  --cluster-name myAKS \
  --name gpupool \
  --vm-size Standard_NC6s_v3 \
  --node-count 1 \
  --node-taints nvidia.com/gpu=present:NoSchedule \
  --labels accelerator=nvidia
```

The chart’s `nodeSelector: { accelerator: nvidia }` and tolerations target this pool automatically.
#### Self-managed clusters

1. Label GPU nodes:

   ```shell
   kubectl label node <gpu-node> accelerator=nvidia
   ```

2. Taint GPU nodes (optional but recommended, to keep non-GPU workloads off expensive nodes):

   ```shell
   kubectl taint node <gpu-node> nvidia.com/gpu=present:NoSchedule
   ```

3. Install the NVIDIA device plugin:

   ```shell
   kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.14.5/nvidia-device-plugin.yml
   ```
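Once the device plugin is running, the node should advertise an allocatable GPU. A quick way to confirm (the node name is a placeholder):

```shell
# Prints the allocatable GPU count once the device plugin has registered; expect "1" or more
kubectl get node <gpu-node> -o jsonpath='{.status.allocatable.nvidia\.com/gpu}'
```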
### Disabling GPU services

If you don’t have GPU nodes, disable both services:

```yaml
nerGpu:
  enabled: false
deberta:
  enabled: false
```

The API falls back to CPU-based DLP processing. DeBERTa Tier 3 is unavailable; DLP runs regex + NER (CPU) only.
### Required secrets

Never put credentials directly in values files committed to source control. Use one of:

**Kubernetes Secrets (minimum viable):**

```shell
kubectl create secret generic arbitex-secrets \
  --from-literal=DATABASE_URL="postgresql+asyncpg://..." \
  --from-literal=REDIS_URL="redis://..." \
  --from-literal=SECRET_KEY="$(openssl rand -hex 32)" \
  --namespace arbitex
```

Reference in values:

```yaml
api:
  envFrom:
    - secretRef:
        name: arbitex-secrets
```

**Azure Key Vault (production — CSI driver):**

```yaml
api:
  env:
    SECRETS_BACKEND: vault
    VAULT_URL: "https://my-keyvault.vault.azure.net/"
```

See the Security hardening guide for the full Key Vault CSI driver setup.
### Verify the deployment

```shell
kubectl get pods -n arbitex
# Expected: api, frontend, ner-gpu, deberta all Running

kubectl get ingress -n arbitex
# Expected: ADDRESS shows the external IP

curl https://api.arbitex.ai/healthz
# Expected: 200 OK
```

Check migration ran:

```shell
kubectl logs -n arbitex -l job-name=arbitex-alembic-migrate --tail=20
# Expected: "INFO [alembic.runtime.migration] Running upgrade ..."
```
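For CI gates, the health check above is more useful wrapped in a retry loop, since pods can take a while to pass readiness after an upgrade. A minimal sketch (the retry count and interval are illustrative):

```shell
#!/bin/sh
# Poll /healthz until it returns 200 or the budget is exhausted (30 x 5 s = 150 s)
for i in $(seq 1 30); do
  code=$(curl -s -o /dev/null -w '%{http_code}' https://api.arbitex.ai/healthz)
  if [ "$code" = "200" ]; then
    echo "healthy"
    exit 0
  fi
  sleep 5
done
echo "timed out waiting for /healthz" >&2
exit 1
```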
## Part 2: Outpost deployment

### Prerequisites

- Kubernetes 1.25+ (separate cluster from Platform, or separate namespace)
- Helm 3.12+
- Outpost registered in the Arbitex admin console — you need the outpost UUID
- mTLS certificates issued by Arbitex (`outpost.pem`, `outpost.key`, `ca.pem`)
- GPU node (optional — for DeBERTa Tier 3 DLP)
### Helm chart location

```
arbitex-outpost/charts/arbitex-outpost/
├── Chart.yaml
├── values.yaml           # all configuration keys with comments
├── values.schema.json    # JSON schema validation
└── templates/
    ├── deployment.yaml
    ├── service.yaml
    ├── configmap.yaml
    ├── secret.yaml
    ├── pvc.yaml
    ├── hpa.yaml
    └── poddisruptionbudget.yaml
```
### Step 1: Create the mTLS certificate secret

The outpost requires three certificate files issued by Arbitex:

```shell
kubectl create secret generic outpost-certs \
  --from-file=outpost.pem=./certs/outpost.pem \
  --from-file=outpost.key=./certs/outpost.key \
  --from-file=ca.pem=./certs/ca.pem \
  --namespace arbitex-outpost
```
### Step 2: Create the secrets secret

```shell
kubectl create secret generic outpost-secrets \
  --from-literal=OUTPOST_AUDIT_HMAC_KEY="$(openssl rand -base64 32)" \
  --from-literal=OUTPOST_ADMIN_API_KEY="$(openssl rand -hex 32)" \
  --namespace arbitex-outpost
```
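As a sanity check, `openssl rand -base64 32` emits 32 random bytes encoded as exactly 44 Base64 characters, which satisfies the chart's 32-byte minimum for `outpost.auditHmacKey`:

```shell
# 32 random bytes -> ceil(32/3)*4 = 44 Base64 characters
KEY=$(openssl rand -base64 32)
echo "${#KEY}"   # -> 44
```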
### Step 3: Install the chart

Minimal install (no GPU):

```shell
helm upgrade --install arbitex-outpost ./charts/arbitex-outpost \
  --set outpost.id="your-outpost-uuid" \
  --set outpost.platformManagementUrl="https://api.arbitex.ai" \
  --set outpost.orgId="your-org-uuid" \
  --set certs.existingSecret="outpost-certs" \
  --set outpost.auditHmacKey="$(kubectl get secret outpost-secrets -n arbitex-outpost -o jsonpath='{.data.OUTPOST_AUDIT_HMAC_KEY}' | base64 -d)" \
  --set outpost.adminApiKey="$(kubectl get secret outpost-secrets -n arbitex-outpost -o jsonpath='{.data.OUTPOST_ADMIN_API_KEY}' | base64 -d)" \
  --namespace arbitex-outpost \
  --create-namespace
```

GPU-enabled install (DeBERTa Tier 3):

```shell
helm upgrade --install arbitex-outpost ./charts/arbitex-outpost \
  --set outpost.id="your-outpost-uuid" \
  --set outpost.platformManagementUrl="https://api.arbitex.ai" \
  --set outpost.orgId="your-org-uuid" \
  --set certs.existingSecret="outpost-certs" \
  --set outpost.auditHmacKey="..." \
  --set outpost.adminApiKey="..." \
  --set outpost.gpuEnabled=true \
  --set outpost.dlpDebertaEnabled=true \
  --set outpost.debertaModelPath="/app/models/deberta/model.onnx" \
  --namespace arbitex-outpost \
  --create-namespace
```
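After the install, both replicas and the chart's persistent volume claims should come up. A quick post-install check (the exact pod and PVC names depend on the release name):

```shell
kubectl get pods,pvc -n arbitex-outpost
# Expect two Running pods (default replicaCount: 2) and Bound PVCs
# for the policy cache and the audit buffer
```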
### Outpost values reference

#### Identity and platform connection

| Key | Required | Default | Description |
|---|---|---|---|
| `outpost.id` | Yes | `""` | Outpost UUID from cloud.arbitex.ai |
| `outpost.platformManagementUrl` | Yes | `""` | Platform management plane URL |
| `outpost.orgId` | Yes | `""` | Organization UUID |
| `outpost.auditHmacKey` | Yes | `""` | Base64-encoded HMAC key (min 32 bytes) |
| `outpost.adminApiKey` | Yes | `""` | Emergency admin API key |
| `certs.existingSecret` | Yes | `""` | K8s secret with `outpost.pem`, `outpost.key`, `ca.pem` |
#### Storage

| Key | Default | Description |
|---|---|---|
| `outpost.policyCacheSize` | `1Gi` | PVC size for policy cache snapshots |
| `outpost.auditBufferSize` | `5Gi` | PVC size for offline audit buffer (~1 KB/event, 5Gi ≈ 5M events) |
| `outpost.maxAuditBufferEntries` | `100000` | Ring buffer limit before oldest events are pruned |
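Note that with the defaults, the entry limit binds long before the PVC fills: 100,000 events at roughly 1 KB each need well under 1 Gi. A quick back-of-envelope check:

```shell
EVENT_BYTES=1024      # ~1 KB per audit event (from the table above)
MAX_ENTRIES=100000    # default outpost.maxAuditBufferEntries
NEEDED_MIB=$(( EVENT_BYTES * MAX_ENTRIES / 1024 / 1024 ))
echo "${NEEDED_MIB} MiB"   # -> 97 MiB, far below the 5Gi default
```

If you expect long offline periods, it is presumably `maxAuditBufferEntries` you need to raise to make use of the 5Gi buffer, not the PVC size.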
#### Sync intervals

| Key | Default | Description |
|---|---|---|
| `outpost.auditSyncIntervalSeconds` | `30` | How often audit events push to the Platform |
| `outpost.policySyncIntervalSeconds` | `60` | Policy cache refresh interval |
#### DLP configuration

| Key | Default | Description |
|---|---|---|
| `outpost.dlpEnabled` | `true` | Enable DLP pipeline |
| `outpost.dlpNerDevice` | `auto` | NER inference device: `auto`, `cpu`, or `cuda` |
| `outpost.dlpDebertaEnabled` | `false` | Enable DeBERTa Tier 3 (requires GPU + model path) |
| `outpost.debertaModelPath` | `""` | Path to ONNX model inside the container |
| `outpost.gpuEnabled` | `false` | Switch resource profile to GPU (`nvidia.com/gpu: 1`) |
#### High availability

| Key | Default | Description |
|---|---|---|
| `replicaCount` | `2` | Pod replica count (min 2 for PDB) |
| `autoscaling.enabled` | `false` | Enable HPA |
| `autoscaling.minReplicas` | `2` | HPA min replicas |
| `autoscaling.maxReplicas` | `10` | HPA max replicas |
| `autoscaling.targetCPUUtilizationPercentage` | `70` | HPA CPU target |
| `podDisruptionBudget.minAvailable` | `1` | PDB minimum available during rollouts |
### Example values files

#### outpost-values-dev.yaml

```yaml
replicaCount: 1

outpost:
  id: "dev-outpost-uuid"
  platformManagementUrl: "https://api-staging.arbitex.ai"
  orgId: "dev-org-uuid"
  auditHmacKey: ""   # set via --set-string
  adminApiKey: ""    # set via --set-string
  logLevel: "debug"
  policyCacheSize: "256Mi"
  auditBufferSize: "512Mi"
  dlpEnabled: true
  dlpNerDevice: "cpu"
  dlpDebertaEnabled: false
  gpuEnabled: false

certs:
  existingSecret: "outpost-certs-dev"

autoscaling:
  enabled: false

podDisruptionBudget:
  minAvailable: 0   # single-node dev — PDB would block rollouts
```

#### outpost-values-prod.yaml

```yaml
replicaCount: 2

outpost:
  id: ""    # set via CI --set
  platformManagementUrl: "https://api.arbitex.ai"
  orgId: "" # set via CI --set
  auditHmacKey: ""  # set via external-secrets
  adminApiKey: ""   # set via external-secrets
  logLevel: "info"
  policyCacheSize: "1Gi"
  auditBufferSize: "5Gi"
  maxAuditBufferEntries: 100000
  auditSyncIntervalSeconds: 30
  policySyncIntervalSeconds: 60
  dlpEnabled: true
  dlpNerDevice: "auto"
  dlpDebertaEnabled: true
  debertaModelPath: "/app/models/deberta/model.onnx"
  gpuEnabled: true

certs:
  existingSecret: "outpost-certs"

resources:
  requests:
    memory: "512Mi"
    cpu: "250m"
  limits:
    memory: "2Gi"
    cpu: "2"

gpuResources:
  requests:
    memory: "2Gi"
    cpu: "500m"
    nvidia.com/gpu: "1"
  limits:
    memory: "8Gi"
    cpu: "4"
    nvidia.com/gpu: "1"

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70

podDisruptionBudget:
  minAvailable: 1
```
### Accessing the outpost admin API

The admin interface runs on port 8301 and is intentionally not exposed via a Kubernetes Service. Access it via port-forward:

```shell
kubectl port-forward \
  -n arbitex-outpost \
  deployment/arbitex-outpost \
  8301:8301
```

Then access the admin API locally:

```shell
curl http://localhost:8301/admin/api/health
```
## Production readiness checklist

Before going live:

**Platform:**

- Image digests pinned in `values-prod.yaml` (no `latest` tags in production)
- `DATABASE_URL`, `REDIS_URL`, `SECRET_KEY` stored in Key Vault or Sealed Secrets — not in ConfigMaps
- TLS certificate in `arbitex-tls` secret, ingress TLS configured
- `alembic.runAsInitContainer: true` (database migration runs before the API starts)
- `podSecurityStandard: restricted` in the production namespace
- GPU nodes available and labeled/tainted for NER GPU + DeBERTa pods
- `OTEL_EXPORTER_OTLP_ENDPOINT` set for distributed tracing (see Distributed tracing guide)
**Outpost:**

- mTLS certificate secret created before chart install
- `outpost.id` and `outpost.orgId` set to correct production values
- `auditHmacKey` and `adminApiKey` stored in an external secret manager
- `replicaCount: 2` with PDB `minAvailable: 1`
- Persistent volumes provisioned for policy cache and audit buffer
- Outpost reachable from client workloads on port 8300
## See also

- Outpost deployment guide — Docker Compose deployment + air-gap setup
- Distributed tracing guide — OTLP configuration for Platform + Outpost
- Security hardening guide — Key Vault, mTLS, CSRF Redis config
- Outpost software updates — Helm upgrade procedure + rollback