
Security Overview

Arbitex is designed for organizations that process sensitive data through AI systems. This page describes the security controls built into the platform. Arbitex Gateway and the Arbitex Cloud control plane are engineered against industry security frameworks; see Compliance Frameworks for how these controls map to specific regulatory requirements.


All connections to and from Arbitex use TLS. There are no plaintext fallbacks in production.

| Segment | Protection |
| --- | --- |
| Customer → Gateway (api.arbitex.ai) | TLS 1.2+ terminated at the Cloudflare edge |
| Cloudflare edge → AKS origin | Cloudflare Authenticated Origin Pulls (mTLS): cryptographic proof that traffic originates from Cloudflare, not from arbitrary internet sources |
| Outpost → Cloud control plane | Mutual TLS on /outpost/* paths; the client certificate is verified against the full chain, not just for issuer presence |
| Internal service-to-service | TLS enforced on all private endpoints (PostgreSQL, Redis, Key Vault) |

The Cloudflare origin TLS certificate is issued by the Cloudflare Origin CA (1-year term, auto-renewed by cert-manager). The internal CA issues certificates for service-to-service and Outpost mutual TLS.
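The full-chain client verification used for Outpost mutual TLS can be sketched with Python's standard ssl module. This is illustrative, not Arbitex's implementation; certificate and CA file paths are supplied by the deployment.

```python
import ssl

def build_mtls_server_context(cert_file=None, key_file=None, client_ca_file=None):
    """Server-side context that demands a client certificate chaining to a
    trusted CA. Presence of a certificate alone is not enough: the handshake
    fails unless the full chain validates against client_ca_file."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # matches the TLS 1.2+ floor
    ctx.verify_mode = ssl.CERT_REQUIRED           # reject clients without a valid cert
    if cert_file:                                 # the server's own certificate and key
        ctx.load_cert_chain(cert_file, key_file)
    if client_ca_file:                            # CA bundle used to verify clients
        ctx.load_verify_locations(cafile=client_ca_file)
    return ctx
```

With `CERT_REQUIRED`, OpenSSL validates the presented chain up to the configured CA during the handshake, which is the distinction the table draws between "verified against full chain" and mere issuer presence.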

All data stored by Arbitex is encrypted at rest using Azure-managed encryption keys:

| Store | Encryption |
| --- | --- |
| PostgreSQL (platform and cloud databases) | Azure Flexible Server; storage encryption enabled by default |
| Redis | TLS enforced; no persistence to unencrypted disk |
| Azure Files (GeoIP/CredInt databases, object storage) | Azure Storage Service Encryption (AES-256) |
| Azure Key Vault | FIPS 140-2 Level 3 HSM-backed key storage |
| Audit logs | Log Analytics with Azure-managed encryption; 90-day active tier plus 2-year archive tier |

Customer-managed encryption keys (BYOK) are scoped under Epic H and are not yet available.


Cloudflare Pro provides the outer perimeter:

  • WAF — OWASP ruleset with custom rules for AI proxy traffic patterns
  • DDoS protection — Unmetered at the network layer
  • Bot management — Automated traffic filtering at the edge
  • Rate limiting — Per-path limits enforced at Cloudflare before traffic reaches origin

Inside the Azure Kubernetes Service cluster, Calico network policies enforce default-deny between all pods. Each service pair that needs to communicate has an explicit allowlist policy. No lateral movement is possible from a compromised pod to an unrelated service.
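A default-deny posture with explicit pairwise allowlists is typically expressed as Kubernetes NetworkPolicy objects (which Calico enforces). The sketch below is illustrative; the namespace and label names are hypothetical, not Arbitex's actual manifests.

```yaml
# Default-deny for a namespace: selects every pod, permits no traffic.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: gateway            # hypothetical namespace
spec:
  podSelector: {}               # matches all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
---
# Explicit allowlist for one service pair: gateway pods may reach policy-engine pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-gateway-to-policy-engine
  namespace: policy-engine      # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: policy-engine
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: gateway
          podSelector:
            matchLabels:
              app: gateway
```

Because the combined namespaceSelector and podSelector in a single `from` entry are ANDed, only gateway-labeled pods in the gateway namespace match the allowlist.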

Pod Security Standards (restricted profile) are enforced on all namespaces. The GPU inference namespace uses the baseline profile (required for CUDA workloads).

All data services (PostgreSQL, Redis, Key Vault) are accessible only via Azure Private Endpoints. There are no public-facing database or cache endpoints. Traffic between the AKS cluster and data services never traverses the public internet.

Internal Arbitex staff tooling (int.arbitex.ai) is isolated from the customer-facing control plane at the DNS level. The int.arbitex.ai subdomain is NS-delegated to a private RFC 1918 nameserver — the entire subtree is unreachable from the public internet. External DNS resolution returns SERVFAIL. Access requires a VPN or Cloudflare Zero Trust connector.


Arbitex uses a three-tier certificate authority hierarchy:

Root CA (YubiHSM, offline hardware security module; the root key never leaves the hardware and signs the intermediate CA only)
└── Intermediate CA (Azure Key Vault, separate HSM; online, issues leaf certificates via cert-manager with the step-ca issuer)
    ├── Service leaf certificates (internal mTLS)
    ├── Outpost client certificates (issued at registration)
    └── Cloudflare Origin CA certificate (edge ↔ origin mTLS)

The offline root ensures that an attacker who compromises the online intermediate CA cannot forge a new trust root or mint a replacement intermediate. The YubiHSM root is taken offline again as soon as the intermediate CA is signed.

Certificates are managed by cert-manager with automatic renewal. Expiry monitoring alerts at 14 days remaining. The cert-manager renewal failure alert triggers if a CertificateRequest remains in Failed state for more than 1 hour.
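The 14-day expiry threshold reduces to a small check over a certificate's notAfter time. The threshold comes from this page; the wiring into the monitoring stack is hypothetical. The timestamp format is the OpenSSL text form that Python's ssl module returns in the notAfter field of getpeercert().

```python
import ssl
import time

ALERT_THRESHOLD_DAYS = 14  # documented alerting threshold

def days_until_expiry(not_after, now=None):
    """not_after is e.g. 'Jan  5 09:34:43 2030 GMT', as in getpeercert()."""
    expires = ssl.cert_time_to_seconds(not_after)
    current = time.time() if now is None else now
    return (expires - current) / 86400

def should_alert(not_after, now=None):
    """True once the certificate is within the 14-day alert window."""
    return days_until_expiry(not_after, now) <= ALERT_THRESHOLD_DAYS
```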


API keys issued by Arbitex follow the format arb_live_* (production) or arb_test_* (test environment). Keys are scoped to an organization and carry the permissions of the key holder’s role.
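A validator for the documented prefixes might look like the following sketch. The prefixes are from this page; the length and character set of the random portion are assumptions, not documented.

```python
import re

# arb_live_* (production) or arb_test_* (test environment); the random
# suffix shape here (16+ alphanumerics) is an assumption for illustration.
KEY_PATTERN = re.compile(r"^arb_(live|test)_[A-Za-z0-9]{16,}$")

def key_environment(api_key):
    """Return 'live' or 'test' for a well-formed key, else None."""
    m = KEY_PATTERN.match(api_key)
    return m.group(1) if m else None
```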

| Role | Capabilities |
| --- | --- |
| admin | Full organization administration: manage policy chains, compliance bundles, users, groups, SSO configuration, and billing |
| user | Submit requests to the Gateway and view own usage; no configuration access |

Roles are enforced at the API middleware layer. Admin endpoints require the require_admin dependency.
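The idea behind a dependency-style admin guard can be sketched in a few lines. This is a minimal illustration of the pattern, not Arbitex's actual require_admin implementation, and the User type and exception choice here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class User:
    id: str
    org_id: str
    role: str  # "admin" or "user"

def require_admin(user):
    """Guard evaluated before an admin endpoint handler runs: passes the
    user through when authorized, raises otherwise."""
    if user.role != "admin":
        raise PermissionError("admin role required")
    return user
```

Centralizing the check in one dependency means every admin endpoint enforces the same rule, rather than each handler re-implementing it.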

Arbitex supports two SSO mechanisms for enterprise organizations:

MFA policy is configurable per organization and per plan tier. Staff configure MFA requirements through the admin plane; enforcement runs in the platform authentication middleware. Supported MFA types: FIDO2/WebAuthn hardware keys, authenticator app (TOTP), SMS. Enterprise plans can require FIDO2 hardware keys.

Arbitex staff portal access requires FIDO2 hardware keys with no fallback — the highest-privilege surface has the strongest authentication requirement.

The Policy Engine provides group-based access controls over AI model access and behavior:

  • Model access restrictions per group (allow-list or deny-list at the provider/model level)
  • Policy rules that vary enforcement action by group membership
  • PROMPT governance requiring approval workflows for specific intents

See Policy Engine overview for configuration details.


Every action in Arbitex produces an audit log entry. The audit trail is designed for tamper-evidence and long-term retention.

Audit log entries are chained using HMAC signatures. Each entry includes an hmac_key_id field that identifies the signing key version. This allows detection of any modification, deletion, or reordering of audit entries after the fact.
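The chaining scheme can be sketched as follows: each entry's signature covers both the entry and the previous signature, so changing, deleting, or reordering any entry invalidates every signature from that point on. Field names, the canonicalization, and the key-lookup by hmac_key_id are illustrative assumptions.

```python
import hashlib
import hmac
import json

GENESIS = "0" * 64  # fixed starting value for the first entry in the chain

def sign_entry(entry, prev_sig, key):
    """Signature covers the canonicalized entry plus the previous signature."""
    payload = json.dumps(entry, sort_keys=True).encode() + prev_sig.encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def build_chain(entries, key):
    sig, out = GENESIS, []
    for entry in entries:
        sig = sign_entry(entry, sig, key)
        out.append({**entry, "sig": sig})
    return out

def verify_chain(chain, key):
    """Recompute every link; any edit, drop, or reorder breaks the chain."""
    sig = GENESIS
    for record in chain:
        entry = {k: v for k, v in record.items() if k != "sig"}
        sig = sign_entry(entry, sig, key)
        if not hmac.compare_digest(sig, record["sig"]):
            return False
    return True
```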

| Tier | Duration | Location |
| --- | --- | --- |
| Active (hot) | 90 days | Azure Log Analytics |
| Archive | 2 years | Azure Log Analytics archive tier |

The 90-day tamper-evident buffer is the primary window for compliance investigations. For long-term retention, the customer's SIEM is the system of record; Arbitex exports audit events in OCSF format. See SIEM integration.

Every audit log entry includes:

  • Timestamp (ISO 8601)
  • Request ID for correlation
  • Organization ID
  • User ID (if authenticated)
  • Action type and target resource
  • Enforcement action (if a policy rule matched)
  • Policy rule ID and pack ID that matched
  • HMAC chain signature

For Compliance Bundle matches, entries additionally include the framework reference (e.g., PCI-DSS Requirement 3.4) and the matched content category (not the content itself).


Arbitex is designed against the following governance, risk, and compliance frameworks:

| Framework | Version | Coverage |
| --- | --- | --- |
| NIST Cybersecurity Framework | 2.0 | Govern, Identify, Protect, Detect, Respond, Recover |
| NIST Privacy Framework | 1.0 | Identify-P, Govern-P, Control-P, Communicate-P, Protect-P |
| NIST AI Risk Management Framework | 1.0 | Govern, Map, Measure, Manage (applied to AI system risk) |

Related ISO frameworks:

| Framework | Relationship |
| --- | --- |
| ISO 27001/27002 | Control crosswalk for EU customers and SOC 2 readiness |
| ISO 27701 | Privacy extension to ISO 27001 |

The NIST AI RMF is applied to the Arbitex AI proxy as an AI system. Key controls:

  • Govern — Policy Engine for organizational AI use policies
  • Map — Compliance Bundles map regulatory requirements to detection/enforcement
  • Measure — Audit trail measures AI system behavior; SIEM integration for monitoring
  • Manage — Kill switch, automatic health monitoring, PROMPT governance for response

Arbitex uses Azure Key Vault for all secrets in production. Applications refuse to start if Key Vault is unreachable or if insecure defaults are detected (SECRETS_BACKEND=env is blocked in production environments). There are no hard-coded credentials in production images.
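The fail-closed startup check can be sketched as below. SECRETS_BACKEND=env is the documented blocked value; the ENVIRONMENT variable name and the surrounding bootstrap are assumptions for illustration.

```python
def check_secrets_backend(env):
    """Abort startup when an insecure secrets backend is configured in
    production. `env` is a mapping like os.environ.
    ENVIRONMENT is a hypothetical variable name; SECRETS_BACKEND=env is
    the documented insecure default."""
    if env.get("ENVIRONMENT") == "production" and env.get("SECRETS_BACKEND") == "env":
        raise RuntimeError(
            "SECRETS_BACKEND=env is blocked in production; use Key Vault"
        )
```

Refusing to start, rather than logging a warning, guarantees that a misconfigured production deployment never serves traffic with environment-variable secrets.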

Container images used in production:

  • Base images pinned by SHA256 digest — no floating tags
  • curl is not present in production images (Python health checks used instead)
  • All images scanned with Trivy before deployment (critical/high findings block deployment)
  • Build and deploy pipelines require manual trigger — no automatic deployment on push
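A curl-free health check like the one mentioned above can be a small stdlib script. The /health path and port are hypothetical.

```python
import urllib.request
import urllib.error

def healthy(url, timeout=2.0):
    """Return True if the endpoint answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# As a container HEALTHCHECK, exit with the probe result, e.g.:
#   python -c "import probe; raise SystemExit(0 if probe.healthy('http://127.0.0.1:8080/health') else 1)"
```

Keeping the probe in Python removes curl (and its CVE surface) from the image without losing liveness checks.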

GitHub Actions uses OIDC federation to authenticate to Azure. There are no stored Azure credentials in GitHub. The trust relationship is defined on the Azure side and scoped to specific repositories and workflow paths.