Policy Engine — admin guide
This guide walks you through managing policy enforcement using the admin UI. It covers creating a custom policy pack, writing rules, configuring your org policy chain, and testing with the PolicySimulator before rules go live.
For background on how the Policy Engine works — evaluation order, combining algorithms, action semantics — read Policy Engine overview first.
Prerequisites
- Admin role in your Arbitex organization
- Your org has at least one group synced via SCIM if you plan to write group-targeted rules (see Entra ID SCIM provisioning)
Step 1 — Create a policy pack
Navigate to Admin → Policy Engine → Policy Packs. The PolicyPackList screen shows all existing packs in your organization — both custom packs you own and any compliance bundle packs you have activated.
Screenshot: PolicyPackList — showing existing packs with status badges, rule counts, and an “Add pack” button in the top-right corner.
Click Add pack. In the creation dialog:
| Field | Value |
|---|---|
| Name | A short descriptive name, for example Trading Desk Rules |
| Description | Optional. Describes the pack’s purpose. Shown in the chain editor. |
| Type | Custom — all packs you create directly are custom packs |
Click Create. The new pack appears in the list with zero rules and Inactive status. A pack is not active until you add it to your org policy chain (Step 3).
Step 2 — Add rules
Click the pack name to open the PolicyRuleEditor. The editor shows all rules in the pack, ordered by sequence number.
Screenshot: PolicyRuleEditor — showing the rule list on the left with sequence numbers, and the rule detail panel on the right with conditions and action fields.
Click Add rule. The rule form has three sections:
Conditions
Set the criteria that must match for the rule to fire. Non-empty conditions are combined with AND logic — every condition you fill in must match. Leave a condition empty to skip it.
| Condition | Notes |
|---|---|
| User groups | Type-ahead from your synced SCIM groups. OR logic — user must be in at least one listed group. |
| Entity types | Select from the detected entity type list (credit card, SSN, passport, etc.). Pair with Minimum confidence to control false positives. |
| Minimum confidence | 0.0–1.0. Applies to entity type matches. 0.85 is a reasonable default. |
| Content regex | Pattern is validated for ReDoS safety before save. Fires if the pattern appears anywhere in the prompt text. |
| Providers | Multi-select from providers configured in your org. |
| Models | Multi-select from model catalog. More specific than providers. |
| User risk score minimum | 0.0–1.0. Fires for users at or above the threshold based on CredInt/GeoIP data. |
| Channel | Interactive (browser/web app) and/or API (programmatic callers). Use Interactive only for PROMPT rules. |
| Intent complexity | Simple, Medium, or Complex. Computed during intake. Useful for cost-routing rules. |
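The condition semantics above — AND across non-empty conditions, OR within the user-groups list — can be sketched in a few lines of Python. This is an illustration only, not the engine's actual code, and the dict field names are assumptions:

```python
def rule_matches(rule: dict, request: dict) -> bool:
    """Return True if every non-empty condition on the rule matches the request.

    Empty (absent) conditions are skipped. "user_groups" uses OR logic
    within its list; all filled-in conditions are ANDed together.
    """
    groups = rule.get("user_groups")
    if groups is not None:
        # OR logic: the user must be in at least one listed group.
        if not set(groups) & set(request["user_groups"]):
            return False
    providers = rule.get("providers")
    if providers is not None and request["provider"] not in providers:
        return False
    min_conf = rule.get("min_confidence")
    if min_conf is not None:
        # Applies to entity type matches: fire only if some detected
        # entity of a listed type meets the confidence threshold.
        wanted = rule.get("entity_types") or []
        if not any(f["type"] in wanted and f["confidence"] >= min_conf
                   for f in request.get("dlp_findings", [])):
            return False
    return True

rule = {"user_groups": ["no-openai"], "providers": ["openai"]}
req = {"user_groups": ["traders", "no-openai"], "provider": "openai"}
print(rule_matches(rule, req))  # True: group OR-match and provider match
```

A user in neither listed group would fail the group condition, so the rule would not fire regardless of the other conditions.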
Action
Select the action and fill in any action-specific fields.
| Action | When to use | Extra fields |
|---|---|---|
| ALLOW | Whitelist — carve out exceptions | None |
| BLOCK | Deny the request | Message — error text shown to caller |
| CANCEL | Silent drop — no error to caller | None |
| REDACT | Replace matched content, continue evaluation | Replacement string (default: [REDACTED]) |
| ROUTE_TO | Override destination model | Model (exact) or Tier (haiku / sonnet / opus) |
| PROMPT | Governance challenge — user justification required | Challenge message — shown in the dialog |
applies_to
Controls which direction of traffic the rule scans:
- Input — the user’s prompt before it is forwarded (default)
- Output — the model’s response before it is delivered
- Both — prompt and response in separate passes
Click Save rule. The rule appears in the list with its sequence number.
Tip: Use sequence numbers with gaps (10, 20, 30) to leave room for inserting rules later without renumbering.
Step 3 — Configure the org policy chain
Navigate to Admin → Policy Engine → Policy Chain. The PolicyChainEditor shows your org’s active policy chain — the ordered list of packs that Arbitex evaluates for every request.
Screenshot: PolicyChainEditor — showing the drag-to-reorder pack list with sequence indicators, combining algorithm selector at the top, and an “Add pack” button for adding existing packs to the chain.
Add your pack to the chain
Click Add pack to chain and select your pack from the list. Assign it a sequence number. Lower sequence numbers evaluate first.
Typical ordering:
- Custom whitelists and group-specific overrides (low sequence numbers)
- Compliance bundle packs
- Custom denial and redaction rules
- Catch-all block rule at the highest sequence number (optional, for deny-all posture)
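Under that ordering convention, a chain might look like the following data sketch (pack names are hypothetical; only the "lower sequence evaluates first" rule comes from this guide):

```python
chain = [
    {"seq": 100, "pack": "Trading Desk Rules"},      # custom denial/redaction rules
    {"seq": 10,  "pack": "Trading Desk Whitelist"},  # group-specific overrides first
    {"seq": 999, "pack": "Catch-all Block"},         # optional deny-all posture
    {"seq": 50,  "pack": "PCI Compliance Bundle"},   # compliance bundle pack
]
# Lower sequence numbers evaluate first.
ordered = sorted(chain, key=lambda p: p["seq"])
print([p["pack"] for p in ordered][0])  # Trading Desk Whitelist
```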
Combining algorithm
Two options:
first_applicable (recommended for most organizations) — the first terminal match wins. Behaves like a firewall: rules are evaluated in order, and the first matching terminal action stops evaluation.
deny_overrides — any BLOCK or CANCEL beats any ALLOW, regardless of sequence order. Use this when your compliance posture requires that no allowlist rule can ever bypass a denial. In this mode, evaluation continues scanning after an ALLOW match to check if any later rule has a BLOCK or CANCEL.
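The difference between the two algorithms can be shown with a small sketch. This is a simplification that treats every matched action as terminal; it is not the engine's actual implementation:

```python
def evaluate(matches, algorithm="first_applicable"):
    """Combine matched (sequence, action) pairs per the combining algorithm.

    Simplification: every matched action is treated as terminal, and an
    empty match list falls through to the default permissive ALLOW.
    """
    ordered = [action for _, action in sorted(matches)]
    if algorithm == "first_applicable":
        # Firewall-style: the first matching action in sequence order wins.
        return ordered[0] if ordered else "ALLOW"
    if algorithm == "deny_overrides":
        # Any BLOCK or CANCEL beats any ALLOW, regardless of sequence.
        for deny in ("BLOCK", "CANCEL"):
            if deny in ordered:
                return deny
        return ordered[0] if ordered else "ALLOW"
    raise ValueError(f"unknown algorithm: {algorithm}")

matched = [(10, "ALLOW"), (30, "BLOCK")]
print(evaluate(matched, "first_applicable"))  # ALLOW: sequence 10 wins
print(evaluate(matched, "deny_overrides"))    # BLOCK: denial overrides the allow
```

The same two matched rules produce opposite outcomes, which is exactly the trade-off to weigh when choosing the algorithm for your compliance posture.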
Click Save chain. The chain takes effect for new requests immediately.
Step 4 — Test with PolicySimulator
Before your pack is in the chain, use the PolicySimulator to verify rule behavior without affecting live traffic. The simulator runs a synthetic request through the full policy evaluation pipeline without forwarding to any provider.
Navigate to Admin → Policy Engine → Simulator.
Screenshot: PolicySimulator — showing the request form on the left (user, groups, provider, model, prompt text fields) and the evaluation result panel on the right (outcome badge, matched pack/rule, match reason, DLP findings).
Build a test scenario
Fill in the synthetic request:
| Field | Description |
|---|---|
| User | Select or type a user ID. The simulator uses their actual group memberships from your directory sync, or you can override with manual group values. |
| Groups (override) | Comma-separated group names. Use this to test rules without needing a real user in the target groups. |
| Provider | The provider the request is simulating. |
| Model | The model identifier. |
| Channel | Interactive or API. Set Interactive to test PROMPT rules. |
| Prompt | The test prompt text. Include sensitive content or keywords to trigger specific rules. |
Click Simulate.
Reading the result
The simulator returns exactly what would happen on a live request:
```json
{
  "outcome": "BLOCK",
  "matched_pack_id": "pack_01HZ_TRADING_DESK",
  "matched_rule_id": "rule_01HZ_BLOCK_OPENAI",
  "matched_rule_name": "Block OpenAI for no-openai group",
  "matched_sequence": 10,
  "match_reason": "user_groups matched ['no-openai']; provider=openai",
  "action_taken": "BLOCK",
  "message": "OpenAI access is not permitted for your group.",
  "dlp_findings": []
}
```

If no rule matched, the result is "outcome": "ALLOW" with "matched_pack_id": null. This is the default permissive terminal. If your posture requires that unmatched requests are denied, check that your catch-all BLOCK rule at sequence=999 is in the chain and has no conditions set.
If the outcome is REDACT, the "redacted_prompt" field shows the exact prompt that would be forwarded to the provider after replacement.
If the outcome is PROMPT, the simulator shows the challenge message that would appear in the user’s governance dialog.
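If you script against simulator results, the outcome-specific fields above might be handled along these lines. The message, matched_pack_id, and redacted_prompt fields follow the example result; the challenge_message field name is an assumption:

```python
def summarize(result: dict) -> str:
    """Produce a one-line summary of a simulator result, using the
    outcome-specific fields described above."""
    outcome = result["outcome"]
    if outcome == "BLOCK":
        return f"BLOCK: {result.get('message', '')}"
    if outcome == "REDACT":
        # redacted_prompt is the exact prompt forwarded after replacement.
        return f"REDACT -> {result['redacted_prompt']}"
    if outcome == "PROMPT":
        # Field name assumed; the simulator shows the challenge message.
        return f"PROMPT: {result['challenge_message']}"
    if result.get("matched_pack_id") is None:
        return "ALLOW (default permissive terminal; no rule matched)"
    return f"{outcome} via rule {result['matched_rule_name']}"

print(summarize({"outcome": "ALLOW", "matched_pack_id": None}))
```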
PROMPT governance flow — end-to-end
When a PROMPT rule fires on a live interactive request (not in the simulator):
- Arbitex returns HTTP 449 Retry With to the frontend.
- The GovernancePromptDialog appears for the user, showing the prompt_message text you configured.
- The user types a justification and clicks Submit.
- The original request re-submits with X-Governance-Justification (the justification text) and X-Governance-Challenge-Id (links to the original challenge in the audit log).
- The re-submitted request passes through the full policy chain again. The PROMPT rule is bypassed on re-submission — if all other rules pass, the request proceeds.
If the user clicks Cancel in the dialog, the request is dropped with no error shown.
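The re-submission step amounts to replaying the original request with two extra headers. A minimal sketch of what the frontend attaches — the header names and the 449 status come from this guide, while the helper function itself is hypothetical:

```python
HTTP_RETRY_WITH = 449  # status returned when a PROMPT rule fires

def governance_retry_headers(justification: str, challenge_id: str) -> dict:
    """Headers attached when re-submitting after a 449 governance challenge."""
    return {
        "X-Governance-Justification": justification,
        "X-Governance-Challenge-Id": challenge_id,
    }

headers = governance_retry_headers("Quarterly audit lookup", "chal_example")
print(sorted(headers))  # ['X-Governance-Challenge-Id', 'X-Governance-Justification']
```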
Managing existing packs and rules
Section titled “Managing existing packs and rules”Edit a rule
Click the rule name in the PolicyRuleEditor to open it for editing. All fields are editable. Changes take effect for new requests immediately.
Reorder rules within a pack
In the PolicyRuleEditor, drag rules to reorder them or edit the sequence number directly. Gaps between sequence numbers (10, 20, 30) make insertion easier without renumbering.
Remove a pack from the chain
In the PolicyChainEditor, click the Remove icon next to the pack. The pack is removed from the chain but not deleted — it remains in the PolicyPackList and can be re-added to the chain at any time.
Delete a pack
A pack can only be deleted when it is not in any chain. Remove it from the chain first, then open the pack in the PolicyPackList and click Delete.
See also
- Policy Engine overview — evaluation model, combining algorithms, PROMPT action details
- Policy Rule Reference — complete condition and action reference
- Policy Pack Setup — API — manage packs and rules via the REST API
- Compliance Bundles — activating pre-built compliance bundle packs
- Entra ID SCIM provisioning — setting up the group sync that powers user_groups conditions
- PROMPT Governance — User Guide — what the governance dialog looks like from a user’s perspective