Model Risk Management

The Arbitex Model Registry is the central inventory for all AI models your organization uses through the platform. It provides risk tier classification aligned to OCC SR 11-7, a structured validation status workflow, automatic discovery from the audit log, and first-class integration with the Policy Engine’s model_risk_tier condition. This feature ships in platform-0056.


Registry overview

The model registry stores metadata for every AI model in use — whether accessed through the cloud gateway, an on-premises Outpost, or discovered automatically from audit events. Each registry entry captures:

  • Model identifier — provider ID (e.g., gpt-4o, claude-3-5-sonnet-20241022, llama-3.1-8b-instruct)
  • Provider — the AI vendor or hosting infrastructure
  • Risk tier — classification per OCC SR 11-7 guidance
  • Validation status — lifecycle state of formal model validation
  • Owner — the team or individual accountable for the model
  • Description and notes — free-text documentation
  • Tags — arbitrary key/value labels for filtering and policy targeting

Registry entries are used by the Policy Engine to enforce model_risk_tier conditions at request time, giving you governance over which users and groups may access models at each risk tier.

OCC SR 11-7 (“Supervisory Guidance on Model Risk Management”) requires financial institutions to classify models by the materiality of their use. Arbitex uses a four-tier scheme that maps directly to SR 11-7 risk categories:

| Tier | Label | OCC SR 11-7 alignment | Typical use cases |
| --- | --- | --- | --- |
| tier_1 | Critical | High-risk models with material impact on business decisions | Credit scoring, fraud detection, algorithmic trading, regulatory capital models |
| tier_2 | High | Significant models requiring full validation lifecycle | Customer segmentation, AML, risk appetite models |
| tier_3 | Moderate | Standard models with defined validation requirements | Internal analytics, process automation, summarization in regulated workflows |
| tier_4 | Low | Informational or experimental models | Internal productivity tooling, sandboxed R&D, non-consequential summarization |

Tier assignment is the first step in model governance. Once assigned, a model’s tier determines which validation steps are required before the model may be used in production, and which policy rules apply to it at request time.


Validation status workflow

Every model in the registry carries a validation_status field that tracks the lifecycle of formal model validation. Admins update this status as validation progresses.

draft → pending_validation → in_validation → validated → deprecated
                                  │
                                  ▼ (returned for remediation)
                          needs_remediation

| Status | Meaning |
| --- | --- |
| draft | Model registered but validation not yet initiated. Not approved for production use. |
| pending_validation | Validation request submitted. Awaiting assignment to a model validator. |
| in_validation | Active validation in progress. Model may be used in sandboxed or pilot contexts per policy. |
| needs_remediation | Validation completed with findings. Model owner must address issues before re-submission. |
| validated | Validation complete. Model approved for use at its assigned risk tier. |
| deprecated | Model retired. Policy Engine can block or warn on deprecated model use. |

When require_validated_models enforcement is enabled in the policy chain, Tier 1 and Tier 2 models require explicit validated status before Policy Engine rules will permit production access. Tier 3 and Tier 4 models may be permitted in the pending_validation or in_validation states, depending on your organization's policy configuration.
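The gating behavior above can be sketched as a small decision function. This is an illustrative model of the documented rules, not the platform's enforcement code; the require_validated_models flag mirrors the policy-chain setting, and which pre-validation statuses count as acceptable for lower tiers is an assumption that your org's policy configuration would actually control.

```python
# Illustrative sketch (not the platform's code) of the production-access
# rules described above, assuming the documented tier and status values.

HIGH_RISK_TIERS = {"tier_1", "tier_2"}
# Assumption: statuses an org's policy might accept for tier_3/tier_4 models.
PRE_VALIDATION_OK = {"pending_validation", "in_validation", "validated"}

def production_access_allowed(risk_tier: str,
                              validation_status: str,
                              require_validated_models: bool = True) -> bool:
    """Return True if a model may serve production traffic under these rules."""
    if validation_status == "deprecated":
        # Deprecated models are retired regardless of tier.
        return False
    if risk_tier in HIGH_RISK_TIERS and require_validated_models:
        # Tier 1/2 need explicit validated status when enforcement is on.
        return validation_status == "validated"
    return validation_status in PRE_VALIDATION_OK
```

For example, a tier_2 model still in_validation is rejected while enforcement is on, but a tier_3 model in the same state is allowed.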


Auto-discovery

The platform can automatically populate the model registry by scanning the audit log for AI request events. Auto-discovery reads the model field from audit events and creates draft registry entries for any model identifier not already present.

To trigger auto-discovery:

POST /api/admin/models/discover

Request body:

{
  "lookback_days": 30,
  "auto_assign_tier": "tier_4",
  "dry_run": false
}

| Field | Type | Description |
| --- | --- | --- |
| lookback_days | integer | Number of days of audit history to scan. Default: 30. |
| auto_assign_tier | string | Risk tier to assign to newly discovered models. Default: tier_4. |
| dry_run | boolean | If true, returns discovered models without creating registry entries. |

Response 200 OK:

{
  "discovered": 7,
  "created": 5,
  "existing": 2,
  "dry_run": false,
  "models": [
    {
      "model_id": "gpt-4o-mini",
      "provider": "openai",
      "status": "created",
      "validation_status": "draft",
      "risk_tier": "tier_4"
    }
  ]
}

After discovery, review the newly created entries and assign appropriate risk tiers. Auto-discovered models start at draft validation status and the configured auto_assign_tier.
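To support that review step, a small helper can pull the newly created entries out of a discovery response. This is a hypothetical post-processing sketch, not part of the platform; the response shape follows the 200 OK example above, and the sample data here is invented for illustration.

```python
# Hypothetical triage helper for a /api/admin/models/discover response:
# list the entries auto-discovery just created so an admin can re-tier them.

def newly_discovered(response: dict) -> list[str]:
    """Return model_ids of registry entries created by this discovery run."""
    return [m["model_id"] for m in response.get("models", [])
            if m.get("status") == "created"]

# Invented sample response matching the documented shape.
resp = {
    "discovered": 7, "created": 5, "existing": 2, "dry_run": False,
    "models": [
        {"model_id": "gpt-4o-mini", "provider": "openai",
         "status": "created", "validation_status": "draft",
         "risk_tier": "tier_4"},
        {"model_id": "gpt-4o", "provider": "openai",
         "status": "existing", "validation_status": "validated",
         "risk_tier": "tier_2"},
    ],
}

print(newly_discovered(resp))  # ['gpt-4o-mini']
```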


Managing the registry

Navigate to Admin > Model Registry in the Cloud Portal to view all registered models. The inventory page shows:

  • Model name and provider
  • Risk tier badge
  • Validation status badge
  • Owner assignment
  • Last seen date (from audit log)
  • Quick-action links to edit or view policy impact

Use the filter controls to narrow by tier, validation status, or provider.

POST /api/admin/models

Request body:

{
  "model_id": "claude-3-5-sonnet-20241022",
  "provider": "anthropic",
  "display_name": "Claude 3.5 Sonnet (Oct 2024)",
  "risk_tier": "tier_2",
  "validation_status": "pending_validation",
  "owner": "ai-risk-team@example.com",
  "description": "Primary conversational model for customer-facing workflows",
  "tags": {
    "business_unit": "retail_banking",
    "use_case": "customer_service"
  }
}

PUT /api/admin/models/{id}

Use this endpoint to advance the validation status, re-assign the tier, or update owner and tags as your governance process progresses.
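A client-side sketch of the create call, using only the standard library so the request can be inspected before sending. The endpoint path and field names come from the examples above; BASE_URL and the bearer token are placeholders, and whether your deployment uses bearer-token auth is an assumption to verify against the API reference.

```python
import json
from urllib.request import Request

# Placeholder deployment URL and token -- substitute your own values.
BASE_URL = "https://arbitex.example.com"

def register_model_request(entry: dict, token: str) -> Request:
    """Build (but do not send) the POST /api/admin/models request.

    Send it with urllib.request.urlopen(req) against a live deployment.
    """
    return Request(
        f"{BASE_URL}/api/admin/models",
        data=json.dumps(entry).encode(),
        headers={
            "Authorization": f"Bearer {token}",  # assumed auth scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = register_model_request(
    {
        "model_id": "claude-3-5-sonnet-20241022",
        "provider": "anthropic",
        "risk_tier": "tier_2",
        "validation_status": "pending_validation",
        "owner": "ai-risk-team@example.com",
    },
    token="example-token",
)
```

The same pattern applies to the PUT update endpoint: swap the method and append the entry's id to the path.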

GET /api/admin/model-inventory-export

Returns the full model registry as a CSV file suitable for import into your GRC platform or for SR 11-7 examination submissions. See API Reference Batch 11 for full field documentation.
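As an illustration of working with that export, the sketch below tallies models per risk tier from the CSV text. The column names are assumed to mirror the registry fields shown earlier; check the Batch 11 reference for the authoritative header list.

```python
import csv
import io

def models_per_tier(csv_text: str) -> dict[str, int]:
    """Count registry entries per risk_tier column in an export CSV."""
    counts: dict[str, int] = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        tier = row["risk_tier"]  # assumed column name
        counts[tier] = counts.get(tier, 0) + 1
    return counts

# Invented sample export for illustration.
sample = """model_id,provider,risk_tier,validation_status
gpt-4o,openai,tier_2,validated
claude-3-5-sonnet-20241022,anthropic,tier_2,validated
llama-3.1-8b-instruct,meta,tier_4,draft
"""

print(models_per_tier(sample))  # {'tier_2': 2, 'tier_4': 1}
```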


Policy Engine integration

The model registry integrates directly with the Policy Engine through the model_risk_tier condition type. This lets you write rules that govern access based on a model's registered risk tier.

The condition checks the risk tier of the requested model against the registry at evaluation time.

{
  "condition_type": "model_risk_tier",
  "operator": "gte",
  "value": "tier_2"
}

| Field | Description |
| --- | --- |
| operator | Comparison operator: eq, neq, lte, gte. Tiers are ordered tier_4 < tier_3 < tier_2 < tier_1. |
| value | The tier to compare against: tier_1, tier_2, tier_3, or tier_4. |
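One plausible reading of the operator semantics, using the documented ordering tier_4 < tier_3 < tier_2 < tier_1 (this is a sketch, not the engine's actual evaluation code):

```python
# Tier ordering from the documentation: tier_4 is lowest risk, tier_1 highest.
TIER_ORDER = {"tier_4": 0, "tier_3": 1, "tier_2": 2, "tier_1": 3}

def tier_condition_matches(model_tier: str, operator: str, value: str) -> bool:
    """Evaluate a model_risk_tier condition against a model's registered tier."""
    a, b = TIER_ORDER[model_tier], TIER_ORDER[value]
    return {
        "eq": a == b,
        "neq": a != b,
        "lte": a <= b,
        "gte": a >= b,
    }[operator]
```

Under this reading, gte with value tier_2 matches tier_2 and tier_1 (the higher-risk tiers), while lte with value tier_2 matches tier_2, tier_3, and tier_4.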

Example — require manager approval for Tier 1 models:

{
  "name": "Require approval for critical models",
  "condition_type": "model_risk_tier",
  "operator": "eq",
  "value": "tier_1",
  "action": "require_approval",
  "direction": "input"
}

Example — block unvalidated high-risk models:

{
  "name": "Block non-validated Tier 1/2 models",
  "condition_type": "model_risk_tier",
  "operator": "gte",
  "value": "tier_2",
  "action": "block",
  "direction": "input"
}

Combined with a validation_status tag condition, you can restrict production access to only validated Tier 1 models and permit in_validation models in designated test groups.

[
  {
    "name": "Block tier_1 for non-ML-team users",
    "condition_type": "model_risk_tier",
    "operator": "eq",
    "value": "tier_1",
    "action": "block",
    "direction": "input"
  },
  {
    "name": "Allow tier_1 for ML Risk team override",
    "condition_type": "group_membership",
    "operator": "in",
    "value": ["ml-risk-reviewers"],
    "action": "allow",
    "direction": "input"
  }
]

Place the allow rule earlier in the chain (lower sequence number) or use a user chain override so the ML Risk team’s exemption takes precedence. See Policy Engine Deep Dive for combining algorithm details.
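The precedence note above is consistent with a first-match evaluation scheme, sketched below. This is an assumption about the combining algorithm (see Policy Engine Deep Dive for the authoritative behavior); the matching logic is simplified to exact-tier and group checks, and the default action when no rule matches is assumed to be allow.

```python
# Sketch of first-match chain evaluation: rules run in sequence order and the
# first matching rule's action wins, so an earlier allow overrides a later block.
# Assumptions: eq-only tier matching, default "allow" when nothing matches.

def evaluate_chain(rules: list[dict], model_tier: str,
                   user_groups: list[str]) -> str:
    for rule in rules:  # lower sequence number = evaluated first
        ct = rule["condition_type"]
        if ct == "group_membership":
            matched = bool(set(rule["value"]) & set(user_groups))
        elif ct == "model_risk_tier":
            matched = model_tier == rule["value"]
        else:
            matched = False
        if matched:
            return rule["action"]
    return "allow"  # assumed default

# Allow rule placed first so the ML Risk team's exemption takes precedence.
chain = [
    {"condition_type": "group_membership",
     "value": ["ml-risk-reviewers"], "action": "allow"},
    {"condition_type": "model_risk_tier",
     "value": "tier_1", "action": "block"},
]
```

With this ordering, an ml-risk-reviewers member reaching for a tier_1 model is allowed, while any other user hits the block rule.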


SR 11-7 compliance mapping

| Requirement | Arbitex capability |
| --- | --- |
| Model inventory | Model Registry with full CRUD API |
| Risk classification | Four-tier scheme (tier_1–tier_4) aligned to SR 11-7 materiality |
| Validation lifecycle | validation_status workflow with audit trail |
| Ongoing monitoring | Audit log integration; last_seen tracking; policy enforcement |
| Model access governance | model_risk_tier policy condition for request-time enforcement |
| Inventory export | CSV export at /api/admin/model-inventory-export |
| Ownership assignment | owner field per model entry |