
DLP event monitoring

When the DLP pipeline detects sensitive content in a request or response, it creates a DLP event. This guide explains what events mean, how to read the portal event view, and when to contact your admin.

For the technical details of how the DLP pipeline detects content, see DLP pipeline — technical reference. For how DLP findings drive policy actions, see Policy Engine user guide.


What is a DLP event

A DLP event is a record created each time the pipeline detects sensitive content that matches a configured rule. Events are created regardless of whether the content was blocked, redacted, or only logged — the event records what was found and what action the policy took.

Events are immutable: once created, the detection record cannot be deleted.


How detection works

The DLP pipeline runs in three sequential tiers:

Tier 1 — Pattern matching. Regular expression matching against 65+ patterns for structured data: credit card numbers, SSNs, API keys, IBANs, AWS credentials, and similar identifiers. This tier applies checksum validation (Luhn for credit cards, mod-97 for IBANs) to reduce false positives.
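The Luhn step can be illustrated with a short sketch — a generic implementation of the checksum, not the pipeline's actual code:

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number if d.isdigit()]
    if len(digits) < 13:  # plausible card numbers are 13-19 digits
        return False
    total = 0
    # Double every second digit from the right; subtract 9 if the result exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

A regex match that fails this check (for example, a random 16-digit string) is discarded before it ever becomes a finding, which is why Tier 1 produces few false positives for card numbers.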

Tier 2 — Named entity recognition (NER). A GLiNER zero-shot NER model identifies unstructured PII in free text: names, addresses, organization names, phone numbers, and other entities that don’t match simple patterns.

Tier 3 — Contextual validation (DeBERTa). A DeBERTa NLI model re-evaluates ambiguous detections from Tiers 1 and 2 using the surrounding text. This step reduces false positives for entities that look like PII but are not in context (for example, a number sequence that resembles an SSN but isn’t one).

Each tier that produces a detection contributes a finding to the event record. The event shows which tier detected the content via the Detector column.


Reading the event table

Navigate to DLP → Events to view the event table.

  • Time: Timestamp of when the detection occurred
  • Entity type: The type of sensitive data detected (e.g., CREDIT_CARD, SSN, EMAIL_ADDRESS, API_KEY)
  • Detector: Which detector identified the content (e.g., regex_credit_card, gliner_ner, deberta_validator)
  • Severity: Risk level of the detection — low, medium, high, or critical
  • Status: Current lifecycle state — detected, investigating, resolved, or false_positive
  • Action: What the policy did in response — block, redact, or log_only

Expand a row to see the matched text (the specific content that triggered the detection) and any resolution notes added by an admin.
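Taken together, the fields above describe a single event record. The following dict is a hypothetical illustration of that shape — the field names mirror the portal columns but are not a documented API schema:

```python
# Hypothetical DLP event record; field names are illustrative only,
# not the portal's actual export or API format.
event = {
    "time": "2024-05-14T09:32:11Z",
    "entity_type": "CREDIT_CARD",
    "detector": "regex_credit_card",
    "severity": "high",
    "status": "detected",
    "action": "redact",
    "matched_text": "4111 **** **** 1111",  # shown when you expand the row
    "resolution_notes": None,               # populated by an admin after review
}
```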


Severity levels

Severity reflects the risk of the detected content type based on the rule configuration.

  • Critical: High-value credentials (private keys, AWS secret keys, database connection strings), full payment card data with CVV
  • High: Government identifiers (SSN, passport numbers), complete financial account data (IBAN + BIC pairs)
  • Medium: Partial PII (phone numbers, email addresses, IP addresses), partial card numbers, healthcare identifiers (DEA, NPI)
  • Low: Single-field identifiers with low confidence, generic patterns that may indicate PII

Severity is set per rule by your org admin. Contact your admin if the severity assigned to a detection type doesn’t match your org’s classification policy.
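Because severity is assigned per rule, the configuration amounts to a lookup from entity type to level. A minimal sketch, with entirely example values — your org's actual mapping is whatever the admin configured:

```python
# Illustrative per-rule severity mapping; real values are set by your org admin.
RULE_SEVERITY = {
    "AWS_SECRET_KEY": "critical",
    "SSN": "high",
    "EMAIL_ADDRESS": "medium",
    "GENERIC_ID": "low",
}

def severity_for(entity_type: str) -> str:
    # Assume a conservative "low" default for entity types without an explicit rule.
    return RULE_SEVERITY.get(entity_type, "low")
```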


Event lifecycle

Each event moves through the following lifecycle. Admin actions drive status transitions — portal users can view status but cannot update it.

detected → investigating → resolved
↘ false_positive
  • Detected: The pipeline identified sensitive content. The policy action has been applied (block/redact/log). No admin review has occurred yet.
  • Investigating: An admin has flagged the event for review. The underlying request has already been handled — this status indicates the event is being examined for policy tuning or incident response.
  • Resolved: An admin has completed review and marked the event as addressed. This may indicate a policy adjustment was made, a user was contacted, or no further action was required.
  • False positive: An admin determined the detection was incorrect — the content was not actually sensitive.
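The lifecycle is a small state machine. A sketch of the allowed transitions — a hypothetical helper, not the product's code; in particular, whether detected can move straight to false_positive without passing through investigating is an assumption here:

```python
# Allowed status transitions per the lifecycle diagram. Treating
# detected -> false_positive as legal is an assumption, not documented behavior.
TRANSITIONS = {
    "detected": {"investigating", "false_positive"},
    "investigating": {"resolved", "false_positive"},
    "resolved": set(),        # terminal
    "false_positive": set(),  # terminal
}

def can_transition(current: str, new: str) -> bool:
    """Return True if an admin may move an event from current to new."""
    return new in TRANSITIONS.get(current, set())
```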

What portal users can do

As a portal user, you can:

  • View all events and their current status.
  • Expand rows to see detection details (matched text, resolution notes).
  • Filter events by status to focus on specific lifecycle stages.
  • Export event data if your org’s audit export is configured.

You cannot change the status of an event, add resolution notes, or dismiss detections. These actions are admin-only.


Filtering events

Use the Status filter above the event table to narrow the view:

  • All Statuses: All events regardless of state
  • Detected: New events awaiting review
  • Investigating: Events currently under admin review
  • Resolved: Events that have been reviewed and closed
  • False Positive: Events determined to be incorrect detections

Filtering is local to the current view and does not affect event records.
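Because filtering is view-local, it behaves like a simple client-side selection over the event list. A sketch, assuming events are plain records with a status field (an illustration of the behavior, not the portal's implementation):

```python
def filter_by_status(events, status=None):
    """Return events matching the given status; None means 'All Statuses'.

    The underlying records are never modified — only the view narrows.
    """
    if status is None:
        return list(events)
    return [e for e in events if e["status"] == status]
```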


Detection trends

Navigate to DLP → Trends to see aggregate detection metrics over a selected time range (7, 14, 30, or 90 days).

The trends view shows:

  • Total events — count of all detection events in the period.
  • Unique entity types — number of distinct entity categories detected.
  • Block rate — percentage of events where the action was block.
  • Most common entity — the entity type with the highest detection count.
  • Detection volume chart — daily event count over the period.
  • Entity type breakdown — horizontal bar chart showing the top 10 most-detected entity types.
  • Action breakdown — distribution of block, redact, and log_only outcomes.
  • Top detectors — which detectors generated the most events.
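The headline numbers above are straightforward aggregations over the raw event list. A sketch of how they could be derived, assuming each event is a record with entity_type and action fields (illustrative, not how the portal computes them):

```python
from collections import Counter

def summarize(events):
    """Compute headline trend metrics from a list of event dicts."""
    actions = Counter(e["action"] for e in events)
    entities = Counter(e["entity_type"] for e in events)
    total = len(events)
    return {
        "total_events": total,
        "unique_entity_types": len(entities),
        # Block rate: share of events where the policy action was "block".
        "block_rate": actions["block"] / total if total else 0.0,
        "most_common_entity": entities.most_common(1)[0][0] if entities else None,
    }
```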

Use the trends view to understand detection patterns over time. A spike in CREDIT_CARD events, for example, may indicate a change in how users are using a particular model.


When to escalate to your admin

Escalate to your org admin when:

  • An event is marked detected and involves high or critical severity content. Your admin may need to investigate whether the request should have been blocked or whether a policy change is required.
  • You see a false positive that recurs. If the same content is repeatedly triggering a detection that your admin has previously marked false_positive, the underlying rule may need adjustment.
  • Block rate increases unexpectedly. A sudden increase in blocks may indicate a misconfigured policy or a change in user behavior that warrants review.
  • You receive a block response on a legitimate request. Your admin can review the detection that caused the block and adjust the rule or your group’s policy if appropriate.
  • Sensitive content appears in the matched text for an event you don’t recognize. This may indicate unauthorized use of a model on your account.

Your admin can change event status, add resolution notes, adjust DLP rules, and configure detection thresholds. When reporting an event, include its timestamp and entity type so the admin can locate it quickly.