# Datadog Logs connector
The Datadog connector is a P0 production connector that forwards Arbitex audit events to Datadog via the Logs Intake API v2. Events are wrapped in Datadog log envelopes and sent as JSON arrays for efficient batched ingestion.
## How it works

- Events are accumulated in an internal buffer (up to 100 events or 5 seconds, whichever comes first).
- Batches are sent as a JSON array to `POST https://http-intake.logs.{site}/api/v2/logs`.
- Each event is wrapped in a Datadog log envelope:

  ```json
  {
    "ddsource": "arbitex",
    "ddtags": "env:production",
    "hostname": "<system hostname>",
    "service": "arbitex-platform",
    "message": "<OCSF event JSON as string>"
  }
  ```
- The connector accepts HTTP 200 and 202 as success. On 429 or 503, it retries with exponential backoff (up to `max_retries` attempts).
- On persistent failure, events are written to a dead letter JSONL file.
- The health check calls `GET https://api.{site}/api/v1/validate` with the API key. A 200 response indicates a valid key; 403 reports Error.
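The buffering and envelope logic above can be sketched in Python. This is an illustrative sketch, not the connector's actual implementation; the function names (`wrap_event`, `build_batch`, `backoff_schedule`) and the base delay are assumptions.

```python
import json
import socket

def wrap_event(ocsf_event: dict, *, source="arbitex",
               service="arbitex-platform", tags="env:production") -> dict:
    """Wrap one OCSF event in the Datadog log envelope described above."""
    return {
        "ddsource": source,
        "ddtags": tags,
        "hostname": socket.gethostname(),
        "service": service,
        # Datadog expects the log body as a string, so the OCSF event
        # is serialized to JSON before being placed in `message`.
        "message": json.dumps(ocsf_event),
    }

def build_batch(events: list) -> str:
    """Serialize a buffer of events as the JSON array sent in one POST."""
    return json.dumps([wrap_event(e) for e in events])

def backoff_schedule(max_retries: int = 3, base: float = 1.0) -> list:
    """Exponential backoff delays (seconds) applied after a 429/503."""
    return [base * (2 ** attempt) for attempt in range(max_retries)]

print(backoff_schedule())  # [1.0, 2.0, 4.0]
```

With the default `max_retries` of 3, a failing batch is retried after 1, 2, and 4 seconds before being routed to the dead letter file.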
## Supported Datadog sites

| Site | DATADOG_SITE value | Region |
|---|---|---|
| US1 (default) | datadoghq.com | United States |
| EU1 | datadoghq.eu | European Union |
| US3 | us3.datadoghq.com | United States (US3) |
| US5 | us5.datadoghq.com | United States (US5) |
| AP1 | ap1.datadoghq.com | Asia Pacific |
| US1-FED | ddog-gov.com | US Government |
Set `DATADOG_SITE` to the appropriate value for your Datadog organization.
## Configuration

| Variable | Required | Default | Description |
|---|---|---|---|
| `DATADOG_API_KEY` | Yes | — | Datadog API key. Obtain from Organization Settings → API Keys. |
| `DATADOG_SITE` | No | `datadoghq.com` | Datadog site domain (see table above). |
| `DATADOG_SOURCE` | No | `arbitex` | Log source (`ddsource`) field — used for automatic pipeline matching. |
| `DATADOG_SERVICE` | No | `arbitex-platform` | Service name (`service`) field — used in Log Explorer and APM correlation. |
| `DATADOG_TAGS` | No | `env:production` | Comma-separated `ddtags` applied to all logs, e.g. `env:production,region:us-east-1`. |
| `DATADOG_BATCH_SIZE` | No | `100` | Maximum events per batch send. |
| `DATADOG_FLUSH_INTERVAL` | No | `5` | Maximum seconds between buffer flushes. |
| `DATADOG_MAX_RETRIES` | No | `3` | Maximum retry attempts on transient failures. |
| `DATADOG_DEAD_LETTER_PATH` | No | `/var/log/arbitex/datadog_dead_letter.jsonl` | Path for dead letter JSONL fallback. |
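One way the table above could map to a config loader, shown as a hedged sketch: the `DatadogConfig` dataclass and `load_config` helper are hypothetical names, but the variable names and defaults come from the table.

```python
import os
from dataclasses import dataclass
from typing import Optional

@dataclass
class DatadogConfig:
    api_key: str
    site: str = "datadoghq.com"
    source: str = "arbitex"
    service: str = "arbitex-platform"
    tags: str = "env:production"
    batch_size: int = 100
    flush_interval: float = 5.0
    max_retries: int = 3
    dead_letter_path: str = "/var/log/arbitex/datadog_dead_letter.jsonl"

def load_config(env=os.environ) -> Optional[DatadogConfig]:
    """Read connector settings, falling back to the documented defaults."""
    api_key = env.get("DATADOG_API_KEY")
    if not api_key:
        return None  # connector would report "Not configured"
    return DatadogConfig(
        api_key=api_key,
        site=env.get("DATADOG_SITE", "datadoghq.com"),
        source=env.get("DATADOG_SOURCE", "arbitex"),
        service=env.get("DATADOG_SERVICE", "arbitex-platform"),
        tags=env.get("DATADOG_TAGS", "env:production"),
        batch_size=int(env.get("DATADOG_BATCH_SIZE", "100")),
        flush_interval=float(env.get("DATADOG_FLUSH_INTERVAL", "5")),
        max_retries=int(env.get("DATADOG_MAX_RETRIES", "3")),
        dead_letter_path=env.get(
            "DATADOG_DEAD_LETTER_PATH",
            "/var/log/arbitex/datadog_dead_letter.jsonl"),
    )
```

Only `DATADOG_API_KEY` has no default; every other variable can be omitted.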
## Datadog setup

### Step 1 — Create an API key

- In Datadog, go to Organization Settings → API Keys (or Account Settings → API Keys in some plans).
- Click New Key, enter a name (e.g., `arbitex-siem`), and copy the key value.
- API keys have no scope restrictions for log ingestion — no additional permissions are needed.
### Step 2 — Configure the connector

Set the environment variables on your Arbitex deployment:

```shell
DATADOG_API_KEY="your-api-key-here"
DATADOG_SITE="datadoghq.com"  # or your region's site
DATADOG_SOURCE="arbitex"
DATADOG_SERVICE="arbitex-platform"
DATADOG_TAGS="env:production,team:platform"
```

### Step 3 — Set up a log pipeline (recommended)
Datadog Log Pipelines parse the `message` field (which contains the OCSF JSON string) into structured attributes for searching and alerting.
- In Datadog, go to Logs → Configuration → Pipelines.
- Click Add a new pipeline.
- Set the filter to `source:arbitex`.
- Add a JSON Parser processor:
  - Source attribute: `message`
  - This extracts OCSF fields (`class_uid`, `time`, `severity_id`, `actor.user.uid`, etc.) as top-level log attributes.
- (Optional) Add a Date Remapper processor:
  - Source attribute: `time` (epoch milliseconds)
  - This sets the official log timestamp from the OCSF `time` field.
- (Optional) Add a Severity Remapper to map `severity` → `status`.
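The effect of the JSON Parser and Date Remapper can be illustrated with a small sketch. The sample log below is hypothetical; it only shows how the `message` string becomes top-level attributes and how the epoch-millisecond `time` field becomes a timestamp.

```python
import json
from datetime import datetime, timezone

# A log as it arrives at Datadog: the OCSF event is a JSON string
# inside `message` (sample values are illustrative).
raw_log = {
    "ddsource": "arbitex",
    "message": json.dumps(
        {"class_uid": 3002, "time": 1741564800000, "severity_id": 4}),
}

# JSON Parser processor: fields in the message string become
# top-level, searchable attributes.
attributes = json.loads(raw_log["message"])

# Date Remapper processor: the OCSF `time` field (epoch milliseconds)
# becomes the official log timestamp.
timestamp = datetime.fromtimestamp(attributes["time"] / 1000,
                                   tz=timezone.utc)
print(attributes["class_uid"], timestamp.isoformat())
```

After these processors run, queries like `@class_uid:3002` match on the extracted attributes rather than on the raw string.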
### Step 4 — Create log indexes and retention

By default, logs land in the default index. For compliance retention requirements, create a dedicated index:
- Go to Logs → Configuration → Indexes.
- Click New Index, name it `arbitex-audit`.
- Set a filter of `source:arbitex`.
- Set the retention period (e.g., 90 days for SOC 2, 1 year for HIPAA).
## Verifying the connector

In the Arbitex admin UI, go to Admin → SIEM. The Datadog connector row shows:
- Healthy — API key validation returned 200
- Error — API key is invalid (403)
- Degraded — API validation returned an unexpected status
- Not configured — `DATADOG_API_KEY` is not set
Click Send test event to send a synthetic OCSF event. In Datadog Log Explorer, search:

```
source:arbitex @api.operation:siem_test_event
```

The event should appear within a few seconds.
## Log Explorer queries

Once the log pipeline is in place and fields are extracted, use Datadog Log Explorer to investigate Arbitex events:

```
# All DLP blocks
source:arbitex @class_uid:2001 @finding.types:DLP

# Auth failures in the last hour
source:arbitex @class_uid:3002 status:error

# Specific user activity
source:arbitex @actor.user.uid:usr_01HZ_ALICE

# High-severity events
source:arbitex @severity_id:[4 TO 5]
```

## Dead letter recovery
Failed batches are written to `/var/log/arbitex/datadog_dead_letter.jsonl`. Each line is a JSON object:

```json
{
  "event": { ... },
  "error": "HTTP 429: Too Many Requests",
  "connector": "datadog",
  "timestamp": 1741564800.0
}
```

To replay, extract the event objects and POST them directly to the Datadog Logs API:
```shell
jq -sc '[.[].event]' /var/log/arbitex/datadog_dead_letter.jsonl \
  | curl -s -X POST "https://http-intake.logs.datadoghq.com/api/v2/logs" \
      -H "Content-Type: application/json" \
      -H "DD-API-KEY: $DATADOG_API_KEY" \
      --data-binary @-
```
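If you prefer a script over `jq`/`curl`, the extraction and batching side of a replay might look like the sketch below. The helper names (`read_dead_letters`, `batches`) are hypothetical, and the actual POST is left to the caller so the replay can honor `DATADOG_BATCH_SIZE` and rate limits.

```python
import json

def read_dead_letters(lines):
    """Extract the original OCSF events from dead-letter JSONL lines."""
    return [json.loads(line)["event"] for line in lines if line.strip()]

def batches(events, batch_size=100):
    """Split events into Logs-API-sized POST bodies."""
    for i in range(0, len(events), batch_size):
        yield events[i:i + batch_size]

# Example dead-letter line (values illustrative):
sample = ['{"event": {"class_uid": 2001}, "error": "HTTP 429: Too Many '
          'Requests", "connector": "datadog", "timestamp": 1741564800.0}']
print(read_dead_letters(sample))  # [{'class_uid': 2001}]
```

Each batch would then be sent to the same `POST /api/v2/logs` endpoint shown above, with the `DD-API-KEY` header set.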
## See also

- SIEM integration overview — OCSF event format and connector comparison
- Elasticsearch SIEM connector — bulk-API based connector
- Sumo Logic connector — HTTP Source-based connector