Edge Delta Datadog Destination
Overview
The Datadog destination node sends items to a Datadog destination. It sends raw bytes generated by marshaling items as JSON. Items are routed to the "log", "event", or "metric" host based on the item type.
- incoming_data_types: cluster_pattern_and_sample, health, heartbeat, log, metric, custom, datadog_payload
Configure Datadog
Create Measure Facets
You need to create Measures for sentiment_score and pattern_count for the pattern analysis panels. To create a facet:
- Search for @sentiment_score:* to find all negative events.
- Click any event.
- In the JSON section, click sentiment_score and select Create Measure for @sentiment_score.
- Repeat the process for pattern_count.
See Measure Facets on the Datadog docs website.
Alternatively, you can create processors to convert the sentiment_score and pattern_count fields from strings to integers.
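For instance, on the Datadog side this can be done with a Grok Parser processor. The rule below is a hedged sketch that uses Datadog's integer matcher; the rule name is hypothetical and it assumes sentiment_score arrives as a bare numeric value, so adapt the rule and the processor's target attribute to your actual log shape:
sentimentRule %{integer:sentiment_score}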
If you add the include_pattern_info_in_samples: true parameter to the cluster processor, it replaces all the cluster_samples with patterns. You will also need to turn off the cluster_pattern feature and turn on the cluster_sample feature. If you add the include_pattern_info_in_samples: true parameter, you can skip the following steps: Create a Pipeline and Add a Grok Parser.
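As a minimal sketch of where this parameter sits, assuming a node-style cluster processor configuration (the node name and type shown here are illustrative assumptions; only include_pattern_info_in_samples is taken from this guide, so check the cluster processor documentation for the exact schema):
nodes:
- name: my_cluster
  type: cluster            # assumed type name for the cluster processor
  include_pattern_info_in_samples: true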
Create a Pipeline
Create a pipeline for @pattern_count.
- Click log - configuration.
- Add a new pipeline with the filter @pattern_count:* to search patterns.
- Name the pipeline pattern.
See Create a pipeline on the Datadog docs website.
Add a Grok Parser
Add a Grok Parser processor with the following attributes:
- type: Grok Parser
- log sample: * test
- parsing rule: autoFilledRule1 %{regex(".*"):pattern}.*
In a log search, select edgedelta_datatype:cluster_pattern, then click a pattern. View the pattern event attribute, and click pattern - add as a facet.
See Add a Grok Parser on the Datadog docs website.
Configure the Edge Delta Agent
Finally, you configure the Datadog destination node using Visual Pipelines or the agent YAML configuration file. You can select an Existing Datadog Integration when creating a Datadog destination node. See the Datadog docs for the endpoints.
Consider adding a key-value pair to the Datadog integration to send an identifiable attribute such as integration_name: edgedelta. This helps you easily identify and isolate Edge Delta data so that you can create facets.
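With that attribute in place, you can scope a Datadog log search to Edge Delta data with a query such as:
@integration_name:edgedelta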
Example Configuration
nodes:
- name: my_datadog
type: datadog_output
features: log
api_key: <key>
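A slightly fuller variant that combines some of the optional parameters documented below; the values are illustrative:
nodes:
- name: my_datadog
  type: datadog_output
  features: log,metric
  api_key: <key>
  alert_as_log: true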
Required Parameters
name
A descriptive name for the node. This is the name that will appear in Visual Pipelines, and you can reference this node in the YAML using this name. It must be unique across all nodes. It is a YAML list element, so it begins with a - and a space followed by the string. It is a required parameter for all nodes.
nodes:
- name: <node name>
type: <node type>
type: datadog_output
The type parameter specifies the type of node being configured. It is specified as a string from a closed list of node types. It is a required parameter.
nodes:
- name: <node name>
type: <node type>
api_key
The api_key parameter provides the auth key for accessing the Datadog API. It is specified as a string and it is required. It can reference an environment variable, for example: api_key: '{{ Env "KEY_ID" }}'
nodes:
- name: my_datadog
type: datadog_output
api_key: <key>
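For example, using the environment variable reference described above (KEY_ID is just an example variable name):
nodes:
- name: my_datadog
  type: datadog_output
  api_key: '{{ Env "KEY_ID" }}'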
Optional Parameters
alert_as_log
The alert_as_log parameter specifies whether to change the ingestion destination from event to log for alert items. It is specified as a Boolean, with a default of false, and it is optional.
nodes:
- name: my_datadog
type: datadog_output
api_key: <key>
alert_as_log: true
buffer_max_bytesize
The buffer_max_bytesize parameter configures the maximum total byte size of unsuccessful items. If the limit is reached, further items are discarded until buffer space becomes available. It is specified as a datasize.Size, with a default of 0 (no size limit), and it is optional.
nodes:
- name: my_datadog
type: datadog_output
api_key: <key>
buffer_max_bytesize: 2048
buffer_path
The buffer_path parameter configures the path where unsuccessful items are stored so that they can be retried (exactly-once delivery). It is specified as a string and it is optional.
nodes:
- name: my_datadog
type: datadog_output
api_key: <key>
buffer_path: <path to unsuccessful items folder>
buffer_ttl
The buffer_ttl parameter configures the time-to-live for unsuccessful items, after which they are discarded. It is specified as a duration, with a default of 10m, and it is optional.
nodes:
- name: my_datadog
type: datadog_output
api_key: <key>
buffer_ttl: 20m
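The three buffering parameters can be combined. A sketch with illustrative values (the path is hypothetical):
nodes:
- name: my_datadog
  type: datadog_output
  api_key: <key>
  buffer_path: /var/log/edgedelta/buffer   # hypothetical path
  buffer_max_bytesize: 2048
  buffer_ttl: 20m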
event_host
The event_host parameter is the hostname for sending event-typed items to Datadog. It is specified as a string, with a default of api.datadoghq.com, and it is optional. See the Datadog docs for the supported endpoints.
nodes:
- name: my_datadog
type: datadog_output
api_key: <key>
event_host: <host address>
features
The features parameter defines which data types to stream to the destination. It is specified as a comma-separated list of item types. The default is metric,edac,cluster. It is optional.
| Feature Type | Supported? |
|---|---|
| Log | Yes |
| Metrics | Yes |
| Alert as event | Yes |
| Alert as log | Yes |
| Health | No |
| Dimensions as attribute | Yes |
| Send as is | No |
| Send as JSON | No |
| Custom tags | Yes |
| EDAC enrichment | No |
| Message template | No |
| ed.pipeline.write_bytes outgoing__raw_bytes.sum | Yes |
| ed.pipeline.write_items | Yes |
| Output buffering to disk | Yes |
nodes:
- name: my_datadog
type: datadog_output
api_key: <key>
features: <item type>,<item type>
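For example, to stream log, metric, and cluster items:
nodes:
- name: my_datadog
  type: datadog_output
  api_key: <key>
  features: log,metric,cluster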
log_host
The log_host parameter is the hostname for sending log-typed items to Datadog. It is specified as a string, with a default of http-intake.logs.datadoghq.com, and it is optional. See the Datadog docs for the supported endpoints.
nodes:
- name: my_datadog
type: datadog_output
api_key: <key>
log_host: <host address>
metric_host
The metric_host parameter is the hostname for sending metric-typed items to Datadog. It is specified as a string, with a default of api.datadoghq.com, and it is optional. See the Datadog docs for the supported endpoints.
nodes:
- name: my_datadog
type: datadog_output
api_key: <key>
metric_host: <host address>
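If your Datadog account is on a site other than the default US site, set all three hosts together. A sketch assuming the EU site endpoints; verify them against the Datadog endpoint documentation:
nodes:
- name: my_datadog
  type: datadog_output
  api_key: <key>
  log_host: http-intake.logs.datadoghq.eu   # assumed EU logs intake endpoint
  event_host: api.datadoghq.eu              # assumed EU API endpoint
  metric_host: api.datadoghq.eu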