Default Edge Delta Configuration
Overview
When you create a new Fleet, it includes a default Pipeline configuration. This is an example of a default pipeline in a Kubernetes environment.
Inputs
Kubernetes Source
The Kubernetes source is configured to ingest logs from certain Kubernetes resources, and not others. See more documentation on configuring a Kubernetes source here.
The node ingests logs from all namespaces in the cluster (see the include field). Of those, it excludes the k8s.pod.name=edgedelta resource using the exclude field.
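A minimal sketch of what this source definition might look like in the pipeline YAML (node name and field layout are illustrative assumptions based on the description above, not the exact default configuration shipped with your agent version):

```yaml
# Illustrative sketch only - exact field names may differ by agent version.
- name: kubernetes_source
  type: kubernetes_input
  include:
    # ingest logs from every namespace in the cluster
    - k8s.namespace.name=.*
  exclude:
    # skip the Edge Delta agent's own pods
    - k8s.pod.name=edgedelta
```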
Kubernetes Events
The Kubernetes Events input ingests Kubernetes events from the cluster when there is a state change in a cluster resource. See Kubernetes Event Input.
Kubernetes Metrics Input
The Kubernetes Metrics source node scrapes certain Kubernetes metrics such as Kube State Metrics (KSM), cAdvisor, Kubelet, and Node exporter. See Kubernetes Metrics Input.
Kubernetes Traffic Source
The Kubernetes Traffic source node ingests Kubernetes traffic metrics via eBPF. See Kubernetes Traffic source node.
Processors
Mask Node
The Kubernetes source node is connected to a Mask node named mask_ssn using a link. It is configured with a regex pattern that identifies US Social Security numbers and obfuscates them with the word REDACTED.
See more documentation on configuring a mask node here.
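The masking behavior can be sketched in Python, whose re module treats this kind of simple pattern the same way as Go's RE2. The SSN regex below is a hypothetical stand-in for illustration; the default configuration's actual pattern may differ.

```python
import re

# Hypothetical SSN pattern for illustration; the default configuration's
# actual regex may differ.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def mask_ssn(log_line: str) -> str:
    """Replace anything that looks like a US SSN with the word REDACTED."""
    return SSN_PATTERN.sub("REDACTED", log_line)

print(mask_ssn("user=jdoe ssn=123-45-6789 action=login"))
# user=jdoe ssn=REDACTED action=login
```

Because the node sits directly downstream of the Kubernetes source, every log is scrubbed before any other processor or destination sees it.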
The mask_ssn node connects to a number of downstream processor nodes arranged in parallel. This parallel construction of the pipeline indicates that all output from mask_ssn is sent to each downstream processor, i.e. the data is duplicated per link.
For conditional routing along parallel paths, where data is only sent down one of the paths based on its characteristics, you can use a Route node.
Regex Filter
A Regex filter node named drop_trace_level is one of the processors fed by the mask_ssn node in the default configuration.
It is configured to identify logs containing the string TRACE. The default behavior of a regex filter is to identify and pass only those logs that match the specified pattern. However, in this configuration the behavior is inverted by setting the Negate switch to True. Now the node will drop only those logs containing TRACE anywhere in the log, and it will pass all other logs. In the default configuration, the drop_trace_level node sends logs to the ed_archive_output, and they can be accessed using the Search tab on the Logs page.
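The match-versus-negate behavior can be sketched as follows. This is a simplified model of the node, not Edge Delta's implementation:

```python
import re

TRACE_PATTERN = re.compile(r"TRACE")

def regex_filter(logs, pattern, negate=False):
    """Pass logs that match the pattern; with negate=True, pass
    logs that do NOT match (i.e. drop the matching ones)."""
    if negate:
        return [line for line in logs if not pattern.search(line)]
    return [line for line in logs if pattern.search(line)]

logs = [
    "2024-09-13 TRACE entering handler",
    "2024-09-13 INFO request served",
    "2024-09-13 ERROR connection reset",
]

# negate=True reproduces drop_trace_level: TRACE logs are dropped,
# everything else passes through.
print(regex_filter(logs, TRACE_PATTERN, negate=True))
```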
Log to Metrics
There are three log to metric nodes that are fed logs from the mask_ssn node in the default configuration:
error_monitoring
The error_monitoring log to metric node is configured to create metrics based on logs containing the Golang regex pattern (?i)error. This node will capture any logs containing the text error in any case variation.
It outputs an error_monitoring.count metric item for each matching log:
{
  "_stat_type": "count",
  "_type": "metric",
  "gauge": { ... },
  "kind": "gauge",
  "name": "error_monitoring.count",
  "resource": { ... },
  "start_timestamp": 1726192287632,
  "timestamp": 1726192347632,
  "unit": "1"
}
You can view them in the metrics explorer.
In this instance the explorer shows a count of metric items per minute over the past 15 minutes.
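The pattern's case-insensitive matching, and the one-count-per-log behavior that drives the metric, can be sketched in Python (Go's RE2 and Python's re handle the (?i) inline flag and this simple pattern identically):

```python
import re

ERROR_PATTERN = re.compile(r"(?i)error")

logs = [
    "ERROR: disk full",
    "error while parsing config",
    "An unexpected Error occurred",
    "request completed in 12ms",
]

# One error_monitoring.count metric item is emitted per matching log.
matches = [line for line in logs if ERROR_PATTERN.search(line)]
print(len(matches))  # 3
```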
exception_monitoring
The exception_monitoring log to metric node is configured to create metrics based on logs containing the Golang regex pattern (?i)exception. This node will capture any logs containing the text exception in any case variation.
It outputs an exception_monitoring.count metric item for every matching log:
{
  "_stat_type": "count",
  "_type": "metric",
  "gauge": { ... },
  "kind": "gauge",
  "name": "exception_monitoring.count",
  "resource": { ... },
  "start_timestamp": 1726195211404,
  "timestamp": 1726195211504,
  "unit": "1"
}
You can view them in the metrics explorer.
In this instance the explorer shows a count of metric items per minute over the past 15 minutes.
negative_sentiment_monitoring
The negative_sentiment_monitoring log to metric node is configured to create metrics based on logs containing the Golang regex pattern:
(?i)(exception|fail|timeout|broken|caught|denied|abort|insufficient|killed|killing|malformed|unsuccessful|outofmemory|panic|undefined)
This node will capture any logs containing any of the negative sentiment keywords, in any case variation.
It outputs the negative_sentiment_monitoring.count metric:
{
  "_stat_type": "count",
  "_type": "metric",
  "gauge": { ... },
  "kind": "gauge",
  "name": "negative_sentiment_monitoring.count",
  "resource": { ... },
  "start_timestamp": 1726195681186,
  "timestamp": 1726195681286,
  "unit": "1"
}
You can view them in the metrics explorer.
In this instance the explorer shows a count of metric items per minute over the past 15 minutes. See more documentation on configuring a log to metric node here.
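How the alternation matches any one keyword, in any case, regardless of where it appears in the log, can be sketched the same way:

```python
import re

NEGATIVE_SENTIMENT = re.compile(
    r"(?i)(exception|fail|timeout|broken|caught|denied|abort|insufficient"
    r"|killed|killing|malformed|unsuccessful|outofmemory|panic|undefined)"
)

logs = [
    "request FAILED with status 502",          # matches "fail"
    "java.lang.NullPointerException thrown",   # matches "exception"
    "user session established",                # no negative keyword
]

matched = [line for line in logs if NEGATIVE_SENTIMENT.search(line)]
print(len(matched))  # 2
```

Note that the keywords match as substrings, so FAILED matches fail and NullPointerException matches exception.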
Log to Pattern
A log to pattern node named log_to_pattern is connected downstream of the mask_ssn node in the default configuration. It reports every minute to the ed_patterns destination node with any patterns and samples detected on the edge.
Patterns and samples detected by this node on the edge, as well as any detected in post-processing on the backend, can be explored on the Patterns tab of the Logs page.
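Edge Delta's actual pattern detection is far more sophisticated, but the core idea, collapsing the variable parts of similar logs into a shared template and counting occurrences, can be sketched as:

```python
import re
from collections import Counter

def to_pattern(log_line: str) -> str:
    """Heavily simplified patterning: replace numeric tokens with a wildcard.
    (The real node clusters logs with far more nuance; this is only a sketch.)"""
    return re.sub(r"\d+", "<*>", log_line)

logs = [
    "served request 101 in 12ms",
    "served request 102 in 9ms",
    "served request 103 in 15ms",
    "cache miss for key user:7",
]

# Group logs by their derived pattern and count each group.
patterns = Counter(to_pattern(line) for line in logs)
for pattern, count in patterns.most_common():
    print(count, pattern)
```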
Outputs
The default configuration sends data to Edge Delta outputs:
- ed_logs_output is used to send archive data to the Edge Delta back end to power Log Search.
- ed_metrics is used to send metrics to the Edge Delta SaaS to power the metrics explorer page as well as Kubernetes Overview, Pipeline Status, filters, and others. In addition, it collects the agent version, agent heartbeats, and last activated data.
- ed_pattern is used to send pattern data to the Edge Delta SaaS to power the Patterns tab of the Logs page.
- ed_debug_output is used to view a sample of logs emitted from any node it is connected to; in the default configuration it is connected to drop_trace_level and Kubernetes Events. See Debug destination.
Note: There are multiple diagnostic nodes present in the default configuration but they are hidden by default. These are used predominantly to enable Edge Delta functionality.