Default Edge Delta Configuration

Edge Delta deploys with a default pipeline.

Overview

When you create a new Fleet, it includes a default pipeline configuration based on the options you select. This is an example of a default pipeline in a Kubernetes environment.

Inputs

kubernetes_input

The Kubernetes input node allows you to specify which Kubernetes pods and namespaces the agent should monitor. This node is configured to include logs from all namespaces (k8s.namespace.name=.*) while excluding logs from pods named edgedelta (k8s.pod.name=edgedelta).

This form creates the following YAML configuration:

- name: kubernetes_input
  type: kubernetes_input
  include:
  - k8s.namespace.name=.*
  exclude:
  - k8s.pod.name=edgedelta

For more information on configuring Kubernetes input nodes, see the Kubernetes Source documentation. This node is connected to the mask node by a link.

ed_k8s_metrics

This node scrapes Kubernetes metrics. For more information, see the Kubernetes Metrics node.
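As a rough sketch, the node's YAML follows the same name/type pattern as the other source nodes; the type name shown here is an assumption, so check the Kubernetes Metrics node documentation for the exact schema:

- name: ed_k8s_metrics
  type: kubernetes_metrics_input   # type name is an assumption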

k8s_traffic

This node ingests Kubernetes traffic metrics via eBPF. For more information, see the Kubernetes Traffic node.
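A minimal sketch of this node, following the name/type pattern of the other sources (the type name is an assumption; see the Kubernetes Traffic node documentation for the exact schema):

- name: k8s_traffic
  type: k8s_traffic_input   # type name is an assumption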

k8s_event

This node ingests Kubernetes events from the cluster whenever a cluster resource changes state. Events carry the item.type=event attribute. The node is connected to the Edge Delta Destination node, which makes the events available in the Search tab on the Logs page with the search string @item.type:event.

This form creates the following YAML configuration:

- name: k8s_event
  type: k8s_event_input
  report_interval: 1m0s

For more information, see the Kubernetes Event node documentation.

Diagnostic Inputs

The default configuration contains multiple diagnostic nodes, which are hidden by default. They are used primarily to enable Edge Delta functionality.

ed_source_detection

This node automatically detects sources in your environment when the Fleet is deployed.

ed_component_health

The Component Health source node ingests health data for the agent's components. It feeds the Edge Delta Destination node to enable agent diagnostics in the Edge Delta SaaS, and it can also send this data to other destinations. See the Component Health source node documentation for more information.

ed_node_health

The Agent Node Health source node ingests health data for the nodes in the pipeline graph. It feeds the Edge Delta Destination node to populate the per-node throughput data in the pipeline view. See the Agent Node Health documentation for more information.

ed_agent_stats

The Agent Stats source node produces metrics based on agent statistics. It feeds the Edge Delta Destination node, which sends metrics to the Edge Delta SaaS to power the Metrics Explorer page. See Agent Stats for more information.

ed_pipeline_io_stats

The Pipeline IO source node ingests incoming and outgoing statistics for pipelines. It is also connected to the Edge Delta Destination node, which populates data on the pipeline status page. See the Pipeline IO documentation for more information.

ed_system_stats

The System Statistics source node produces metrics from statistics collected from the system at the core level. It feeds the Edge Delta Destination node, which sends system metrics to the Edge Delta SaaS, where they are available on the Metrics Explorer page.

Processors

Mask Node

The Kubernetes source node is connected to a Mask node named mask_ssn. It is configured with a regex pattern that identifies US social security numbers and obfuscates them with the word REDACTED.

This form creates the following YAML configuration:

- name: mask_ssn
  type: mask
  pattern: \d{3}-\d{2}-\d{4}
  mask: REDACTED

The mask_ssn node connects to a number of downstream processor nodes arranged in parallel. This parallel construction means that all output from mask_ssn is sent to each downstream processor; that is, the data is duplicated per link.

For conditional routing along parallel paths, where data is only sent down one of the paths based on its characteristics, you can use a Route node.
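As a hypothetical sketch of such routing (the path and condition syntax here are illustrative assumptions, not the exact schema; see the Route node documentation):

- name: route_by_level
  type: route
  paths:
  - path: errors
    condition: regex_match(item["body"], "ERROR")   # condition syntax is illustrative
  - path: everything_else
    condition: "true"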

See mask node for more information.

Regex Filter

A Regex filter node named drop_trace_level is one of the processors fed by the mask_ssn node in the default configuration.

This form creates the following YAML configuration:

- name: drop_trace_level
  type: regex_filter
  pattern: TRACE
  negate: true

It is configured to match logs containing the string TRACE. By default, a regex filter passes only those logs that match the specified pattern. In this configuration, however, the behavior is inverted by enabling Filter Out Matches in the UI (negate: true in YAML): the node drops logs containing TRACE anywhere in the log and passes all other logs. In the default configuration, the drop_trace_level node sends logs to the Edge Delta Destination node and the ed_debug destination node.
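For comparison, the same node type without negation keeps only the matching logs. A sketch (the node name here is hypothetical):

- name: keep_trace_only
  type: regex_filter
  pattern: TRACE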

See Regex filter for more information.

Log to Metrics

There are three log to metrics nodes that are fed logs from the mask_ssn node in the default configuration:

error_monitoring

The error_monitoring log to metric node is configured to create metrics based on logs containing the Golang regex pattern (?i)error. This node will capture any logs containing the text error or any case variation.

This form creates the following YAML configuration:

- name: error_monitoring
  type: log_to_metric
  pattern: (?i)error

It outputs an error_monitoring.count metric item for each matching log. For example, when one error log is detected, the node emits:

[
	{
		"_type": "metric",
		"gauge": {
			"value": 1
		},
		"kind": "gauge",
		"name": "error_monitoring.count",
		"resource": {...},
		"start_timestamp": 1732584062231,
		"timestamp": 1732584062331,
		"unit": "1",
		"_stat_type": "count"
	}
]

You can view error_monitoring.count in the Metrics Explorer.

The explorer can show, for example, the sum of error_monitoring.count values rolled up into one data point per 5 minutes over the past hour.

exception_monitoring

The exception_monitoring log to metric node is configured to create metrics based on logs containing the Golang regex pattern (?i)exception. This node will capture any logs containing the text exception or any case variation.

This form creates the following YAML configuration:

- name: exception_monitoring
  type: log_to_metric
  pattern: (?i)exception

It outputs an exception_monitoring.count metric item for each matching log. For example, when one exception log is detected, the node emits:

[
	{
		"_type": "metric",
		"gauge": {
			"value": 1
		},
		"kind": "gauge",
		"name": "exception_monitoring.count",
		"resource": {...},
		"start_timestamp": 1732584446193,
		"timestamp": 1732584446293,
		"unit": "1",
		"_stat_type": "count"
	}
]

You can view exception_monitoring.count in the Metrics Explorer.

negative_sentiment_monitoring

The negative_sentiment_monitoring log to metric node is configured to create metrics based on logs containing a particular Golang regex pattern. This node will capture any logs containing any of the negative sentiment keywords or any case variation of them.

This form creates the following YAML configuration:

- name: negative_sentiment_monitoring
  type: log_to_metric
  pattern: (?i)(exception|fail|timeout|broken|caught|denied|abort|insufficient|killed|killing|malformed|unsuccessful|outofmemory|panic|undefined)

It outputs the negative_sentiment_monitoring.count metric. For example, when one matching log is detected, the node emits:

[
	{
		"_type": "metric",
		"gauge": {
			"value": 1
		},
		"kind": "gauge",
		"name": "negative_sentiment_monitoring.count",
		"resource": {...},
		"start_timestamp": 1732584715214,
		"timestamp": 1732584715314,
		"unit": "1",
		"_stat_type": "count"
	}
]

You can view negative_sentiment_monitoring.count in the Metrics Explorer.

The explorer can show, for example, the sum of negative_sentiment_monitoring.count values rolled up into one data point per 5 minutes over the past hour.

See the Log to Metric node documentation for more information on configuring these nodes.

Log to Pattern

A log to pattern node named log_to_patterns is connected downstream of the mask_ssn node in the default configuration. Every minute, it reports any patterns and samples detected on the edge to the Edge Delta Destination node.
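A minimal YAML sketch of this node, assuming the type name log_to_pattern and a reporting-frequency field (both are assumptions; see the Log to Pattern node documentation for the exact schema):

- name: log_to_patterns
  type: log_to_pattern              # type name is an assumption
  reporting_frequency: 1m0s         # field name is an assumption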

Patterns and samples detected by this node on the edge, as well as any detected in post-processing on the backend, can be explored on the Patterns tab of the Logs page.

See Log to Pattern node for more information.

Outputs

The default configuration sends data to Edge Delta outputs:

Edge Delta Destination

The Edge Delta Destination node sends all pipeline data to the Edge Delta backend to power features including Log Search, the Metrics Explorer, and pattern detection.
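A minimal sketch of this destination; the type name ed is an assumption, so verify it against the Edge Delta Destination node documentation:

- name: ed
  type: ed   # type name is an assumption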

ed_debug

The Debug destination node is used to view a sample of logs emitted from any node it is connected to. In the default configuration, it is connected to drop_trace_level and k8s_event.
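A minimal sketch of this destination; the type name debug is an assumption, so verify it against the Debug destination node documentation:

- name: ed_debug
  type: debug   # type name is an assumption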