Edge Delta Aggregate Metric Node
The Aggregate Metric Node enables you to batch and aggregate metrics based on specified rules. It is useful for transforming incoming data types into aggregated metrics using various statistical methods.
- incoming_data_types: archive, cluster_pattern_and_sample, custom, datadog_payload, diagnostic, health, heartbeat, log, metric, signal, source, source_samples, splunk_payload, trace
- outgoing_data_types: archive, cluster_pattern_and_sample, custom, datadog_payload, diagnostic, health, heartbeat, log, metric, signal, source, source_samples, splunk_payload, trace
Example Configuration
The following node is configured with two metric aggregation rules:

nodes:
  - name: aggregate_metric
    type: aggregate_metric
    aggregate_metric_rules:
      - name: "aggregate_metric_one"
        interval: 30s
        aggregation_type: sum
        conditions:
          - resource["container.name"] == "foo"
      - name: "aggregate_metric_second"
        interval: 1m
        aggregation_type: count
        conditions:
          - resource["container.name"] == "bar"
        group_by:
          - resource
          - attribute["hello"]
The first rule, named aggregate_metric_one, operates on a batching interval of 30 seconds: it aggregates the data collected in each 30-second window using the sum aggregation type, which calculates the total of the metric values collected over the interval. The aggregation applies only when a specific condition is met: the container's name must be "foo" (resource["container.name"] == "foo"). Consequently, this rule targets and processes only metrics originating from containers named "foo".
The second rule, aggregate_metric_second, runs on a longer interval of 1 minute and uses the count aggregation type, which counts the occurrences of metrics that satisfy its condition during each minute. Its condition requires that metrics come from containers named "bar" (resource["container.name"] == "bar"). This rule also includes a group_by clause, which further categorizes the counted metrics by resource and by attribute["hello"]. The metrics are therefore not only counted over each one-minute interval but also counted separately for each distinct combination of resource and attribute["hello"], allowing detailed breakdowns by these attributes.
Required Parameters
name
A descriptive name for the node. This name appears in Visual Pipelines, and you can use it to reference the node elsewhere in the YAML. It must be unique across all nodes. Because each node is a YAML list element, it begins with a - and a space followed by the string. It is a required parameter for all nodes.
nodes:
  - name: <node name>
    type: <node type>
type: aggregate_metric
The type parameter specifies the type of node being configured. It is specified as a string from a closed list of node types. It is a required parameter.
nodes:
  - name: <node name>
    type: <node type>
Optional Parameters
At least one metric rule is required.
aggregate_metric_rules
This section defines the rules for how the metrics will be aggregated and batched.
name
The name of the specific metric that will be generated.
conditions
The conditions parameter defines the OTTL condition(s) that must be met for a metric to be aggregated. Multiple conditions are OR'ed: if any one of them holds true, the metric is aggregated.
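For example, a rule can list more than one condition, and a metric matching either is aggregated. The container names in this sketch are illustrative:

```yaml
conditions:
  - resource["container.name"] == "foo"
  - resource["container.name"] == "bar"
```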
interval
The interval parameter specifies how often metrics are batched. It is specified as a duration, such as 30s or 1m.
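As a sketch, a rule that batches and emits a running total once per hour might look like the following (the rule name is illustrative):

```yaml
- name: "hourly_sum"      # illustrative rule name
  interval: 1h            # aggregate over one-hour windows
  aggregation_type: sum
```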
aggregation_type
The aggregation_type parameter specifies the method of aggregation. Valid options include:
- count
- max
- mean
- median
- min
- p90
- p95
- p99
- sum
Note: Aggregation only applies to sum and gauge type metrics.
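For instance, a rule that emits a 95th-percentile value over five-minute windows might be sketched as follows (the rule name and condition are illustrative):

```yaml
- name: "request_duration_p95"   # illustrative rule name
  interval: 5m
  aggregation_type: p95
  conditions:
    - resource["container.name"] == "api"   # illustrative condition
```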
group_by
The group_by parameter defines OTTL expressions used to bucket metrics. If unset, metrics with the same characteristics (such as a sum that is cumulative and monotonic, versus a gauge) are grouped into the same bucket.
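As in the example configuration above, a group_by clause produces a separate aggregated series for each distinct combination of the listed expressions. The attribute key in this sketch is illustrative:

```yaml
group_by:
  - resource
  - attribute["region"]   # illustrative attribute key
```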