Configure a Processor
Processors in Edge Delta are modular components that operate on telemetry data—logs, metrics, traces, and events—as it flows through your pipeline. Each processor performs a specific function such as parsing, masking, filtering, metric extraction, or aggregation. By combining processors, you can create dynamic, efficient pipelines tailored to your team’s observability, security, and cost goals.
If you’re new to pipelines, start with the Pipeline Quickstart Overview or learn how to Configure a Pipeline.
Processors are organized into logical categories based on their purpose. You can use these building blocks to clean, transform, enrich, or downsample data before routing it to your destination(s).
Legacy processor nodes are being deprecated in favor of these stacked sequence processors.
Processors can consume and emit all data types:
- incoming_data_types: archive, cluster_pattern_and_sample, custom, datadog_payload, diagnostic, health, heartbeat, log, metric, signal, source, source_samples, splunk_payload, trace
- outgoing_data_types: archive, cluster_pattern_and_sample, custom, datadog_payload, diagnostic, health, heartbeat, log, metric, signal, source, source_samples, splunk_payload, trace
Note: Even though many processors use OTTL under the hood, there is no need to decode the body before operating on it. Similarly, patterns are automatically escaped for you.
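For example, a generated condition can reference the raw body directly. The following is an illustrative OTTL-style condition, not the output of any specific processor, and the sample value is hypothetical:

```yaml
# Illustrative only: the condition matches against the raw body with no
# decode step, and the "?" from the sampled value is escaped automatically.
condition: IsMatch(body, "GET /api/v1/items\\?id=42")
```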
Create a Processor
When you add a source or destination, a processor is added automatically.
Alternatively, in pipeline Edit Mode, click Add Node and select Multi Processor.

Add a Processor Based on Context
You can add and automatically configure a processor using fields in the output pane. To do this, you must be in the Pretty View.
Select a field that you want a processor to act on. The context menu shows processors that can be added and configured automatically:

In this example, the value processing succeeded was clicked. Suppose you want to exclude logs containing this value in this field. You click Exclude:

A filter processor is added to the processor stack, automatically configured to drop any logs where attributes["msg"] == "processing succeeded". You can tweak the configuration or leave it as is, then click Save.
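Under the hood, the generated filter resembles the following sketch. The YAML shape and the processor type name are assumptions for illustration; only the condition itself matches what the UI shows:

```yaml
# Illustrative sketch of the auto-generated filter (field names are
# assumptions); it drops any log matching the condition.
- type: ottl_filter
  data_types:
    - log
  condition: attributes["msg"] == "processing succeeded"
```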
Configure a Processor
A multiprocessor node is configured using Live Capture data. To start, the node recommends a processor based on the detected input traffic. In this instance, the logs are JSON-formatted, so a Parse JSON processor is recommended:

Click the + icon to add the recommended processor, or click + Add a processor to choose a different one.
In this example, the Parse JSON processor is added to the multiprocessor.
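In exported configuration, the multiprocessor with its Parse JSON step might resemble this sketch. The sequence-node schema shown is an assumption; the OTTL statement is one standard way to parse a JSON body into attributes:

```yaml
# Illustrative sketch of a multiprocessor (sequence) node containing a
# Parse JSON step; exact field names are assumptions.
- name: ed_multiprocessor
  type: sequence
  processors:
    - type: ottl_transform
      data_types:
        - log
      statements: |
        # Parse the JSON body and merge the result into attributes.
        merge_maps(attributes, ParseJSON(body), "upsert")
```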

If you select a log in the input pane, the resulting log is shown in the output pane. Note the green diff block showing the parsed attributes, as well as the 24% increase in data item size:

Autocomplete
As you specify fields, autocomplete suggests the fields detected in the Live Capture sample:

Similarly, as you specify values, autocomplete suggests values drawn from the Live Capture sample.

Configure a Sequence of Processors
As you configure additional processors, they are stacked vertically in the center pane. Data items are usually processed by each processor in turn, from the top down. In this example, three processors operate in sequence, with the final processor filtering out certain data items.

Take note of the affected markers: this time some are red, indicating that those data items have been dropped. Also note the size decrease: although each data item individually grows larger, the overall throughput volume is lower on balance due to the filtering.
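A stacked sequence like this one might look as follows in configuration. This is a sketch under the same schema assumptions as above, and the converted field name duration is hypothetical:

```yaml
# Illustrative three-step sequence: parse, convert, then filter.
- name: ed_multiprocessor
  type: sequence
  processors:
    - type: ottl_transform        # 1. parse the JSON body into attributes
      statements: |
        merge_maps(attributes, ParseJSON(body), "upsert")
    - type: ottl_transform        # 2. convert a string field to a number
      statements: |
        set(attributes["duration"], Double(attributes["duration"]))
    - type: ottl_filter           # 3. drop logs that match the condition
      condition: attributes["msg"] == "processing succeeded"
```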
Disable Processors
You can toggle individual processors in the sequence on or off. You might do this while experimenting with the order of processors, or when troubleshooting.

In this instance, the Extract Metric processor is used after parsing JSON, but without string conversion or filtering.
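In exported configuration, a toggled-off step might carry a flag along these lines. The disabled field is an assumption about how the UI toggle is represented, not a confirmed schema:

```yaml
# Illustrative only: "disabled" is an assumed representation of the
# UI's on/off toggle for a single processor in the sequence.
- type: ottl_transform          # string conversion, toggled off for now
  disabled: true
  statements: |
    set(attributes["duration"], Double(attributes["duration"]))
```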