Configure a Processor

How to configure Edge Delta data pipeline processors.

Processors in Edge Delta are modular components that operate on telemetry data—logs, metrics, traces, and events—as it flows through your pipeline. Each processor performs a specific function such as parsing, masking, filtering, metric extraction, or aggregation. By combining processors, you can create dynamic, efficient pipelines tailored to your team’s observability, security, and cost goals.

If you’re new to pipelines, start with the Pipeline Quickstart Overview or learn how to Configure a Pipeline.

Legacy processor nodes are being deprecated in favor of these stacked sequence processors.

Processors can consume and emit all data types: logs, metrics, traces, and events.

Note: Although many processors use OTTL under the hood, you don't need to decode the body before operating on it. Similarly, patterns are escaped for you automatically.

Create a Processor

When you add a source or destination, a processor is added automatically.

Alternatively, in pipeline Edit Mode, click Add Node and select Multi Processor.

Add Processor Based on Context

You can add and automatically configure a processor using fields in the output pane. To do this, you must be in Pretty View.

Select the field you want a processor to act on. The context menu lists processors that can be added and configured automatically:

In this example, the value processing succeeded was clicked. Suppose you want to exclude logs containing this value in this field. You click Exclude:

A filter processor is added to the processor stack, automatically configured to drop any logs where attributes["msg"] == "processing succeeded". You can tweak the configuration or leave it as is, then click Save.
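Under the hood, this amounts to a filter processor with an exclude condition on the selected field. The following is a rough, illustrative sketch only; the field names and structure here are assumptions and may not match Edge Delta's exact configuration schema:

```yaml
# Illustrative sketch - names and structure may not match Edge Delta's schema
- type: filter
  condition: attributes["msg"] == "processing succeeded"
  action: exclude   # drop logs matching the condition
```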

AI-Powered Processor Recommendations

Edge Delta’s Recommendations feature provides intelligent suggestions for processors based on your data patterns. When working in the multiprocessor view, you’ll see recommended processors in the center pane that are specifically tailored to your data characteristics. These AI-powered recommendations analyze your log patterns to suggest actions like:

  • Filtering low-value, repetitive logs to reduce volume
  • Sampling high-volume patterns that don’t require full retention
  • Masking sensitive data for privacy compliance
  • Parsing structured data like JSON or XML for better analysis

You can also access the full Recommendations feed from the main navigation, where recommendations are organized by value and category (Cost Saving, Data Hygiene, Best Practice, Privacy). Each recommendation includes a preview of its impact, allowing you to make informed decisions before implementation.

Configure a Processor

A multiprocessor node is configured using Live Capture data. To start, it recommends a processor based on the detected input traffic. In this example, the logs are JSON-formatted, so it recommends a Parse JSON processor:

Click the + icon to add the recommended processor or click + Add a processor to add a different processor.

In this example the Parse JSON processor is added to the multiprocessor.

If you select a log in the input pane, the resulting log is shown in the output. Note the green diff block showing the parsed attributes, as well as the 24% increase in data item size:
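Conceptually, parsing promotes fields from the JSON body into attributes on the data item, which is why the item grows in size. A minimal sketch of that idea, using a hypothetical log body (the field names below are illustrative, not from your data):

```python
import json

# Hypothetical JSON-formatted log body; field names are illustrative only.
item = {
    "body": '{"level": "info", "msg": "processing succeeded", "duration_ms": 42}',
    "attributes": {},
}

# What a Parse JSON step does conceptually: body fields become attributes,
# so the data item carries both the raw body and the parsed fields.
item["attributes"].update(json.loads(item["body"]))
```

After this step, the item is larger because the parsed attributes are stored alongside the original body rather than replacing it.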

Autocomplete

As you specify fields, autocomplete offers the fields detected in the live capture sample:

Similarly, as you specify values, autocomplete suggests sample values drawn from the live capture data.

Sequence Processors

Processors within a multiprocessor node are executed sequentially in a top-down flow. This structure allows you to shape telemetry—logs, metrics, traces, and events—in a controlled, efficient manner. Each processor in the stack applies its logic to the data it receives from the previous processor.

You can think of a multiprocessor node as a mini-pipeline contained within a single pipeline node. It simplifies logic that would otherwise require chaining multiple standalone processor nodes.

Learn the foundational behavior and flow control rules in the Processors core concepts guide.

Processing Rules

The order of processors in the stack significantly impacts results. Each processor receives the data output from the processor before it. Depending on processor configuration, data may continue down the stack or exit the node early:

  • Unprocessed Items Continue: If a data item doesn’t meet a processor’s condition (e.g., type mismatch or filter logic), it is passed unchanged to the next processor.
  • Processed Items Are Transformed: When a processor modifies or enriches data, those changes become the input for the next processor.
  • Final Option: If a processor has Final enabled, successfully processed items exit the multiprocessor immediately and are routed to the next pipeline node. Failed or unprocessed items continue through the remaining processors.
  • Keep Original Telemetry Item: This option causes the processor to emit both the original and the new item when it successfully transforms data. This is useful for processors like Extract Metric, where retaining the original log may still be valuable downstream.

Practical Example

Suppose your processor stack includes:

  1. Extract Metric with Keep Original Telemetry Item enabled
  2. Add Field
  3. Parse JSON with Final enabled
  4. Filter to remove items tagged type=test

Here’s how telemetry flows:

  • A log generates a metric via Extract Metric. Since Keep Original Telemetry Item is enabled, both the metric and original log move to Add Field.
  • Add Field adds a type=test tag to both items, and they proceed to Parse JSON.
  • Parse JSON only operates on logs. It parses the log and emits a new item. Because Final is enabled, the parsed log exits the node early—bypassing the Filter processor.
  • The metric, which wasn’t modified by Parse JSON, continues to Filter, where it is dropped due to the type=test condition.

This setup demonstrates how processor sequence, combined with processor-level options, lets you fine-tune both transformation and routing logic.
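The flow above can be simulated with a minimal sketch. This is not Edge Delta's implementation, just a model of the documented rules: unmatched items pass through unchanged, Final makes a processed item exit the stack early, and Keep Original Telemetry Item emits both the original and the new item:

```python
import json

DROP = object()  # sentinel a filter returns to discard an item


def run_stack(processors, items):
    """Run items top-down through (fn, final, keep_original) processors.

    fn returns a transformed item, DROP to discard the item, or None when
    the item doesn't meet the condition (it then passes through unchanged).
    """
    finished = []  # items that exited early via Final
    for fn, final, keep_original in processors:
        next_items = []
        for item in items:
            out = fn(item)
            if out is None:               # unprocessed: continue unchanged
                next_items.append(item)
            elif out is not DROP:
                if keep_original:         # original keeps flowing too
                    next_items.append(item)
                if final:                 # processed item exits the node
                    finished.append(out)
                else:
                    next_items.append(out)
        items = next_items
    return finished + items


def extract_metric(item):
    if item["type"] != "log":
        return None
    return {"type": "metric", "name": "event.count", "value": 1}

def add_field(item):
    return {**item, "tags": {**item.get("tags", {}), "type": "test"}}

def parse_json(item):
    if item["type"] != "log":
        return None
    return {**item, "attributes": json.loads(item["body"])}

def filter_test(item):
    return DROP if item.get("tags", {}).get("type") == "test" else None

stack = [
    (extract_metric, False, True),   # 1. Extract Metric, Keep Original on
    (add_field,      False, False),  # 2. Add Field: tag type=test
    (parse_json,     True,  False),  # 3. Parse JSON, Final on
    (filter_test,    False, False),  # 4. Filter out type=test items
]

result = run_stack(stack, [{"type": "log", "body": '{"msg": "ok"}'}])
# Only the parsed log survives: it exited early at Parse JSON, while the
# metric continued to the filter and was dropped by the type=test condition.
```

Running the stack on one log leaves a single item: the parsed log, which bypassed the filter because Parse JSON is marked Final.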

Disable Processors

You can toggle specific processors in the sequence on or off. You might do this as you experiment with the sequence of processors, or when troubleshooting.

In this instance, the Extract Metric processor is used after the Parse JSON processor, while the string conversion and filter processors are toggled off.