Configuration Overview

Design and build effective pipelines by testing them iteratively as you develop.

Know your Data

To design an effective data handling pipeline, you need a good understanding of the data your workloads generate: its structure and content, and whether it is homogeneous, that is, of the same type and structure throughout.

When logs are ingested into the pipeline, the entire log becomes the body, and metadata is added to build an OTEL data item.
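As a rough illustration of this wrapping (a conceptual sketch only, not Edge Delta's actual implementation; the metadata field names and values below are hypothetical), a raw log line becomes the body of an OTEL-style record and metadata is attached around it:

```python
from datetime import datetime, timezone

def wrap_log(raw_line: str, source: str) -> dict:
    """Wrap a raw log line into an OTEL-style log record:
    the entire line becomes the body, and metadata is added
    as resource and attribute fields (illustrative names)."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "body": raw_line,  # the whole incoming log, untouched
        "resource": {"host.name": "web-01"},       # hypothetical metadata
        "attributes": {"source.type": source},     # hypothetical metadata
    }

item = wrap_log('level=error msg="disk full"', "file_input")
print(item["body"])  # → level=error msg="disk full"
```

The key point is that the pipeline does not parse the log at ingestion: the full text lands in the body, and any structure inside it must be extracted by later nodes.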

Bear in mind that the OTEL source node attempts to use the incoming OTEL log fields.

See more examples of data items. To understand how data is escaped, see Understand Escaping Characters.
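One common case worth previewing: when a structured JSON log is carried as a string body, the inner quotes are escaped. A quick sketch of what that looks like (illustrative, not the exact Edge Delta representation):

```python
import json

# An application emits a structured JSON log...
app_log = {"level": "error", "msg": 'user "alice" not found'}

# ...but once serialized into a string body, inner quotes are escaped.
body = json.dumps(app_log)
print(body)
# {"level": "error", "msg": "user \"alice\" not found"}

# Parsing the body reverses the escaping.
assert json.loads(body) == app_log
```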

Live Capture helps you design pipelines by showing actual data as it flows through the processor. See Live Capture.

Know your Requirements

To design effective log and metric pipelines, you must have a comprehensive understanding of the data handling requirements. These include business-driven factors such as cost efficiency and adherence to legal mandates; data-specific needs such as volume capacity and optimization of data throughput; information security; and maintainability.

Leverage AI-Powered Recommendations

Edge Delta’s Recommendations feature provides AI-powered insights to optimize your pipelines. The system analyzes your log patterns and suggests relevant processors like filtering, sampling, or masking to help reduce costs and improve data quality. Access recommendations while designing pipelines in the multiprocessor view or from the dedicated Recommendations page to get contextual suggestions based on your actual data patterns.

Pipeline Conceptual Design

Create a rough or conceptual pipeline containing the nodes whose functions fulfil the requirements. Consider the sequence of nodes and opportunities for branching the pipeline into parallel paths. Develop a high-level understanding of what your data should look like as it progresses through the pipeline to meet your requirements. For example, the first node might mask a specific field, while the next might extract a field from the body and convert it into an attribute. A parallel path might also be required to generate metrics or trigger alerts against a threshold. Finally, consider the data format requirements of each destination.
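The mask-then-extract sequence described above can be sketched as two functions chained in order (a conceptual illustration only; real Edge Delta nodes are configured in the pipeline, not written in Python, and the field names here are hypothetical):

```python
import re

def mask_field(item: dict, field: str) -> dict:
    # First node: replace a sensitive attribute's value with a fixed mask.
    if field in item["attributes"]:
        item["attributes"][field] = "*****"
    return item

def extract_to_attribute(item: dict, pattern: str, name: str) -> dict:
    # Next node: pull a value out of the body and promote it to an attribute.
    m = re.search(pattern, item["body"])
    if m:
        item["attributes"][name] = m.group(1)
    return item

item = {"body": "status=500 user=alice", "attributes": {"email": "a@b.co"}}
item = mask_field(item, "email")
item = extract_to_attribute(item, r"status=(\d+)", "status")
print(item["attributes"])  # {'email': '*****', 'status': '500'}
```

Sketching each node as a small transformation like this helps you verify that the data shape expected by each step, and by the final destination, matches what the previous step actually produces.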