What is Edge Delta?

Telemetry pipelines with support for logs, metrics, traces, and events, enabled by a next-generation architecture built to analyze petabytes on any budget.

Edge Delta is a new approach to managing telemetry data. It processes your data as it’s created and enables you to route it anywhere. As a result, you can make observability and security costs predictable, surface the most useful insights, and shape your data.

You can use Edge Delta for monitoring your workloads, querying large-scale log datasets, or creating observability pipelines.

Architecture

A pipeline is a set of agents with a single configuration. There are different types of pipelines that you can deploy depending on your architecture:

Node Pipeline

In a node pipeline, Edge Delta agents deploy directly within your computing infrastructure, such as Kubernetes clusters. This deployment strategy places the agents close to the data sources, enabling immediate analysis and data optimization at the edge. The Edge Delta agents pre-process data, which includes extracting insights, generating alerts, creating summarized datasets, and performing additional tasks. Processed data is then transmitted to various endpoints, including Edge Delta’s processing engine (Edge Delta Back End), external monitoring tools, or data storage solutions.

Edge Delta’s agents include the Processing Agent, Compactor Agent, and Rollup Agent. The Processing Agent executes the pipelines. The Compactor Agent compresses and encodes data such as metrics and logs into efficient formats. The Rollup Agent aggregates metric data by optimizing data point frequency and cardinality, which notably reduces storage needs and can accelerate data retrieval.
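To make the rollup idea concrete, here is a minimal sketch of a time-window rollup in Python. It is illustrative only, not the Rollup Agent's actual implementation: it reduces data point frequency by averaging each series over a window, and reduces cardinality by dropping a hypothetical high-cardinality label (`pod_id`).

```python
from collections import defaultdict

def rollup(points, window_s=60, drop_labels=("pod_id",)):
    """Aggregate raw metric points into one averaged point per series per window.

    Illustrative sketch of a generic rollup, not Edge Delta's Rollup Agent.
    Each point is a (timestamp, value, labels-dict) tuple.
    """
    buckets = defaultdict(list)
    for ts, value, labels in points:
        # Drop high-cardinality labels to shrink the number of distinct series.
        kept = tuple(sorted((k, v) for k, v in labels.items() if k not in drop_labels))
        bucket = ts - (ts % window_s)  # align the timestamp to the window start
        buckets[(bucket, kept)].append(value)
    # Emit one averaged point per (window, series) pair.
    return [
        (bucket, sum(vals) / len(vals), dict(kept))
        for (bucket, kept), vals in sorted(buckets.items())
    ]

points = [
    (10, 1.0, {"service": "api", "pod_id": "a"}),
    (20, 3.0, {"service": "api", "pod_id": "b"}),
    (70, 5.0, {"service": "api", "pod_id": "a"}),
]
rolled = rollup(points)  # 3 raw points collapse into 2 rolled-up points
```

Dropping `pod_id` merges the per-pod series into one per-service series, which is where the storage and query-speed savings come from.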

Gateway Pipeline

The gateway pipeline serves as a central aggregation and processing hub within a Kubernetes environment. Its primary role is to collect and process telemetry data from multiple sources, including node pipelines operating at the node level and external inputs. This centralized approach allows the gateway to perform tasks such as service-level metric aggregation and log deduplication.
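Log deduplication at a central hub can be sketched as a sliding-window duplicate filter. This is a generic illustration, not the gateway's actual dedup logic: exact-duplicate lines seen within the last `window` distinct entries are dropped.

```python
from collections import OrderedDict

def dedupe(lines, window=1000):
    """Drop exact-duplicate log lines within a bounded sliding window.

    Illustrative only; Edge Delta's gateway deduplication may differ.
    """
    seen = OrderedDict()  # insertion-ordered set with bounded size
    out = []
    for line in lines:
        if line in seen:
            continue  # duplicate within the window: suppress it
        seen[line] = True
        if len(seen) > window:
            seen.popitem(last=False)  # evict the oldest entry
        out.append(line)
    return out
```

Because the window is bounded, memory stays constant no matter how much data flows through, at the cost of re-admitting a duplicate once its original has been evicted.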

The gateway pipeline supports trace tail sampling, allowing decisions based on the full trace after it’s been collected. Since no single agent sees the whole trace, tail sampling must happen at the gateway, where all spans converge.
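A tail-sampling decision can be sketched as a predicate evaluated over a fully assembled trace. This is an illustrative policy, not Edge Delta's sampling implementation: keep the trace if any span errored or end-to-end latency exceeds a threshold; the span fields (`start_ms`, `end_ms`, `status`) are assumed for the example.

```python
def keep_trace(spans, latency_threshold_ms=500):
    """Tail-sampling decision over a complete trace.

    Illustrative sketch only: keep traces containing errors or exceeding
    a latency threshold; everything else is a candidate for dropping.
    """
    # Error anywhere in the trace: always keep.
    if any(span["status"] == "error" for span in spans):
        return True
    # Otherwise keep only slow traces, measured end to end.
    duration = max(s["end_ms"] for s in spans) - min(s["start_ms"] for s in spans)
    return duration > latency_threshold_ms

healthy = [
    {"start_ms": 0, "end_ms": 40, "status": "ok"},
    {"start_ms": 10, "end_ms": 90, "status": "ok"},
]
failed = [{"start_ms": 0, "end_ms": 30, "status": "error"}]
```

Note that neither check could run reliably at a single node agent: the error might be in a span collected elsewhere, and end-to-end latency is only known once all spans have converged at the gateway.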

It can also ingest data from external sources via network protocols, meaning information not captured by node pipelines can still be processed by the system.

Typically deployed as a scalable set of agents in Kubernetes, gateway pipelines allow for multiple instances within a cluster, each potentially handling different data types or processing tasks. Connectivity is facilitated through a specialized gateway input that receives data from multiple agents, and an output in node pipelines that is specifically configured to send data to the gateway.

Coordinator Pipeline

The coordinator pipeline functions as a control plane agent, facilitating communication and coordination among node agents within a Kubernetes cluster. It is specifically designed to manage cluster-wide tasks and streamline agent management. One of its key functions is to act as an intermediary for control signals between the backend and node agents, effectively reducing communication overhead in larger clusters.

The coordinator is deployed as a single-replica Kubernetes Deployment, so there is only one coordinator per cluster; this prevents duplicate data collection and ensures effective cluster management. The coordinator also uses the Kubernetes API to discover other pipeline agents within the cluster, helping to establish groupings that provide a cohesive view.

The gateway focuses on the aggregation and processing of telemetry data at the cluster level, operating as a central hub for both agent-collected and external data. Meanwhile, the coordinator is centered on cluster coordination and management, facilitating efficient operations and preparing for enhanced cluster-wide data capabilities in the future.

See Edge Delta Pipeline Installation for architecture options.


Edge Delta Log Patterns

The Edge Delta agent uses a proprietary algorithm to detect repeated patterns in log data.
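The detection algorithm itself is proprietary, but the general idea of pattern clustering can be illustrated with a simple token-masking approach: collapse variable tokens (numbers, long hex identifiers) so structurally identical lines map to the same pattern. This stand-in is not Edge Delta's algorithm.

```python
import re
from collections import Counter

def to_pattern(line):
    """Collapse variable tokens so similar log lines share one pattern.

    A generic masking sketch for illustration; Edge Delta's pattern
    detection algorithm is proprietary and not reproduced here.
    """
    line = re.sub(r"\b[0-9a-f]{8,}\b", "<hex>", line)  # ids and hashes
    line = re.sub(r"\d+", "<num>", line)               # any run of digits
    return line

logs = [
    "user 42 logged in from 10.0.0.1",
    "user 7 logged in from 10.0.0.9",
    "disk full on /var",
]
patterns = Counter(to_pattern(line) for line in logs)
```

Counting occurrences per pattern turns millions of raw lines into a short ranked list of recurring behaviors, which is what makes pattern views useful for triage.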

Log to Metric Conversion in Edge Delta

Edge Delta optimizes observability data by converting logs into metrics.
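Conceptually, log-to-metric conversion replaces a stream of raw lines with a handful of numeric series. The sketch below is a hypothetical example, not Edge Delta's node configuration: it emits one counter per log level, using an assumed metric naming scheme (`logs.count.<level>`).

```python
import re
from collections import Counter

def logs_to_metrics(lines):
    """Convert a stream of log lines into per-level counter metrics.

    Illustrative sketch of log-to-metric conversion; metric names and
    the level-matching regex are assumptions for this example.
    """
    counts = Counter()
    for line in lines:
        m = re.search(r"\b(DEBUG|INFO|WARN|ERROR)\b", line)
        level = m.group(1).lower() if m else "unknown"
        counts[f"logs.count.{level}"] += 1
    return dict(counts)
```

Shipping these counters instead of the raw lines preserves the signal most dashboards and alerts actually use, at a tiny fraction of the data volume.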

Edge Delta Metrics

Edge Delta ingests metric signals and calculates metrics from logs.

Edge Delta Anomaly Detection

Edge Delta detects anomalies in observability data.
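A minimal way to picture anomaly detection is a baseline-deviation check: flag a value whose z-score against recent history exceeds a threshold. This is a deliberately simple sketch, not Edge Delta's detection method.

```python
from statistics import mean, stdev

def is_anomaly(history, value, z_threshold=3.0):
    """Flag a value that deviates sharply from its recent baseline.

    A minimal z-score sketch for illustration; Edge Delta's anomaly
    detection is not limited to this approach.
    """
    if len(history) < 2:
        return False  # not enough data to form a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu  # flat baseline: any change is anomalous
    return abs(value - mu) / sigma > z_threshold

baseline = [10, 11, 9, 10, 10, 11, 9, 10]
```

Running a check like this close to the data source is what allows alerts to fire before the raw data ever leaves the cluster.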

In-Cluster Processing with Edge Delta

Processing logs within the cluster reduces egress costs and latency, providing faster insights.

Edge Delta Visual Builder

Build and edit observability pipelines through a graphical interface.