Edge Delta Architecture

Learn how Edge Delta’s architecture supports scalable telemetry pipelines across Kubernetes, cloud, and on-prem environments. We offer node, gateway, and coordinator pipeline deployment models — along with cloud pipeline deployments for agentless ingestion — with full support for open source standards.

Architectural Overview

Edge Delta provides a modular telemetry processing system made up of pipelines and agents that work across Kubernetes and non-Kubernetes environments, including Linux, Docker, Amazon ECS, macOS, Windows, and more. Its architecture is designed to optimize observability and security data workflows with flexibility, scalability, and operational clarity.

Core Components

  1. Agents and Integrations

    Edge Delta deploys lightweight agents within a wide variety of environments such as Kubernetes, Linux, Docker, ECS, macOS, Windows, and OpenShift. It also supports SELinux-enforced clusters. These agents collect telemetry data from a variety of sources including infrastructure, applications, and cloud-native services. Optionally, Cloud Pipeline integrations allow ingestion from agentless, serverless, or third-party sources without the need to deploy Edge Delta agents in every environment.

  2. Processing Pipelines

    A pipeline in Edge Delta is a collection of agents that share a common configuration. You configure the pipeline, and it governs how telemetry data is parsed, filtered, enriched, and routed by all associated agents. Processing occurs within agents and includes metric extraction, anomaly detection, and pattern recognition — all in near real time. This ensures that only relevant and optimized data is forwarded.
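The parse, filter, enrich, and route flow described above can be sketched with a minimal example. This is an illustration of the general technique only, not Edge Delta's actual configuration model; the log format, field names, and `process` function are hypothetical:

```python
import json
import re

def process(raw_line: str):
    """Parse, filter, and enrich a single log line; return None to drop it."""
    # Parse: assume JSON-structured logs (hypothetical format)
    record = json.loads(raw_line)

    # Filter: drop low-value debug noise at the edge
    if record.get("level") == "DEBUG":
        return None

    # Enrich: extract a latency metric via pattern recognition
    match = re.search(r"took (\d+)ms", record.get("msg", ""))
    if match:
        record["latency_ms"] = int(match.group(1))

    return record

lines = [
    '{"level": "DEBUG", "msg": "cache miss"}',
    '{"level": "INFO", "msg": "request took 42ms"}',
]
processed = [r for r in (process(l) for l in lines) if r is not None]
```

Because filtering and metric extraction happen before data leaves the agent, only the enriched INFO record would be forwarded downstream in this sketch.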

  3. Destinations

    Data is routed to one or more destinations for storage, alerting, or further analysis. These include observability platforms, SIEM tools, object stores like Amazon S3, and Edge Delta’s own Observability Platform.

Pipeline Types

Node Pipeline

A Node Pipeline is deployed per cluster and runs agents on each node in a Kubernetes environment. These agents collect and process telemetry data locally, standardize it on open formats like OpenTelemetry and OCSF, extract insights, compress data, and generate alerts in real time. The data is then sent to the configured outputs such as third-party tools or to Edge Delta.

Edge Delta supports multiple types of agents:

  • Processing Agent: Executes the pipeline logic.
  • Compactor Agent: Compresses and encodes telemetry data into efficient formats.
  • Rollup Agent: Aggregates metric data to reduce frequency and cardinality, improving performance and reducing storage needs.
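Conceptually, rollup-style aggregation collapses high-frequency datapoints into per-window aggregates while dropping high-cardinality labels. The sketch below illustrates the general idea only; it is not the Rollup Agent's implementation, and the label names are invented:

```python
from collections import defaultdict

def rollup(datapoints, window_s=60, keep_labels=("service",)):
    """Aggregate raw metric datapoints into per-window sums, dropping
    high-cardinality labels (e.g. pod_id) to reduce the series count."""
    buckets = defaultdict(float)
    for ts, value, labels in datapoints:
        window = ts - (ts % window_s)  # align timestamp to window start
        key = (window,) + tuple(labels.get(k) for k in keep_labels)
        buckets[key] += value
    return dict(buckets)

raw = [
    (10, 1.0, {"service": "api", "pod_id": "a1"}),
    (20, 2.0, {"service": "api", "pod_id": "b2"}),
    (70, 3.0, {"service": "api", "pod_id": "a1"}),
]
rolled = rollup(raw)
```

Here three raw points across two pods reduce to two windowed series points, which is the kind of frequency and cardinality reduction the Rollup Agent is described as performing.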

Gateway Pipeline

A Gateway Pipeline is a shared pipeline that provides centralized aggregation and advanced processing in Kubernetes clusters. It collects telemetry data from Node Pipelines and external sources, performing service-level metric aggregation, log deduplication, and trace tail sampling. The Gateway sees the full picture of incoming telemetry data, making it ideal for holistic processing.

It is typically deployed as a set of scalable agents and connected to Node Pipelines through dedicated input/output configurations.
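Because the Gateway sees telemetry from all Node Pipelines, it can deduplicate repeated log bodies cluster-wide. A minimal content-hash sketch of that idea, assuming exact-match deduplication (Edge Delta's actual dedup logic is not shown in this document):

```python
import hashlib

def dedup(logs):
    """Forward each unique log body once; count suppressed repeats per hash."""
    seen = {}
    forwarded = []
    for body in logs:
        digest = hashlib.sha256(body.encode()).hexdigest()
        if digest in seen:
            seen[digest] += 1  # duplicate: suppress, but keep the count
        else:
            seen[digest] = 1
            forwarded.append(body)
    return forwarded, seen

logs = ["conn refused", "conn refused", "disk full", "conn refused"]
forwarded, counts = dedup(logs)
```

Four incoming lines become two forwarded lines, with the repeat counts preserved for downstream analysis.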

Coordinator Pipeline

The Coordinator Pipeline is a control component that manages backend communication and agent coordination within a cluster. Deployed as a singleton in each Kubernetes cluster, it brokers communication between Edge Delta agents and the Observability Platform, reducing overhead in large environments. It handles tasks such as discovery, agent grouping, and live capture coordination.

Note: In a multi-node cluster without a Coordinator Pipeline deployed, live capture collects data only from the leader node. With a Coordinator deployed, live capture includes data from the top 5 producing nodes.
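As an illustration of what selecting the "top 5 producing nodes" could look like, the sketch below ranks nodes by telemetry volume. The ranking criterion (bytes emitted) and node names are assumptions for illustration, not Edge Delta's documented selection logic:

```python
def top_producers(node_volumes, n=5):
    """Pick the n nodes emitting the most telemetry, e.g. to scope
    live capture in a large cluster. node_volumes maps node -> bytes/sec."""
    ranked = sorted(node_volumes.items(), key=lambda kv: kv[1], reverse=True)
    return [node for node, _ in ranked[:n]]

volumes = {"node-a": 120, "node-b": 940, "node-c": 310,
           "node-d": 75, "node-e": 560, "node-f": 410, "node-g": 220}
capture_set = top_producers(volumes)  # the five busiest nodes
```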

Node Pipelines collect telemetry data at the source, Gateway Pipelines aggregate and process it at the cluster level, and Coordinator Pipelines manage environment-wide agent communication and control — together forming a robust foundation for efficient telemetry operations at scale.

With them, teams can intelligently collect, process, and route telemetry data to filter out noise and preserve high-value signals, reducing telemetry costs and enhancing downstream analysis.

Architecture Variants

Edge Delta pipelines can be deployed in various forms, including agent-based and agentless models:

  • Single Node Clusters: Ideal for lightweight environments like dev sandboxes or edge testbeds.
  • Large Clusters: Combine Node and Coordinator Pipelines to streamline operations and reduce noise.
  • Multi-Cluster Environments: Federate pipelines using shared or regional Gateway Pipelines.
  • Multi-Tenant & High Compliance: Use isolated Coordinators and Gateways per environment or tenant for secure and auditable deployments.
  • Hybrid Cloud Setups: Ingest from local clusters and route through a centralized gateway in the cloud.

Edge Delta allows you to tailor your pipeline structure, agent deployment patterns, and data flows to match your organizational structure and operational goals.

Cloud Pipelines (Agentless Option)

Edge Delta also offers cloud-hosted pipelines that do not require teams to deploy agents into their infrastructure. These are ideal for serverless workloads (e.g., AWS Lambda), streaming platforms (e.g., Amazon Kinesis), IoT systems, or environments with tight security or resource constraints.

You can push telemetry data to Cloud Pipelines using HTTP, HTTPS, or gRPC, or configure them to pull data using supported source nodes like HTTP Pull.
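A minimal sketch of pushing a batch of log records to a Cloud Pipeline HTTP source over HTTPS. The endpoint URL, auth header, and NDJSON content type below are placeholders for illustration; consult your pipeline's source configuration for the actual endpoint and expected payload format:

```python
import json
import urllib.request

def build_push_request(endpoint: str, token: str, records: list):
    """Build an HTTPS POST carrying newline-delimited JSON log records.
    Endpoint, credential, and content type are illustrative placeholders."""
    body = "\n".join(json.dumps(r) for r in records).encode()
    return urllib.request.Request(
        endpoint,
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/x-ndjson",
            "Authorization": f"Bearer {token}",
        },
    )

req = build_push_request(
    "https://example.invalid/v1/logs",  # placeholder endpoint
    "YOUR_TOKEN",                       # placeholder credential
    [{"msg": "deploy finished", "level": "INFO"}],
)
# To actually send: urllib.request.urlopen(req)
```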

This deployment model is fully managed by Edge Delta and requires no additional infrastructure provisioning on your side.

Note: Gateway and Coordinator Pipelines are currently only supported within Kubernetes environments.

For more details on deployment strategies, see: