Core Concepts Overview
Edge Delta is more than a telemetry collector—it’s a flexible platform for shaping how observability and security data flows through your systems. This section breaks down the foundational concepts that define how Edge Delta works under the hood, giving you insight into how to build efficient, scalable pipelines.
Each concept contributes to a unified framework for controlling, enriching, and routing your telemetry—delivering real-time insights, reducing downstream costs, and preserving visibility where it matters.
Explore the key concepts below:
Telemetry Pipelines
Learn how Edge Delta helps reduce the volume of telemetry data sent to downstream tools. This section explains how optimization is calculated, where reductions occur in your pipelines, and how to estimate cost savings over time. You’ll also learn how to compare volume changes by source or destination, view pipeline-level reductions, and analyze your total return on investment.
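The savings estimate described above is straightforward arithmetic. Here is a minimal sketch of that calculation; every number below (volumes, price per GB) is a made-up example value, not an Edge Delta figure or default:

```python
# Hypothetical example: estimate volume reduction and downstream cost savings.
# All numbers are illustrative assumptions.

ingested_gb_per_day = 500.0   # raw telemetry entering the pipeline
emitted_gb_per_day = 175.0    # data actually sent to downstream tools
price_per_gb = 2.50           # assumed downstream ingest price

reduction_pct = (1 - emitted_gb_per_day / ingested_gb_per_day) * 100
daily_savings = (ingested_gb_per_day - emitted_gb_per_day) * price_per_gb
monthly_savings = daily_savings * 30

print(f"Reduction: {reduction_pct:.1f}%")           # Reduction: 65.0%
print(f"Monthly savings: ${monthly_savings:,.2f}")  # Monthly savings: $24,375.00
```

Comparing these figures per source or per destination, as the section describes, is the same calculation applied to each slice of traffic.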
Processors
Processors simplify how you shape and manage telemetry data within your pipelines. Instead of chaining individual function nodes, you can define a full sequence of data operations—such as filtering, redacting, enriching, or aggregating—in a single step. This streamlines pipeline design, reduces misconfiguration risk, and makes ongoing maintenance easier.
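The core idea (one ordered sequence of operations instead of chained individual nodes) can be sketched in plain Python. The step names, log shape, and field names below are illustrative assumptions for this sketch, not Edge Delta's actual processor API:

```python
import re

def keep_errors(item):
    # Filter step: drop anything that isn't an error.
    return item if item["level"] == "error" else None

def redact_card_numbers(item):
    # Redact step: mask card-number-like sequences in the message.
    item["message"] = re.sub(r"\b(?:\d{4}-){3}\d{4}\b", "[REDACTED]", item["message"])
    return item

def enrich_with_env(item):
    # Enrich step: attach static metadata.
    item["env"] = "production"
    return item

def run_processor(item, steps):
    # A processor applies each step in order; a None result drops the item.
    for step in steps:
        item = step(item)
        if item is None:
            return None
    return item

log = {"level": "error", "message": "login failed for card 4111-1111-1111-1111"}
result = run_processor(log, [keep_errors, redact_card_numbers, enrich_with_env])
print(result)  # message is redacted, env field added
```

Defining the whole sequence in one place is what reduces misconfiguration risk: there is a single ordered list to review rather than a chain of separately wired nodes.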
Handling Logs, Metrics, and Traces
Understand how Edge Delta represents and interacts with logs, metrics, traces, and events. This section covers key interface elements—like the Logs viewer, Metrics Explorer, Trace Explorer, and Service Map—and shows how to navigate and use them effectively. You’ll also find guidance on correlating data across signals and interpreting individual data items as they move through a pipeline.
Real-Time Analytics
Real-time analytics in Edge Delta focuses on monitoring the current state of your systems through live dashboards and responsive visualizations. This section explains how Edge Delta prioritizes the most recent telemetry—ensuring fast detection, minimal lag, and actionable insights the moment data is ingested. You’ll learn how to reduce detection and response times, align with compliance needs, and build observability strategies that keep pace with dynamic, cloud-native environments.
Routing, Filtering, Aggregation
Routing, filtering, and aggregation work together to optimize observability pipelines. Routing directs logs to the appropriate destinations based on their content; filtering drops irrelevant data to reduce noise and cost; and aggregation condenses raw logs into metrics that yield clearer, actionable insights. Together they enable efficient, scalable, and targeted monitoring and response.
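The three operations compose naturally: filter first, then route what remains, then aggregate where a metric is more useful than raw logs. The sketch below illustrates that flow; the records, destination names, and routing rule are all assumptions made up for this example:

```python
from collections import Counter

# Illustrative log records; field names are assumptions for this sketch.
logs = [
    {"service": "payments", "level": "error", "message": "charge declined"},
    {"service": "payments", "level": "debug", "message": "retry scheduled"},
    {"service": "auth", "level": "error", "message": "token expired"},
    {"service": "auth", "level": "info", "message": "login ok"},
]

# Filter: drop low-value debug noise before it reaches any destination.
filtered = [l for l in logs if l["level"] != "debug"]

# Route: choose a destination based on content (hypothetical destinations).
def route(item):
    return "siem" if item["level"] == "error" else "archive"

routed = {"siem": [], "archive": []}
for item in filtered:
    routed[route(item)].append(item)

# Aggregate: collapse raw error logs into a per-service error-count metric.
error_counts = Counter(l["service"] for l in routed["siem"])
print(dict(error_counts))
```

The aggregation step is where most of the volume savings come from: downstream tools receive one metric data point instead of every underlying log line.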
Anomaly Detection and Insights
Anomaly Detection and Insights in Edge Delta centers on surfacing meaningful log patterns and unusual system behaviors as they happen, so you can respond faster and stay ahead of incidents. This section details how patterns are detected and analyzed, how negative sentiment is flagged to highlight potentially urgent issues, and how intuitive visualizations and customizable monitors help you track anomalies in real time.
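To make the two ideas concrete, here is a toy approximation: group similar logs into patterns by collapsing variable tokens, then flag patterns containing negative terms. This is a deliberately simplified illustration of the concept, not Edge Delta's actual pattern-detection or sentiment algorithms:

```python
import re
from collections import Counter

# Illustrative logs; real telemetry would stream through a pipeline.
logs = [
    "user 42 logged in",
    "user 7 logged in",
    "payment 901 failed: timeout",
    "payment 902 failed: timeout",
    "payment 903 failed: timeout",
]

def pattern_of(line):
    # Collapse variable tokens (numbers) so similar logs share one pattern.
    return re.sub(r"\d+", "<num>", line)

# Assumed keyword list for flagging negative sentiment.
NEGATIVE = ("fail", "error", "timeout", "denied")

patterns = Counter(pattern_of(l) for l in logs)
for pattern, count in patterns.items():
    flag = " [negative]" if any(w in pattern for w in NEGATIVE) else ""
    print(f"{count}x {pattern}{flag}")
```

A sudden jump in the count of a negative-flagged pattern is the kind of signal a monitor would alert on.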
Data Tiering
Data Tiering in Edge Delta lets you optimize log management by routing different types of telemetry data to the most appropriate destinations based on their value, use case, and cost. This section explains how granular pipeline controls reduce expenses, improve performance, and preserve critical context, so you can retain, analyze, and deliver the right data to the right tools while meeting compliance and operational needs.
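A tiering decision is essentially a classification rule mapping each item to a destination. The sketch below shows the shape of such a policy; the tier names, destination names, and classification rules are all hypothetical examples, not Edge Delta defaults:

```python
# Illustrative tiering policy; every name and rule here is an assumption.
TIERS = {
    "high":   "observability_platform",   # full-fidelity, fast queries, costly
    "medium": "low_cost_search",          # searchable but cheaper
    "low":    "object_storage_archive",   # cheap long-term retention
}

def tier_of(item):
    # Value-based classification; the rules are arbitrary examples.
    if item["level"] in ("error", "critical"):
        return "high"
    if item.get("compliance"):
        return "medium"
    return "low"

def destination(item):
    return TIERS[tier_of(item)]

log = {"level": "debug", "message": "cache miss", "compliance": False}
print(destination(log))  # object_storage_archive
```

Because the full-fidelity copy can still land in cheap storage, context is preserved even for data that never reaches the expensive tier.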
By understanding these core concepts, you’ll be better equipped to design telemetry pipelines that are fast, cost-effective, and tailored to your team’s needs.
Looking to build your own pipeline? Explore the How-To Guides for practical walkthroughs.