Core Concepts Overview
Edge Delta combines enterprise-grade telemetry pipelines with collaborative AI teammates. The pipelines provide a trusted data foundation: parsing, enrichment, routing, masking, and governance controls that apply consistently from edge to cloud. The AI Team operates on that foundation, correlating signals, initiating investigations, and surfacing findings continuously rather than waiting for human prompts.
This section explains the foundational concepts that define how these layers work together.
AI and Intelligence
AI Team Fundamentals
Edge Delta’s AI teammates operate continuously on streaming telemetry. Unlike single-interaction agents that wait for prompts and terminate after delivering results, teammates monitor event streams, correlate signals across time, and initiate investigations autonomously when conditions warrant attention. This section explains the building blocks: how teammates connect to telemetry through connectors, how workspaces organize investigations, and how permission controls balance autonomy against operational risk.
Model Context Protocol
MCP provides the contract between AI teammates and the systems they operate on. It defines how teammates discover available tools, fetch context from resources, and take actions while respecting governance boundaries. Learn how Edge Delta implements MCP to enable AI assistants to interact with your telemetry data while maintaining security and access control.
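MCP messages travel as JSON-RPC 2.0: a client first lists the tools a server exposes, then invokes one by name with structured arguments. The sketch below shows those two request shapes; the tool name `search_logs` and its arguments are hypothetical, not a documented Edge Delta tool.

```python
def jsonrpc_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request, the wire format MCP uses."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# A client first discovers what the server exposes...
discover = jsonrpc_request(1, "tools/list")

# ...then invokes a tool by name with structured arguments.
# "search_logs" and its arguments are illustrative only.
call = jsonrpc_request(2, "tools/call", {
    "name": "search_logs",
    "arguments": {"query": "error", "limit": 10},
})
```

Governance boundaries apply at the server: a teammate can only call tools the server chooses to list.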
Anomaly Detection and Insights
Anomaly detection surfaces meaningful log patterns and unusual system behaviors as they happen. The Drain algorithm groups similar logs into patterns, and monitors detect when negative patterns spike or new patterns emerge. AI teammates incorporate these signals immediately, correlating anomalies with recent changes and proposing remediation steps.
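The core idea behind pattern grouping can be sketched in a few lines: collapse the variable tokens in each log line into wildcards, then count lines that share the resulting template. This is a deliberately naive stand-in for Drain, which uses a fixed-depth parse tree rather than a single regex.

```python
import re
from collections import Counter

def template(line):
    """Collapse variable tokens (numbers, hex ids) into wildcards --
    a simplified stand-in for Drain's parse-tree clustering."""
    return re.sub(r"\b(0x[0-9a-f]+|\d+)\b", "<*>", line)

logs = [
    "connection from 10.0.0.1 failed after 30 ms",
    "connection from 10.0.0.2 failed after 12 ms",
    "cache flush completed in 5 ms",
]

# Two distinct patterns emerge; the connection failures group together.
patterns = Counter(template(line) for line in logs)
```

A monitor would then watch these counts over time and alert when a negative pattern spikes or a previously unseen template appears.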
Data Foundation
Edge Delta Architecture
Understand Edge Delta’s modular pipeline architecture, including Node, Gateway, Coordinator, and Cloud Pipelines. Learn when to use each pipeline type, how to choose deployment patterns for your infrastructure, and best practices for organizing pipelines at scale.
Telemetry Pipelines
Pipelines are the backbone of the platform. They process telemetry at the source, applying parsing, enrichment, and routing before data reaches downstream destinations. This section explains how optimization is calculated, where volume reductions occur, and how to estimate cost savings over time.
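As a back-of-envelope illustration of estimating cost savings from pre-index volume reduction, the numbers below (daily volume, reduction rate, per-GB cost) are entirely hypothetical:

```python
def estimated_monthly_savings(daily_gb, reduction_pct, cost_per_gb):
    """Rough monthly savings from reducing telemetry volume before
    indexing. All inputs are illustrative; real pricing varies."""
    reduced_gb_per_day = daily_gb * reduction_pct
    return reduced_gb_per_day * 30 * cost_per_gb

# e.g. 500 GB/day, 60% reduction, $2.50/GB downstream ingest cost
savings = estimated_monthly_savings(500, 0.60, 2.50)
```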
Processors
Processors shape telemetry data within pipelines. Instead of chaining individual function nodes, you can define a full sequence of operations (filtering, redacting, enriching, aggregating) in a single step. This streamlines pipeline design and reduces misconfiguration risk.
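Conceptually, a multi-step processor is just an ordered sequence of transformations applied to a batch of events. This sketch models that idea in plain Python; the filter, redaction, and enrichment rules are invented for illustration, not Edge Delta's processor syntax.

```python
import re

def drop_debug(events):
    """Filter: discard low-value DEBUG events."""
    return [e for e in events if e.get("level") != "DEBUG"]

def redact_emails(events):
    """Redact: mask email addresses in the message body."""
    for e in events:
        e["message"] = re.sub(r"\S+@\S+", "[REDACTED]", e["message"])
    return events

def add_env(events):
    """Enrich: attach a static environment attribute."""
    for e in events:
        e["env"] = "prod"
    return events

def run(events, processors):
    """Apply processors in order -- one sequence instead of a chain
    of single-purpose nodes."""
    for step in processors:
        events = step(events)
    return events

events = [
    {"level": "DEBUG", "message": "cache hit"},
    {"level": "ERROR", "message": "login failed for bob@example.com"},
]
out = run(events, [drop_debug, redact_emails, add_env])
```

Because the whole sequence is defined in one place, reordering or removing a step is a one-line change rather than a rewiring of nodes.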
Routing, Filtering, Aggregation
Control how telemetry flows through your pipelines. Route logs based on content to appropriate destinations, filter out irrelevant data to reduce noise and costs, and aggregate logs into metrics for clearer insights.
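Content-based routing amounts to evaluating each event against ordered rules and dispatching it to a destination. The destinations and rules below are hypothetical examples, not built-in Edge Delta routes.

```python
from collections import defaultdict

def route(event):
    """Pick a destination from event content. Destination names
    and matching rules are illustrative only."""
    if event["level"] in ("ERROR", "FATAL"):
        return "siem"
    if "payment" in event["message"]:
        return "audit_archive"
    return "low_cost_storage"

events = [
    {"level": "ERROR", "message": "db timeout"},
    {"level": "INFO", "message": "payment settled"},
    {"level": "INFO", "message": "healthcheck ok"},
]

buckets = defaultdict(list)
for e in events:
    buckets[route(e)].append(e)
```

High-value security events reach the SIEM, compliance-relevant records reach the archive, and everything else lands in cheap storage.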
Optimization
Data Reduction
Data reduction tackles the exponential growth of telemetry volumes through pre-index processing. Strategies range from field deletion and lookup table replacements to log-to-metric conversion and pattern recognition, achieving 20-90% volume reduction while maintaining observability.
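Log-to-metric conversion, one of the strategies above, can be sketched simply: instead of shipping every per-request log line, emit one summary metric per window. The log format and metric names here are invented for illustration.

```python
import re

def logs_to_metric(lines):
    """Collapse per-request latency logs into a single summary
    metric -- N lines in, one small record out."""
    latencies = []
    for line in lines:
        m = re.search(r"took (\d+) ms", line)
        if m:
            latencies.append(int(m.group(1)))
    return {
        "request.count": len(latencies),
        "request.latency_ms.avg": sum(latencies) / len(latencies),
    }

lines = [
    "GET /api took 120 ms",
    "GET /api took 80 ms",
    "GET /api took 100 ms",
]
metric = logs_to_metric(lines)
```

Three log lines become one metric record; at production volumes the same collapse is where most of the 20-90% reduction comes from.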
Data Tiering
Route different types of telemetry to the most appropriate destinations based on value, use case, and cost. Granular pipeline controls let you reduce expenses while preserving critical context for compliance and operational needs.
Flow Control
Dynamically manage data volume with intelligent sampling that balances cost optimization with full-fidelity troubleshooting during incidents. Adaptive sampling strategies respond to traffic patterns automatically.
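The core trade-off can be sketched as a sampling rate that reacts to observed error ratios: sample lightly in steady state, but switch to full fidelity when errors spike, and always keep error events regardless of rate. The threshold and rates are illustrative, not Edge Delta defaults.

```python
import random

def sample_rate(error_ratio, base_rate=0.1, threshold=0.05):
    """Adaptive rate: full fidelity during incidents, a fraction
    of traffic otherwise. Parameters are illustrative."""
    return 1.0 if error_ratio > threshold else base_rate

def keep(event, rate, rng=random.random):
    """Always keep errors; sample everything else at the given rate."""
    return event["level"] == "ERROR" or rng() < rate
```

In steady state (say a 1% error ratio) only 10% of routine traffic is kept; once errors exceed the threshold, every event flows through for troubleshooting.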
Operations
Monitoring and Visibility
Monitor agent health, throughput, and performance metrics across your pipelines. Track deployment status, identify bottlenecks, and ensure your telemetry infrastructure operates reliably.
Troubleshooting and Diagnostics
Debug pipelines, inspect live data, and diagnose issues across your deployment. Access tools and techniques for identifying and resolving common problems.
Governance
Security and Compliance
Controls apply at collection time. Pipelines can redact, hash, or mask fields before telemetry leaves its origin, and security signals are enriched with the attributes downstream SIEMs expect. Learn about PII masking, encryption, audit trails, and compliance frameworks including GDPR, HIPAA, SOC 2, and PCI DSS.
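Hashing at collection time, as described above, replaces a sensitive value with a stable token before the event leaves its origin: downstream systems can still correlate events from the same user without ever seeing the raw value. The regex, salt, and token format below are illustrative.

```python
import hashlib
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_emails(message, salt="pipeline-salt"):
    """Replace each email with a truncated salted SHA-256 hash.
    Same input -> same token, so correlation survives masking.
    The salt value here is a placeholder."""
    def _token(m):
        digest = hashlib.sha256((salt + m.group(0)).encode()).hexdigest()
        return f"email:{digest[:12]}"
    return EMAIL.sub(_token, message)

masked = mask_emails("login failed for alice@example.com")
```

Because the hash is deterministic per salt, a SIEM can group all events for `email:abc123def456`-style tokens while the raw address never leaves the source.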
Looking to build your own pipeline? Explore the How-To Guides for practical walkthroughs.