What is Edge Delta?

Edge Delta is an AI-native telemetry data management platform with intelligent Telemetry Pipelines for efficient data collection, processing, and routing.

Why Edge Delta Stands Out

Edge Delta is an AI-native platform designed to understand systems in motion rather than after the fact. The AI Team turns telemetry pipelines into living collaborators that interpret streaming data, while the observability layer is the core, out-of-the-box experience built on that foundation. Lightweight, Go-based agents collect from any environment, and pre-index visibility keeps you aware of what is flowing, so AI-driven insights always arrive with context.

Pipeline policies are defined once and applied everywhere. Whether you are enriching Kubernetes logs, enforcing masking rules on sensitive security events, or normalizing SaaS metrics onto an OpenTelemetry schema, the processing engine treats each step as a reusable component. Because the AI Team can analyze data as it moves through these pipelines, you get guidance and automations in real time. Routing is equally flexible—send raw or transformed data to Edge Delta’s observability layer or forward it to the services you already depend on.
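
To make the reusable-component idea concrete, here is a minimal Go sketch of the pattern, using invented Event and Processor types rather than Edge Delta's actual processor interfaces: a masking step and an enrichment step are defined once, then composed into any pipeline that needs them.

```go
package main

// Illustrative sketch only; the types and processor names here are assumptions,
// not Edge Delta APIs.

import (
	"fmt"
	"regexp"
)

// Event is a simplified stand-in for a telemetry record.
type Event map[string]string

// Processor is one reusable pipeline step: it takes an event and returns it modified.
type Processor func(Event) Event

// maskField returns a processor that replaces matches of pattern in the named field.
func maskField(field string, pattern *regexp.Regexp) Processor {
	return func(e Event) Event {
		if v, ok := e[field]; ok {
			e[field] = pattern.ReplaceAllString(v, "****")
		}
		return e
	}
}

// enrich returns a processor that attaches a static key/value, e.g. cluster metadata.
func enrich(key, value string) Processor {
	return func(e Event) Event {
		e[key] = value
		return e
	}
}

func main() {
	// Define the policy once...
	emails := regexp.MustCompile(`[\w.+-]+@[\w-]+\.[\w.]+`)
	pipeline := []Processor{
		maskField("body", emails),
		enrich("k8s.cluster.name", "prod-east"),
	}

	// ...and apply it to any event flowing through.
	e := Event{"body": "login failed for alice@example.com"}
	for _, p := range pipeline {
		e = p(e)
	}
	fmt.Println(e["body"], e["k8s.cluster.name"])
}
```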

Explore how these capabilities come together in the Telemetry Pipelines overview.

AI Team: Always-on Specialists

Edge Delta’s AI Team sits on top of your telemetry foundation and acts like an always-on set of specialists that live inside your tooling. OnCall AI coordinates requests, pulls in domain experts such as the SRE, Security Engineer, or Code Analyzer agents, and returns a single narrative that keeps engineers and operators aligned. Because the AI Team is wired directly into telemetry pipelines and your existing connectors, agents can inspect real-time data, propose remediations, and summarize outcomes without manual handoffs.

Custom teammates extend the AI Team to the workflows that make your organization unique. You decide which connectors they can reach, how they speak, and which periodic tasks they run. When an incident unfolds or a cost spike emerges, the AI Team is already in the conversation with the context it needs to help.

Review the full roster and responsibilities in the AI Team Overview.

Telemetry Pipelines Without Limits

Edge Delta pipelines accept logs, metrics, traces, and events from virtually any source. They provide granular controls at the edge—down to pod-level routing for Kubernetes—while still offering global policies like tail-based sampling or environment-wide aggregations. Configuration happens through human-readable YAML, but you can also design and test flows visually through the pipeline interface.
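
Tail-based sampling, for example, buffers every span of a trace and only then decides whether to keep it. The Go sketch below illustrates that decision logic under assumed span types and thresholds; it is not Edge Delta's implementation or configuration syntax.

```go
package main

// Illustrative sketch of the tail-based sampling decision; types are assumptions.

import (
	"fmt"
	"time"
)

// Span is a simplified record for one span within a trace.
type Span struct {
	TraceID  string
	Err      bool
	Duration time.Duration
}

// keepTrace makes the tail-based decision once the whole trace has been buffered:
// retain any trace that contains an error span or breaches the latency threshold.
func keepTrace(spans []Span, slow time.Duration) bool {
	for _, s := range spans {
		if s.Err || s.Duration > slow {
			return true
		}
	}
	return false
}

func main() {
	trace := []Span{
		{TraceID: "abc123", Duration: 40 * time.Millisecond},
		{TraceID: "abc123", Duration: 900 * time.Millisecond},
	}
	fmt.Println(keepTrace(trace, 500*time.Millisecond)) // true: the slow trace is kept in full
}
```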

Live Capture brings experimentation into the loop. You can trial new parsing rules, filters, and enrichments against a live sample before promoting them, so changes land cleanly in production. For common workloads, pre-built processing packs accelerate onboarding by encoding best practices for parsing, normalization, and enrichment.
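
One way to picture that workflow: run a candidate parsing rule over a captured sample and check how much of it the rule actually handles before promoting it. The helper below is a hypothetical illustration of that check, not part of the Live Capture interface.

```go
package main

// Illustrative sketch; the sample data and rule are invented for the example.

import (
	"fmt"
	"regexp"
)

// matchRate reports what fraction of a captured sample a candidate parsing rule handles.
func matchRate(sample []string, rule *regexp.Regexp) float64 {
	if len(sample) == 0 {
		return 0
	}
	matched := 0
	for _, line := range sample {
		if rule.MatchString(line) {
			matched++
		}
	}
	return float64(matched) / float64(len(sample))
}

func main() {
	sample := []string{
		`status=200 path=/healthz`,
		`status=500 path=/checkout`,
		`malformed line`,
	}
	rule := regexp.MustCompile(`status=(\d{3}) path=(\S+)`)
	fmt.Printf("candidate rule matches %.0f%% of the sample\n", 100*matchRate(sample, rule))
}
```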

For guided pipeline patterns, see the Effective Pipeline Design tutorial.

Optimization Guided by Intelligence

The platform continuously analyzes the data passing through each pipeline. Intelligent recommendations point out opportunities to drop redundant fields, standardize key-value pairs, redact sensitive strings, or highlight high-volume patterns worth suppressing. Proprietary clustering groups related logs into patterns in real time, letting you separate signal from noise before forwarding anything downstream.
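
As a toy illustration of pattern clustering (not the proprietary algorithm itself), one common approach is to mask the variable parts of each log line and group lines that share the resulting template:

```go
package main

// Illustrative sketch of log-to-pattern grouping; not Edge Delta's clustering engine.

import (
	"fmt"
	"regexp"
)

var variableParts = regexp.MustCompile(`\d+|\b[0-9a-f]{8,}\b`)

// template collapses numbers and long hex IDs so structurally similar lines cluster together.
func template(line string) string {
	return variableParts.ReplaceAllString(line, "<*>")
}

func main() {
	logs := []string{
		"user 1042 logged in from 10.0.0.7",
		"user 2213 logged in from 10.0.0.9",
		"disk usage at 91 percent",
	}
	clusters := map[string]int{}
	for _, l := range logs {
		clusters[template(l)]++
	}
	for pattern, count := range clusters {
		fmt.Printf("%3d  %s\n", count, pattern)
	}
}
```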

Anomaly detection runs on top of those patterns. When behavior strays from a learned baseline, Edge Delta immediately generates an anomaly enriched with suggested next steps. The AI Team amplifies that context by drafting a remediation plan, opening issues, or following up after the resolution so the entire incident lifecycle remains documented.
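
A simplified view of baselining, with invented numbers and a plain z-score test rather than the product's detector: compare the latest value of a pattern against the mean and spread of its recent history and flag large deviations.

```go
package main

// Illustrative sketch of baseline-deviation detection; values and threshold are assumptions.

import (
	"fmt"
	"math"
)

// isAnomalous flags the latest value when it sits more than k standard deviations
// away from the mean of the learned baseline window.
func isAnomalous(baseline []float64, latest, k float64) bool {
	if len(baseline) == 0 {
		return false
	}
	var sum, sumSq float64
	for _, v := range baseline {
		sum += v
		sumSq += v * v
	}
	n := float64(len(baseline))
	mean := sum / n
	std := math.Sqrt(sumSq/n - mean*mean)
	return math.Abs(latest-mean) > k*std
}

func main() {
	errorRates := []float64{2, 3, 2, 4, 3, 2, 3} // errors per minute over the baseline window
	fmt.Println(isAnomalous(errorRates, 25, 3))  // true: 25/min is far outside the baseline
}
```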

Dive deeper into these optimization strategies in the Data Reduction guide.

Security, Compliance, and Scale

Sensitive data benefits from localized control. Pipelines can redact, hash, or mask fields at collection time to keep regulated data sets compliant before they enter shared systems. Security telemetry is enriched with the attributes analysts expect, ensuring that downstream SIEMs or data lakes receive contextual, curated events.
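
The sketch below shows the general collection-time technique with assumed field names: hash identifiers you still need to correlate on, and redact values that should never leave the source. It is illustrative only, not Edge Delta's masking processor.

```go
package main

// Illustrative sketch; field names and masking choices are assumptions for the example.

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// hashValue keeps a field correlatable without exposing the raw identifier.
func hashValue(v string) string {
	sum := sha256.Sum256([]byte(v))
	return hex.EncodeToString(sum[:])
}

func main() {
	event := map[string]string{
		"user.email":  "alice@example.com",
		"card.number": "4111111111111111",
		"action":      "purchase",
	}

	// Hash the email so joins across events still work; drop the card number outright.
	event["user.email"] = hashValue(event["user.email"])
	event["card.number"] = "[REDACTED]"

	fmt.Println(event["action"], event["user.email"][:12], event["card.number"])
}
```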

The observability platform is powered by ClickHouse, delivering low-latency queries across petabyte-scale datasets. As load grows, you can dynamically tier data—ship every raw event to economical object storage for audit readiness while reserving high-fidelity flows for real-time investigation.
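
Tiering can be pictured as a per-event routing decision. The sketch uses made-up severity values and destination names: every event goes to the archive tier, and only high-signal events continue to the low-latency backend.

```go
package main

// Illustrative sketch of tiered routing; destination names are assumptions.

import "fmt"

// destinationsFor routes every event to the archive tier and promotes
// only high-signal events to the higher-cost, real-time tier.
func destinationsFor(severity string) []string {
	dests := []string{"archive"} // every raw event is kept for audit readiness
	if severity == "error" || severity == "critical" {
		dests = append(dests, "realtime")
	}
	return dests
}

func main() {
	for _, sev := range []string{"debug", "error"} {
		fmt.Println(sev, "->", destinationsFor(sev))
	}
}
```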

See how Edge Delta adheres to regulatory requirements in the Data Privacy and Compliance overview.

Telemetry Sources You Can Onboard

Edge Delta ingests telemetry from hosts running Linux, Windows, macOS, or containerized workloads, and from Kubernetes clusters capturing logs, events, metrics, and traces. Cloud platforms such as AWS, Google Cloud, and Azure are first-class citizens, as are high-throughput streaming systems like Kafka and Pub/Sub. Security teams can forward signals from platforms including CrowdStrike FDR, and protocol-level inputs ranging from OTLP and Prometheus to Fluentd, HTTP, TCP, UDP, and gRPC are supported out of the box.

Set up Kubernetes ingestion end-to-end with the Metrics from Kubernetes guide.

Destinations Edge Delta Powers

Processed data can land anywhere you need it. Send telemetry to cloud analytics stacks in AWS, Azure, or Google Cloud; stream into SIEMs such as Microsoft Sentinel, Falcon LogScale, IBM QRadar, Exabeam, or Splunk; and keep your observability ecosystem current across Datadog, New Relic, Dynatrace, Elastic, and Sumo Logic. Long-term archives can live in S3, Blob Storage, MinIO, Google Cloud Storage, or DigitalOcean, while collaboration hooks deliver summaries to Slack, Microsoft Teams, or generic webhooks. Edge and on-prem workflows remain covered through Kafka, Fluentd, and local file outputs.

Browse destination-specific setup steps in the Destinations catalog.

What You Can Achieve Today

Teams adopt Edge Delta to reduce ingestion costs, trim redundant telemetry, and improve the fidelity of what reaches downstream systems. Vendor neutrality means you avoid lock-in while still honoring tool preferences across the organization. Dynamic data tiering adjusts sampling and routing as conditions change, so you can keep every raw event for compliance without overwhelming high-cost destinations. Schema normalization keeps analytics consistent across services, and Kubernetes-aware collection ensures clusters are observable without drowning in noise. All of it is orchestrated with assistance from the AI Team, so investigations, remediation, and reporting move faster with less manual coordination.

Start activating these outcomes with the Getting Started guide for AI Team.