What is Edge Delta?
Collaborative AI Teammates on Trusted Telemetry
Edge Delta combines enterprise-grade telemetry pipelines with collaborative AI teammates. The AI Team coordinates specialized agents that operate across batched, static, event, and streaming data, using the same guardrails that secure mission-critical pipelines today. This approach keeps the data plane consistent while bringing investigation and response closer to the signals your teams depend on.
Agents continuously review telemetry, highlight emerging issues, and request input when actions require human approval. Shared channels keep observations and decisions in one place, so operators maintain context while the AI Team handles the heavy lifting of correlation and follow-up.
AI Teammates in Daily Operations
OnCall AI routes requests to domain specialists—SRE, Security Engineer, Code Analyzer, Cost Advisor, or any custom teammate you configure. Teammates can initiate conversations when telemetry changes, outline the evidence they observe, and work alongside engineers in the same channel. They summarize findings, propose remediation steps, and escalate whenever human judgment is required, preserving continuity from detection through post-incident review.
Event-Driven Intelligence Beyond User Prompts
Most AI agent platforms operate reactively: a user submits a prompt, the agent executes a bounded task, and the session terminates. Edge Delta’s AI Team operates continuously, monitoring event streams and initiating investigations when conditions warrant attention. External events—PagerDuty incidents, GitHub pull requests, AWS CloudTrail notifications—trigger autonomous workflows where teammates correlate telemetry, identify root causes, and present findings without human prompting. This event-driven architecture transforms AI from a question-answering tool into an active operational participant that notices what humans might miss and escalates what requires their attention.
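The trigger-to-workflow mapping described above can be pictured as a small dispatch layer. The sketch below is illustrative, not Edge Delta's implementation; the event sources, event kinds, and handler names are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    source: str   # e.g. "pagerduty", "github", "cloudtrail" (illustrative)
    kind: str     # e.g. "incident.triggered", "pull_request.opened"
    payload: dict

# Registry mapping (source, kind) pairs to autonomous workflows.
HANDLERS: dict[tuple[str, str], Callable[[Event], str]] = {}

def on_event(source: str, kind: str):
    """Decorator that registers a workflow for a given event type."""
    def register(fn):
        HANDLERS[(source, kind)] = fn
        return fn
    return register

@on_event("pagerduty", "incident.triggered")
def investigate_incident(event: Event) -> str:
    # A real teammate would correlate telemetry here; this stub just
    # returns a summary string for illustration.
    return f"investigating incident {event.payload['id']}"

def dispatch(event: Event) -> str:
    handler = HANDLERS.get((event.source, event.kind))
    if handler is None:
        return "no autonomous workflow; awaiting human prompt"
    return handler(event)
```

The key property is that workflows start from events, not from user prompts: `dispatch` fires as soon as an external system emits a matching event.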
Multi-Agent Collaboration at Production Scale
Complex operational scenarios span multiple domains: an outage investigation might require SRE expertise for initial triage, DevOps knowledge for deployment history, Code Analyzer review of recent changes, and Cloud Engineer assessment of infrastructure capacity. OnCall AI orchestrates these specialists, managing context handoffs, synthesizing findings, and presenting unified recommendations. Each teammate contributes domain-specific analysis while maintaining shared conversation state, enabling sophisticated workflows that progress through investigation, remediation, and post-incident review without losing context or requiring manual coordination.
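The handoff pattern can be sketched minimally: each specialist reads a shared conversation state, appends its finding, and later specialists see everything that came before. The teammate names and findings below are invented for illustration.

```python
class Teammate:
    """A named specialist with an analyze function over shared state."""
    def __init__(self, name, analyze):
        self.name = name
        self.analyze = analyze

def orchestrate(incident, specialists):
    # One shared state object flows through every specialist in turn,
    # so no context is lost between handoffs.
    state = {"incident": incident, "findings": []}
    for teammate in specialists:
        finding = teammate.analyze(state)  # sees all prior findings
        state["findings"].append((teammate.name, finding))
    return state

# Hypothetical specialists returning canned findings.
sre = Teammate("SRE", lambda s: "error rate spiked at 14:02")
devops = Teammate("DevOps", lambda s: "deploy v2.3 rolled out at 14:00")

state = orchestrate("checkout latency", [sre, devops])
```

In a real system each `analyze` call would be an LLM-backed investigation; the point here is the single accumulating state that makes synthesis across domains possible.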
Pre-Tuned Specialists and the Prompt Engineering Tax
Effective agentic systems depend on precisely crafted system prompts that define responsibilities, tool usage, and decision boundaries. Organizations face a choice: invest significant time learning prompt engineering for operational domains, or accept suboptimal agent behavior. Edge Delta ships six pre-tuned specialized teammates with production-ready prompts, carefully selected models, and tool assignments proven across customer deployments. Teams gain immediate value without prompt engineering expertise, while retaining full customization when workflows mature. An AI-powered teammate builder generates additional system prompts from natural language descriptions, lowering the barrier for specialized use cases.
Granular Trust Controls and Progressive Autonomy
Organizations must balance AI autonomy against operational risk. Edge Delta implements tool-level permission controls where every operation carries an explicit policy: execute autonomously or require human approval. Read-only operations typically run independently while state-changing actions—infrastructure modifications, deployments, security policy updates—default to approval workflows. As teams observe teammate behavior and validate reasoning, they selectively grant autonomous execution for lower-risk operations. This progressive trust-building enables velocity improvements without compromising safety or governance.
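The policy model reduces to a per-tool lookup plus a deny-by-default gate. This is a hedged sketch; the tool names and the promotion step are hypothetical, not Edge Delta configuration syntax.

```python
from enum import Enum

class Policy(Enum):
    AUTONOMOUS = "autonomous"
    REQUIRE_APPROVAL = "require_approval"

# Assumed defaults: read-only tools run freely, state-changing tools gate.
POLICIES = {
    "query_logs": Policy.AUTONOMOUS,
    "restart_pod": Policy.REQUIRE_APPROVAL,
    "update_security_group": Policy.REQUIRE_APPROVAL,
}

def execute(tool, action, approved=False):
    # Unknown tools fall back to requiring approval (deny by default).
    policy = POLICIES.get(tool, Policy.REQUIRE_APPROVAL)
    if policy is Policy.REQUIRE_APPROVAL and not approved:
        return f"{tool}: pending human approval"
    return action()

# Progressive trust: once a team has validated a tool's behavior,
# it can be promoted to autonomous execution.
POLICIES["restart_pod"] = Policy.AUTONOMOUS
```

The final line is the "progressive autonomy" step: trust is widened one tool at a time, based on observed behavior, rather than granted wholesale.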

Custom teammates extend these behaviors to your organization-specific workflows. You control their language, connector access, and scheduled tasks, ensuring every member of the AI Team reflects how your environment is instrumented. Review responsibilities and configuration patterns in the AI Team Overview.
Pipelines With Guardrails for Collaboration
Telemetry pipelines remain the backbone of the platform. Policies are authored once and deployed consistently to enforce parsing, enrichment, masking, and routing from the edge to the cloud. The same controls provide permissions, audit trails, governance, and compliance, even as data volume scales.

Streaming connectors and event-driven workflows reuse this foundation. Lightweight, Go-based agents collect from any environment and provide pre-index visibility into the telemetry they handle. Because the AI Team observes data in motion, it can guide adjustments, automate routine steps, and keep routed outputs aligned with downstream requirements. Explore pipeline patterns in the Telemetry Pipelines overview.
Streaming Intelligence and Pattern Detection
Edge Delta ingests logs, metrics, traces, and events from virtually any source while keeping analysis close to the data. Live Capture tests parsing rules, filters, and enrichments against real-time samples before changes reach production. Clustering groups related logs into patterns, and anomaly detection raises deviations with recommended next steps. AI teammates incorporate these signals immediately, coordinating investigations and documenting outcomes for later review. For guided optimization techniques, see the Data Reduction guide.
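Log clustering of the kind described above is often built on templating: replace variable tokens so structurally similar lines collapse into one pattern, then count occurrences. This is a generic sketch of the technique, not Edge Delta's clustering algorithm.

```python
import re
from collections import Counter

def template(line: str) -> str:
    # Mask variable tokens (hex ids, numbers) with placeholders so that
    # structurally similar lines collapse into one pattern.
    line = re.sub(r"0x[0-9a-f]+", "<HEX>", line)
    line = re.sub(r"\d+", "<NUM>", line)
    return line

def cluster(lines):
    """Group lines by template and count how often each pattern occurs."""
    return Counter(template(line) for line in lines)

logs = [
    "timeout after 500 ms on request 12345",
    "timeout after 750 ms on request 67890",
    "connection reset by peer 10.0.0.7",
]
patterns = cluster(logs)
```

The two timeout lines share a single pattern; a sudden jump in a pattern's count is the kind of deviation anomaly detection can raise.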
Security, Compliance, and Scale
Sensitive workloads stay protected because controls apply at collection time. Pipelines can redact, hash, or mask fields before telemetry leaves its origin, and security signals are enriched with the attributes downstream SIEMs expect. Edge Delta’s ClickHouse-powered observability layer supports low-latency queries across petabyte-scale datasets, while dynamic data tiering balances cost and fidelity. Dive deeper into governance practices in the Data Privacy and Compliance overview.
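Collection-time masking can be illustrated with a small transform applied before a record leaves its origin. The field names here are assumptions for the example; hashing (rather than dropping) keeps a stable token that downstream systems can still join on.

```python
import hashlib

# Hypothetical list of fields a policy marks as sensitive.
SENSITIVE = {"email", "ssn"}

def mask_record(record: dict) -> dict:
    """Hash sensitive fields before the event is shipped downstream."""
    out = {}
    for key, value in record.items():
        if key in SENSITIVE:
            # Truncated SHA-256 digest: irreversible, but stable per value.
            out[key] = hashlib.sha256(str(value).encode()).hexdigest()[:16]
        else:
            out[key] = value
    return out

masked = mask_record({"email": "user@example.com", "status": 200})
```

Because the transform runs at the collector, the raw value never transits the network, which is what makes the compliance guarantee hold regardless of where the data is routed.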
Sources and Destinations
Edge Delta meets telemetry wherever it lives: Linux, Windows, macOS, and containerized workloads; Kubernetes clusters streaming logs, events, metrics, and traces; and cloud platforms including AWS, Google Cloud, and Azure. Streaming systems such as Kafka and Pub/Sub, plus security platforms like CrowdStrike FDR, connect natively alongside protocols ranging from OTLP and Prometheus to Fluentd, HTTP, TCP, UDP, and gRPC.
Processed insights land wherever your teams need them. Feed cloud analytics stacks; SIEMs including Microsoft Sentinel, Falcon LogScale, IBM QRadar, Exabeam, or Splunk; and observability tools such as Datadog, New Relic, Dynatrace, Elastic, and Sumo Logic. Long-term archives can target S3, Blob Storage, MinIO, Google Cloud Storage, or DigitalOcean, while collaboration hooks deliver narratives to Slack, Microsoft Teams, or webhooks. Browse connection-specific guidance in the Destinations catalog, and set up Kubernetes ingestion with the Metrics from Kubernetes guide.
Operational Outcomes
Organizations use Edge Delta to accelerate investigations, reduce ingestion costs, and keep downstream systems focused on actionable signals. Vendor-neutral routing prevents lock-in while honoring tool preferences across teams. Dynamic data tiering adapts sampling and routing as conditions change, preserving raw events for compliance without overwhelming premium destinations. Throughout, AI teammates keep context intact—sharing findings, proposing actions, and leaving teams with more time for work that drives the business forward. Activate these outcomes with the Getting Started guide for AI Team.
Reducing Time-to-Resolution While Preserving Expertise
Operations teams consistently report being understaffed and overwhelmed. Budgets remain constrained while infrastructure complexity compounds with cloud-native architectures, microservices proliferation, and distributed systems sprawl. The promise of AI in operations is not eliminating human judgment but amplifying human productivity by handling mechanical investigation work that currently consumes hours of specialist time.
When an incident occurs, teammates automatically correlate logs, metrics, and traces across services; search for similar historical patterns; validate recent deployments; and assemble a structured timeline—work that might consume 30-60 minutes of manual effort. They surface findings with citations, allowing human experts to validate conclusions and approve remediation in minutes rather than hours. The mechanical work that extends mean time to resolution disappears; human focus shifts from evidence gathering to strategic decision-making.
This productivity multiplier applies across operational domains. Security teammates correlate CloudTrail events with access patterns and flag policy drift without analysts manually querying multiple systems. Code Analyzer reviews pull requests for common anti-patterns, missing tests, or security vulnerabilities before human reviewers engage, raising quality without expanding headcount. Cloud Engineer monitors resource utilization trends and forecasts capacity needs, enabling proactive scaling decisions rather than reactive firefighting.
The goal is making skilled operations teams more effective with existing resources—reducing toil, accelerating routine tasks, and preserving human bandwidth for problems requiring contextual judgment that AI cannot yet replicate. Organizations report that specialized teammates handle investigation work that would otherwise require hiring additional staff, while existing team members focus on architecture decisions, process improvements, and strategic initiatives that drive business value.
Transparent Cost Management and Model Selection
Foundation model consumption is priced per token, so costs vary dramatically with model choice and usage patterns. Switching from GPT-4o to Claude 3.5 Sonnet can multiply operational costs by an order of magnitude for equivalent workloads. Organizations scaling AI operations require visibility into consumption patterns, cost attribution, and optimization opportunities to prevent budget overruns.
Edge Delta provides granular consumption visibility: per teammate, per model, per channel, and aggregated across the organization. Teams observe which workflows consume the most tokens, which models deliver optimal cost-performance ratios for specific domains, and how usage patterns evolve over time. This transparency enables informed decisions about model selection and usage governance.
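The attribution dimensions listed above (per teammate, per model, per channel) amount to grouping the same usage records by different keys. The records and per-1k-token prices below are invented for illustration; real pricing varies by provider and model.

```python
from collections import defaultdict

# Hypothetical per-call usage records: (teammate, model, channel, tokens).
USAGE = [
    ("OnCall AI", "advanced-model", "#incidents", 12_000),
    ("SRE", "light-model", "#incidents", 3_500),
    ("SRE", "light-model", "#deploys", 1_200),
]

# Assumed prices per 1,000 tokens (illustrative only).
PRICE_PER_1K = {"advanced-model": 0.015, "light-model": 0.001}

def attribute_cost(usage, key_index):
    """Sum dollar cost grouped by one column of the usage records."""
    totals = defaultdict(float)
    for record in usage:
        model, tokens = record[1], record[3]
        totals[record[key_index]] += tokens / 1000 * PRICE_PER_1K[model]
    return dict(totals)

by_teammate = attribute_cost(USAGE, 0)  # cost per teammate
by_model = attribute_cost(USAGE, 1)     # cost per model
by_channel = attribute_cost(USAGE, 2)   # cost per channel
```

The same records answer all three questions, which is why granular collection is the prerequisite for any of the cost views.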
The platform assigns appropriate models by default: OnCall AI uses advanced models for complex orchestration decisions while specialists use lighter-weight models for routine analysis. Organizations override these defaults when specific workflows justify different trade-offs, guided by consumption data showing cost implications.