Model Context Protocol in Edge Delta
Overview
MCP gives Edge Delta a crisp seam between conversation and systems. It’s the contract that lets AI Team teammates discover sanctioned capabilities, pull just-enough context, and take actions without smuggling credentials or tribal knowledge into prompts. By putting every connector behind the same protocol surface, the AI Team can explain where evidence came from, which automations ran, and what guardrails applied before a human ever sees the result. Edge Delta layers organization-grade concerns on top of the baseline protocol (authorization checks, execution tracing, and data-residency controls) so teammates work within boundaries while still moving quickly.
MCP in Brief
MCP draws a clear boundary between intent and execution. The client (the conversational runtime acting on behalf of a user) expresses what it wants in terms of capabilities, while the server decides how to satisfy those requests against whatever systems sit behind it. This separation of concerns means you can evolve back-end integrations without perturbing the conversational layer, and you can change models without re-plumbing your estate.
MCP follows a client-server architecture where a host application (Edge Delta’s AI Team) establishes connections to one or more MCP servers. The host application creates one MCP client for each MCP server, maintaining dedicated one-to-one connections.
Note: Throughout this document, host application refers to Edge Delta’s AI Team—the runtime that provides the conversational experience and orchestrates teammate interactions. The host instantiates one MCP client per MCP server. The client handles discovery and invocation on behalf of the user, while servers expose tools, resources, and prompts backed by actual systems.
Three core primitives carry most of the weight:
- Tools are invocable operations with typed inputs and outputs. They're the verbs of the system: search a corpus, open a ticket, read a dashboard. Each tool defines a specific operation and validates its inputs with JSON Schema. Well-behaved tools are explicit about side effects and designed for idempotency so clients can retry without fear; when work is long-running, tools return resource URIs (and optionally support subscriptions) that the client can dereference or subscribe to rather than stuffing bulky results into the model's prompt. Tools may declare an `outputSchema`, and clients should validate structured results when present. See the MCP Tools specification for details.
- Resources are the nouns: stable references to documents or domain objects that can be fetched, paged, or sliced as needed. Resources provide structured access to information from files, APIs, databases, or any other source. Each resource has a unique URI (like `file:///path/to/document.md`) and declares its MIME type for appropriate content handling. Servers may expose resource templates (`resources/templates/list`) for parameterized URIs and provide autocompletion through the completion API. See the MCP Resources specification for details. Resources help conversations point at the same thing over time ("that pipeline revision", "this anomaly cluster") instead of copying large blobs into the context window.
- Prompts are reusable templates and snippets that servers curate to structure interactions with language models, encoding domain knowledge such as query patterns, diagnostic checklists, and escalation summaries. Clients can parameterize these at runtime, which keeps conversational scaffolding close to the systems it describes rather than scattered through application code.
For more details about MCP architecture and primitives, see the official MCP specification.
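Concretely, the three primitives appear in discovery responses as small, typed descriptors. The shapes below follow the MCP specification; the `searchLogs` tool, `pipeline://config/v2` resource, and `investigate-incident` prompt are illustrative names drawn from examples in this document, not a fixed Edge Delta catalog.

```ts
// Descriptor shapes as defined by the MCP specification.
// Names and URIs are illustrative.

// A tool: an invocable verb with JSON Schema-typed inputs.
const tool = {
  name: "searchLogs",
  description: "Search log events across connected pipelines",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string", description: "Search expression" },
      limit: { type: "number", description: "Max events to return" },
    },
    required: ["query"],
  },
};

// A resource: a stable noun, addressed by URI, with a declared MIME type.
const resource = {
  uri: "pipeline://config/v2",
  name: "Pipeline configuration, revision 2",
  mimeType: "application/yaml",
};

// A prompt: a reusable, parameterizable template curated by the server.
const prompt = {
  name: "investigate-incident",
  description: "Structured checklist for a first-pass incident review",
  arguments: [{ name: "service", required: true }],
};
```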
Diagram Key:
Component | Description |
---|---|
User | Human interacting with the AI Team through conversation |
AI Teammate | MCP client acting on behalf of the user, orchestrating all discovery, planning, and execution |
Approval UI | Slack or Edge Delta UI where humans review and approve higher-risk actions |
Discovery | Capability listing phase where the client learns what tools, resources, and prompts are available from connected servers |
Plan | Client selects minimal set of operations, preferring resource handles over full payloads to conserve context |
Tools | Invocable operations with typed inputs/outputs (searchLogs, openTicket, readDashboard) that perform actions across the MCP boundary |
Resources | Stable URI references to data (pipeline://config/v2, anomaly://cluster-47) that can be dereferenced on demand |
Prompts | Reusable templates (investigate-incident, analyze-pipeline) encoding domain knowledge and workflow patterns |
Edge Delta MCP Server | First-party MCP server exposing Edge Delta telemetry pipelines, dashboards, and operational data with full governance |
Custom MCP Server | Custom or third-party MCP servers adapting proprietary systems, legacy platforms, or vendor APIs |
REST / GraphQL | HTTP APIs providing structured access to cloud services, SaaS platforms, or internal microservices |
SQL / Warehouse | Relational databases, data warehouses (PostgreSQL, Snowflake) accessed through read-only or controlled queries |
Object Store | Blob storage systems (S3, Azure Blob, GCS) for logs, artifacts, or unstructured data |
File System | Local or mounted filesystems for configuration files, scripts, or application state |
Event Stream | Message brokers, event buses (Kafka, Kinesis, Pub/Sub) for real-time telemetry or event correlation |
Legacy System | Mainframes, batch job schedulers, or homegrown systems requiring protocol translation |
Audit Trail | Request/response logging with timestamps and initiating identity for compliance, post-incident review, and debugging |
Access Control | RBAC enforcement, data masking, residency controls, and activity logging applied at the server boundary |
How the MCP Seam Works:
1. Connect & Discover — When the AI Teammate (MCP client) starts a session, it connects to each available MCP server and sends `tools/list`, `resources/list`, and `prompts/list` requests. Servers respond with their complete capability catalogs, including tool schemas with typed inputs and outputs, resource URIs with MIME types, and prompt templates with parameter definitions. The discovery phase happens once per session and gives the client a full inventory of what it can do across all connected systems. The client never talks directly to the provider interfaces—all capability discovery flows through the MCP servers, which control what gets exposed.
2. Plan — Armed with the catalog, the client plans its approach to answering the user's question. Planning is conservative: bring back only what will fit in the context window, prefer resource handles over full payloads, and defer expensive tool calls until there's evidence they're needed. For example, instead of fetching an entire dashboard JSON, the client might request just the handle `dashboard://api-health` and only dereference it if the conversation requires those specific metrics.
3. Act via Tools — To perform actions, the client invokes tools by sending `tools/call` requests across the MCP seam to the servers. Each call specifies the tool name and provides arguments that match the tool's schema. The MCP servers receive these requests, apply governance controls, and translate them into operations against the backing provider interfaces (REST APIs, databases, object stores, etc.). Servers execute the requested operations and return structured results or handles for long-running work. These are heavy operations that cross system boundaries and may modify state, but the client remains decoupled from the underlying implementation.
4. Fetch by Handle — When the client needs specific context, it dereferences resource handles using `resources/read` requests sent to the MCP servers. Instead of copying large blobs into the prompt, resources act as lightweight pointers. For instance, `pipeline://config/v2` retrieves the exact pipeline configuration version mentioned in earlier logs, and `anomaly://cluster-47` fetches details about a specific anomaly cluster. The server fetches the data from the appropriate provider interface, applies masking and access controls, and returns only what the client is authorized to see. This keeps conversations terse while still grounded in live data.
5. Explain & Approve — For higher-risk actions, the teammate doesn't execute immediately. Instead, it packages the original user request, the derived insight from tool results and resources, and the proposed change into a structured message sent to Slack or the Edge Delta UI. Humans review the full context, approve or modify the plan, and send their decision back to the teammate. This human-in-the-loop step ensures critical operations like pipeline deployments or ticket escalations carry proper oversight without losing the reasoning trail.
6. Trace & Govern — Governance enforcement happens at the client and server boundaries where Edge Delta's controls are applied. Each MCP request and response is captured with timestamps and the initiating teammate identity, creating an audit trail for post-incident review or compliance checks. RBAC rules determine which teammates can access which MCP servers and which tools they can invoke. Masking is applied at the server boundary before payloads cross back to the client, respecting data classification policies. Edge Delta enforces residency controls for data it processes and returns via MCP; provider-side residency depends on the underlying system's configuration.
7. Translate to Provider Interfaces — MCP servers act as anti-corruption layers, translating clean MCP contracts into whatever protocols the backing provider interfaces require. When a tool call or resource request arrives at an MCP server, the server translates it to the appropriate provider interface protocol—REST/GraphQL API calls, SQL queries, S3 operations, filesystem reads, event stream subscriptions, or even mainframe transactions—while keeping the client isolated from those implementation details. This layer is where vendor-specific authentication, rate limiting, retry logic, and error translation happen, ensuring that changes to underlying systems don't ripple back through the MCP boundary.
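From the client side, steps 1 through 4 reduce to a handful of calls. A minimal sketch, assuming the official TypeScript SDK (`@modelcontextprotocol/sdk`); the server command, tool name, and resource URI are placeholders:

```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Connect & Discover: one client per server, then list capabilities.
const transport = new StdioClientTransport({
  command: "node",
  args: ["./my-mcp-server.js"], // placeholder server process
});
const client = new Client({ name: "ai-teammate", version: "1.0.0" });
await client.connect(transport);
const { tools } = await client.listTools();

// Act via Tools: a typed call across the seam.
const result = await client.callTool({
  name: "searchLogs", // hypothetical tool
  arguments: { query: "status:500 service:api", limit: 50 },
});

// Fetch by Handle: dereference a resource URI only when it's needed.
const config = await client.readResource({ uri: "pipeline://config/v2" });
```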
Protocol Mechanics
MCP uses JSON-RPC 2.0 over two standard transports: stdio (for local processes) and Streamable HTTP (for remote servers with standard HTTP authentication). Streamable HTTP may use an `Mcp-Session-Id` header for stateful sessions.
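On the wire, a Streamable HTTP exchange is an ordinary HTTP POST carrying one JSON-RPC message. A sketch; the endpoint URL and token are placeholders, and the session header is echoed only after the server issues one during initialization:

```ts
// One JSON-RPC request over Streamable HTTP.
const response = await fetch("https://mcp.example.com/mcp", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // The server may answer with plain JSON or stream responses as SSE.
    Accept: "application/json, text/event-stream",
    // Echo the session ID the server returned during initialization.
    "Mcp-Session-Id": "sess-123",
    Authorization: "Bearer <token>", // standard HTTP authentication
  },
  body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "tools/list" }),
});
```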
Lifecycle: When establishing a connection, the client calls `initialize` with the protocol version and supported capabilities. The server responds with its own capabilities, allowing both sides to understand what features are available. After a successful handshake, the client sends `notifications/initialized` to signal readiness. This ensures clients don't attempt unsupported operations and enables efficient communication.
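The handshake is two small messages. A sketch of the wire format, with an illustrative protocol version string:

```ts
// Client -> server: open the session and advertise capabilities.
const initializeRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2025-06-18", // illustrative version string
    capabilities: {},
    clientInfo: { name: "ai-teammate", version: "1.0.0" },
  },
};

// After the server's response, the client signals readiness with a
// notification (no id, so no reply is expected).
const initializedNotification = {
  jsonrpc: "2.0",
  method: "notifications/initialized",
};
```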
Discovery: The client sends `tools/list`, `resources/list`, and `prompts/list` requests. Servers respond with their complete capability catalogs, including tool schemas with typed inputs and outputs, resource URIs with MIME types, and prompt templates with parameter definitions. The client learns enough schema and metadata to plan conservatively: bring back only what will fit, prefer resource URIs over payloads, and defer expensive calls until there's evidence they're needed.
Execution: The client invokes tools via `tools/call`, fetches resources via `resources/read`, and retrieves prompts via `prompts/get`. Servers may send notifications like `notifications/tools/list_changed` when capabilities change, and clients can subscribe to resource updates for real-time monitoring. Every answer can point to the artifacts it used, and every action can be traced back to the inputs that motivated it.
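For example, a tool invocation and the handle dereference that might follow it look like this on the wire (tool name and URIs are illustrative):

```ts
// Invoke a tool; arguments must satisfy the tool's inputSchema.
const callRequest = {
  jsonrpc: "2.0",
  id: 7,
  method: "tools/call",
  params: {
    name: "searchLogs",
    arguments: { query: "status:500 service:api", limit: 50 },
  },
};

// A typical result: human-readable content. Long-running or bulky work
// is better returned as a resource URI the client can fetch later.
const callResult = {
  content: [{ type: "text", text: "Found 42 matching events" }],
  isError: false,
};

// Dereference a handle mentioned earlier in the conversation.
const readRequest = {
  jsonrpc: "2.0",
  id: 8,
  method: "resources/read",
  params: { uri: "anomaly://cluster-47" },
};
```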
Because MCP is agnostic to programming language and transport, it lends itself to evolutionary architecture. Some teams expose thin, single‑purpose servers that front one domain and keep failure blast radius tight. Others build aggregator servers that federate multiple systems behind a unified surface. Adapter servers often act as an anti‑corruption layer for legacy or vendor APIs, translating awkward contracts into a small set of stable tools and resources. The point isn’t to pick a single pattern but to keep the seam crisp so you can rearrange topology as needs change.
A typical flow reads like a short narrative. An agent receives a question, uses capability discovery to decide whether correlation or configuration is likely, calls a small number of tools to test that hypothesis, and then fetches the specific resources needed to justify an answer. If the question pivots (perhaps the result suggests a different service is to blame), the client re‑plans with the same handles in hand, so context doesn’t collapse with the change in direction. All of this leaves a trail: which capabilities were consulted, which artifacts were read, which actions were proposed. That trace is useful for debugging the agent’s reasoning and for satisfying governance needs when changes touch production systems. In Edge Delta’s implementation, the same contract underpins connectors for telemetry pipelines, dashboards, and event sources, allowing AI teammates to cite evidence and record their steps without leaking beyond organizational policy.
There are trade‑offs to respect. Coarse tools are easy to reason about but can force clients to over‑fetch; overly fine‑grained tools reduce payload size but increase chattiness and coordination complexity. Returning raw data makes models flexible but increases prompt pressure; returning summaries reduces size but risks losing crucial detail. Good servers version their schemas, document error semantics, and make side effects explicit; good clients treat tools as unreliable networks do, with retries, backoff, and a bias toward idempotent patterns. When those habits are in place, MCP gives you a clean seam where conversations, systems, and governance concerns can evolve independently without tangled coupling.
Why MCP matters for AI operations
Operational AI has to be reproducible as well as clever. MCP makes the boundary explicit: the conversational layer asks for capabilities; systems of record decide how to fulfill them. That separation means the same investigation can run against different back ends so long as they present compatible tools and resources, and it means you can rotate models (or combine them) without rewriting integrations. The result is a style of work where answers point at evidence and actions carry their own provenance, which is exactly what audit and post‑incident review require.
How MCP powers Edge Delta’s AI Team
Within Edge Delta, teammates use MCP to discover context, ground their explanations, and leave a navigable trail of what happened. The connector catalog shows up as a set of MCP endpoints; a teammate chooses sources based on the question at hand rather than on hard‑coded rules. Conversation state keeps handles to the actual artifacts consulted (the log pattern, the anomaly cluster, the ticket) so a follow‑up can reference the same object rather than re‑quote large payloads. When a runbook is in play, the sequence of calls that produced inputs becomes part of the record; higher‑risk steps package the original request, the derived insight, and a proposed change for approval in Slack or the Edge Delta UI, keeping humans in the loop without losing context.
Paths to MCP connectivity in Edge Delta
Edge Delta supports two complementary approaches:
Edge Delta MCP Connector. This first‑party connector exposes telemetry pipelines, dashboards, and operational data to the AI Team through standardized MCP tools. It respects masking, RBAC, and retention policies defined in your organization, ensuring that teammates see only sanctioned data while still being able to carry out rich investigations. Activity logs in the AI Team workspace capture each request and response for review and compliance.
Custom Remote MCP Server Connector. When your environment includes proprietary systems, legacy data stores, or vendor APIs without a native Edge Delta connector, you can point the AI Team at any MCP‑compliant server you control. The AI Team then invokes the tools exposed by that server using natural language, without users needing to know the underlying API shapes.
Edge Delta includes a broad catalog of event connectors used by the AI Team to monitor, correlate, and act. Some connectors are implemented natively, while others are integrated through MCP. The catalog includes (not exhaustive): Atlassian, AWS, CircleCI, Databricks, Edge Delta MCP, Custom Remote MCP, GitHub, Jenkins, LaunchDarkly, Linear, Microsoft Teams, PagerDuty, Sentry, and Slack. The common protocol surface lets teammates move fluidly between these systems during an investigation, while preserving a consistent audit trail of what was accessed and why.
Example scenarios with the Custom Remote MCP
Custom servers often wrap internal CRMs, order systems, or ticketing layers so teammates can ask natural questions that become well‑typed tool calls: look up an account, pull order history, summarize recent cases. The same pattern bridges older platforms. An adapter can translate MCP calls into mainframe transactions or homegrown query languages so batch jobs, inventory snapshots, or reconciliation runs are visible without specialist terminals. For vendor platforms with uneven APIs, a thin MCP adapter normalizes authentication and rate limits and exposes a small set of stable operations the AI Team can rely on as contracts evolve.
Common integration patterns for systems not covered by Edge Delta’s native event connectors include:
- Internal tools and repositories: Connect to private GitLab instances, Bitbucket servers, or proprietary version control systems to query commit history, review internal code changes, or analyze deployment patterns
- Databases and data platforms: Expose PostgreSQL, SQLite, Redis, or time-series databases through read-only or controlled-write interfaces, enabling natural-language queries that translate to SQL or database-specific commands
- Cloud infrastructure: Query Azure resources or Google Cloud Platform components to retrieve configuration, check service health, or gather deployment metadata
- Document and knowledge systems: Integrate internal wikis, document repositories, or knowledge bases that aren’t covered by existing connectors
- Business systems: Connect to internal CRMs, ERP systems, payment platforms, or financial APIs to query customer data, retrieve transaction history, or summarize analytics
- Legacy and proprietary systems: Bridge mainframe applications, custom databases, or homegrown tools that lack modern APIs
The MCP servers repository provides reference implementations demonstrating these patterns, including filesystem access with security controls, Git repository operations, and knowledge graph storage systems.
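As a sketch of what such a server can look like, here is a minimal adapter built with the official TypeScript SDK. The `lookupAccount` tool and its backing query are hypothetical stand-ins for whatever internal system you wrap; stdio transport is used for brevity, while a remote server would use Streamable HTTP.

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "crm-adapter", version: "1.0.0" });

// Hypothetical tool: wraps an internal CRM lookup behind a typed contract.
server.tool(
  "lookupAccount",
  { accountId: z.string().describe("Internal CRM account identifier") },
  async ({ accountId }) => {
    // Replace with a real query against your CRM, database, or legacy API.
    const account = { id: accountId, tier: "enterprise", openCases: 3 };
    return {
      content: [{ type: "text", text: JSON.stringify(account) }],
    };
  }
);

// stdio transport: suitable for a locally spawned server process.
await server.connect(new StdioServerTransport());
```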
Example scenarios with the Edge Delta MCP Connector
The Edge Delta MCP Connector enables teammates to interact naturally with observability data and pipeline configuration:
Incident Investigation
Incident work benefits from a deliberate arc: enumerate relevant sources, run targeted searches, pull representative traces, and compare today’s picture with historical baselines. During an investigation, a teammate might:
- Search logs across multiple pipelines using natural language queries
- Retrieve and explain anomaly patterns detected by Edge Delta processors
- Fetch metric trends and correlate them with recent deployments
- Compare current error rates against historical baselines
- Generate incident summaries that cite specific log patterns and metric data
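Each bullet above ultimately bottoms out in tool calls. The first step, for instance, might surface as a single `tools/call`; the natural-language question belongs to the teammate, while the typed arguments are what crosses the seam. The `searchLogs` tool and query syntax here are hypothetical:

```ts
// "Show me 5xx spikes on the checkout service in the last hour" becomes:
const request = {
  jsonrpc: "2.0",
  id: 12,
  method: "tools/call",
  params: {
    name: "searchLogs", // hypothetical tool on the Edge Delta MCP server
    arguments: {
      query: 'service:"checkout" AND status:>=500',
      from: "now-1h",
      to: "now",
    },
  },
};
```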
Dashboard Analysis
Teammates can read dashboard definitions to explain what each widget measures and fetch current values to give a live narrative without forcing people into the UI. For example, asking “What’s the health of our API services?” might trigger the teammate to query relevant dashboards, interpret metric trends, and provide context about whether observed patterns are normal for the time of day or deployment cycle.
Pipeline Configuration
Configuration-centric conversations follow the same seam. In practice, a teammate retrieves a pipeline, proposes explicit diffs to add a source, and—subject to policy—deploys with a clear explanation of who asked for what and why. Where change control is stricter, deployment tools are configured to require approval rather than execute immediately. The teammate reviews current pipeline configurations, identifies missing sources, suggests processor additions based on log patterns observed in recent data, validates configuration syntax before deployment, and documents configuration changes with context about why they were needed.
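Under stricter change control, the read and the write are distinct MCP operations with the approval gate between them. A sketch with hypothetical tool names and an illustrative diff:

```ts
// Read the current configuration by handle; nothing is mutated.
const current = {
  jsonrpc: "2.0",
  id: 20,
  method: "resources/read",
  params: { uri: "pipeline://config/v2" },
};

// Propose the change. A hypothetical deployPipeline tool can be configured
// to queue for human approval rather than execute immediately.
const proposal = {
  jsonrpc: "2.0",
  id: 21,
  method: "tools/call",
  params: {
    name: "deployPipeline",
    arguments: {
      pipelineId: "prod-api",
      diff: "+ source: kubernetes_logs", // illustrative one-line diff
      reason: "Add missing k8s source requested during incident review",
    },
  },
};
```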
Integration patterns
Teams typically adopt MCP along one or more patterns, depending on their organizational structure, security requirements, and operational workflows.
Single‑domain server
A product area (observability, commerce, risk) is represented by one server that contains the tools and resources for that domain. This limits blast radius and keeps permissioning straightforward.
This pattern works well when domains have distinct ownership, compliance boundaries, or rate limits. For example, your observability team might maintain an MCP server exposing log search, metric queries, and trace retrieval, while your security team operates a separate server for vulnerability scans, audit logs, and access reviews. Each server can evolve independently, and permissions map cleanly to existing RBAC structures. When an incident crosses domains, teammates can invoke tools from multiple servers in sequence while maintaining clear attribution of which system provided which insight.
Aggregator server
A single server proxies several back ends and presents a unified surface. This is useful when conversations frequently require cross‑system correlation, though it shifts more responsibility for routing and failure handling to the server.
Aggregators simplify the client experience by hiding heterogeneity: instead of teaching teammates about five different inventory APIs, you expose one “get_inventory” tool that internally fans out to regional databases, reconciles results, and returns a consolidated view. The trade-off is operational complexity. The aggregator becomes a critical path and must handle partial failures gracefully. Teams often build aggregators when they want to enforce a common schema across legacy and modern systems, or when they need to apply cross-cutting concerns like caching, rate limiting, or audit logging in one place.
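A sketch of that fan-out inside an aggregator's tool handler, with hypothetical regional endpoints; the point is handling partial failure gracefully and being explicit about what's missing:

```ts
// Hypothetical regional back ends behind one get_inventory tool.
const REGIONS = [
  "https://inventory.us.example.com",
  "https://inventory.eu.example.com",
];

async function getInventory(sku: string) {
  // Fan out; allSettled keeps one slow or failed region from
  // failing the whole call.
  const settled = await Promise.allSettled(
    REGIONS.map(async (base) => {
      const res = await fetch(`${base}/items/${encodeURIComponent(sku)}`);
      if (!res.ok) throw new Error(`${base}: HTTP ${res.status}`);
      return (await res.json()) as { region: string; count: number };
    })
  );

  const results: { region: string; count: number }[] = [];
  const failures: string[] = [];
  for (const s of settled) {
    if (s.status === "fulfilled") results.push(s.value);
    else failures.push(String(s.reason));
  }

  // Consolidated view, with failures surfaced rather than hidden.
  return {
    sku,
    total: results.reduce((n, r) => n + r.count, 0),
    results,
    failures,
  };
}
```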
Per‑team servers
Each team runs its own server, with curated tools and prompts that reflect how that team works. The AI client can connect to many servers at once and plan across them.
This pattern supports organizational autonomy: the SRE team exposes runbook tools for restarting services and checking health endpoints, the data engineering team provides tools for querying data lakes and triggering pipelines, and the customer success team maintains tools for looking up account history and creating support tickets. Teammates discover all available servers and select the relevant ones based on the question being asked. Because each team controls its own server lifecycle, they can iterate on tool design, add domain-specific resources, and retire obsolete operations without cross-team coordination. This decentralization scales well but requires governance to prevent tool proliferation and naming collisions.
Adapter servers for non‑MCP systems
Where direct MCP support does not exist, lightweight adapters map vendor APIs or legacy interfaces into MCP operations. This approach provides immediate utility without waiting on vendor roadmaps.
Adapter servers are translation layers: they accept MCP tool calls, transform them into the vendor’s native format (REST, GraphQL, SOAP, or even terminal commands), and package responses back into structured MCP results. For example, you might wrap a proprietary ticketing system’s API so teammates can search tickets, update priorities, and add comments using natural language, even though the vendor has no MCP support. Adapters also work for internal systems that predate modern API design—mainframes, batch job schedulers, or custom databases—allowing operational AI to reach parts of your infrastructure that would otherwise remain invisible. The adapter pattern keeps integration logic isolated and testable, and you can replace the adapter with a native connector later without changing how teammates interact with the system.
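A sketch of the translation step, assuming the official TypeScript SDK and a hypothetical vendor ticket API; the MCP-facing contract stays small and stable even if the vendor call changes:

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "ticketing-adapter", version: "1.0.0" });

// MCP-facing contract: one stable, typed operation.
server.tool(
  "searchTickets",
  { text: z.string(), status: z.enum(["open", "closed"]).optional() },
  async ({ text, status }) => {
    // Vendor-facing translation: auth, query shape, and error handling
    // live here, invisible to the client. URL and params are hypothetical.
    const url = new URL("https://vendor.example.com/api/v1/tickets");
    url.searchParams.set("q", text);
    if (status) url.searchParams.set("state", status);

    const res = await fetch(url, {
      headers: { Authorization: `Bearer ${process.env.VENDOR_TOKEN}` },
    });
    if (!res.ok) {
      // Translate vendor errors into a structured MCP tool error.
      return {
        content: [{ type: "text", text: `Vendor error: HTTP ${res.status}` }],
        isError: true,
      };
    }
    return { content: [{ type: "text", text: await res.text() }] };
  }
);
```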
Composable conversations
Because servers expose stable resource handles, conversations can pass those handles between teammates or across steps in a runbook, maintaining continuity without copying large payloads into prompts.
Resource handles act as lightweight pointers: instead of embedding a full dashboard JSON or a hundred-line log sample in a prompt, a teammate retrieves a handle like `mcp://edgedelta/dashboard/prod-api-health` and passes it to the next step. Another teammate can dereference that handle to fetch the latest data, compare it with a different time range, or link it to a ticket. This keeps context windows manageable and ensures teammates always work with fresh data rather than stale snapshots. Composable conversations also enable workflows where one teammate investigates an anomaly, hands off a resource handle to a specialist for deeper analysis, and then a third teammate uses the same handle to generate an incident summary—all without re-querying the underlying system or losing track of what was examined.
Governance and observability
Governance is part of the envelope rather than an afterthought. Every exchange records request and response payloads with timestamps and initiating identity, which makes post‑incident review and periodic audits concrete rather than forensic. Data locality rules ensure payloads stay in the selected region and that masking is applied before anything crosses a boundary. When a connector is unavailable, teammates don’t guess. They return a degraded but explicit response that cites the missing resource so humans can intervene without chasing ambiguous failures.
Design considerations and limits
MCP encourages small, explicit contracts, which makes planning easier but shifts attention to latency and context size. Tools should return structured results that can be paged or summarized before entering a prompt; long‑running work ought to yield handles rather than oversized payloads. Idempotency and clear error semantics make retries predictable; authentication belongs at the server boundary so tokens can rotate without redeploying clients. Version tool schemas and resource representations so capability changes are detectable, and prefer additive evolution over breaking changes. These habits keep conversations resilient even as systems behind the seam move at different speeds.
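Those client habits are simple to encode. A minimal retry wrapper with exponential backoff and jitter, safe only for tools the server documents as idempotent:

```ts
// Retry an idempotent MCP tool call with exponential backoff.
// `call` would typically wrap client.callTool(...) from the SDK.
async function withRetry<T>(
  call: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 250
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await call();
    } catch (err) {
      lastError = err;
      // Exponential backoff with jitter: ~250ms, ~500ms, ~1000ms.
      const delay = baseDelayMs * 2 ** i + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```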
Frequently asked questions
Is MCP tied to a particular LLM? No. MCP specifies how a client and server exchange structured operations and documents; it does not constrain which model generates or interprets the surrounding conversation.
How many MCP servers can a conversation use? As many as needed. The client can maintain several connections and choose per step which server to call, making cross‑system correlation a first‑class pattern.
What happens if a server is slow or down? The AI Team degrades gracefully, citing the missing resource and continuing with available context so humans know what could not be retrieved.
How does MCP relate to function calling? Function calling enables LLMs to generate structured outputs describing function invocations. MCP standardizes this pattern by defining a protocol for discovering, describing, and executing tools (functions) across different systems. While function calling focuses on the LLM’s ability to produce structured tool calls, MCP provides the infrastructure for advertising available tools, validating inputs, and returning results in a consistent way across multiple servers.
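The relationship is easy to see side by side: a model emits a structured call, and MCP supplies the envelope that carries it to a discoverable, schema-validated tool. The function-call shape below follows the common provider convention of JSON-encoded arguments; both names are illustrative:

```ts
// What a model emits under provider-style function calling:
const functionCall = {
  name: "searchLogs",
  arguments: '{"query":"status:500","limit":50}', // JSON string, per common convention
};

// The same intent carried across the MCP seam, with discovery,
// input validation, and consistent result handling around it:
const mcpRequest = {
  jsonrpc: "2.0",
  id: 3,
  method: "tools/call",
  params: { name: "searchLogs", arguments: { query: "status:500", limit: 50 } },
};
```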
Further Reading
- Model Context Protocol Specification - Official MCP documentation and architecture overview
- MCP Specification (Latest) - Detailed protocol specification
- Function Calling with LLMs - Guest article by Kiran Prakash on Martin Fowler’s site