Security Controls

How Edge Delta’s architecture addresses AI agent security concerns through data boundaries, permissions, approvals, and audit logging.

When organizations deploy AI teammates, security teams tend to ask variations of the same questions: What data can these agents see? Who controls what they can do? How do we know what happened after the fact? These are reasonable concerns, and the answers shape whether AI teammates become trusted members of the security operation or remain perpetually quarantined in pilot programs.

This page explains how Edge Delta addresses these concerns. The short version: the same pipeline controls that protect data flowing to your SIEM also protect data flowing to AI teammates. The same permission model that governs human access governs AI access. And every action an AI teammate takes gets logged in the same audit infrastructure you already use for compliance.


Data never bypasses your pipelines

The first concern most security teams raise is data exposure. AI agents query and combine data from multiple sources, and that data ends up in context windows and model responses. Without proper controls, sensitive information could leak into places it does not belong.

Edge Delta handles this by routing all AI teammate queries through your existing pipelines. The Mask processor, Filter processor, and EDXEncrypt all run before data reaches any teammate. If you have already configured these processors to redact PII, exclude sensitive log sources, or encrypt specific fields for your SIEM, those same protections apply to AI queries automatically.

This design choice matters because it means you configure data protection once. You do not maintain separate policies for human analysts, SIEM destinations, data lakes, and AI teammates. The pipeline is the enforcement point, and everything downstream inherits its rules.

See Mask Processor, Filter Processor, and EDXEncrypt and EDXDecrypt for configuration details.
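To make the redaction-before-context idea concrete, here is a minimal sketch of what a mask step does conceptually. The patterns and the `mask` function are hypothetical illustrations, not Edge Delta's actual Mask processor implementation; in the product, this happens inside the pipeline before data reaches any teammate.

```python
import re

# Hypothetical PII patterns for illustration only; a real deployment
# would configure redaction in the pipeline's Mask processor.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask(log_line: str) -> str:
    """Replace each PII match with a labeled placeholder."""
    for name, pattern in PII_PATTERNS.items():
        log_line = pattern.sub(f"<{name}:REDACTED>", log_line)
    return log_line

# Both values are redacted before the line could ever land in a
# teammate's context window or a model response.
print(mask("login failed for alice@example.com, ssn 123-45-6789"))
```

Because the pipeline is the single enforcement point, the same masked output flows to the SIEM, the data lake, and AI teammates alike.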

Permissions work at the tool level

The second concern is permission creep. Agents designed to help can accumulate access beyond what they actually need, especially when teams are eager to get value from a new capability and skip the principle of least privilege.

Edge Delta addresses this by making permissions explicit at the tool level. Each connector exposes specific tools, and each tool has its own permission setting: either Allow (executes without approval) or Ask Permission (requires a human to approve before execution). Read operations typically default to Allow, while write and modify operations default to Ask Permission. You can adjust these defaults per tool.

Teammates only access connectors that you explicitly assign to them. Specialized teammates like Security Engineer or DevOps Engineer come pre-configured with scoped access appropriate to their role. If you create a custom teammate, you assign connectors manually, which forces you to think through exactly what that teammate should be able to do.

You can restrict access further by enabling or disabling individual tools when you assign a connector to a specific teammate. Navigate to AI Team, then Teammates, then Edit, then Connectors to configure this.

See Connectors Overview for configuration details.
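The Allow / Ask Permission distinction can be sketched as a small gate in front of every tool call. The names below (`ToolPolicy`, `invoke`) are illustrative, not Edge Delta's API; they only model the behavior described above.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical model of per-tool permissions, for illustration only.
class ToolPolicy(Enum):
    ALLOW = "allow"                    # executes without approval
    ASK_PERMISSION = "ask_permission"  # requires human approval first

@dataclass
class Tool:
    name: str
    policy: ToolPolicy

def invoke(tool: Tool) -> str:
    """Gate every invocation on the tool's own permission setting."""
    if tool.policy is ToolPolicy.ALLOW:
        return f"{tool.name}: executed"
    return f"{tool.name}: waiting for approval"

# Read operations typically default to Allow; writes to Ask Permission.
print(invoke(Tool("query_logs", ToolPolicy.ALLOW)))
print(invoke(Tool("update_config", ToolPolicy.ASK_PERMISSION)))
```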

Humans stay in the loop

The third concern is autonomy. Fully autonomous agents can take actions without appropriate review, and even well-intentioned automation can cause problems when it runs without oversight.

Edge Delta keeps humans in the loop through approval workflows. When a tool requires permission, the teammate packages the triggering event, its analysis, and the proposed action into an approval request. These requests appear in the Activity page with a Waiting for approval status, giving you full visibility into what the teammate wants to do and why.

Infrastructure changes (deployments, configuration updates) route through channels rather than direct messages. This ensures that approval context is visible to the team, audit trails capture who approved what, and decisions remain discoverable for compliance reviews. Direct messages are intentionally read-only for state changes, so meaningful actions flow through channels where they receive appropriate oversight.

If something goes wrong, you can disable a connector or the teammate itself through the UI to halt all activity immediately.

See Activity and Channels for more on monitoring and managing teammate actions.
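The shape of an approval request can be sketched as a record that bundles the triggering event, the teammate's analysis, and the proposed action, then tracks who approved it. This is an illustrative model of the workflow described above, not Edge Delta's internal schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical approval-request record, for illustration only.
@dataclass
class ApprovalRequest:
    triggering_event: str
    analysis: str
    proposed_action: str
    status: str = "Waiting for approval"
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def approve(self, approver: str) -> None:
        # Audit trails capture who approved what, and when.
        self.status = f"Approved by {approver}"

req = ApprovalRequest(
    triggering_event="CPU saturation alert on api-gateway",
    analysis="Deployment v2.3.1 correlates with the spike",
    proposed_action="Roll back api-gateway to v2.3.0",
)
print(req.status)   # Waiting for approval
req.approve("alice")
print(req.status)   # Approved by alice
```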

Everything gets logged

The fourth concern is auditability. When an incident occurs, you need to trace what the AI accessed, what decisions it made, and what actions it took. Vague summaries do not satisfy auditors or help with root cause analysis.

Edge Delta captures comprehensive audit data across multiple surfaces:

| Location | What it captures |
|---|---|
| Activity page | All threads with priority, status, and token consumption |
| AI Team Events | Queryable history by monitor, thread state, channel, and connector |
| Thread details | Full conversation context, tools invoked, results returned |
| MCP layer | Request/response pairs with timestamps and teammate identity |

Per-message metrics include tokens used, response time, and quality score. Thread history preserves the full conversation context, so you can reconstruct exactly what happened during any investigation. The Events page supports filtering by time range, connector name, teammate, or thread state, which helps with both real-time monitoring and retrospective compliance reviews.

See AI Team Events and Activity for query and filtering options.
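The kind of filtering the Events page supports can be sketched as a predicate over audit records. The `Event` fields and `filter_events` helper below are hypothetical illustrations of filtering by time range, connector, and teammate, not the product's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical audit-event record, for illustration only.
@dataclass
class Event:
    timestamp: datetime
    connector: str
    teammate: str
    thread_state: str

def filter_events(events, since=None, connector=None, teammate=None):
    """Keep events matching every filter that was supplied."""
    return [
        e for e in events
        if (since is None or e.timestamp >= since)
        and (connector is None or e.connector == connector)
        and (teammate is None or e.teammate == teammate)
    ]

now = datetime.now(timezone.utc)
events = [
    Event(now - timedelta(hours=2), "pagerduty", "SecurityEngineer", "resolved"),
    Event(now - timedelta(minutes=10), "github", "DevOpsEngineer", "open"),
]
# Narrow to the last hour for a retrospective review.
recent = filter_events(events, since=now - timedelta(hours=1))
print(len(recent))  # 1
```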

Compliance aligns with your existing controls

When AI agents process regulated data (PII, PHI, cardholder data), compliance teams have questions about data handling and auditability. The good news is that the pipeline-level controls described above already address most of these concerns.

| Regulation | How Edge Delta addresses it |
|---|---|
| GDPR | Mask processor redacts PII before it reaches AI context |
| HIPAA | Pipeline filtering excludes PHI sources or masks specific fields |
| SOC 2 | Activity page and Events provide configuration change tracking and approval audit trails |
| PCI DSS | Mask processor tokenizes card patterns at ingestion |

RBAC restricts which teams can configure teammates and connectors. Audit trails satisfy SOC 2 configuration change tracking requirements. Data residency controls apply uniformly across pipelines, including data that AI teammates access.

See Strengthening Security and Compliance for broader platform compliance capabilities.

Investigating what went wrong

When something goes wrong, you need to trace what happened quickly. Start with the Activity page to locate the relevant thread. Each thread shows the full conversation history (all messages and tool invocations), which connectors and tools were used, the results returned from each tool call, and the approval history, including who approved what and when.

The Events page supports filtering by time, connector, teammate, and status, so you can narrow down to specific incidents quickly. If you need to halt a teammate’s activity immediately, disable the relevant connector or the teammate itself through the UI. This stops all further tool invocations while you investigate.

Dividing work between humans and AI

Without clear boundaries, AI teammates may attempt tasks better suited for human judgment, and humans may underutilize AI capabilities. Getting this division right matters for both effectiveness and trust.

AI teammates function as force multipliers for security operations. They excel at volume, consistency, and recall across large datasets. Humans provide strategic context and make decisions that require organizational knowledge. This division allows analysts to focus on high-value work rather than mechanical data processing.

| AI teammate responsibilities | Human responsibilities |
|---|---|
| Pattern recognition across high-volume logs | Strategic decisions requiring organizational context |
| Timeline construction from multi-system events | Stakeholder communication and escalation |
| Indicator enrichment and correlation | Policy decisions and exception handling |
| Uniform rule application regardless of workload | Final remediation approval |
| Repetitive analysis without fatigue | Investigation direction and prioritization |

The value of this separation comes from playing to each side’s strengths. Teammates can process months of logs in minutes while maintaining perfect recall. Humans can apply the contextual judgment that distinguishes legitimate anomalies from actual threats.

Rolling out AI teammates gradually

Deploying AI teammates without a measured rollout can create trust issues or unintended automation. Organizations that succeed typically progress through four phases, building confidence and validating teammate behavior at each stage.

| Phase | AI teammate role | Human role |
|---|---|---|
| Read-only analysis | Report findings and surface patterns | Validate logic and assess accuracy |
| Supervised recommendations | Suggest specific actions with supporting evidence | Review recommendations and execute approved actions |
| Approved automation | Execute pre-approved scenarios; escalate exceptions | Define approval criteria; handle escalated cases |
| Full automation | Handle routine work end-to-end | Manage exceptions and refine policies |

Every action an AI teammate takes is auditable, and every decision shows its reasoning. This transparency supports compliance reviews and builds operational trust over time.

Most organizations require 6 to 12 months to progress through all phases, depending on their security maturity and the complexity of their environment. Rushing through the phases tends to create setbacks that slow overall adoption.

Measuring whether it works

Track outcome-based metrics rather than activity metrics to measure AI teammate effectiveness. The Activity page and Events page provide the data you need to calculate these values.

| Metric | Target | How to measure |
|---|---|---|
| Mean time to detection | Under 1 hour | Time from event occurrence to teammate alert |
| Investigation resolution time | 4-hour reduction from baseline | Compare pre- and post-deployment resolution times |
| False positive rate | Under 20% | Ratio of dismissed alerts to total alerts |
| Continuous control validation | 90%+ coverage | Percentage of controls validated without manual intervention |
| Strategic vs mechanical work | 60% strategic, 40% mechanical | Analyst time allocation surveys |

The goal is not to reduce headcount. It is to reallocate analyst time toward threat hunting, architecture improvement, and proactive security work. The metrics should reflect whether that reallocation is actually happening.

See AI Team Performance for additional metrics on teammate token usage and response quality.
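Two of the metrics above reduce to simple arithmetic over data you can pull from the Activity and Events pages. The functions and input numbers below are illustrative only; no Edge Delta API is involved.

```python
def false_positive_rate(dismissed: int, total: int) -> float:
    """Ratio of dismissed alerts to total alerts."""
    return dismissed / total if total else 0.0

def resolution_time_reduction(baseline_hours: float, current_hours: float) -> float:
    """Compare pre- and post-deployment resolution times."""
    return baseline_hours - current_hours

# Made-up example figures: 12 of 80 alerts dismissed as noise,
# and average resolution time dropping from 9.5 to 5.0 hours.
fpr = false_positive_rate(dismissed=12, total=80)
print(f"False positive rate: {fpr:.0%}")    # 15%, under the 20% target
print(resolution_time_reduction(9.5, 5.0))  # 4.5-hour reduction
```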

See also