Security Controls

How Edge Delta’s architecture addresses AI agent security concerns through data boundaries, permissions, approvals, and audit logging.

Overview

AI teammates introduce considerations around data access, permissions, and oversight. This page explains how Edge Delta’s architecture addresses these concerns through layered controls: pipeline-level data boundaries, connector-level permissions, approval workflows, and comprehensive audit logging.

Data boundaries

Concern: AI agents can query and combine data from multiple sources, potentially exposing sensitive information in their context windows or responses.

How Edge Delta handles it:

AI teammates read from pipelines via the Edge Delta backend. All pipeline processors—masking, filtering, RBAC—apply before data reaches teammates. This means the same controls that protect data flowing to SIEMs, data lakes, or other destinations also apply to AI teammate queries.

  • Mask processor redacts PII patterns (email addresses, SSNs, credit card numbers, IP addresses) at ingestion time, before data enters any downstream system including AI context
  • Filter processor excludes entire log sources or specific record types from reaching AI-accessible pipelines
  • EDXEncrypt provides field-level encryption for highly sensitive fields that require reversible protection
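As a rough sketch of what masking at ingestion does — redacting matches before any downstream consumer sees the record — consider the following. The regex patterns here are illustrative stand-ins, not the Mask processor's actual configuration:

```python
import re

# Illustrative redaction patterns; a real mask processor uses vetted
# patterns tuned to the data sources involved.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def mask(record: str) -> str:
    """Redact matching fields before the record reaches any consumer."""
    for label, pattern in PATTERNS.items():
        record = pattern.sub(f"<{label}_REDACTED>", record)
    return record

print(mask("login from 10.0.0.7 by alice@example.com"))
# -> login from <IPV4_REDACTED> by <EMAIL_REDACTED>
```

Because this runs at ingestion, an AI teammate querying the pipeline only ever sees the redacted placeholders.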

Because processors run at the pipeline level, you configure data protection once and it applies uniformly—whether the consumer is a SIEM, a data lake, or an AI teammate.

See Mask Processor, Filter Processor, and EDXEncrypt and EDXDecrypt for configuration details.

Permission controls

Concern: Agents designed to help can accumulate permissions beyond what’s necessary for their role.

How Edge Delta handles it:

Each connector exposes specific tools with individual permission settings. You configure these in the connector’s Tools tab:

Permission        Behavior
Allow             Tool executes autonomously without approval
Ask Permission    Tool requires human approval before execution

Read operations typically default to Allow, while write and modify operations default to Ask Permission. You can adjust these defaults per tool.
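The Allow / Ask Permission gate can be sketched in a few lines. The tool names, `Permission` enum, and `dispatch` helper below are hypothetical illustrations, not Edge Delta's internal API:

```python
from enum import Enum

class Permission(Enum):
    ALLOW = "allow"           # executes autonomously
    ASK_PERMISSION = "ask"    # queued for human approval

# Illustrative defaults: reads auto-run, writes require approval.
DEFAULTS = {
    "list_alerts": Permission.ALLOW,
    "get_incident": Permission.ALLOW,
    "update_incident": Permission.ASK_PERMISSION,
    "run_playbook": Permission.ASK_PERMISSION,
}

def dispatch(tool, overrides=None):
    """Return how a tool call is handled: run now, or queue for approval."""
    perm = (overrides or {}).get(tool, DEFAULTS.get(tool, Permission.ASK_PERMISSION))
    return "execute" if perm is Permission.ALLOW else "await_approval"
```

Note the fail-closed fallback: a tool with no configured permission defaults to requiring approval rather than executing.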

Teammates only access connectors explicitly assigned to them. Specialized teammates (Security Engineer, DevOps Engineer, etc.) come pre-configured with scoped connector access appropriate to their role. Custom teammates require you to assign connectors manually, ensuring you grant only what each teammate needs.

You can further restrict tools when assigning connectors to specific teammates. Navigate to AI Team → Teammates → Edit → Connectors to enable or disable individual tools per teammate.

See Connectors Overview for configuration details.

Human oversight

Concern: Fully autonomous agents can take actions without appropriate review.

How Edge Delta handles it:

When a tool is set to Ask Permission, the teammate packages the relevant context—the triggering event, its analysis, and the proposed action—into an approval request. These requests appear in the Activity page with a Waiting for approval status.
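The shape of such a request can be illustrated with a hypothetical structure — the field names and example values here are ours, not Edge Delta's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ApprovalRequest:
    """Hypothetical shape of an approval request awaiting human review."""
    triggering_event: str   # what the teammate observed
    analysis: str           # the teammate's reasoning
    proposed_action: str    # what it wants to do
    status: str = "waiting_for_approval"
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

req = ApprovalRequest(
    triggering_event="error rate spike on checkout-service",
    analysis="spike correlates with the most recent deploy",
    proposed_action="roll back checkout-service to the previous release",
)
```

The point of the bundle is that an approver sees the evidence and the reasoning alongside the action, not the action in isolation.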

Infrastructure changes (deployments, configuration updates) occur in channels rather than direct messages. This routing ensures:

  • Approval context is visible to the team
  • Audit trails capture who approved what
  • Decisions remain discoverable for compliance reviews

Direct messages are intentionally read-only for state changes—meaningful actions route through channels where they receive appropriate oversight.

If a teammate requires immediate intervention, you can disable connectors or the teammate itself through the UI to halt all activity.

See Activity and Channels for more on monitoring and managing teammate actions.

Audit and logging

Concern: Difficulty tracking what agents accessed, what decisions they made, and what actions they took.

How Edge Delta handles it:

Edge Delta captures comprehensive audit data for AI Team activity:

Location          What it captures
Activity page     All threads, with priority, status, and token consumption
AI Team Events    Queryable history by monitor, thread state, channel, and connector
Thread details    Full conversation context, tools invoked, and results returned
MCP layer         Request/response pairs with timestamps and teammate identity

Per-message metrics include tokens used, response time, and quality score. Thread history preserves the full conversation context, so you can reconstruct exactly what happened during any investigation.

Use the Events page filters to query by time range, connector name, teammate, or thread state. This supports both real-time monitoring and retrospective compliance reviews.
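The filtering model can be sketched as follows. This is toy in-memory data to show the shape of a query; the real Events page applies equivalent filters server-side:

```python
from datetime import datetime, timedelta, timezone

# Toy stand-ins for AI Team Events entries (illustrative field names).
now = datetime.now(timezone.utc)
events = [
    {"ts": now - timedelta(hours=2), "connector": "slack",
     "teammate": "security-engineer", "state": "completed"},
    {"ts": now - timedelta(minutes=10), "connector": "jira",
     "teammate": "devops-engineer", "state": "waiting_for_approval"},
]

def query(events, since=None, connector=None, teammate=None, state=None):
    """Narrow events the way the Events page filters do; None means 'any'."""
    return [
        e for e in events
        if (since is None or e["ts"] >= since)
        and (connector is None or e["connector"] == connector)
        and (teammate is None or e["teammate"] == teammate)
        and (state is None or e["state"] == state)
    ]
```

Combining filters (for example, one teammate over the last hour) is how you narrow a compliance question down to a handful of threads.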

See AI Team Events and Activity for query and filtering options.

Compliance considerations

Concern: AI agents processing regulated data (PII, PHI, cardholder data) creates compliance questions about data handling and auditability.

How Edge Delta handles it:

Pipeline processors apply masking and redaction before AI access—the same controls that satisfy compliance requirements for other destinations apply to AI teammates. RBAC restricts which teams can configure teammates and connectors.

Regulation   How Edge Delta addresses it
GDPR         Mask processor redacts PII before it reaches AI context
HIPAA        Pipeline filtering excludes PHI sources or masks specific fields
SOC 2        Activity page and Events provide configuration change tracking and approval audit trails
PCI DSS      Mask processor tokenizes card patterns at ingestion
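Tokenization, as used for card patterns, replaces the raw value with a stable surrogate so the real number never enters downstream systems. Conceptually it can be sketched with a keyed hash — illustrative only; a real deployment would rely on a vetted tokenization mechanism with properly managed secrets:

```python
import hashlib
import hmac

# Illustrative only: a keyed hash stands in for a real tokenization
# service. The key would come from a secret manager, never source code.
SECRET = b"example-key"

def tokenize_pan(pan: str) -> str:
    """Replace a card number with a stable, non-reversible token."""
    digest = hmac.new(SECRET, pan.encode(), hashlib.sha256).hexdigest()
    return "tok_" + digest[:16]
```

A stable token preserves the ability to correlate records (the same card always yields the same token) without exposing the underlying value.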

Audit trails satisfy SOC 2 configuration change tracking requirements. Data residency controls apply uniformly across pipelines, including data that AI teammates access.

See Strengthening Security and Compliance for broader platform compliance capabilities.

Investigating issues

Concern: When something goes wrong, how do you trace what happened?

How Edge Delta handles it:

Start with the Activity page to locate the relevant thread. Each thread shows:

  • Full conversation history with all messages and tool invocations
  • Which connectors and tools were used
  • Results returned from each tool call
  • Approval history (who approved what, and when)

The Events page supports filtering by time, connector, teammate, and status, so you can narrow down to specific incidents quickly.

If you need to halt a teammate’s activity immediately, disable the relevant connector or the teammate itself through the UI. This stops all further tool invocations while you investigate.

Human-AI responsibility model

Concern: Without clear boundaries, AI teammates may attempt tasks better suited for human judgment, or humans may underutilize AI capabilities.

How Edge Delta handles it:

AI teammates function as force multipliers for security operations. They excel at volume, consistency, and recall across large datasets, while humans provide strategic context and make decisions that require organizational knowledge. This division allows analysts to focus on high-value work rather than mechanical data processing.

Responsibilities divide as follows:

AI teammate responsibilities                      Human responsibilities
Pattern recognition across high-volume logs       Strategic decisions requiring organizational context
Timeline construction from multi-system events    Stakeholder communication and escalation
Indicator enrichment and correlation              Policy decisions and exception handling
Uniform rule application regardless of workload   Final remediation approval
Repetitive analysis without fatigue               Investigation direction and prioritization

This separation improves outcomes: teammates can process months of logs in minutes with consistent recall, while humans focus on the contextual judgment that distinguishes benign anomalies from genuine threats.

Implementation approach

Concern: Deploying AI teammates without a measured rollout can create trust issues or unintended automation.

How Edge Delta handles it:

Organizations typically progress through four phases when deploying AI teammates. This phased approach builds confidence and allows teams to validate teammate behavior at each stage.

Phase                        AI teammate role                                      Human role
Read-only analysis           Report findings and surface patterns                  Validate logic and assess accuracy
Supervised recommendations   Suggest specific actions with supporting evidence     Review recommendations and execute approved actions
Approved automation          Execute pre-approved scenarios; escalate exceptions   Define approval criteria; handle escalated cases
Full automation              Handle routine work end-to-end                        Manage exceptions and refine policies

Every action an AI teammate takes is auditable. Every decision shows its reasoning. This transparency supports compliance reviews and builds operational trust. See Audit and logging for details on activity tracking.

Organizations typically require 6-12 months to progress through all phases, depending on security maturity and the complexity of their environment.

Measuring effectiveness

Concern: Difficulty quantifying the operational value of AI teammate deployment.

How Edge Delta handles it:

Track outcome-based metrics rather than activity metrics to measure AI teammate effectiveness. The Activity page and Events page provide the data needed to calculate these values.

Metric                          Target                           How to measure
Mean time to detection          Under 1 hour                     Time from event occurrence to teammate alert
Investigation resolution time   4-hour reduction from baseline   Compare pre- and post-deployment resolution times
False positive rate             Under 20%                        Ratio of dismissed alerts to total alerts
Continuous control validation   90%+ coverage                    Percentage of controls validated without manual intervention
Strategic vs mechanical work    60% strategic, 40% mechanical    Analyst time allocation surveys

The goal is not to reduce headcount but to reallocate analyst time toward threat hunting, architecture improvement, and proactive security work. Use the Events page to filter by teammate, connector, and time range when building these measurements.
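Once the raw counts and timestamps are exported, these metrics reduce to simple arithmetic. The helper names below are hypothetical, for illustration:

```python
# Hypothetical helpers; inputs would be pulled from the Activity and
# Events pages rather than computed by hand.
def mean_time_to_detection(pairs):
    """pairs: (event_time, alert_time) tuples, e.g. in epoch seconds."""
    deltas = [alert - event for event, alert in pairs]
    return sum(deltas) / len(deltas)

def false_positive_rate(dismissed, total):
    """Ratio of dismissed alerts to total alerts raised."""
    return dismissed / total if total else 0.0

print(false_positive_rate(12, 80))  # 0.15, under the 20% target
```

Tracking these values before and after each rollout phase shows whether the deployment is actually moving the targets.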

See AI Team Performance for additional metrics on teammate token usage and response quality.

See also