Kubernetes Events Connector
Overview
The Kubernetes Events connector monitors and collects Kubernetes cluster events in real-time. Kubernetes events document state changes and operational activities including pod lifecycle changes, resource allocation decisions, scheduling operations, configuration issues, and node health conditions. Content streams into Edge Delta Pipelines for analysis by AI teammates through the Edge Delta MCP connector.
The connector watches the Kubernetes API continuously and captures events before Kubernetes discards them under its default one-hour retention. This enables long-term historical analysis, anomaly detection, and compliance auditing.
When you add this streaming connector, it appears as a Kubernetes Events source in your selected pipeline. AI teammates access this data by querying the Edge Delta backend with the Edge Delta MCP connector.
Platform: Kubernetes only (requires in-cluster deployment)
Add the Kubernetes Events Connector
To add the Kubernetes Events connector, you configure it in AI Team and deploy it to an Edge Delta pipeline running in your Kubernetes cluster.
Prerequisites
Before configuring the connector, ensure you have:
- An Edge Delta agent deployed in your Kubernetes cluster with access to the Kubernetes API
- A service account configured with event read permissions (get, list, watch)
- RBAC configured with a ClusterRole (cluster-wide) or a Role (namespace-scoped)
Required RBAC Configuration:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: edgedelta-events-reader
rules:
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: edgedelta-events-reader-binding
subjects:
  - kind: ServiceAccount
    name: edgedelta
    namespace: edgedelta
roleRef:
  kind: ClusterRole
  name: edgedelta-events-reader
  apiGroup: rbac.authorization.k8s.io
```
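After applying these manifests, you can confirm the permissions before configuring the connector. A quick check using standard kubectl commands (the manifest file name is illustrative; the service account and namespace match the binding above):

```shell
# Apply the RBAC manifests (file name is illustrative)
kubectl apply -f edgedelta-events-rbac.yaml

# Confirm the edgedelta service account can read and watch events
kubectl auth can-i get events --as=system:serviceaccount:edgedelta:edgedelta
kubectl auth can-i list events --as=system:serviceaccount:edgedelta:edgedelta
kubectl auth can-i watch events --as=system:serviceaccount:edgedelta:edgedelta
```

Each `auth can-i` command should print `yes`.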
Configuration Steps
- Navigate to AI Team > Connectors in the Edge Delta application
- Find the Kubernetes Events connector in Streaming Connectors
- Click the connector card
- Optionally configure Advanced Settings for Report Interval
- Select a target environment (Kubernetes deployment)
- Click Save
The connector deploys to agents and begins watching the Kubernetes API for events.
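Once the connector is deployed, you can sanity-check that the cluster is actually producing events for it to pick up. A standard kubectl check (no Edge Delta-specific tooling required):

```shell
# Show the most recent events across all namespaces
kubectl get events -A --sort-by=.lastTimestamp | tail -n 20
```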

Configuration Options
Connector Name
Name to identify this Kubernetes Events connector instance.
Advanced Settings
Report Interval
Interval to report stats. Controls how frequently event statistics and metrics are generated.
Format: Duration in milliseconds
Default: 1 minute (60000ms)
Examples:
- 60000 - 1 minute (default)
- 30000 - 30 seconds (higher frequency)
- 300000 - 5 minutes (lower frequency)
Use Cases:
- Lower intervals: Real-time monitoring, critical clusters
- Higher intervals: Reduce overhead, less critical environments
Metadata Level
This option defines which detected resources and attributes are added to each data item as it is ingested by Edge Delta. You can select:
- Required Only: This option includes the minimum required resources and attributes for Edge Delta to operate.
- Default: This option includes the required resources and attributes plus those selected by Edge Delta.
- High: This option includes the required resources and attributes along with a larger selection of common optional fields.
- Custom: With this option selected, you can choose which attributes and resources to include. The required fields are selected by default and can’t be unchecked.
Based on your selection in the GUI, the `source_metadata` YAML is populated as two dictionaries (`resource_attributes` and `attributes`) with Boolean values.
See Choose Data Item Metadata for more information on selecting metadata.
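For illustration, a selection that includes node, namespace, and deployment resources plus the event attributes from the list below might populate `source_metadata` roughly as follows; the exact keys depend on your metadata level and are shown here only as an example:

```yaml
source_metadata:
  resource_attributes:
    k8s.node.name: true         # example resource attributes; actual keys depend on your selection
    k8s.namespace.name: true
    k8s.deployment.name: true
  attributes:
    event.domain: true          # example attributes
    event.name: true
```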
Kubernetes Events-specific metadata included:
- k8s.node.name - Node name where the event occurred
- k8s.namespace.name - Namespace of the resource
- container.image.name - Container image name
- k8s.statefulset.name - StatefulSet name (if applicable)
- k8s.daemonset.name - DaemonSet name (if applicable)
- k8s.replicaset.name - ReplicaSet name (if applicable)
- k8s.job.name - Job name (if applicable)
- k8s.cronjob.name - CronJob name (if applicable)
- k8s.deployment.name - Deployment name (if applicable)
- ed.domain - Edge Delta domain
- event.domain - Event domain
- event.name - Event name
Kubernetes event fields automatically included (see the sample event after this list):
- Event type (Normal, Warning)
- Event reason
- Event message
- Involved object (kind, name, namespace)
- Source component
- Timestamps (first and last occurrence)
- Event count
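These fields correspond to the standard core/v1 Event object the connector watches. A representative Warning event is sketched below; the pod name, namespace, and timestamps are illustrative, not taken from a real cluster:

```yaml
apiVersion: v1
kind: Event
metadata:
  name: payments-api-7d4b9c6f4-x2k8v.181f2a3b4c5d6e7f   # illustrative
  namespace: production                                  # illustrative
type: Warning                          # event type (Normal or Warning)
reason: BackOff                        # event reason
message: Back-off restarting failed container
involvedObject:                        # involved object (kind, name, namespace)
  kind: Pod
  name: payments-api-7d4b9c6f4-x2k8v
  namespace: production
source:
  component: kubelet                   # source component
firstTimestamp: "2025-01-15T10:00:00Z" # first occurrence
lastTimestamp: "2025-01-15T10:06:00Z"  # last occurrence
count: 6                               # event count
```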
Rate Limit
Rate limit configuration to control maximum event processing rate and manage processing capacity. Important for preventing event storms from overwhelming downstream systems.
Target Environments
Select the Edge Delta pipeline (environment) where you want to deploy this connector. Must be a Kubernetes environment - the connector requires in-cluster API access.
How to Use the Kubernetes Events Connector
The Kubernetes Events connector integrates seamlessly with AI Team, enabling AI-powered analysis of cluster operations. AI teammates automatically leverage event data to troubleshoot pod failures, analyze deployments, and investigate resource issues.
Use Case: Diagnosing Pod CrashLoopBackOff Issues
Identify why pods repeatedly crash by analyzing Kubernetes events capturing failure conditions. AI teammates use event data to reveal root causes (image pull failures, configuration errors, resource limits) and provide targeted remediation steps. When combined with PagerDuty alerts, teammates automatically query recent pod events during incident investigation to identify which pods are failing and why.
Configuration: Deploy to production Kubernetes environment with metadata enabled to capture pod lifecycle events.
Use Case: Detecting Node Resource Pressure
Proactively identify infrastructure problems through node resource pressure events (memory, disk, CPU). AI teammates detect patterns indicating capacity issues before they cause pod evictions. This is valuable for platform teams—teammates can correlate pressure events with pod scheduling failures and recommend capacity adjustments.
Configuration: Deploy to cluster monitoring environment to capture cluster-wide infrastructure health signals.
Use Case: Analyzing Deployment Scaling Operations
Understand application scaling behavior through deployment and replica set events. AI teammates analyze scaling patterns, identify capacity constraints, and troubleshoot failed scale operations. When integrated with Jira, teammates automatically document scaling issues by querying deployment events and creating tickets with diagnostic details.
Configuration: Deploy to deployment monitoring environment with event metadata to track scaling operations across workloads.
Troubleshooting
No events appearing: Verify RBAC permissions with `kubectl auth can-i list events --as=system:serviceaccount:edgedelta:edgedelta`. Confirm the service account exists (`kubectl get sa edgedelta -n edgedelta`). Check the ClusterRoleBinding (`kubectl get clusterrolebinding | grep edgedelta`). Review agent logs for permission errors.
Permission denied errors: Verify the ClusterRole includes the events resource with get, list, and watch verbs (`kubectl describe clusterrole edgedelta-events-reader`). Check that the ClusterRoleBinding references the correct service account. Confirm the service account is mounted in the pod (`kubectl get pod <pod-name> -n edgedelta -o yaml`).
Missing metadata fields: Verify the metadata level configuration includes Kubernetes fields. Check the event structure in your API version (`kubectl get events -o yaml | head -50`). Ensure your Edge Delta agent version supports full event metadata. Review processor configuration for metadata filtering.
High event volume: Apply namespace filtering to watch critical namespaces only. Filter by event type (Warning) to exclude routine Normal events. Use resource type filtering (Pod, Node) for specific monitoring goals. Implement deduplication in processors. Configure rate limits to prevent event storms.
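Before tightening filters, it can help to see which event types and workloads dominate. A quick way to gauge volume with standard kubectl field selectors (the namespace name is illustrative):

```shell
# Count Warning events cluster-wide
kubectl get events -A --field-selector type=Warning --no-headers | wc -l

# Inspect Warning events in a single namespace (namespace name is illustrative)
kubectl get events -n production --field-selector type=Warning

# Count events that involve Pods only
kubectl get events -A --field-selector involvedObject.kind=Pod --no-headers | wc -l
```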
API server impact concerns: The Kubernetes watch API is efficient and has minimal impact on the API server. Monitor API server metrics during deployment. Consider namespace-specific filtering to distribute load. Recommended agent resources: 256Mi-512Mi memory, 100m-500m CPU.
Events from some namespaces only: Check the namespace filter configuration for typos (namespace names are case-sensitive). Verify the namespaces exist (`kubectl get namespaces`). Confirm RBAC uses a ClusterRole rather than a namespace-specific Role. For cluster-wide monitoring, use a ClusterRole and ClusterRoleBinding.
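To check whether permissions are limited to certain namespaces, you can run the same auth check per namespace (namespace names are illustrative):

```shell
# "yes" in one namespace but "no" in another indicates a namespace-scoped Role
kubectl auth can-i list events --as=system:serviceaccount:edgedelta:edgedelta -n production
kubectl auth can-i list events --as=system:serviceaccount:edgedelta:edgedelta -n staging
```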
Next Steps
- Learn about Kubernetes event source node for advanced configuration
- Learn about creating custom teammates that can use Kubernetes events
For additional help, visit AI Team Support.