Kubernetes Logs Connector
Overview
The Kubernetes Logs connector collects container logs from Kubernetes pods. Kubernetes generates logs from every container, capturing application output, errors, and operational events essential for troubleshooting distributed applications. Content streams into Edge Delta Pipelines for analysis by AI teammates through the Edge Delta MCP connector.
The connector provides flexible filtering based on namespaces, pod names, and container names. It automatically enriches logs with Kubernetes metadata (deployments, nodes, resource attributes) and supports both plain text and JSON log parsing.
When you add this streaming connector, it appears as a Kubernetes Logs source in your selected pipeline. AI teammates access this data by querying the Edge Delta backend with the Edge Delta MCP connector.
Platform: Kubernetes only (requires in-cluster deployment)
Add the Kubernetes Logs Connector
To add the Kubernetes Logs connector, you configure include and exclude filters that specify which namespaces, pods, and containers to monitor.
Prerequisites
Before configuring the connector, ensure you have:
- Edge Delta agent deployed to Kubernetes cluster as DaemonSet with log read permissions
- RBAC permissions to read pod logs from target namespaces
- Identified namespaces, pods, or containers to monitor
Configuration Steps
- Navigate to AI Team > Connectors in the Edge Delta application
- Find the Kubernetes Logs connector in Streaming Connectors
- Click the connector card
- Configure Kubernetes Include with resource filters
- Optionally add Kubernetes Exclude to filter out specific resources
- Optionally configure Advanced Settings for metadata, parsing, or rate limiting
- Select a target environment (Kubernetes deployment)
- Click Save
The connector deploys to agents and begins collecting logs from matching pods.

Configuration Options
Connector Name
Name to identify this Kubernetes Logs connector instance.
Kubernetes Include
Kubernetes namespace, pod, or container names that agents should monitor. Regex patterns and wildcards are supported.
Format: k8s.<attribute>=<regex_pattern>
Examples:
- k8s.namespace.name=.* - All pods in all namespaces
- k8s.namespace.name=production - Production namespace only
- k8s.pod.name=^api-.*$ - Pods starting with "api-"
- k8s.namespace.name=staging,k8s.container.name=app - "app" container in the staging namespace
Available Attributes:
- k8s.namespace.name - Namespace name
- k8s.pod.name - Pod name
- k8s.container.name - Container name
- k8s.deployment.name - Deployment name
- k8s.statefulset.name - StatefulSet name
- k8s.daemonset.name - DaemonSet name
Kubernetes Exclude
Kubernetes namespace, pod, or container names to exclude from monitoring. Use it to remove specific resources from the set matched by the include filter.
Format: Same as Kubernetes Include - k8s.<attribute>=<regex_pattern>
Examples:
- k8s.namespace.name=^kube-system$ - Exclude the kube-system namespace
- k8s.pod.name=.*test.* - Exclude pods with "test" in the name
- k8s.container.name=sidecar - Exclude sidecar containers
Common Patterns:
- Exclude system namespaces: k8s.namespace.name=^kube-.*$
- Exclude test pods: k8s.pod.name=.*-test$
- Exclude monitoring sidecars: k8s.container.name=(prometheus|fluentd)
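The combined effect of include and exclude filters can be illustrated with a short sketch. This is not Edge Delta's actual implementation, only a model of the documented behavior: a container's logs are collected when the include filter matches and no exclude filter does, and comma-separated attributes within one filter must all match.

```python
import re

def should_collect(attrs, include, excludes):
    """Illustrative sketch of include/exclude filtering (not Edge Delta's
    actual implementation): collect logs when the include filter matches
    and no exclude filter does."""
    def matches(filter_expr):
        # "k8s.namespace.name=staging,k8s.container.name=app" requires
        # every attribute pattern to match (AND semantics).
        return all(
            re.search(pattern, attrs.get(key, ""))
            for key, pattern in (cond.split("=", 1) for cond in filter_expr.split(","))
        )
    return matches(include) and not any(matches(e) for e in excludes)

pod = {
    "k8s.namespace.name": "staging",
    "k8s.pod.name": "api-7f9c",
    "k8s.container.name": "app",
}
print(should_collect(pod, "k8s.namespace.name=staging,k8s.container.name=app", []))    # True
print(should_collect(pod, "k8s.namespace.name=.*", ["k8s.namespace.name=^kube-.*$"]))  # True
print(should_collect(pod, "k8s.namespace.name=.*", ["k8s.pod.name=^api-.*$"]))         # False
```

The last call returns False because the exclude pattern matches the pod name even though the include filter matches everything, which mirrors how excludes carve resources out of a broad include.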
Advanced Settings
Resource Fields
Additional metadata fields to include for this input, such as custom labels and annotations that enrich logs.
Examples:
- app.version - Application version
- team.name - Team ownership
- cost.center - Cost allocation
Pod Labels
List of regexes for selecting pod label keys to include.
Format: Regex patterns matching label keys
Examples:
- app.* - All labels starting with "app"
- version - Version label only
- (team|owner|environment) - Specific labels
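Conceptually, these selectors filter a pod's label map down to the keys matching one of the configured regexes. A minimal sketch of that behavior (illustrative only; the sample label keys are hypothetical):

```python
import re

def select_keys(metadata, patterns):
    """Keep only the metadata whose keys match one of the configured
    regexes (illustrative sketch of label/annotation key selection)."""
    return {key: value for key, value in metadata.items()
            if any(re.search(p, key) for p in patterns)}

labels = {
    "app": "checkout",
    "app.kubernetes.io/version": "1.4.2",
    "team": "payments",
    "pod-template-hash": "7f9c",
}
print(select_keys(labels, ["app.*"]))
# {'app': 'checkout', 'app.kubernetes.io/version': '1.4.2'}
print(select_keys(labels, ["(team|owner|environment)"]))
# {'team': 'payments'}
```

The same key-selection idea applies to the Pod Annotations, Node Labels, and Namespace Labels settings below.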
Pod Annotations
List of regexes for selecting pod annotation keys to include.
Format: Regex patterns matching annotation keys
Examples:
- prometheus.io/.* - Prometheus annotations
- deployment.* - Deployment-related annotations
Node Labels
List of regexes for selecting node label keys to include.
Format: Regex patterns matching node label keys
Examples:
- node-role.kubernetes.io/.* - Node role labels
- topology.kubernetes.io/zone - Availability zone
Namespace Labels
List of regexes for selecting namespace label keys to include.
Format: Regex patterns matching namespace label keys
Examples:
- environment - Environment label
- project.* - Project-related labels
Discovery
Overrides the agent's file discovery logic when locating Kubernetes pod log files on the mounted filesystem. Useful for retaining Kubernetes metadata and context when logs are in non-standard locations.
Format: File path patterns
Default: Edge Delta auto-discovery
Use Cases:
- Custom log paths
- Non-standard container runtimes
- Specialized file locations
Log Parsing Mode
Log parsing mode to use for this input. Basic does not parse logs as JSON; Full parses each log as JSON when it is valid.
Values: Basic, Full
Default: Basic
When to Use:
- Basic: Plain text logs, simple formats
- Full: JSON-formatted application logs, structured logging
Example: Set to Full for logs like {"level":"error","message":"connection failed"}
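The difference between the two modes can be sketched as follows. This is an illustration of the documented behavior, not the agent's internal representation:

```python
import json

def parse_log(line, mode="Basic"):
    """Sketch of the two parsing modes: Basic keeps the raw line as the
    log body; Full parses the line as JSON when it is valid JSON."""
    if mode == "Full":
        try:
            parsed = json.loads(line)
            if isinstance(parsed, dict):
                return parsed
        except json.JSONDecodeError:
            pass  # not valid JSON: fall back to plain text
    return {"body": line}

line = '{"level":"error","message":"connection failed"}'
print(parse_log(line, mode="Full"))   # structured fields become individually queryable
print(parse_log(line, mode="Basic"))  # the JSON stays a single text body
print(parse_log("plain text error", mode="Full"))  # invalid JSON falls back to plain text
```

With Full, fields like level and message become separately queryable by AI teammates instead of remaining a single opaque string.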
Metadata Level
This option is used to define which detected resources and attributes to add to each data item as it is ingested by Edge Delta. You can select:
- Required Only: This option includes the minimum required resources and attributes for Edge Delta to operate.
- Default: This option includes the required resources and attributes plus those selected by Edge Delta.
- High: This option includes the required resources and attributes along with a larger selection of common optional fields.
- Custom: With this option selected, you can choose which attributes and resources to include. The required fields are selected by default and can’t be unchecked.
Based on your selection in the GUI, the source_metadata YAML is populated as two dictionaries (resource_attributes and attributes) with Boolean values.
See Choose Data Item Metadata for more information on selecting metadata.
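For example, the generated source_metadata section might look like the following. This fragment is illustrative only; the exact keys and values depend on the metadata level and fields you select:

```yaml
source_metadata:
  resource_attributes:
    k8s.namespace.name: true
    k8s.pod.name: true
    k8s.node.name: true
    container.image.name: false
  attributes:
    event.domain: true
    event.name: true
```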
Kubernetes Logs-specific metadata included:
- k8s.node.name - Node where pod runs
- k8s.namespace.name - Pod namespace
- container.image.name - Container image
- k8s.statefulset.name - StatefulSet (if applicable)
- k8s.daemonset.name - DaemonSet (if applicable)
- k8s.replicaset.name - ReplicaSet (if applicable)
- k8s.job.name - Job (if applicable)
- k8s.cronjob.name - CronJob (if applicable)
- k8s.deployment.name - Deployment (if applicable)
- ed.domain - Edge Delta domain
- event.domain - Event domain
- event.name - Event name
Rate Limit
Rate limit configuration to control log ingestion volume and prevent log storms from overwhelming the pipeline.
Target Environments
Select the Edge Delta pipeline (environment) where you want to deploy this connector. Must be a Kubernetes environment - the connector requires in-cluster deployment.
How to Use the Kubernetes Logs Connector
The Kubernetes Logs connector integrates seamlessly with AI Team, enabling AI-powered analysis of container logs. AI teammates automatically leverage logs to troubleshoot application errors, monitor deployment health, and investigate pod crashes.
Use Case: Monitoring Production Error Patterns
Analyze application errors across production services by collecting logs from production namespace. AI teammates identify error patterns, determine which services generate most errors, and provide context about issues. When combined with PagerDuty alerts, teammates automatically investigate error spikes by querying recent production logs and identifying root causes.
Configuration: Include: k8s.namespace.name=production, Exclude: k8s.pod.name=.*test.*, Log Parsing Mode: Full
Use Case: Tracking Deployment Health
Verify deployment rollouts by analyzing container startup logs and identifying crash loops. AI teammates monitor logs from specific deployments, check for startup errors, and validate health checks pass. This is valuable when deploying new versions—teammates can confirm pods start successfully and catch issues before they impact users.
Configuration: Include: k8s.namespace.name=staging,k8s.deployment.name=api-v2, Log Parsing Mode: Full
Use Case: Investigating Pod Crashes
Identify why pods crash by retrieving logs up to termination point. AI teammates analyze stacktraces, identify specific errors causing crashes, and recommend remediation based on failure patterns. When integrated with Jira, teammates automatically document crash causes by querying pod logs and creating tickets with diagnostic details.
Configuration: Include: k8s.namespace.name=.*, Log Parsing Mode: Full for comprehensive crash analysis
Pattern Syntax Reference
When creating include and exclude filters, use regex patterns to match Kubernetes resources:
Basic Patterns:
- . - Any single character
- .* - Zero or more of any character (wildcard)
- ^ - Start of string
- $ - End of string
- [abc] - Any of a, b, or c
- [^abc] - NOT a, b, or c
Common Examples:
- k8s.namespace.name=.* with exclude k8s.namespace.name=^kube-system$ - All namespaces except kube-system
- k8s.pod.name=^(api|auth|worker).*$ - Pods starting with api, auth, or worker
- k8s.namespace.name=^(production|staging).*$ - Production and staging namespaces only
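Before deploying a filter, you can sanity-check a pattern against real resource names, for example with Python's re module (the basic constructs above behave the same way in most regex engines):

```python
import re

# Check the example patterns against sample resource names.
assert re.search(r"^kube-system$", "kube-system")
assert not re.search(r"^kube-system$", "kube-system-copy")      # anchors force an exact match
assert re.search(r"^(api|auth|worker).*$", "api-gateway-7f9c")
assert not re.search(r"^(api|auth|worker).*$", "frontend-api")  # must start with an alternative
assert re.search(r"^(production|staging).*$", "production-eu")
print("all patterns behave as expected")
```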
Troubleshooting
No logs appearing: Verify the Edge Delta DaemonSet is running (kubectl get ds edgedelta -n edgedelta). Check that the include filter matches the intended pods. Confirm RBAC permissions for log access. Verify the correct target environment is selected.
Too many logs: Add exclude filters for noisy namespaces (k8s.namespace.name=^kube-.*$). Make include patterns more specific. Configure rate limiting. Deploy separate connectors for different use cases.
Missing Kubernetes metadata: Ensure agent service account has permissions to read pods, deployments, services. Verify pods have standard Kubernetes labels. Check metadata level configuration includes required attributes.
JSON logs as plain text: Change Log Parsing Mode to Full. Verify application produces valid JSON. Check for BOM or encoding issues preventing JSON detection.
Agent not discovering pods: Use DaemonSet deployment for full cluster coverage. Test regex patterns against actual pod names. Verify target pods in Running state.
Delayed or missing logs: Check rate limiting not throttling collection. Monitor agent resource usage (CPU, memory). Verify network connectivity to Edge Delta backend.
Sensitive data collected: Add exclude filters for namespaces with secrets/PII. Configure data redaction processors in pipeline. For multi-tenant clusters, create separate connectors per tenant.
Next Steps
- Learn about Kubernetes source node for advanced configuration
- Learn about creating custom teammates that can use Kubernetes logs
For additional help, visit AI Team Support.