FluentD Connector
Configure the FluentD connector to receive logs and events from Fluentd agents using the forward protocol for unified logging aggregation.
Overview
The FluentD connector receives log data and events from Fluentd agents using the native Fluentd forward protocol. It enables Edge Delta to act as a Fluentd aggregation server, accepting data forwarded from Fluentd instances running on application servers, containers, and infrastructure. Content streams into Edge Delta Pipelines for analysis by AI teammates through the Edge Delta MCP connector.
When you add this streaming connector, it appears as a FluentD source in your selected pipeline. AI teammates access this data by querying the Edge Delta backend with the Edge Delta MCP connector.
Add the FluentD Connector
To add the FluentD connector, you configure Edge Delta to listen for incoming Fluentd connections on a specified port with shared key authentication.
Prerequisites
Before configuring the connector, ensure you have:
- Edge Delta agent deployed with network access to receive incoming connections
- Fluentd or td-agent installed on source systems forwarding logs
- Firewall rules allowing inbound traffic on the FluentD port
- A shared key agreed on for authentication between Fluentd instances and Edge Delta
Configuration Steps
- Navigate to AI Team > Connectors in the Edge Delta application
- Find the FluentD connector in Streaming Connectors
- Click the connector card
- Configure the Port number to listen on (default: 3421)
- Set the Shared Key for authentication
- Configure Listen address (default: 0.0.0.0)
- Set Read Timeout for incoming connections
- Optionally configure Advanced Settings
- Select a target environment
- Click Save
The connector is now listening for Fluentd connections and streaming content.
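On the Fluentd side, each agent points at the connector with the forward output plugin. A minimal sketch is shown below; the hostname, server address, and shared key are placeholders for your own values:

```
<match **>
  @type forward
  # Must match the Shared Key configured on the Edge Delta connector
  <security>
    self_hostname app-server-01            # placeholder: this node's hostname
    shared_key my-secure-shared-key-2024   # placeholder: your shared key
  </security>
  <server>
    host edge-delta.example.com            # placeholder: Edge Delta agent address
    port 3421                              # default connector port
  </server>
</match>
```

Restart Fluentd (or td-agent) after the change; logs matched by the pattern then stream to Edge Delta over the forward protocol.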

Configuration Options
Connector Name
Name to identify this FluentD connector instance.
Port
Port number to listen on for incoming Fluentd forward protocol connections.
Format: Port number (1-65535)
Examples:
- 3421 - Default Edge Delta port
- 24224 - Standard Fluentd forward protocol port
- 9880 - Non-privileged port alternative
Shared Key
Shared secret key for authentication between Fluentd nodes and Edge Delta. Must match the shared key configured in your Fluentd instances’ forward output plugin.
Format: String value (alphanumeric and special characters)
Examples:
- my-secure-shared-key-2024
- a8f5e2b9c1d4f7a3e6b8c9d2f5a7e3b6
Security Note: Use strong, unique shared keys in production. Avoid simple values like “shared-key” or “secret”.
Listen
IP address to which the FluentD listener will bind. Use 0.0.0.0 to accept connections on all network interfaces, or specify a particular IP to restrict access.
Format: Valid IPv4 address
Examples:
- 0.0.0.0 - All interfaces (default)
- 192.168.1.100 - Specific IP address
- 127.0.0.1 - Localhost only (testing)
Read Timeout
Maximum time to wait for incoming data on established connections before considering the connection idle.
Format: Duration with unit (s, m, h)
Default: 1 minute
Examples:
- 30s - 30 seconds for low-latency networks
- 1m - 1 minute (default)
- 2m - 2 minutes for high-latency networks
Advanced Settings
Add Ingestion Timestamp
When enabled, adds a server-side timestamp to logs when ingested if the log entry doesn’t contain a timestamp field. When disabled, only uses timestamps from the source Fluentd instance.
Default: Disabled
Skip Ingestion Timestamp on Failure
When enabled, continues processing log data even if timestamp parsing fails. When disabled, rejects logs with invalid timestamps.
Default: Disabled
TLS
Optional TLS encryption for Fluentd forward protocol connections. When enabled, all log data transmitted between Fluentd instances and Edge Delta is encrypted.
Configuration Options:
- Ignore Certificate Check: Disables SSL/TLS certificate verification. Use with caution in testing environments only.
- CA File: Absolute file path to the CA certificate for SSL/TLS connections.
- CA Path: Absolute path where CA certificate files are located for SSL/TLS.
- CRT File: Absolute path to the SSL/TLS certificate file for secure communication.
- Key File: Absolute path to the private key file used in SSL/TLS connections.
- Key Password: Optional password for the key file.
- Client Auth Type: Client authentication type. Default is noclientcert.
- Minimum Version: Minimum TLS version. Default is TLSv1_2.
- Maximum Version: Maximum TLS version allowed for connections.
Recommendation: Enable TLS when forwarding logs over untrusted networks or when logs contain sensitive information.
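When TLS is enabled on the connector, the Fluentd side must negotiate TLS in its forward output as well. A hedged sketch, assuming the out_forward plugin's TLS parameters and placeholder paths and hosts:

```
<match **>
  @type forward
  transport tls
  tls_cert_path /etc/fluent/ca.crt   # CA used to verify the Edge Delta certificate (placeholder path)
  tls_verify_hostname true
  <security>
    self_hostname app-server-01            # placeholder
    shared_key my-secure-shared-key-2024   # placeholder
  </security>
  <server>
    host edge-delta.example.com            # placeholder
    port 3421
  </server>
</match>
```

The CA file given to tls_cert_path should be the one that signed the certificate configured in the connector's CRT File setting.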
Metadata Level
This option is used to define which detected resources and attributes to add to each data item as it is ingested by Edge Delta. You can select:
- Required Only: This option includes the minimum required resources and attributes for Edge Delta to operate.
- Default: This option includes the required resources and attributes plus those selected by Edge Delta.
- High: This option includes the required resources and attributes along with a larger selection of common optional fields.
- Custom: With this option selected, you can choose which attributes and resources to include. The required fields are selected by default and can’t be unchecked.
Based on your selection in the GUI, the source_metadata YAML is populated as two dictionaries (resource_attributes and attributes) with Boolean values.
See Choose Data Item Metadata for more information on selecting metadata.
FluentD-specific metadata included:
- ed.source.type - Source type (fluentd_input)
Rate Limit
The rate_limit parameter enables you to control data ingestion based on system resource usage. This advanced setting helps prevent source nodes from overwhelming the agent by automatically throttling or stopping data collection when CPU or memory thresholds are exceeded.
Use rate limiting to prevent runaway log collection from overwhelming the agent in high-volume sources, protect agent stability in resource-constrained environments with limited CPU/memory, automatically throttle during bursty traffic patterns, and ensure fair resource allocation across source nodes in multi-tenant deployments.
When rate limiting triggers, pull-based sources (File, S3, HTTP Pull) stop fetching new data, push-based sources (HTTP, TCP, UDP, OTLP) reject incoming data, and stream-based sources (Kafka, Pub/Sub) pause consumption. Rate limiting operates at the source node level, where each source with rate limiting enabled independently monitors and enforces its own thresholds.
Configuration Steps:
- Click Add New in the Rate Limit section
- Click Add New for Evaluation Policy
- Select Policy Type:
- CPU Usage: Monitors CPU consumption and rate limits when usage exceeds defined thresholds. Use for CPU-intensive sources like file parsing or complex transformations.
- Memory Usage: Monitors memory consumption and rate limits when usage exceeds defined thresholds. Use for memory-intensive sources like large message buffers or caching.
- AND (composite): Combines multiple sub-policies with AND logic. All sub-policies must be true simultaneously to trigger rate limiting. Use when you want conservative rate limiting (both CPU and memory must be high).
- OR (composite): Combines multiple sub-policies with OR logic. Any sub-policy can trigger rate limiting. Use when you want aggressive rate limiting (either CPU or memory being high triggers).
- Select Evaluation Mode. Choose how the policy behaves when thresholds are exceeded:
- Enforce (default): Actively applies rate limiting when thresholds are met. Pull-based sources (File, S3, HTTP Pull) stop fetching new data, push-based sources (HTTP, TCP, UDP, OTLP) reject incoming data, and stream-based sources (Kafka, Pub/Sub) pause consumption. Use in production to protect agent resources.
- Monitor: Logs when rate limiting would occur without actually limiting data flow. Use for testing thresholds before enforcing them in production.
- Passthrough: Disables rate limiting entirely while keeping the configuration in place. Use to temporarily disable rate limiting without removing configuration.
- Set Absolute Limits and Relative Limits (for CPU Usage and Memory Usage policies)
Note: If you specify both absolute and relative limits, the system evaluates both conditions and rate limiting triggers when either condition is met (OR logic). For example, if you set the absolute limit to 1.0 CPU cores and the relative limit to 50%, rate limiting triggers when the source uses either 1 full core OR 50% of available CPU, whichever happens first.
For CPU Absolute Limits: Enter value in full core units:
- 0.1 = one-tenth of a CPU core
- 0.5 = half a CPU core
- 1.0 = one full CPU core
- 2.0 = two full CPU cores
For CPU Relative Limits: Enter percentage of total available CPU (0-100):
- 50 = 50% of available CPU
- 75 = 75% of available CPU
- 85 = 85% of available CPU
For Memory Absolute Limits: Enter value in bytes
- 104857600 = 100Mi (100 × 1024 × 1024)
- 536870912 = 512Mi (512 × 1024 × 1024)
- 1073741824 = 1Gi (1 × 1024 × 1024 × 1024)
For Memory Relative Limits: Enter percentage of total available memory (0-100)
- 60 = 60% of available memory
- 75 = 75% of available memory
- 80 = 80% of available memory
- Set Refresh Interval (for CPU Usage and Memory Usage policies). Specify how frequently the system checks resource usage:
- Recommended Values:
- 10s to 30s for most use cases
- 5s to 10s for high-volume sources requiring quick response
- 1m or higher for stable, low-volume sources
The system fetches current CPU/memory usage at the specified refresh interval and uses that value for evaluation until the next refresh. Shorter intervals provide more responsive rate limiting but incur slightly higher overhead, while longer intervals are more efficient but slower to react to sudden resource spikes.
The GUI generates YAML as follows:
```yaml
# Simple CPU-based rate limiting
nodes:
  - name: <node name>
    type: <node type>
    rate_limit:
      evaluation_policy:
        policy_type: cpu_usage
        evaluation_mode: enforce
        absolute_limit: 0.5 # Limit to half a CPU core
        refresh_interval: 10s

# Simple memory-based rate limiting
nodes:
  - name: <node name>
    type: <node type>
    rate_limit:
      evaluation_policy:
        policy_type: memory_usage
        evaluation_mode: enforce
        absolute_limit: 536870912 # 512Mi in bytes
        refresh_interval: 30s
```
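Following the same schema, a policy that sets both an absolute and a relative limit (the OR behavior described in the note above) can be sketched as:

```yaml
# Dual-limit policy: either condition triggers rate limiting
rate_limit:
  evaluation_policy:
    policy_type: cpu_usage
    evaluation_mode: enforce
    absolute_limit: 1.0   # 1 full CPU core
    relative_limit: 50    # or 50% of available CPU, whichever is hit first
    refresh_interval: 10s
```

This is an illustrative fragment for a single source node; it slots into the same rate_limit position shown in the generated YAML above.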
Composite Policies (AND / OR)
When using AND or OR policy types, you define sub-policies instead of limits. Sub-policies must be siblings (at the same level)—do not nest sub-policies within other sub-policies. Each sub-policy is independently evaluated, and the parent policy’s evaluation mode applies to the composite result.
- AND Logic: All sub-policies must evaluate to true at the same time to trigger rate limiting. Use when you want conservative rate limiting (limit only when CPU AND memory are both high).
- OR Logic: Any sub-policy evaluating to true triggers rate limiting. Use when you want aggressive protection (limit when either CPU OR memory is high).
Configuration Steps:
- Select AND (composite) or OR (composite) as the Policy Type
- Choose the Evaluation Mode (typically Enforce)
- Click Add New under Sub-Policies to add the first condition
- Configure the first sub-policy by selecting policy type (CPU Usage or Memory Usage), selecting evaluation mode, setting absolute and/or relative limits, and setting refresh interval
- In the parent policy (not within the child), click Add New again to add a sibling sub-policy
- Configure additional sub-policies following the same pattern
The GUI generates YAML as follows:
```yaml
# AND composite policy - both CPU AND memory must exceed limits
nodes:
  - name: <node name>
    type: <node type>
    rate_limit:
      evaluation_policy:
        policy_type: and
        evaluation_mode: enforce
        sub_policies:
          # First sub-policy (sibling)
          - policy_type: cpu_usage
            evaluation_mode: enforce
            absolute_limit: 0.75 # Limit to 75% of one core
            refresh_interval: 15s
          # Second sub-policy (sibling)
          - policy_type: memory_usage
            evaluation_mode: enforce
            absolute_limit: 1073741824 # 1Gi in bytes
            refresh_interval: 15s

# OR composite policy - either CPU OR memory can trigger
nodes:
  - name: <node name>
    type: <node type>
    rate_limit:
      evaluation_policy:
        policy_type: or
        evaluation_mode: enforce
        sub_policies:
          - policy_type: cpu_usage
            evaluation_mode: enforce
            relative_limit: 85 # 85% of available CPU
            refresh_interval: 20s
          - policy_type: memory_usage
            evaluation_mode: enforce
            relative_limit: 80 # 80% of available memory
            refresh_interval: 20s

# Monitor mode for testing thresholds
nodes:
  - name: <node name>
    type: <node type>
    rate_limit:
      evaluation_policy:
        policy_type: memory_usage
        evaluation_mode: monitor # Only logs, doesn't limit
        relative_limit: 70 # Test at 70% before enforcing
        refresh_interval: 30s
```
Target Environments
Select the Edge Delta pipeline (environment) where you want to deploy this connector.
How to Use the FluentD Connector
The FluentD connector integrates seamlessly with AI Team, enabling analysis of log data from Fluentd-instrumented infrastructure. AI teammates automatically leverage the ingested data based on the queries they receive and the context of the conversation.
Use Case: Application Log Aggregation from Distributed Servers
Centralize application logs from multiple servers that use Fluentd for local log collection. AI teammates can analyze application errors, performance issues, and patterns across your server fleet. When combined with PagerDuty alerts, teammates automatically query recent application logs during incident investigation to identify which servers are experiencing problems and correlate error patterns across hosts.
Configuration: Configure each Fluentd instance to forward to Edge Delta using the forward output plugin with matching shared key. AI teammates can then query aggregated logs from all servers through the Edge Delta MCP connector.
Use Case: Kubernetes Container Log Collection
Deploy Fluentd as a DaemonSet in Kubernetes to collect container logs and forward them to Edge Delta for centralized analysis. AI teammates can identify containers producing errors, track pod restarts, and correlate issues across microservices. This is valuable when investigating performance degradation—teammates can analyze log patterns from affected pods and identify the service causing bottlenecks.
Configuration: Deploy Fluentd DaemonSet with forward output pointing to Edge Delta FluentD connector. Enable TLS for secure transmission across cluster networks.
Use Case: Syslog Aggregation via FluentD
Use Fluentd as an intermediate aggregation layer that receives syslog messages from network devices and servers, then forwards them to Edge Delta. AI teammates can analyze authentication failures, configuration changes, and security events across your infrastructure. When integrated with Jira, teammates can automatically document security incidents by querying relevant syslog entries and creating comprehensive issue reports.
Configuration: Configure Fluentd to receive syslog on port 5140, then forward to Edge Delta using the forward protocol with shared key authentication.
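The intermediate aggregation layer described above can be sketched as a Fluentd config with a syslog source and a forward match (tag, hosts, and key are placeholders):

```
# Receive syslog from network devices and servers on port 5140
<source>
  @type syslog
  port 5140
  bind 0.0.0.0
  tag system                               # placeholder tag prefix
</source>

# Forward aggregated syslog to the Edge Delta FluentD connector
<match system.**>
  @type forward
  <security>
    self_hostname syslog-aggregator-01     # placeholder
    shared_key my-secure-shared-key-2024   # placeholder
  </security>
  <server>
    host edge-delta.example.com            # placeholder
    port 3421
  </server>
</match>
```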
Troubleshooting
Connection refused errors: Verify Edge Delta is listening on configured port (netstat -tuln | grep 3421). Check firewall rules allow inbound traffic from Fluentd instance IP addresses.
Shared key mismatch: Ensure shared key in Fluentd forward output exactly matches Edge Delta connector configuration (case-sensitive, no leading/trailing spaces).
Logs sent but not appearing: Verify correct target environment selected. Check if “Skip Ingestion Timestamp on Failure” needs enabling for logs with invalid timestamps.
TLS handshake failures: Verify certificates are valid and not expired. Ensure certificate paths in Fluentd configuration are correct and Fluentd has read permissions.
High buffer usage in Fluentd: Check if rate limits are configured and log volume exceeds limits. Verify Edge Delta is reachable and has sufficient capacity. Consider reducing flush interval in Fluentd.
Slow forwarding performance: Investigate network bandwidth and latency. Enable compression in Fluentd forward output to reduce network consumption. Monitor resource usage on Fluentd host (CPU, memory, disk I/O).
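Compression and more aggressive flushing in the Fluentd forward output can be sketched as follows; the values are illustrative starting points, not tuned recommendations:

```
<match **>
  @type forward
  compress gzip              # compress payloads to reduce network consumption
  <buffer>
    flush_interval 5s        # shorter interval drains the local buffer sooner
    chunk_limit_size 8MB     # smaller chunks keep individual sends fast
  </buffer>
  <server>
    host edge-delta.example.com   # placeholder
    port 3421
  </server>
</match>
```

Add the same security section used elsewhere if the connector has a shared key configured.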
Next Steps
- Ingest from FluentD integration guide for comprehensive FluentD source configuration
- Learn about creating custom teammates that can use FluentD log data
- Explore the Edge Delta MCP connector for querying log data
For additional help, visit AI Team Support.