Edge Delta Splunk TCP Destination
Overview
The Splunk TCP destination node sends data directly to Splunk indexers or heavy forwarders using the Splunk-to-Splunk (S2S) protocol over TCP. This node simplifies migrations from Splunk Universal Forwarders and enables hybrid deployments where Edge Delta agents coexist with existing Splunk infrastructure.
- incoming_data_types: cluster_pattern_and_sample, health, heartbeat, log, metric, custom, splunk_payload, signal
Note: This node is currently in beta and is available for Enterprise tier accounts.
This node requires Edge Delta agent version v2.6.0 or higher.
Choosing the Right Splunk Integration
Edge Delta offers multiple ways to integrate with Splunk:
| Integration | Type | Protocol | Port | Authentication | Use Case |
|---|---|---|---|---|---|
| Splunk TCP (S2S) | Destination | TCP | 9997 (default) | Certificate-based | Direct replacement for Universal Forwarders, native Splunk protocol |
| Splunk HEC | Destination | HTTP/HTTPS | 8088 (default) | Token-based | Send data to Splunk HEC endpoint, cloud-friendly |
| Splunk TCP | Source | TCP | Custom | Certificate-based | Receive data from Splunk forwarders into Edge Delta |
| Splunk HEC Source | Source | HTTP | Custom | Token-based | Receive data via HTTP Event Collector protocol |
Key Differences:
- Destination nodes send data FROM Edge Delta TO Splunk
- Source nodes receive data INTO Edge Delta (from Splunk forwarders or HEC senders)
- Splunk TCP (S2S) provides direct compatibility with existing Splunk forwarder infrastructure
- Splunk HEC uses HTTP Event Collector for modern, cloud-friendly integration
Example Configuration
This configuration sends logs to a Splunk indexer on port 9997 (the default Splunk receiving port) with TLS enabled for secure transmission:
nodes:
  - name: splunk_tcp_destination
    type: splunk_tcp_output
    host: splunk-indexer.example.com
    port: 9997
    index: main
    parallel_worker_count: 10
    tls:
      enabled: true
      ca_file: /path/to/ca.pem
Required Parameters
name
A descriptive name for the node. This name appears in the pipeline builder and is used to reference the node elsewhere in the YAML. It must be unique across all nodes. It is a YAML list element, so it begins with a - and a space followed by the string. It is a required parameter for all nodes.
nodes:
  - name: <node name>
    type: <node type>
type: splunk_tcp_output
The type parameter specifies the type of node being configured. It is specified as a string from a closed list of node types. It is a required parameter.
nodes:
  - name: <node name>
    type: <node type>
host
The host parameter specifies the address of the Splunk indexer or heavy forwarder that will receive the data. It can be an IP address or hostname.
- name: <node name>
  type: splunk_tcp_output
  host: splunk-indexer.example.com
  port: 9997
port
The port parameter defines the TCP port number on which Splunk is listening for incoming data. The default Splunk receiving port is 9997.
- name: <node name>
  type: splunk_tcp_output
  host: localhost
  port: 9997
Optional Parameters
node_reference
The node_reference parameter specifies a user-defined name for this specific destination integration. This helps identify the destination in monitoring and debugging.
- name: <node name>
  type: splunk_tcp_output
  node_reference: production_splunk
  host: splunk.example.com
  port: 9997
index
The index parameter specifies the Splunk index where the data should be stored. If not specified, data will be sent to the default index configured in Splunk (typically “main”).
- name: <node name>
  type: splunk_tcp_output
  host: splunk.example.com
  port: 9997
  index: application_logs
Default: main
index_expression
Minimum Agent Version: v2.7.0
The index_expression parameter allows you to dynamically route data to different Splunk indexes based on data item attributes, using OTTL expressions. Similar to the GCS bucket expression capability, this lets you send different types of data to different indexes based on runtime evaluation.
When configured, this expression is evaluated for each data item, and if it returns a valid index name, that index will be used instead of the static index configured in the index parameter.
Backward Compatibility: Agents running versions older than v2.7.0 will not honor the index_expression field. Ensure all agents are upgraded to v2.7.0 or higher before using this parameter.
- name: <node name>
  type: splunk_tcp_output
  host: splunk.example.com
  port: 9997
  index: main # Fallback index
  index_expression: attributes["target_index"]
Common Use Cases:
- Route data to different indexes based on log severity: EDXIfElse(attributes["severity"] == "ERROR", "error_logs", "info_logs")
- Use the Kubernetes namespace as the index: attributes["kubernetes.namespace.name"]
- Dynamic routing based on custom attributes or tags
- Environment-specific index selection
Example: Multi-tenant index routing
- name: tenant_aware_splunk
  type: splunk_tcp_output
  host: splunk.example.com
  port: 9997
  index: default_tenant # Fallback for untagged data
  index_expression: Concat(["tenant_", attributes["tenant_id"]], "")
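Example: Severity-based index routing
A minimal sketch of the severity-based use case listed above, assuming each log carries a severity attribute and that the error_logs and info_logs indexes already exist in Splunk:
- name: severity_aware_splunk
  type: splunk_tcp_output
  host: splunk.example.com
  port: 9997
  index: info_logs # Fallback for logs without a severity attribute
  index_expression: EDXIfElse(attributes["severity"] == "ERROR", "error_logs", "info_logs")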
parallel_worker_count
The parallel_worker_count parameter specifies the number of workers that run in parallel to process and send data to Splunk. Increasing this value can improve throughput for high-volume data streams.
- name: <node name>
  type: splunk_tcp_output
  host: splunk.example.com
  port: 9997
  parallel_worker_count: 10
Default: 5
buffer_ttl
The buffer_ttl parameter defines how long data should be buffered and retried if the connection to Splunk fails. This ensures data durability during network interruptions or Splunk maintenance windows. It is specified as a duration.
- name: <node name>
  type: splunk_tcp_output
  host: splunk.example.com
  port: 9997
  buffer_ttl: "30m" # Retry for up to 30 minutes
Default: 10m
buffer_path
The buffer_path parameter specifies the local directory where data is saved if streaming to Splunk fails. Unsuccessful items are retried automatically when the connection is restored (exactly-once delivery).
Note: Buffered data may be delivered in non-chronological order after a destination failure. Event ordering is not guaranteed during recovery; applications that require ordered event processing should handle reordering at the application level.
- name: <node name>
  type: splunk_tcp_output
  host: splunk.example.com
  port: 9997
  buffer_path: "/var/log/edgedelta/splunk_buffer"
Default: /var/log/edgedelta/outputbuffer
buffer_max_bytesize
The buffer_max_bytesize parameter configures the maximum total byte size of unsuccessful items held in the buffer. If the limit is reached, remaining items are discarded until buffer space becomes available. It is specified as a data size, has a default of 0 (no size limit), and is optional.
- name: <node name>
  type: splunk_tcp_output
  host: splunk.example.com
  port: 9997
  buffer_max_bytesize: 2048
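The three buffering parameters are typically tuned together. A minimal sketch combining them, assuming the agent host has local disk available at the path shown:
- name: buffered_splunk
  type: splunk_tcp_output
  host: splunk.example.com
  port: 9997
  buffer_ttl: "30m" # Retry failed sends for up to 30 minutes
  buffer_path: "/var/log/edgedelta/splunk_buffer" # Local directory for unsuccessful items
  buffer_max_bytesize: "500MB" # Cap the total size of buffered items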
tls
Configure TLS settings for secure connections to this destination. TLS is optional and typically used when connecting to endpoints that require encrypted transport (HTTPS) or mutual TLS.
YAML Configuration Example:
nodes:
  - name: <node name>
    type: <destination type>
    tls:
      <tls options>
Enable TLS
Enables TLS encryption for outbound connections to the destination endpoint. When enabled, all communication with the destination will be encrypted using TLS/SSL. This should be enabled when connecting to HTTPS endpoints or any service that requires encrypted transport. (YAML parameter: enabled)
Default: false
When to use: Enable when the destination requires HTTPS or secure connections. Always enable for production systems handling sensitive data, connections over untrusted networks, or when compliance requirements mandate encryption in transit.
YAML Configuration Example:
nodes:
  - name: <node name>
    type: <destination type>
    tls:
      enabled: true
Ignore Certificate Check
Disables TLS certificate verification, allowing connections to servers with self-signed, expired, or invalid certificates. This bypasses security checks that verify the server’s identity and certificate validity. (YAML parameter: ignore_certificate_check)
Default: false
When to use: Only use in development or testing environments with self-signed certificates. NEVER enable in production—this makes your connection vulnerable to man-in-the-middle attacks. For production with self-signed certificates, use ca_file or ca_path to explicitly trust specific certificates instead.
YAML Configuration Example:
nodes:
  - name: <node name>
    type: <destination type>
    tls:
      ignore_certificate_check: true # Only for testing!
CA Certificate File
Specifies the absolute path to a CA (Certificate Authority) certificate file used to verify the destination server’s certificate. This allows you to trust specific CAs beyond the system’s default trusted CAs, which is essential when connecting to servers using self-signed certificates or private CAs. (YAML parameter: ca_file)
When to use: Required when connecting to servers with certificates signed by a private/internal CA, or when you want to restrict trust to specific CAs only. Choose either ca_file (single CA certificate) or ca_path (directory of CA certificates), not both.
YAML Configuration Example:
nodes:
  - name: <node name>
    type: <destination type>
    tls:
      ca_file: /certs/ca.pem
CA Certificate Path
Specifies a directory path containing one or more CA certificate files for verifying the destination server’s certificate. Use this when you need to trust multiple CAs or when managing CA certificates across multiple files. All certificate files in the directory will be loaded. (YAML parameter: ca_path)
When to use: Alternative to ca_file when you have multiple CA certificates to trust. Useful for environments with multiple private CAs or when you need to rotate CA certificates without modifying configuration. Choose either ca_file or ca_path, not both.
YAML Configuration Example:
nodes:
  - name: <node name>
    type: <destination type>
    tls:
      ca_path: /certs/ca-certificates/
Certificate File
Path to the client certificate file (public key) used for mutual TLS (mTLS) authentication with the destination server. This certificate identifies the client to the server and must match the private key. The certificate should be in PEM format. (YAML parameter: crt_file)
When to use: Required only when the destination server requires mutual TLS authentication, where both client and server present certificates. Must be used together with key_file. Not needed for standard client TLS connections where only the server presents a certificate.
YAML Configuration Example:
nodes:
  - name: <node name>
    type: <destination type>
    tls:
      crt_file: /certs/client-cert.pem
      key_file: /certs/client-key.pem
Private Key File
Path to the private key file corresponding to the client certificate. This key must match the public key in the certificate file and is used during the TLS handshake to prove ownership of the certificate. Keep this file secure with restricted permissions. (YAML parameter: key_file)
When to use: Required for mutual TLS authentication. Must be used together with crt_file. If the key file is encrypted with a password, also specify key_password. Only needed when the destination server requires client certificate authentication.
YAML Configuration Example:
nodes:
  - name: <node name>
    type: <destination type>
    tls:
      crt_file: /certs/client-cert.pem
      key_file: /certs/client-key.pem
      key_password: <password> # Only if key is encrypted
Private Key Password
Password (passphrase) used to decrypt an encrypted private key file. Only needed if your private key file is password-protected. If your key file is unencrypted, omit this parameter. (YAML parameter: key_password)
When to use: Optional. Only required if key_file is encrypted/password-protected. For enhanced security, use encrypted keys in production environments. If you receive “bad decrypt” or “incorrect password” errors, verify the password matches the key file encryption.
YAML Configuration Example:
nodes:
  - name: <node name>
    type: <destination type>
    tls:
      crt_file: /certs/client-cert.pem
      key_file: /certs/encrypted-client-key.pem
      key_password: mySecurePassword123
Minimum TLS Version
Minimum TLS protocol version to use when connecting to the destination server. This enforces a baseline security level by refusing to connect if the server doesn’t support this version or higher. (YAML parameter: min_version)
Available versions:
- TLSv1_0: Deprecated, not recommended (security vulnerabilities)
- TLSv1_1: Deprecated, not recommended (security vulnerabilities)
- TLSv1_2: Recommended minimum for production (default)
- TLSv1_3: Most secure, use when the destination supports it
Default: TLSv1_2
When to use: Set to TLSv1_2 or higher for production deployments. Only use TLSv1_0 or TLSv1_1 if connecting to legacy servers that don’t support newer versions, and be aware of the security risks. TLS 1.0 and 1.1 are officially deprecated.
YAML Configuration Example:
nodes:
  - name: <node name>
    type: <destination type>
    tls:
      min_version: TLSv1_2
Maximum TLS Version
Maximum TLS protocol version to use when connecting to the destination server. This is typically used to restrict newer TLS versions if compatibility issues arise with specific server implementations. (YAML parameter: max_version)
Available versions:
- TLSv1_0
- TLSv1_1
- TLSv1_2
- TLSv1_3
When to use: Usually left unset to allow the most secure version available. Only set this if you encounter specific compatibility issues with TLS 1.3 on the destination server, or for testing purposes. In most cases, you should allow the latest TLS version.
YAML Configuration Example:
nodes:
  - name: <node name>
    type: <destination type>
    tls:
      max_version: TLSv1_3
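Putting the TLS options together for this node: a minimal sketch of a splunk_tcp_output destination with mutual TLS, assuming the certificate and key paths exist on the agent host and the Splunk receiver requires client certificates:
nodes:
  - name: secure_splunk
    type: splunk_tcp_output
    host: splunk-indexer.example.com
    port: 9997
    index: main
    tls:
      enabled: true
      ca_file: /certs/ca.pem # Private CA that signed the indexer certificate
      crt_file: /certs/client-cert.pem # Client certificate presented to Splunk
      key_file: /certs/client-key.pem # Private key matching the client certificate
      min_version: TLSv1_2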
Migration from Splunk Universal Forwarder
When migrating from Splunk Universal Forwarders to Edge Delta agents, the Splunk TCP destination provides a seamless transition path:
1. Parallel Deployment
Deploy Edge Delta agents alongside existing Universal Forwarders:
nodes:
  - name: splunk_migration
    type: splunk_tcp_output
    host: splunk-indexer.example.com
    port: 9997 # Same port used by Universal Forwarders
    index: main
    tls:
      enabled: true
2. Data Format Compatibility
The Splunk TCP node automatically sends data in a Splunk-compatible format, ensuring:
- Timestamp preservation
- Field extraction compatibility
- Source and sourcetype metadata
3. Gradual Migration
- Start by deploying Edge Delta to a subset of hosts
- Configure Edge Delta to send to the same Splunk infrastructure
- Validate data quality and completeness in Splunk
- Gradually expand deployment and decommission Universal Forwarders
Performance Optimization
High-Volume Deployments
For environments with high data volumes, optimize the configuration:
nodes:
  - name: high_volume_splunk
    type: splunk_tcp_output
    host: splunk-lb.example.com # Load balancer endpoint
    port: 9997
    parallel_worker_count: 20 # Increase workers
    buffer_max_bytesize: "500MB" # Larger buffer
    buffer_ttl: "1h" # Longer retry period
Network Optimization
- Use Load Balancers: Distribute load across multiple Splunk indexers
- Enable Compression: Reduce network bandwidth (if supported by your Splunk version)
- Optimize Worker Count: Balance between throughput and resource usage
Resource Considerations
- Each parallel worker maintains a separate TCP connection
- Buffer storage requires disk space on the Edge Delta agent host
- Monitor agent CPU and memory usage when increasing worker counts
Use Cases
Hybrid Cloud Deployments
Send data to both cloud and on-premises Splunk instances:
nodes:
  - name: on_prem_splunk
    type: splunk_tcp_output
    host: splunk-onprem.internal.com
    port: 9997
    index: onprem_logs
  - name: cloud_splunk
    type: splunk_tcp_output
    host: splunk-cloud.example.com
    port: 9997
    index: cloud_logs
    tls:
      enabled: true
Multi-Region Architecture
Configure region-specific Splunk destinations:
nodes:
  - name: us_east_splunk
    type: splunk_tcp_output
    host: splunk-us-east.example.com
    port: 9997
    index: us_east_logs
  - name: eu_west_splunk
    type: splunk_tcp_output
    host: splunk-eu-west.example.com
    port: 9997
    index: eu_west_logs
Troubleshooting
For comprehensive troubleshooting of all Splunk integration issues, including detailed diagnostics and solutions, see the Splunk Troubleshooting Guide.
Connection Issues
If data is not reaching Splunk:
Verify Network Connectivity:
telnet splunk-indexer.example.com 9997
Check Splunk Receiver Status:
- Ensure Splunk is configured to receive data on the specified port
- Verify the receiving port is enabled in Splunk settings
Review TLS Configuration:
- Confirm certificate paths are correct
- Ensure certificates are valid and not expired
Data Not Appearing in Splunk
- Check the specified index exists in Splunk
- Verify user permissions for the index
- Review Edge Delta agent logs for errors
- Check Splunk’s internal logs for receiving errors
Performance Issues
If experiencing slow data transmission:
- Increase parallel_worker_count
- Check network latency to Splunk servers
- Monitor Splunk indexing performance
- Consider using a Splunk load balancer