Edge Delta Splunk TCP Destination
Overview
The Splunk TCP destination node sends data directly to Splunk indexers or heavy forwarders using the Splunk-to-Splunk (S2S) protocol over TCP. This node simplifies migrations from Splunk Universal Forwarders and enables hybrid deployments where Edge Delta agents coexist with existing Splunk infrastructure.
- incoming_data_types: cluster_pattern_and_sample, health, heartbeat, log, metric, custom, splunk_payload, signal
Note: This node is currently in beta and is available for Enterprise tier accounts.
Choosing the Right Splunk Integration
Edge Delta offers multiple ways to integrate with Splunk:
Integration | Type | Protocol | Port | Authentication | Use Case |
---|---|---|---|---|---|
Splunk TCP (S2S) | Destination | TCP | 9997 (default) | Certificate-based | Direct replacement for Universal Forwarders, native Splunk protocol |
Splunk HEC | Destination | HTTP/HTTPS | 8088 (default) | Token-based | Send data to Splunk HEC endpoint, cloud-friendly |
Splunk TCP | Source | TCP | Custom | Certificate-based | Receive data from Splunk forwarders into Edge Delta |
Splunk HEC Source | Source | HTTP | Custom | Token-based | Receive data via HTTP Event Collector protocol |
Key Differences:
- Destination nodes send data FROM Edge Delta TO Splunk
- Source nodes receive data INTO Edge Delta (from Splunk forwarders or HEC senders)
- Splunk TCP (S2S) provides direct compatibility with existing Splunk forwarder infrastructure
- Splunk HEC uses HTTP Event Collector for modern, cloud-friendly integration
Example Configuration
This configuration sends logs to a Splunk indexer on port 9997 (the default Splunk receiving port) with TLS enabled for secure transmission:
nodes:
- name: splunk_tcp_destination
  type: splunk_tcp_output
  host: splunk-indexer.example.com
  port: 9997
  index: main
  parallel_worker_count: 10
  tls:
    enabled: true
    ca_file: /path/to/ca.pem
Required Parameters
name
A descriptive name for the node. This is the name that will appear in pipeline builder and you can reference this node in the YAML using the name. It must be unique across all nodes. It is a YAML list element so it begins with a -
and a space followed by the string. It is a required parameter for all nodes.
nodes:
- name: <node name>
  type: <node type>
type: splunk_tcp_output
The type parameter specifies the type of node being configured. It is specified as a string from a closed list of node types. It is a required parameter.
nodes:
- name: <node name>
  type: <node type>
host
The host parameter specifies the address of the Splunk indexer or heavy forwarder that will receive the data. It can be an IP address or hostname.
- name: <node name>
  type: splunk_tcp_output
  host: splunk-indexer.example.com
  port: 9997
port
The port parameter defines the TCP port number on which Splunk is listening for incoming data. The default Splunk receiving port is 9997.
- name: <node name>
  type: splunk_tcp_output
  host: localhost
  port: 9997
Optional Parameters
node_reference
The node_reference parameter specifies a user-defined name for this specific destination integration. This helps identify the destination in monitoring and debugging.
- name: <node name>
  type: splunk_tcp_output
  node_reference: production_splunk
  host: splunk.example.com
  port: 9997
index
The index parameter specifies the Splunk index where the data should be stored. If not specified, data will be sent to the default index configured in Splunk (typically “main”).
- name: <node name>
  type: splunk_tcp_output
  host: splunk.example.com
  port: 9997
  index: application_logs
Default: main
parallel_worker_count
The parallel_worker_count parameter specifies the number of workers that run in parallel to process and send data to Splunk. Increasing this value can improve throughput for high-volume data streams.
- name: <node name>
  type: splunk_tcp_output
  host: splunk.example.com
  port: 9997
  parallel_worker_count: 10
Default: 5
buffer_ttl
The buffer_ttl parameter defines how long data should be buffered and retried if the connection to Splunk fails. This ensures data durability during network interruptions or Splunk maintenance windows.
- name: <node name>
  type: splunk_tcp_output
  host: splunk.example.com
  port: 9997
  buffer_ttl: "30m" # Retry for up to 30 minutes
buffer_path
The buffer_path parameter specifies the local directory where data will be saved if streaming to Splunk fails. The buffered data will be retried automatically when the connection is restored.
- name: <node name>
  type: splunk_tcp_output
  host: splunk.example.com
  port: 9997
  buffer_path: "/var/log/edgedelta/splunk_buffer"
Default: /var/log/edgedelta/outputbuffer
buffer_max_bytesize
The buffer_max_bytesize parameter sets the maximum size of the buffer used to store data when streaming fails. Once this limit is reached, older data may be dropped to make room for new data.
- name: <node name>
  type: splunk_tcp_output
  host: splunk.example.com
  port: 9997
  buffer_max_bytesize: "100MB"
tls
The tls parameter is a dictionary that configures TLS settings for secure connections to the destination. It is optional and typically used when connecting to endpoints that require encrypted transport or mutual TLS.
nodes:
- name: <node name>
  type: <destination type>
  tls:
    <tls options>
enabled
Specifies whether TLS is enabled. This is a Boolean value. Default is false.
nodes:
- name: <node name>
  type: <destination type>
  tls:
    enabled: true
ignore_certificate_check
Disables certificate verification. Useful for test environments. Default is false.
nodes:
- name: <node name>
  type: <destination type>
  tls:
    ignore_certificate_check: true
ca_file
Specifies the absolute path to a CA certificate file for verifying the remote server’s certificate.
nodes:
- name: <node name>
  type: <destination type>
  tls:
    ca_file: /certs/ca.pem
ca_path
Specifies a directory containing one or more CA certificate files.
nodes:
- name: <node name>
  type: <destination type>
  tls:
    ca_path: /certs/
crt_file
Path to the client certificate file for mutual TLS authentication.
nodes:
- name: <node name>
  type: <destination type>
  tls:
    crt_file: /certs/client-cert.pem
key_file
Path to the private key file used for client TLS authentication.
nodes:
- name: <node name>
  type: <destination type>
  tls:
    key_file: /certs/client-key.pem
key_password
Password for the TLS private key file, if required.
nodes:
- name: <node name>
  type: <destination type>
  tls:
    key_password: <password>
client_auth_type
Controls how client certificates are requested and validated during the TLS handshake. Valid options:
- noclientcert
- requestclientcert
- requireanyclientcert
- verifyclientcertifgiven
- requireandverifyclientcert
nodes:
- name: <node name>
  type: <destination type>
  tls:
    client_auth_type: requireandverifyclientcert
max_version
Maximum supported version of the TLS protocol. Valid options:
- TLSv1_0
- TLSv1_1
- TLSv1_2
- TLSv1_3
nodes:
- name: <node name>
  type: <destination type>
  tls:
    max_version: TLSv1_3
min_version
Minimum supported version of the TLS protocol. Default is TLSv1_2.
nodes:
- name: <node name>
  type: <destination type>
  tls:
    min_version: TLSv1_2
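Putting these options together, a typical mutual TLS configuration for this destination might look like the following sketch. The node name, hostname, and certificate paths are placeholders; substitute your own values:
nodes:
- name: secure_splunk                 # placeholder name
  type: splunk_tcp_output
  host: splunk-indexer.example.com
  port: 9997
  tls:
    enabled: true
    ca_file: /certs/ca.pem            # verifies the indexer's certificate
    crt_file: /certs/client-cert.pem  # client certificate for mutual TLS
    key_file: /certs/client-key.pem   # private key for the client certificate
    min_version: TLSv1_2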
Migration from Splunk Universal Forwarder
When migrating from Splunk Universal Forwarders to Edge Delta agents, the Splunk TCP destination provides a seamless transition path:
1. Parallel Deployment
Deploy Edge Delta agents alongside existing Universal Forwarders:
nodes:
- name: splunk_migration
  type: splunk_tcp_output
  host: splunk-indexer.example.com
  port: 9997 # Same port used by Universal Forwarders
  index: main
  tls:
    enabled: true
2. Data Format Compatibility
The Splunk TCP node automatically formats data in a Splunk-compatible format, ensuring:
- Timestamp preservation
- Field extraction compatibility
- Source and sourcetype metadata
3. Gradual Migration
- Start by deploying Edge Delta to a subset of hosts
- Configure Edge Delta to send to the same Splunk infrastructure
- Validate data quality and completeness in Splunk
- Gradually expand deployment and decommission Universal Forwarders
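As a concrete sketch of the validation step, an agent can point at the same indexer but write to a separate index so its output can be compared side by side with what the Universal Forwarder sends. The node name and index name below are hypothetical placeholders:
nodes:
- name: splunk_migration_validation
  type: splunk_tcp_output
  host: splunk-indexer.example.com
  port: 9997
  index: ed_validation   # hypothetical index used only for side-by-side comparison
  tls:
    enabled: true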
Performance Optimization
High-Volume Deployments
For environments with high data volumes, optimize the configuration:
nodes:
- name: high_volume_splunk
  type: splunk_tcp_output
  host: splunk-lb.example.com # Load balancer endpoint
  port: 9997
  parallel_worker_count: 20 # Increase workers
  buffer_max_bytesize: "500MB" # Larger buffer
  buffer_ttl: "1h" # Longer retry period
Network Optimization
- Use Load Balancers: Distribute load across multiple Splunk indexers
- Enable Compression: Reduce network bandwidth (if supported by your Splunk version)
- Optimize Worker Count: Balance between throughput and resource usage
Resource Considerations
- Each parallel worker maintains a separate TCP connection
- Buffer storage requires disk space on the Edge Delta agent host
- Monitor agent CPU and memory usage when increasing worker counts
Use Cases
Hybrid Cloud Deployments
Send data to both cloud and on-premises Splunk instances:
nodes:
- name: on_prem_splunk
  type: splunk_tcp_output
  host: splunk-onprem.internal.com
  port: 9997
  index: onprem_logs
- name: cloud_splunk
  type: splunk_tcp_output
  host: splunk-cloud.example.com
  port: 9997
  index: cloud_logs
  tls:
    enabled: true
Multi-Region Architecture
Configure region-specific Splunk destinations:
nodes:
- name: us_east_splunk
  type: splunk_tcp_output
  host: splunk-us-east.example.com
  port: 9997
  index: us_east_logs
- name: eu_west_splunk
  type: splunk_tcp_output
  host: splunk-eu-west.example.com
  port: 9997
  index: eu_west_logs
Troubleshooting
For comprehensive troubleshooting of all Splunk integration issues, including detailed diagnostics and solutions, see the Splunk Troubleshooting Guide.
Connection Issues
If data is not reaching Splunk, work through the following checks (example commands are shown after this list):
- Verify Network Connectivity:
  telnet splunk-indexer.example.com 9997
- Check Splunk Receiver Status:
  - Ensure Splunk is configured to receive data on the specified port
  - Verify the receiving port is enabled in Splunk settings
- Review TLS Configuration:
  - Confirm certificate paths are correct
  - Ensure certificates are valid and not expired
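The commands below illustrate these checks. The hostname, paths, and credentials are placeholders, and the splunk CLI command assumes shell access to the indexer:
# Test raw TCP connectivity from the Edge Delta host
nc -vz splunk-indexer.example.com 9997

# On the indexer, confirm or enable the S2S receiving port
$SPLUNK_HOME/bin/splunk enable listen 9997 -auth admin:changeme

# Inspect the certificate presented on the receiving port when TLS is enabled
openssl s_client -connect splunk-indexer.example.com:9997 -CAfile /path/to/ca.pem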
Data Not Appearing in Splunk
- Check the specified index exists in Splunk
- Verify user permissions for the index
- Review Edge Delta agent logs for errors
- Check Splunk’s internal logs for receiving errors
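For the last point, a search along the following lines is a common starting point for spotting S2S receiving errors in Splunk's internal logs. The component names are typical for splunkd but may vary by Splunk version:
index=_internal sourcetype=splunkd log_level=ERROR (component=TcpInputProc OR component=TcpInputConfig)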
Performance Issues
If experiencing slow data transmission:
- Increase parallel_worker_count
- Check network latency to Splunk servers
- Monitor Splunk indexing performance
- Consider using a Splunk load balancer