Edge Delta Splunk TCP Source
Overview
The Splunk TCP source node enables Edge Delta to receive data from Splunk Universal Forwarders (UF) and Splunk Heavy Forwarders, facilitating seamless migration from Splunk infrastructure or hybrid deployments. This node implements the Splunk forwarder protocol, allowing Splunk forwarders to send data directly to Edge Delta as if it were a Splunk indexer, eliminating the need to reconfigure data collection agents across your infrastructure.
- outgoing_data_types: log
The Splunk TCP source node is ideal for:
- Splunk Migration: Gradually migrate from Splunk to Edge Delta without changing forwarder configurations
- Hybrid Deployments: Run Edge Delta alongside Splunk for evaluation or dual-processing
- Cost Optimization: Process data through Edge Delta before selective forwarding to Splunk
- Multi-Destination Routing: Receive data from Splunk forwarders and route to multiple destinations
- Data Transformation: Apply Edge Delta processors to Splunk forwarder data streams
How It Works
Splunk forwarders use a proprietary TCP protocol to send data to Splunk indexers. The Splunk TCP source node implements this protocol, allowing it to:
- Accept connections from Splunk forwarders on a specified port (default 9997)
- Parse the Splunk forwarder protocol headers and metadata
- Extract events from the data stream
- Convert Splunk metadata to Edge Delta attributes
- Pass the events through your Edge Delta pipeline
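The accept-parse-convert-pass steps above can be sketched as a minimal TCP receiver. This is a conceptual illustration only: the names are invented for this sketch, and parse_events simply splits newline-delimited text as a stand-in for the real Splunk S2S parser, which frames events with binary headers and per-event metadata.

```python
import socketserver

def parse_events(chunk: bytes) -> list[str]:
    # Stand-in for the proprietary Splunk S2S parser: the real protocol
    # frames events with binary headers and metadata. Here we simply
    # split newline-delimited text into individual events.
    return [line.decode("utf-8", "replace") for line in chunk.splitlines() if line]

class ForwarderHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Read until the forwarder closes the connection, then extract events.
        data = self.rfile.read()
        for event in parse_events(data):
            # A real receiver would convert Splunk metadata to attributes
            # and pass each event into the pipeline; here we just collect them.
            self.server.events.append(event)

class Receiver(socketserver.ThreadingTCPServer):
    allow_reuse_address = True

    def __init__(self, addr=("0.0.0.0", 9997)):
        super().__init__(addr, ForwarderHandler)
        self.events = []
```

Calling `Receiver().serve_forever()` would listen on the default forwarder port 9997; the actual node additionally handles protocol negotiation, compression, and TLS.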
Field Mapping
The Splunk TCP source node extracts and maps the following Splunk metadata to attributes:
Splunk Field | Attribute Name | Description
---|---|---
Source | splunk.source | Original data source path or identifier
Sourcetype | splunk.sourcetype | Splunk data type classification
Host | splunk.host | Originating host of the data
Index | splunk.index | Target Splunk index name
Time | timestamp | Event timestamp from Splunk
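As a hypothetical illustration of this mapping, a helper that renames parsed Splunk fields to their Edge Delta attribute names might look like the sketch below. The function name and input keys are invented; only the field-to-attribute pairs follow the table.

```python
def map_splunk_metadata(event: dict) -> dict:
    """Rename parsed Splunk forwarder fields to Edge Delta attribute names."""
    mapping = {
        "source": "splunk.source",          # original data source path
        "sourcetype": "splunk.sourcetype",  # data type classification
        "host": "splunk.host",              # originating host
        "index": "splunk.index",            # target Splunk index
        "time": "timestamp",                # event timestamp
    }
    # Keep only the fields actually present on the incoming event.
    return {attr: event[field] for field, attr in mapping.items() if field in event}
```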
Example Configurations
Basic Configuration

This configuration receives data from Splunk forwarders on the standard Splunk port:
nodes:
  - name: splunk_tcp_receiver
    type: splunk_tcp_input
    port: 9997
Custom Port Configuration
For environments where port 9997 is unavailable or when running multiple receivers:
nodes:
  - name: splunk_tcp_custom
    type: splunk_tcp_input
    port: 8089
    listen: "0.0.0.0"
Configuration with TLS
Enable TLS for encrypted communication with Splunk forwarders:
nodes:
  - name: splunk_tcp_secure
    type: splunk_tcp_input
    port: 9997
    tls:
      enabled: true
      cert_file: /path/to/server.crt
      key_file: /path/to/server.key
Required Parameters
name
A descriptive name for the node. This is the name that appears in the pipeline builder, and it is how you reference the node elsewhere in the YAML. It must be unique across all nodes. It is a YAML list element, so it begins with a - and a space followed by the string. It is a required parameter for all nodes.
nodes:
  - name: <node name>
    type: <node type>
type: splunk_tcp_input
The type parameter specifies the type of node being configured. It is specified as a string from a closed list of node types. It is a required parameter.
nodes:
  - name: <node name>
    type: <node type>
Optional Parameters
port
The port parameter specifies the TCP port to listen on for incoming Splunk forwarder connections. The default Splunk forwarder port is 9997, which most Splunk Universal Forwarders are configured to use. You can specify a different port if needed for your environment.
It is specified as an integer and is optional. Default is 9997.
nodes:
  - name: <node name>
    type: splunk_tcp_input
    port: 9997
listen
The listen parameter specifies the IP address to bind to for listening. Use "0.0.0.0" to listen on all network interfaces, or specify a particular IP address to restrict connections to a specific interface. Default is "0.0.0.0".
It is specified as a string and is optional.
nodes:
  - name: <node name>
    type: splunk_tcp_input
    listen: "0.0.0.0"
max_connections
The max_connections parameter sets the maximum number of concurrent Splunk forwarder connections the node will accept. This helps control resource usage when many forwarders are sending data simultaneously.
It is specified as an integer and is optional. Default is 100.
nodes:
  - name: <node name>
    type: splunk_tcp_input
    max_connections: 200
tls
The tls parameter configures Transport Layer Security for encrypted communication with Splunk forwarders. When enabled, forwarders must be configured to use SSL/TLS when sending data.
It is specified as an object with the following sub-parameters:
- enabled: Boolean to enable or disable TLS
- cert_file: Path to the server certificate file
- key_file: Path to the server private key file
- ca_file: Path to the CA certificate file (optional, for client verification)
nodes:
  - name: <node name>
    type: splunk_tcp_input
    tls:
      enabled: true
      cert_file: /etc/edgedelta/certs/server.crt
      key_file: /etc/edgedelta/certs/server.key
      ca_file: /etc/edgedelta/certs/ca.crt
source_metadata
The source_metadata parameter defines which detected resources and attributes to add to each data item as it is ingested by the Edge Delta agent. In the GUI you can select:
- Required Only: This option includes the minimum required resources and attributes for Edge Delta to operate.
- Default: This option includes the required resources and attributes plus those selected by Edge Delta.
- High: This option includes the required resources and attributes along with a larger selection of common optional fields.
- Custom: With this option selected, you can choose which attributes and resources to include. The required fields are selected by default and can’t be unchecked.
Based on your selection in the GUI, the source_metadata YAML is populated as two dictionaries (resource_attributes and attributes) with Boolean values.
See Choose Data Item Metadata for more information on selecting metadata.
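For example, with Custom selected, the generated YAML takes roughly this shape. The specific attribute names below are illustrative only, not a definitive list:

```yaml
source_metadata:
  resource_attributes:
    host.name: true
  attributes:
    splunk.sourcetype: true
    splunk.index: false
```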
Configuring Splunk Forwarders
Splunk Universal Forwarder Configuration
Configure Splunk Universal Forwarders to send data to Edge Delta by modifying the outputs.conf file:
# $SPLUNK_HOME/etc/system/local/outputs.conf
[tcpout]
defaultGroup = edge_delta
[tcpout:edge_delta]
server = <edge-delta-host>:9997
# Disable indexer acknowledgment if not supported
useACK = false
# Optional: Enable compression
compressed = true
# For TLS/SSL connections, use this stanza in place of the one above
[tcpout:edge_delta]
server = <edge-delta-host>:9997
useACK = false
sslCertPath = $SPLUNK_HOME/etc/certs/client.pem
sslRootCAPath = $SPLUNK_HOME/etc/certs/ca.pem
sslVerifyServerCert = true
This configuration directs all Splunk forwarder output to Edge Delta instead of Splunk indexers. The defaultGroup setting ensures all data flows to the Edge Delta target group. Indexer acknowledgment (useACK) is disabled since Edge Delta may not support Splunk's acknowledgment protocol. The second stanza shows how to enable TLS encryption for secure data transmission, which requires proper SSL certificates on both the forwarder and Edge Delta sides.
After configuration, restart the Splunk Universal Forwarder:
$SPLUNK_HOME/bin/splunk restart
Load Balancing Multiple Edge Delta Nodes
Configure Splunk forwarders to load balance across multiple Edge Delta agents:
# outputs.conf
[tcpout]
defaultGroup = edge_delta_lb
[tcpout:edge_delta_lb]
server = <edge-delta-1>:9997,<edge-delta-2>:9997,<edge-delta-3>:9997
useACK = false
autoLB = true
# Optional: Set load balancing frequency (seconds)
autoLBFrequency = 30
This configuration distributes data across multiple Edge Delta nodes for high availability and improved performance. The comma-separated server list defines all available Edge Delta endpoints. The autoLB = true setting enables automatic load balancing, which distributes connections across the listed servers. The autoLBFrequency parameter controls how often (in seconds) the forwarder switches between servers, helping to evenly distribute the load and quickly detect failed nodes.
Dual Destination Configuration
Send data to both Edge Delta and Splunk simultaneously for parallel processing:
# outputs.conf
[tcpout]
defaultGroup = edge_delta,splunk_indexers
[tcpout:edge_delta]
server = <edge-delta-host>:9997
useACK = false
[tcpout:splunk_indexers]
server = <splunk-indexer>:9997
useACK = true
This configuration enables parallel processing by sending identical data streams to both Edge Delta and traditional Splunk indexers. The comma-separated defaultGroup value tells the forwarder to replicate data to both target groups simultaneously. This dual-destination approach is ideal during migration periods, allowing you to validate Edge Delta processing while maintaining your existing Splunk infrastructure. Note that the Edge Delta group disables acknowledgment while the Splunk group keeps it enabled, as each system has different reliability mechanisms.
Input-Specific Routing
Route specific inputs to Edge Delta while others go to Splunk:
# inputs.conf
[monitor:///var/log/application/*.log]
_TCP_ROUTING = edge_delta
[monitor:///var/log/system/*.log]
_TCP_ROUTING = splunk_indexers
# outputs.conf
[tcpout:edge_delta]
server = <edge-delta-host>:9997
useACK = false
[tcpout:splunk_indexers]
server = <splunk-indexer>:9997
useACK = true
This configuration provides granular control over data routing by directing different log sources to different destinations. The _TCP_ROUTING parameter in inputs.conf overrides the default output group for specific monitored paths. In this example, application logs are sent exclusively to Edge Delta while system logs continue to flow to Splunk indexers. This selective routing strategy is useful when migrating specific data sources incrementally or when different log types require different processing pipelines. Note that no defaultGroup is specified in outputs.conf since routing is controlled at the input level.
Migration Strategies
Gradual Migration Approach
When migrating from Splunk to Edge Delta, implement a phased approach to minimize risk and ensure continuity:
1. Parallel Processing Phase: Configure forwarders to send data to both Splunk and Edge Delta simultaneously. This allows you to validate that Edge Delta is receiving and processing all data correctly while maintaining your existing Splunk infrastructure.
2. Validation Phase: Compare data between Splunk and Edge Delta to ensure completeness and accuracy. Monitor Edge Delta pipelines for any processing issues or data gaps. Use Edge Delta's Live Capture to spot-check data quality.
3. Gradual Cutover: Migrate forwarders to Edge Delta in groups, starting with non-critical systems. Monitor each group for several days before proceeding to the next. This approach allows quick rollback if issues arise.
4. Full Migration: Once all forwarders are successfully sending to Edge Delta and data quality is validated, decommission the Splunk indexers. Keep Splunk forwarder configurations backed up in case rollback is needed.
Testing and Validation
Before migrating production forwarders, thoroughly test the Splunk TCP source node in a development environment. Deploy a test Splunk Universal Forwarder and configure it to send sample data to Edge Delta. Verify that all expected fields are extracted correctly and that timestamps are parsed accurately. Test with various Splunk sourcetypes to ensure compatibility with your data formats.
Best Practices
Port Management
The standard Splunk forwarder port 9997 is well-known and expected by most Splunk administrators, making it the preferred choice for the Splunk TCP source node. However, if this port is already in use or if you’re running Edge Delta alongside existing Splunk infrastructure, choose an alternative port and clearly document it. Ensure firewall rules are updated to allow traffic on the chosen port from all forwarder subnets.
Performance Considerations
Splunk forwarders can generate significant network traffic, especially during catch-up periods after network outages. Configure the max_connections parameter based on your environment size and available resources. Monitor CPU and memory usage on Edge Delta nodes receiving Splunk data, and scale horizontally by deploying additional nodes with load balancing if needed. Consider implementing rate limiting at the network level for non-critical data sources during peak periods.
Security Configuration
When receiving data from Splunk forwarders across untrusted networks, always enable TLS encryption. Generate proper certificates signed by a trusted Certificate Authority (CA) that your Splunk forwarders can verify. Implement mutual TLS (mTLS) for additional security by requiring forwarders to present client certificates. Use firewall rules or network segmentation to restrict which systems can connect to the Splunk TCP source node, preventing unauthorized data injection.
Data Processing
Splunk forwarders send data in Splunk’s proprietary format with metadata like source, sourcetype, and index. The Edge Delta Splunk TCP source node preserves this metadata as attributes, allowing you to maintain data organization schemes from Splunk. Use Edge Delta processors to further enrich or transform the data based on these Splunk attributes. For example, route data to different destinations based on the original Splunk index or apply specific parsing rules based on sourcetype.
Troubleshooting
Connection Issues
If Splunk forwarders cannot connect to Edge Delta, verify network connectivity using telnet or nc to test port accessibility. Check Edge Delta logs for any errors related to the Splunk TCP source node startup. Ensure the listen address is correct and that Edge Delta has permission to bind to the specified port. Review firewall rules on both the Edge Delta host and any network devices between forwarders and Edge Delta.
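Where telnet or nc is unavailable on the forwarder host, a short Python check (the function name is invented for this sketch) exercises the same TCP reachability test:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        # create_connection resolves the host and attempts a full TCP handshake.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and DNS failures alike.
        return False

# Example: port_reachable("edge-delta-host", 9997)
```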
Data Not Appearing
If connections are established but data doesn’t appear in Edge Delta pipelines, check that the Splunk forwarder is actually sending data by reviewing forwarder logs for any errors or warnings. Verify the forwarder’s outputs.conf is properly configured and that the forwarder has been restarted after configuration changes. Use Edge Delta’s Live Capture to see if data is being received but potentially filtered or dropped by processors.
TLS/SSL Errors
Certificate-related issues are common when enabling TLS. Ensure certificate files are readable by the Edge Delta process and that the certificate CN or SAN matches the hostname forwarders use to connect. Verify certificate chains are complete and that CA certificates are properly configured on both sides. Check for certificate expiration and ensure system clocks are synchronized between forwarders and Edge Delta nodes.
Performance Problems
If the Splunk TCP source node experiences high CPU usage or memory consumption, consider whether the number of concurrent connections exceeds the max_connections setting. Review the data volume being sent by forwarders and consider implementing load balancing across multiple Edge Delta nodes. Check for any processors in your pipeline that might be causing bottlenecks when processing Splunk data. Monitor network bandwidth to ensure it's sufficient for the data volume.