Edge Delta Splunk TCP Destination

Configure the Splunk TCP destination node to send data directly to Splunk over TCP using the Splunk-to-Splunk (S2S) protocol.

Overview

The Splunk TCP destination node sends data directly to Splunk indexers or heavy forwarders using the Splunk-to-Splunk (S2S) protocol over TCP. This node simplifies migrations from Splunk Universal Forwarders and enables hybrid deployments where Edge Delta agents coexist with existing Splunk infrastructure.

Note: This node is currently in beta and is available for Enterprise tier accounts.

This node requires Edge Delta agent version v2.6.0 or higher.

Choosing the Right Splunk Integration

Edge Delta offers multiple ways to integrate with Splunk:

Integration | Type | Protocol | Port | Authentication | Use Case
Splunk TCP (S2S) | Destination | TCP | 9997 (default) | Certificate-based | Direct replacement for Universal Forwarders, native Splunk protocol
Splunk HEC | Destination | HTTP/HTTPS | 8088 (default) | Token-based | Send data to Splunk HEC endpoint, cloud-friendly
Splunk TCP | Source | TCP | Custom | Certificate-based | Receive data from Splunk forwarders into Edge Delta
Splunk HEC Source | Source | HTTP | Custom | Token-based | Receive data via HTTP Event Collector protocol

Key Differences:

  • Destination nodes send data FROM Edge Delta TO Splunk
  • Source nodes receive data INTO Edge Delta (from Splunk forwarders or HEC senders)
  • Splunk TCP (S2S) provides direct compatibility with existing Splunk forwarder infrastructure
  • Splunk HEC uses HTTP Event Collector for modern, cloud-friendly integration
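
To make the direction of data flow concrete, the minimal sketch below pairs the Splunk TCP destination described on this page with a placeholder for the corresponding Splunk TCP source node. Only the splunk_tcp_output node and its parameters come from this page; the source node's type and parameters are documented separately and are shown here only as placeholders.

nodes:
- name: to_splunk                   # destination: sends data from Edge Delta to Splunk
  type: splunk_tcp_output
  host: splunk-indexer.example.com
  port: 9997
- name: from_splunk_forwarders      # source: receives data into Edge Delta
  type: <splunk tcp source node type>   # see the Splunk TCP source documentation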

Example Configuration

This configuration sends logs to a Splunk indexer on port 9997 (the default Splunk receiving port) with TLS enabled for secure transmission:

nodes:
- name: splunk_tcp_destination
  type: splunk_tcp_output
  host: splunk-indexer.example.com
  port: 9997
  index: main
  parallel_worker_count: 10
  tls:
    enabled: true
    ca_file: /path/to/ca.pem

Required Parameters

name

A descriptive name for the node. This name appears in the pipeline builder, and you use it to reference the node elsewhere in the YAML. It must be unique across all nodes. Because the node is a YAML list element, the name line begins with a - and a space followed by the string. It is a required parameter for all nodes.

nodes:
  - name: <node name>
    type: <node type>

type: splunk_tcp_output

The type parameter specifies the type of node being configured. It is specified as a string from a closed list of node types. It is a required parameter.

nodes:
  - name: <node name>
    type: <node type>

host

The host parameter specifies the address of the Splunk indexer or heavy forwarder that will receive the data. It can be an IP address or hostname.

- name: <node name>
  type: splunk_tcp_output
  host: splunk-indexer.example.com
  port: 9997

port

The port parameter defines the TCP port number on which Splunk is listening for incoming data. The default Splunk receiving port is 9997.

- name: <node name>
  type: splunk_tcp_output
  host: localhost
  port: 9997

Optional Parameters

node_reference

The node_reference parameter specifies a user-defined name for this specific destination integration. This helps identify the destination in monitoring and debugging.

- name: <node name>
  type: splunk_tcp_output
  node_reference: production_splunk
  host: splunk.example.com
  port: 9997

index

The index parameter specifies the Splunk index where the data should be stored. If not specified, data will be sent to the default index configured in Splunk (typically “main”).

- name: <node name>
  type: splunk_tcp_output
  host: splunk.example.com
  port: 9997
  index: application_logs

Default: main

index_expression

Minimum Agent Version: v2.7.0

The index_expression parameter allows you to dynamically route data to different Splunk indexes based on data item attributes using OTTL expressions. This advanced parameter enables flexible index routing similar to the GCS bucket expression capability, allowing you to send different types of data to different indexes based on runtime evaluation.

When configured, this expression is evaluated for each data item, and if it returns a valid index name, that index will be used instead of the static index configured in the index parameter.

Backward Compatibility: Agents running versions older than v2.7.0 will not honor the index_expression field. Ensure all agents are upgraded to v2.7.0 or higher before using this parameter.

This field supports secret references for secure credential management. Instead of hardcoding sensitive values, you can reference a secret configured in your pipeline.

To use a secret in the GUI:

  1. Create a secret in your pipeline’s Settings > Secrets section (see Using Secrets)
  2. In this field, select the secret name from the dropdown list that appears

To use a secret in YAML: Reference it using the syntax: '{{ SECRET secret-name }}'

Example:

field_name: '{{ SECRET my-credential }}'

Note: The secret reference must be enclosed in single quotes when using YAML. Secret values are encrypted at rest and resolved at runtime, ensuring no plaintext credentials appear in logs or API responses.

- name: <node name>
  type: splunk_tcp_output
  host: splunk.example.com
  port: 9997
  index: main  # Fallback index
  index_expression: attributes["target_index"]

Common Use Cases:

  • Route data to different indexes based on log severity: EDXIfElse(attributes["severity"] == "ERROR", "error_logs", "info_logs") (see the sketch after this list)
  • Use kubernetes namespace as index: attributes["kubernetes.namespace.name"]
  • Dynamic routing based on custom attributes or tags
  • Environment-specific index selection
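
For example, the severity-based routing described above could look like the following sketch. The severity attribute and the index names are illustrative and assume your data items carry a severity attribute.

- name: severity_routed_splunk
  type: splunk_tcp_output
  host: splunk.example.com
  port: 9997
  index: info_logs  # Fallback index
  index_expression: EDXIfElse(attributes["severity"] == "ERROR", "error_logs", "info_logs")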

keep_overridden_index

Minimum Agent Version: v2.8.0

The keep_overridden_index parameter specifies whether to retain the original index value in the data item after applying the index_expression. When set to true, the attribute used in the expression remains in the data. When set to false (default), the attribute is removed after being used for routing. It is specified as a Boolean and is optional.

- name: <node name>
  type: splunk_tcp_output
  host: splunk.example.com
  port: 9997
  index: main
  index_expression: attributes["target_index"]
  keep_overridden_index: true

Example: Multi-tenant index routing

- name: tenant_aware_splunk
  type: splunk_tcp_output
  host: splunk.example.com
  port: 9997
  index: default_tenant  # Fallback for untagged data
  index_expression: Concat(["tenant_", attributes["tenant_id"]], "")

parallel_worker_count

The parallel_worker_count parameter specifies the number of workers that run in parallel to process and send data to Splunk. Increasing this value can improve throughput for high-volume data streams.

- name: <node name>
  type: splunk_tcp_output
  host: splunk.example.com
  port: 9997
  parallel_worker_count: 10

Default: 5

tls

Configure TLS settings for secure connections to this destination. TLS is optional and typically used when connecting to endpoints that require encrypted transport (HTTPS) or mutual TLS.

YAML Configuration Example:

nodes:
  - name: <node name>
    type: <destination type>
    tls:
      <tls options>

Enable TLS

Enables TLS encryption for outbound connections to the destination endpoint. When enabled, all communication with the destination will be encrypted using TLS/SSL. This should be enabled when connecting to HTTPS endpoints or any service that requires encrypted transport. (YAML parameter: enabled)

Default: false

When to use: Enable when the destination requires HTTPS or secure connections. Always enable for production systems handling sensitive data, connections over untrusted networks, or when compliance requirements mandate encryption in transit.

YAML Configuration Example:

nodes:
  - name: <node name>
    type: <destination type>
    tls:
      enabled: true

Ignore Certificate Check

Disables TLS certificate verification, allowing connections to servers with self-signed, expired, or invalid certificates. This bypasses security checks that verify the server’s identity and certificate validity. (YAML parameter: ignore_certificate_check)

Default: false

When to use: Only use in development or testing environments with self-signed certificates. NEVER enable in production—this makes your connection vulnerable to man-in-the-middle attacks. For production with self-signed certificates, use ca_file or ca_path to explicitly trust specific certificates instead.

YAML Configuration Example:

nodes:
  - name: <node name>
    type: <destination type>
    tls:
      ignore_certificate_check: true  # Only for testing!

CA Certificate File

Specifies the absolute path to a CA (Certificate Authority) certificate file used to verify the destination server’s certificate. This allows you to trust specific CAs beyond the system’s default trusted CAs, which is essential when connecting to servers using self-signed certificates or private CAs. (YAML parameter: ca_file)

When to use: Required when connecting to servers with certificates signed by a private/internal CA, or when you want to restrict trust to specific CAs only. Choose either ca_file (single CA certificate) or ca_path (directory of CA certificates), not both.

YAML Configuration Example:

nodes:
  - name: <node name>
    type: <destination type>
    tls:
      ca_file: /certs/ca.pem

CA Certificate Path

Specifies a directory path containing one or more CA certificate files for verifying the destination server’s certificate. Use this when you need to trust multiple CAs or when managing CA certificates across multiple files. All certificate files in the directory will be loaded. (YAML parameter: ca_path)

When to use: Alternative to ca_file when you have multiple CA certificates to trust. Useful for environments with multiple private CAs or when you need to rotate CA certificates without modifying configuration. Choose either ca_file or ca_path, not both.

YAML Configuration Example:

nodes:
  - name: <node name>
    type: <destination type>
    tls:
      ca_path: /certs/ca-certificates/

Certificate File

Path to the client certificate file (public key) used for mutual TLS (mTLS) authentication with the destination server. This certificate identifies the client to the server and must match the private key. The certificate should be in PEM format. (YAML parameter: crt_file)

When to use: Required only when the destination server requires mutual TLS authentication, where both client and server present certificates. Must be used together with key_file. Not needed for standard client TLS connections where only the server presents a certificate.

YAML Configuration Example:

nodes:
  - name: <node name>
    type: <destination type>
    tls:
      crt_file: /certs/client-cert.pem
      key_file: /certs/client-key.pem

Private Key File

Path to the private key file corresponding to the client certificate. This key must match the public key in the certificate file and is used during the TLS handshake to prove ownership of the certificate. Keep this file secure with restricted permissions. (YAML parameter: key_file)

When to use: Required for mutual TLS authentication. Must be used together with crt_file. If the key file is encrypted with a password, also specify key_password. Only needed when the destination server requires client certificate authentication.

YAML Configuration Example:

nodes:
  - name: <node name>
    type: <destination type>
    tls:
      crt_file: /certs/client-cert.pem
      key_file: /certs/client-key.pem
      key_password: <password>  # Only if key is encrypted

Private Key Password

Password (passphrase) used to decrypt an encrypted private key file. Only needed if your private key file is password-protected. If your key file is unencrypted, omit this parameter. (YAML parameter: key_password)

When to use: Optional. Only required if key_file is encrypted/password-protected. For enhanced security, use encrypted keys in production environments. If you receive “bad decrypt” or “incorrect password” errors, verify the password matches the key file encryption.

YAML Configuration Example:

nodes:
  - name: <node name>
    type: <destination type>
    tls:
      crt_file: /certs/client-cert.pem
      key_file: /certs/encrypted-client-key.pem
      key_password: mySecurePassword123

Minimum TLS Version

Minimum TLS protocol version to use when connecting to the destination server. This enforces a baseline security level by refusing to connect if the server doesn’t support this version or higher. (YAML parameter: min_version)

Available versions:

  • TLSv1_0 - Deprecated, not recommended (security vulnerabilities)
  • TLSv1_1 - Deprecated, not recommended (security vulnerabilities)
  • TLSv1_2 - Recommended minimum for production (default)
  • TLSv1_3 - Most secure, use when destination supports it

Default: TLSv1_2

When to use: Set to TLSv1_2 or higher for production deployments. Only use TLSv1_0 or TLSv1_1 if connecting to legacy servers that don’t support newer versions, and be aware of the security risks. TLS 1.0 and 1.1 are officially deprecated.

YAML Configuration Example:

nodes:
  - name: <node name>
    type: <destination type>
    tls:
      min_version: TLSv1_2

Maximum TLS Version

Maximum TLS protocol version to use when connecting to the destination server. This is typically used to restrict newer TLS versions if compatibility issues arise with specific server implementations. (YAML parameter: max_version)

Available versions:

  • TLSv1_0
  • TLSv1_1
  • TLSv1_2
  • TLSv1_3

When to use: Usually left unset to allow the most secure version available. Only set this if you encounter specific compatibility issues with TLS 1.3 on the destination server, or for testing purposes. In most cases, you should allow the latest TLS version.

YAML Configuration Example:

nodes:
  - name: <node name>
    type: <destination type>
    tls:
      max_version: TLSv1_3

persistent_queue

The persistent_queue configuration enables disk-based buffering to prevent data loss during destination failures or slowdowns. When enabled, the agent stores data on disk and automatically retries delivery when the destination recovers.

Complete example:

persistent_queue:
  path: /var/lib/edgedelta/outputbuffer
  mode: error
  max_byte_size: 1GB
  drain_rate_limit: 1000

How it works:

  1. Normal operation: Data flows directly to the destination (for error and backpressure modes) or through the disk buffer (for always mode)
  2. Destination issue detected: Based on the configured mode, data is written to disk at the configured path
  3. Recovery: When the destination recovers, buffered data drains at the configured drain_rate_limit while new data continues flowing
  4. Completion: Buffer clears and normal operation resumes

Key benefits:

  • Data durability: Logs preserved during destination outages and slowdowns
  • Agent protection: Slow backends don’t cascade failures into the agent cluster
  • Automatic recovery: No manual intervention required
  • Configurable behavior: Choose when and how buffering occurs based on your needs

Learn more: Buffer Configuration - Conceptual overview, sizing guidance, and troubleshooting

path

The path parameter specifies the directory where buffered data is stored on disk. This parameter is required when configuring a persistent queue.

Example:

persistent_queue:
  path: /var/lib/edgedelta/outputbuffer

Requirements:

  • Required field - persistent queue will not function without a valid path
  • The directory must have sufficient disk space for the configured max_byte_size
  • The agent process must have read/write permissions to this location
  • The path should be on a persistent volume (not tmpfs or memory-backed filesystem)

Best practices:

  • Use dedicated storage for buffer data separate from logs
  • Monitor disk usage to prevent buffer from filling available space
  • Ensure the path persists across agent restarts to maintain buffered data
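
When the agent runs in Kubernetes, meeting the persistent-volume requirement usually means mounting durable storage at the configured path. The fragment below is an illustrative sketch only; the volume name, mount path, and hostPath location are assumptions to adapt to your own deployment manifests.

# Illustrative fragment only: mount durable storage at the persistent_queue path.
containers:
- name: edgedelta-agent
  volumeMounts:
  - name: output-buffer
    mountPath: /var/lib/edgedelta/outputbuffer
volumes:
- name: output-buffer
  hostPath:
    path: /var/lib/edgedelta/outputbuffer
    type: DirectoryOrCreate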

max_byte_size

The max_byte_size parameter defines the maximum disk space the persistent buffer is allowed to use. Once this limit is reached, any new incoming items are dropped, ensuring the buffer never grows beyond the configured maximum.

Example:

persistent_queue:
  path: /var/lib/edgedelta/outputbuffer
  max_byte_size: 1GB

Sizing guidance:

  • Small deployments (1-10 logs/sec): 100MB - 500MB
  • Medium deployments (10-100 logs/sec): 500MB - 2GB
  • Large deployments (100+ logs/sec): 2GB - 10GB

Calculation example:

Average log size: 1KB
Expected outage duration: 1 hour
Log rate: 100 logs/sec

Buffer size = 1KB × 100 logs/sec × 3600 sec = 360MB
Recommended: 500MB - 1GB (with safety margin)

Important: Set this value based on your disk space availability and expected outage duration. The buffer will accumulate data during destination failures and drain when the destination recovers.

mode

The mode parameter determines when data is buffered to disk. Three modes are available:

  • error (default) - Buffers data only when the destination returns errors (connection failures, HTTP 5xx errors, timeouts). During healthy operation, data flows directly to the destination without buffering.

  • backpressure - Buffers data when the in-memory queue reaches 80% capacity OR when destination errors occur. This mode helps handle slow destinations that respond successfully but take longer than usual to process requests.

  • always - Uses write-ahead-log behavior where all data is written to disk before being sent to the destination. This provides maximum durability but adds disk I/O overhead to every operation.

Example:

persistent_queue:
  path: /var/lib/edgedelta/outputbuffer
  mode: error
  max_byte_size: 1GB

Mode comparison:

Mode | Protects Against | Trade-off | Recommended For
error | Destination outages and failures | No protection during slow responses | Reliable destinations with consistent response times
backpressure | Outages + slow/degraded destinations | Slightly more disk writes during slowdowns | Most production deployments
always | All scenarios including agent crashes | Disk I/O on every item reduces throughput | Maximum durability requirements

Why choose error mode:

The error mode provides the minimal protection layer needed to prevent data loss when destinations temporarily fail. Without any persistent queue, a destination outage means data is lost. With error mode enabled, data is preserved on disk during failures and delivered automatically when the destination recovers.

Why choose backpressure mode:

The backpressure mode provides everything error mode offers, plus protection against slow destinations. When a destination is slow but not completely down:

  • Without backpressure: Data delivery becomes unreliable, and the backend’s slowness propagates to the agent—the agent can get stuck waiting before sending subsequent payloads
  • With backpressure: The agent spills data to disk and continues processing, isolating itself from the slow backend

This prevents a slow destination from cascading failures into your agent cluster. For most production environments, backpressure provides the best balance of protection and performance.
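
A minimal sketch of a Splunk TCP destination with backpressure buffering enabled is shown below; the path, buffer size, and drain rate are illustrative values rather than recommendations.

- name: buffered_splunk
  type: splunk_tcp_output
  host: splunk-indexer.example.com
  port: 9997
  persistent_queue:
    path: /var/lib/edgedelta/outputbuffer
    mode: backpressure
    max_byte_size: 2GB
    drain_rate_limit: 1000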

Why choose always mode:

The always mode is designed for customers with extremely strict durability requirements. It forces the agent to write every item to disk before attempting delivery, then reads from disk for transmission. This guarantees that data survives even sudden agent crashes or restarts.

Important: This mode introduces a measurable performance cost. Each agent performs additional disk I/O on every item, which reduces overall throughput. Most deployments do not require this level of durability—this feature addresses specialized needs that apply to a small minority of customers.

Only enable always mode if you have a specific, well-understood requirement where the durability guarantee outweighs the throughput reduction.

strict_ordering

The strict_ordering parameter controls how items are consumed from the persistent buffer.

When strict_ordering: true, the agent runs in strict ordering mode with a single processing thread. This mode always prioritizes draining buffered items first—new incoming data waits until all buffered items are processed in exact chronological order. When strict_ordering: false (default), multiple workers process data in parallel, and new data flows directly to the destination while buffered data drains in the background.

Example:

persistent_queue:
  path: /var/lib/edgedelta/outputbuffer
  mode: always
  strict_ordering: true
parallel_workers: 1

Default value: false

Important: Strict ordering is a specialized feature needed by a very small minority of deployments. Most users should keep the default value of false. Only enable strict ordering if you have a specific, well-understood requirement for exact event sequencing.

Required setting: When strict_ordering: true, you must set parallel_workers: 1. Pipeline validation will fail if parallel_workers is greater than 1 because parallel processing inherently breaks ordering guarantees.

Behavior:

Value | Processing Model | Buffer Priority | Recovery Latency
false (default) | Parallel workers | Buffered data drains in background | Lower - current state visible immediately
true | Single-threaded | Buffered items always drain first | Higher - queue must drain before new data

Why the default is false:

In most observability use cases, data freshness is more valuable than strict ordering. When a destination recovers from an outage, operators typically want to see current system state on dashboards immediately, while historical data backfills in the background. The default behavior prioritizes this real-time visibility.

When to enable strict ordering:

Strict ordering is primarily needed by security-focused customers who build systems where events must arrive in the exact delivery order. These customers typically run stateful security streaming engines that depend on precise temporal sequencing.

Specific use cases:

  • Stateful security streaming engines - Security systems that maintain state across events and detect patterns based on exact event order
  • Audit and compliance logs - Regulatory requirements that mandate audit trails preserve exact temporal sequence
  • State reconstruction - Systems that replay events to rebuild state require chronological order

When to keep default (false):

The vast majority of deployments should keep the default:

  • Real-time monitoring dashboards - Current state visibility is more important than historical order
  • High-volume log ingestion - Faster drain times reduce recovery period
  • Stateless analytics - When each log is analyzed independently without temporal correlation

drain_rate_limit

The drain_rate_limit parameter controls the maximum items per second when draining the persistent buffer after a destination recovers from a failure.

Example:

persistent_queue:
  path: /var/lib/edgedelta/outputbuffer
  drain_rate_limit: 1000

Default value: 0 (no limit - drain as fast as the destination accepts)

Why rate limiting matters:

When a destination recovers from an outage, it may still be fragile. Immediately flooding it with hours of backlogged data can trigger another failure. The drain rate limit allows gradual, controlled recovery that protects destination stability.

Choosing the right rate:

Scenario | Recommended Rate | Reasoning
Stable, well-provisioned destination | 0 (unlimited) | Minimize recovery time when destination can handle full load
Shared or multi-tenant destination | 20-50% of capacity | Leave headroom for live traffic and other tenants
Recently recovered destination | 10-25% of capacity | Gentle ramp-up to prevent re-triggering failure
Rate-limited destination (e.g., SaaS) | Below API rate limit | Avoid throttling or quota exhaustion

Impact on recovery time:

Buffer size: 1GB
Average log size: 1KB
Total items: ~1,000,000 logs

At unlimited (0): Depends on destination capacity
At 5000:      ~3.5 minutes to drain
At 1000:      ~17 minutes to drain
At 100:       ~2.8 hours to drain

Migration from Splunk Universal Forwarder

When migrating from Splunk Universal Forwarders to Edge Delta agents, the Splunk TCP destination provides a seamless transition path:

1. Parallel Deployment

Deploy Edge Delta agents alongside existing Universal Forwarders:

nodes:
- name: splunk_migration
  type: splunk_tcp_output
  host: splunk-indexer.example.com
  port: 9997  # Same port used by Universal Forwarders
  index: main
  tls:
    enabled: true

2. Data Format Compatibility

The Splunk TCP node automatically formats data in a Splunk-compatible format, ensuring:

  • Timestamp preservation
  • Field extraction compatibility
  • Source and sourcetype metadata

3. Gradual Migration

  1. Start by deploying Edge Delta to a subset of hosts
  2. Configure Edge Delta to send to the same Splunk infrastructure
  3. Validate data quality and completeness in Splunk (an example search follows this list)
  4. Gradually expand deployment and decommission Universal Forwarders
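
To support the validation step, a simple search in Splunk can confirm that migrated hosts are still reporting and that sourcetypes look as expected. The index name and time range below are illustrative.

index=main earliest=-1h
| stats count latest(_time) as last_event by host, sourcetype
| sort - count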

Performance Optimization

High-Volume Deployments

For environments with high data volumes, optimize the configuration:

nodes:
- name: high_volume_splunk
  type: splunk_tcp_output
  host: splunk-lb.example.com  # Load balancer endpoint
  port: 9997
  parallel_worker_count: 20     # Increase workers

Network Optimization

  1. Use Load Balancers: Distribute load across multiple Splunk indexers
  2. Enable Compression: Reduce network bandwidth (if supported by your Splunk version)
  3. Optimize Worker Count: Balance between throughput and resource usage

Resource Considerations

  • Each parallel worker maintains a separate TCP connection
  • Buffer storage requires disk space on the Edge Delta agent host
  • Monitor agent CPU and memory usage when increasing worker counts
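
Combining these settings, a high-volume configuration with both an increased worker count and disk buffering might look like the sketch below; the values are illustrative starting points to tune against your own data volume and disk capacity.

- name: high_volume_splunk_buffered
  type: splunk_tcp_output
  host: splunk-lb.example.com
  port: 9997
  parallel_worker_count: 20
  persistent_queue:
    path: /var/lib/edgedelta/outputbuffer
    mode: backpressure
    max_byte_size: 5GB
    drain_rate_limit: 5000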

Use Cases

Hybrid Cloud Deployments

Send data to both cloud and on-premises Splunk instances:

nodes:
- name: on_prem_splunk
  type: splunk_tcp_output
  host: splunk-onprem.internal.com
  port: 9997
  index: onprem_logs

- name: cloud_splunk
  type: splunk_tcp_output
  host: splunk-cloud.example.com
  port: 9997
  index: cloud_logs
  tls:
    enabled: true

Multi-Region Architecture

Configure region-specific Splunk destinations:

nodes:
- name: us_east_splunk
  type: splunk_tcp_output
  host: splunk-us-east.example.com
  port: 9997
  index: us_east_logs

- name: eu_west_splunk
  type: splunk_tcp_output
  host: splunk-eu-west.example.com
  port: 9997
  index: eu_west_logs

Troubleshooting

For comprehensive troubleshooting of all Splunk integration issues, including detailed diagnostics and solutions, see the Splunk Troubleshooting Guide.

Connection Issues

If data is not reaching Splunk:

  1. Verify Network Connectivity:

    telnet splunk-indexer.example.com 9997
    
  2. Check Splunk Receiver Status:

    • Ensure Splunk is configured to receive data on the specified port
    • Verify the receiving port is enabled in Splunk settings (example commands follow this list)
  3. Review TLS Configuration:

    • Confirm certificate paths are correct
    • Ensure certificates are valid and not expired
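
If the telnet check fails, the receiving port may not be enabled on the indexer or heavy forwarder. The commands below show one common way to enable and confirm a receiving port; the Splunk installation path and credentials are examples, and exact syntax can vary by Splunk version.

# On the Splunk indexer or heavy forwarder:
/opt/splunk/bin/splunk enable listen 9997 -auth admin:changeme

# Confirm the port is listening:
netstat -an | grep 9997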

Data Not Appearing in Splunk

  1. Check the specified index exists in Splunk
  2. Verify user permissions for the index
  3. Review Edge Delta agent logs for errors
  4. Check Splunk’s internal logs for receiving errors
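
As a quick check on Splunk's side, you can search its internal logs for TCP input activity; the component name below is typical for splunktcp receiving but may differ across Splunk versions.

index=_internal sourcetype=splunkd component=TcpInputProc
| head 50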

Performance Issues

If experiencing slow data transmission:

  1. Increase parallel_worker_count
  2. Check network latency to Splunk servers
  3. Monitor Splunk indexing performance
  4. Consider using a Splunk load balancer