Edge Delta Google SecOps Destination

Send logs to Google SecOps (Chronicle) for security analytics and threat detection.

Overview

The Google SecOps destination node sends log and custom telemetry data to Google Security Operations (Chronicle) for security information and event management (SIEM). This enables security teams to analyze, correlate, and detect threats across various data sources.

This node requires Edge Delta agent version v2.8.0 or higher.

Prerequisites

Before configuring the Google SecOps destination, you need:

  1. A Google SecOps (Chronicle) account with API access enabled
  2. A Google Cloud service account with appropriate permissions for Chronicle ingestion
  3. The service account credentials JSON file (or Application Default Credentials if running in GKE)

Example Configuration

nodes:
  - name: my_google_secops
    type: google_secops_output
    region: us
    credentials_path: /path/to/service-account.json
    customer_id: customer-12345
    compression: gzip
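
For a broader picture, the following sketch combines the required parameters with several of the optional settings documented below. The values are illustrative rather than prescriptive, and it assumes persistent_queue nests under the destination node as shown in the complete example later on this page:

nodes:
  - name: my_google_secops
    type: google_secops_output
    region: us
    credentials_path: /path/to/service-account.json
    customer_id: customer-12345
    compression: gzip
    # strict_ordering is disabled below, so a worker count above 1 is valid
    parallel_worker_count: 5
    persistent_queue:
      path: /var/lib/edgedelta/outputbuffer
      mode: error
      max_byte_size: 1GB
      drain_rate_limit: 1000
      strict_ordering: false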

Required Parameters

name

A descriptive name for the node. This name appears in the pipeline builder and is used to reference the node elsewhere in the YAML. It must be unique across all nodes. Because each node is a YAML list element, it begins with a - and a space followed by the string. It is a required parameter for all nodes.

nodes:
  - name: <node name>
    type: <node type>

type: google_secops_output

The type parameter specifies the type of node being configured. It is specified as a string from a closed list of node types. It is a required parameter.

nodes:
  - name: <node name>
    type: <node type>

region

The region parameter specifies the Google SecOps (Chronicle) region where data will be ingested. It is specified as a string and is required.

Available regions:

  • us - United States
  • europe - Europe
  • asia-southeast1 - Asia Southeast 1

nodes:
  - name: my_google_secops
    type: google_secops_output
    region: us

Optional Parameters

buffer_max_bytesize

The buffer_max_bytesize parameter configures the maximum total byte size of unsuccessful items held in the buffer. When the limit is reached, additional items are discarded until buffer space becomes available. It is specified as a datasize.Size, has a default of 0 (no size limit), and is optional.

nodes:
  - name: <destination-name>
    type: <destination-type>
    buffer_max_bytesize: 2048

buffer_path

The buffer_path parameter configures the path where unsuccessful items are stored. Items written there are retried from disk (exactly once delivery). It is specified as a string and is optional.

Note: Buffered data may be delivered in non-chronological order after a destination failure. Event ordering is not guaranteed during recovery. Applications requiring ordered event processing should handle reordering at the application level.

nodes:
  - name: <destination-name>
    type: <destination-type>
    buffer_path: <path to unsuccessful items folder>

buffer_ttl

The buffer_ttl parameter configures the time-to-live for unsuccessful items, after which they are discarded. It is specified as a duration, has a default of 10m, and is optional.

nodes:
  - name: <destination-name>
    type: <destination-type>
    buffer_ttl: 20m
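
The buffer parameters are typically used together. The following sketch combines them; the values are illustrative and the buffer_path shown is a hypothetical location:

nodes:
  - name: my_google_secops
    type: google_secops_output
    region: us
    # retry unsuccessful items from this hypothetical path, capped at ~10MB, for up to 20 minutes
    buffer_path: /var/lib/edgedelta/secops-retry
    buffer_max_bytesize: 10485760
    buffer_ttl: 20m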

compression

The compression parameter specifies the compression format for data sent to Google SecOps. It is specified as a string and is optional.

Available options:

  • gzip - Compress data using gzip
  • uncompressed - Send data without compression (default)

nodes:
  - name: my_google_secops
    type: google_secops_output
    region: us
    compression: gzip

credentials_path

The credentials_path parameter specifies the path to a Google Cloud service account credentials JSON file. If not specified, Application Default Credentials from the environment will be used (e.g., when running in GKE with Workload Identity). It is specified as a string and is optional.

nodes:
  - name: my_google_secops
    type: google_secops_output
    region: us
    credentials_path: /path/to/service-account.json
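
If the agent runs where Application Default Credentials are available (for example, GKE with Workload Identity), credentials_path can be omitted entirely. A minimal sketch:

nodes:
  - name: my_google_secops
    type: google_secops_output
    region: us
    # credentials_path omitted: Application Default Credentials from the environment are used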

customer_id

The customer_id parameter specifies the Google SecOps customer ID. If not specified, the default customer associated with the service account will be used. It is specified as a string and is optional.

nodes:
  - name: my_google_secops
    type: google_secops_output
    region: us
    customer_id: customer-12345

parallel_worker_count

The parallel_worker_count parameter specifies the number of workers that run in parallel for sending data to Google SecOps. It is specified as an integer, has a default of 5, and is optional.

nodes:
  - name: my_google_secops
    type: google_secops_output
    region: us
    parallel_worker_count: 10

persistent_queue

The persistent_queue configuration enables disk-based buffering to prevent data loss during destination failures or slowdowns. When enabled, the agent stores data on disk and automatically retries delivery when the destination recovers.

Complete example:

persistent_queue:
  path: /var/lib/edgedelta/outputbuffer
  mode: error
  max_byte_size: 1GB
  drain_rate_limit: 1000
  strict_ordering: true

How it works:

  1. Normal operation: Data flows directly to the destination (for error and backpressure modes) or through the buffer (for always mode)
  2. Destination failure: Data is written to disk at the configured path
  3. Recovery: When the destination becomes available, buffered data drains at the configured drain_rate_limit while new data continues flowing
  4. Completion: Buffer clears and normal operation resumes

Key benefits:

  • No data loss: Logs are preserved during destination outages
  • Automatic recovery: No manual intervention required
  • Configurable behavior: Choose when and how buffering occurs based on your needs

path

The path parameter specifies the directory where buffered data is stored on disk.

Example:

persistent_queue:
  path: /var/lib/edgedelta/outputbuffer

Default value: /var/lib/edgedelta/outputbuffer

Requirements:

  • The directory must have sufficient disk space for the configured max_byte_size
  • The agent process must have read/write permissions to this location
  • The path should be on a persistent volume (not tmpfs or memory-backed filesystem)

Best practices:

  • Use dedicated storage for buffer data separate from logs
  • Monitor disk usage to prevent buffer from filling available space
  • Ensure the path persists across agent restarts to maintain buffered data

max_byte_size

The max_byte_size parameter sets the maximum disk space allocated for the persistent buffer. When the buffer reaches this limit, behavior depends on your configuration.

Example:

persistent_queue:
  path: /var/lib/edgedelta/outputbuffer
  max_byte_size: 1GB

Sizing guidance:

  • Small deployments (1-10 logs/sec): 100MB - 500MB
  • Medium deployments (10-100 logs/sec): 500MB - 2GB
  • Large deployments (100+ logs/sec): 2GB - 10GB

Calculation example:

Average log size: 1KB
Expected outage duration: 1 hour
Log rate: 100 logs/sec

Buffer size = 1KB × 100 logs/sec × 3600 sec = 360MB
Recommended: 500MB - 1GB (with safety margin)

Important: Set this value based on your disk space availability and expected outage duration. The buffer will accumulate data during destination failures and drain when the destination recovers.
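 
Applying the calculation above to a configuration, a sketch (values illustrative) might round the ~360MB estimate up for safety margin:

persistent_queue:
  path: /var/lib/edgedelta/outputbuffer
  # ~360MB expected backlog (1KB × 100 logs/sec × 1 hour), rounded up for safety
  max_byte_size: 1GB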

mode

The mode parameter determines when data is buffered to disk. Three modes are available:

  • error (default) - Buffers data only when the destination returns errors (connection failures, HTTP 5xx errors, timeouts). During healthy operation, data flows directly to the destination without buffering.

  • backpressure - Buffers data when the in-memory queue reaches 80% capacity OR when destination errors occur. This mode helps handle slow destinations that respond successfully but take longer than usual to process requests.

  • always - Uses write-ahead-log behavior where all data is written to disk before being sent to the destination. This provides maximum durability but adds disk I/O overhead to every operation.

Example:

persistent_queue:
  path: /var/lib/edgedelta/outputbuffer
  mode: error
  max_byte_size: 1GB

When to use each mode:

  • Use error for most production deployments with reliable destinations
  • Use backpressure when destinations occasionally experience slowdowns but remain healthy (see the sketch after this list)
  • Use always for mission-critical data that must survive agent crashes or restarts
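
As a sketch of the backpressure case, the following configuration is illustrative rather than prescriptive; the path and size values are assumptions:

persistent_queue:
  path: /var/lib/edgedelta/outputbuffer
  # buffer to disk when the in-memory queue passes 80% capacity or when destination errors occur
  mode: backpressure
  max_byte_size: 2GB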

strict_ordering

The strict_ordering parameter ensures that buffered data is delivered in the exact order it was generated.

Example:

persistent_queue:
  path: /var/lib/edgedelta/outputbuffer
  strict_ordering: true

Default value: true

Behavior:

  • true - Logs delivered in exact chronological order. Single-threaded processing (slower drain).
  • false - Logs may arrive out of order. Multi-threaded processing enabled (faster drain).

Important: When strict_ordering is true, the parallel_worker_count must be set to 1. Setting it to a higher value will cause configuration validation to fail.

When to use:

  • Use true (strict ordering) when:

    • Log sequence is critical (debugging, troubleshooting, audit trails)
    • Applications rely on temporal order of events
    • Compliance requirements mandate chronological delivery
  • Use false (no strict ordering) when:

    • Faster buffer drain is more important than order
    • Destination can handle out-of-order data
    • You need parallel processing for high-volume recovery

Performance impact: Disabling strict ordering can significantly speed up buffer drain by enabling parallel workers, but may result in logs arriving out of their original sequence.
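
For a faster drain where ordering does not matter, a hedged sketch (illustrative values) disables strict ordering and raises the worker count, consistent with the constraint noted above:

nodes:
  - name: my_google_secops
    type: google_secops_output
    region: us
    # more than one worker is only valid when strict ordering is disabled
    parallel_worker_count: 4
    persistent_queue:
      path: /var/lib/edgedelta/outputbuffer
      strict_ordering: false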

drain_rate_limit

The drain_rate_limit parameter controls the maximum events per second (EPS) when draining the persistent buffer after a destination recovers from a failure.

Example:

persistent_queue:
  path: /var/lib/edgedelta/outputbuffer
  drain_rate_limit: 1000

Default value: 1000 EPS

Choosing the right rate:

  • Fast drain (1000-10000 EPS): Minimizes recovery time but may overwhelm the destination
  • Moderate drain (500-1000 EPS): Balanced approach for most use cases
  • Slow drain (100-500 EPS): Gentle recovery for sensitive destinations

Impact on recovery time:

Buffer size: 1GB
Average log size: 1KB
Total items: ~1,000,000 logs

At 1000 EPS: ~17 minutes to drain
At 5000 EPS: ~3.5 minutes to drain
At 100 EPS: ~2.8 hours to drain

Note: During drain, both current data and buffered data flow to the destination simultaneously. Set this value based on your destination’s capacity to handle additional load during recovery.