Edge Delta ClickHouse Destination
Overview
The ClickHouse destination node streams data to ClickHouse databases via the HTTP interface. ClickHouse is a high-performance, column-oriented database management system optimized for real-time analytics on large datasets.
This node supports schema mapping using OTTL expressions, allowing you to define how Edge Delta fields map to ClickHouse columns. Data is sent using the ClickHouse HTTP interface with optional gzip compression for improved throughput.
Note: This node is currently in beta and is available for Enterprise tier accounts.
This node requires Edge Delta agent version v2.10.0 or higher.
Example Configuration

This configuration sends log data to a ClickHouse table. The schema_mapping defines how Edge Delta data fields map to ClickHouse columns using OTTL expressions.
nodes:
- name: clickhouse_logs
  type: clickhouse_output
  endpoint: "http://clickhouse.example.com:8123"
  database: default
  clickhouse_table: logs
  password: '{{ SECRET clickhouse_password }}'
  compression: gzip
  schema_mapping:
  - column_name: timestamp
    expression: timestamp
    column_type: DateTime64(3)
    required: true
  - column_name: severity
    expression: severity_text
    column_type: LowCardinality(String)
    default_value: INFO
  - column_name: body
    expression: body
    column_type: String
  - column_name: host
    expression: resource["host.name"]
    column_type: String
  - column_name: service
    expression: resource["service.name"]
    column_type: LowCardinality(String)
See Secrets for information on securely storing credentials.
Required Parameters
name
A descriptive name for the node. This name appears in the pipeline builder, and you can use it to reference this node elsewhere in the YAML. It must be unique across all nodes. It is a YAML list element, so it begins with a - and a space followed by the string. It is a required parameter for all nodes.
nodes:
- name: <node name>
  type: <node type>
type: clickhouse_output
The type parameter specifies the type of node being configured. It is specified as a string from a closed list of node types. It is a required parameter.
nodes:
- name: <node name>
  type: <node type>
endpoint
The ClickHouse HTTP endpoint URL. This is the HTTP interface endpoint for your ClickHouse instance, typically running on port 8123.
nodes:
- name: <node name>
  type: clickhouse_output
  endpoint: "http://clickhouse.example.com:8123"
  database: <database>
  clickhouse_table: <table>
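To confirm the endpoint is reachable from the agent host before deploying the node, you can hit the HTTP interface's ping handler; the hostname below is illustrative.
curl http://clickhouse.example.com:8123/ping
A healthy instance typically responds with Ok.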
database
The ClickHouse database name to write data into. The database must already exist in your ClickHouse instance.
nodes:
- name: <node name>
  type: clickhouse_output
  endpoint: <endpoint>
  database: default
  clickhouse_table: <table>
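If the target database does not exist yet, create it from any ClickHouse client before enabling the node; the database name below is illustrative.
CREATE DATABASE IF NOT EXISTS observability;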
clickhouse_table
The ClickHouse table name to write data into. The table must already exist within the specified database with a schema compatible with your schema_mapping configuration.
nodes:
- name: <node name>
  type: clickhouse_output
  endpoint: <endpoint>
  database: <database>
  clickhouse_table: logs
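To check that the table exists and that its columns line up with your schema_mapping, you can inspect it from a ClickHouse client; the database and table names below match the examples above.
DESCRIBE TABLE default.logs;
SHOW CREATE TABLE default.logs;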
Optional Parameters
username
Username for authenticating with ClickHouse. If omitted, the default ClickHouse user is used.
nodes:
- name: <node name>
  type: clickhouse_output
  endpoint: <endpoint>
  database: <database>
  clickhouse_table: <table>
  username: default
password
Password for authenticating with ClickHouse. Use the {{ SECRET secret_name }} syntax to reference secrets stored securely in Edge Delta. See Secrets for more information.
nodes:
- name: <node name>
  type: clickhouse_output
  endpoint: <endpoint>
  database: <database>
  clickhouse_table: <table>
  password: '{{ SECRET clickhouse_password }}'
compression
Compression method for data sent to ClickHouse. Options are none or gzip. Default is gzip.
nodes:
- name: <node name>
  type: clickhouse_output
  endpoint: <endpoint>
  database: <database>
  clickhouse_table: <table>
  compression: gzip
flush_byte_length
Maximum size of data to accumulate before flushing to ClickHouse, in bytes. Default is 1048576 (1 MB).
nodes:
- name: <node name>
  type: clickhouse_output
  endpoint: <endpoint>
  database: <database>
  clickhouse_table: <table>
  flush_byte_length: 2097152
schema_mapping
Defines how Edge Delta fields map to ClickHouse columns using OTTL expressions. Each mapping includes:
| Field | Description | Required |
|---|---|---|
| column_name | ClickHouse column name | Yes |
| expression | OTTL expression that extracts the value from the item data | Yes |
| column_type | ClickHouse column type (e.g., String, DateTime64(3), LowCardinality(String), Float64) | Yes |
| required | If true, items missing this field are dropped | No |
| default_value | Default value used when the expression evaluates to empty/null | No |
nodes:
- name: <node name>
  type: clickhouse_output
  endpoint: <endpoint>
  database: <database>
  clickhouse_table: <table>
  schema_mapping:
  - column_name: timestamp
    expression: timestamp
    column_type: DateTime64(3)
    required: true
  - column_name: message
    expression: body
    column_type: String
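As an illustration of how the expressions are applied, consider a single log item with the hypothetical field values below. Conceptually, the two-column mapping above turns it into one row; the SQL statement is only an equivalent representation, not necessarily the exact wire format the node sends over the HTTP interface.
-- Hypothetical item fields:
--   timestamp = 2024-01-15T10:23:45.123Z
--   body      = "connection refused"
-- Row written to the target table, expressed as an equivalent INSERT:
INSERT INTO <database>.<table> (timestamp, message)
VALUES ('2024-01-15 10:23:45.123', 'connection refused');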
tls
TLS configuration for secure connections to ClickHouse.
| Field | Description | Default |
|---|---|---|
| enabled | Enable TLS for this connection | false |
| ignore_certificate_check | Disable certificate verification (not recommended for production) | false |
| ca_file | Path to CA certificate file | - |
| crt_file | Path to client certificate file | - |
| key_file | Path to client private key file | - |
| min_version | Minimum TLS version (TLSv1_2, TLSv1_3) | TLSv1_2 |
nodes:
- name: <node name>
  type: clickhouse_output
  endpoint: "https://clickhouse.example.com:8443"
  database: <database>
  clickhouse_table: <table>
  tls:
    enabled: true
    ca_file: /etc/ssl/certs/ca.crt
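If your ClickHouse deployment requires mutual TLS, the same block accepts a client certificate and key along with a minimum TLS version; the file paths below are illustrative.
nodes:
- name: <node name>
  type: clickhouse_output
  endpoint: "https://clickhouse.example.com:8443"
  database: <database>
  clickhouse_table: <table>
  tls:
    enabled: true
    ca_file: /etc/ssl/certs/ca.crt
    crt_file: /etc/ssl/certs/client.crt
    key_file: /etc/ssl/private/client.key
    min_version: TLSv1_2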
persistent_queue
Configure persistent buffering for reliability when ClickHouse is temporarily unavailable.
| Field | Description |
|---|---|
| path | Directory path for buffer storage |
| max_byte_size | Maximum buffer size (e.g., 1GB) |
| mode | Buffer mode: error, backpressure, or always |
| strict_ordering | Maintain strict event ordering |
| drain_rate_limit | Maximum items per second to drain from the queue |
nodes:
- name: <node name>
  type: clickhouse_output
  endpoint: <endpoint>
  database: <database>
  clickhouse_table: <table>
  persistent_queue:
    path: /var/lib/edgedelta/clickhouse-buffer
    max_byte_size: 1GB
    mode: error
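The remaining queue fields can be combined in the same block; for example, a buffer that applies backpressure instead of erroring, preserves event ordering, and caps the drain rate (the values below are illustrative):
nodes:
- name: <node name>
  type: clickhouse_output
  endpoint: <endpoint>
  database: <database>
  clickhouse_table: <table>
  persistent_queue:
    path: /var/lib/edgedelta/clickhouse-buffer
    max_byte_size: 1GB
    mode: backpressure
    strict_ordering: true
    drain_rate_limit: 500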
ClickHouse Table Schema
Create your ClickHouse table with a schema that matches your schema_mapping configuration.
Example Logs Table:
CREATE TABLE logs (
    timestamp DateTime64(3),
    severity LowCardinality(String),
    body String,
    host String,
    service LowCardinality(String)
) ENGINE = MergeTree()
ORDER BY timestamp;
Example Metrics Table:
CREATE TABLE metrics (
    timestamp DateTime64(3),
    metric_name LowCardinality(String),
    metric_value Float64,
    host String,
    tags Map(String, String)
) ENGINE = MergeTree()
ORDER BY (metric_name, timestamp);
Use Cases
Log Analytics
Stream application logs to ClickHouse for fast SQL-based analytics and long-term storage.
nodes:
- name: clickhouse_app_logs
  type: clickhouse_output
  endpoint: "http://clickhouse:8123"
  database: observability
  clickhouse_table: application_logs
  compression: gzip
  schema_mapping:
  - column_name: timestamp
    expression: timestamp
    column_type: DateTime64(3)
    required: true
  - column_name: level
    expression: severity_text
    column_type: LowCardinality(String)
  - column_name: message
    expression: body
    column_type: String
  - column_name: trace_id
    expression: attributes["trace_id"]
    column_type: String
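Once logs are flowing, they can be queried with ordinary SQL. For example, counting entries per level over the last hour against the table and columns defined above (the query itself is illustrative):
SELECT level, count() AS entries
FROM observability.application_logs
WHERE timestamp >= now() - INTERVAL 1 HOUR
GROUP BY level
ORDER BY entries DESC;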
Metrics Storage
Store time-series metrics in ClickHouse for custom dashboards and reporting.
nodes:
- name: clickhouse_metrics
  type: clickhouse_output
  endpoint: "http://clickhouse:8123"
  database: metrics
  clickhouse_table: system_metrics
  schema_mapping:
  - column_name: timestamp
    expression: timestamp
    column_type: DateTime64(3)
    required: true
  - column_name: name
    expression: metric_name
    column_type: LowCardinality(String)
    required: true
  - column_name: value
    expression: metric_value
    column_type: Float64
    required: true
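A dashboard can then aggregate these rows directly; for example, per-minute averages for a single metric, using the columns defined by the mapping above (the metric name is hypothetical):
SELECT toStartOfMinute(timestamp) AS minute, avg(value) AS avg_value
FROM metrics.system_metrics
WHERE name = 'cpu.utilization'
GROUP BY minute
ORDER BY minute;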