Send Data from Edge Delta to a Kafka Destination
Overview
You can use the Kafka destination node to send logs, metrics, signals, health events, and cluster patterns and samples to Kafka.
Production Example
Prerequisites
To configure Edge Delta to send data to Kafka, you need the following information:
- Kafka broker addresses
- Authentication credentials depending on the environment, for example:
  - SASL/PLAIN: Requires username and password.
  - SASL/SCRAM-SHA-256 or SASL/SCRAM-SHA-512: Requires username and password.
  - SASL/OAUTHBEARER: Requires an OAuth access token.
  - TLS Authentication: Requires CA certificate, client certificate, and private key.
  - AWS IAM (for AWS MSK): Uses IAM role-based authentication.
- Topic name
Configure the Kafka Destination Node
Add a Kafka destination node to the relevant pipeline and configure it with the information you obtained in the Prerequisites section. In this example, certain TLS parameters and SASL credentials are required:
- name: Kafka Destination
  type: kafka_output
  endpoint: kafka.mycompany.com:9094
  topic: logs-production
  tls:
    ca_file: /etc/edgedelta/certs/ca.pem
  sasl:
    username: "edgedelta_user"
    password: "SuperSecurePass!"
    mechanism: "scram-sha-512"
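If your brokers require mutual TLS, the prerequisites above also call for a client certificate and private key. A sketch of that variant follows; the crt_file and key_file field names are assumptions, so verify them against the Kafka destination node reference before use:
- name: Kafka Destination
  type: kafka_output
  endpoint: kafka.mycompany.com:9094
  topic: logs-production
  tls:
    ca_file: /etc/edgedelta/certs/ca.pem
    crt_file: /etc/edgedelta/certs/client.pem
    key_file: /etc/edgedelta/certs/client-key.pem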
Test the Configuration
You can use the Kafka CLI tools to test whether Edge Delta traffic is being processed by Kafka. Run the following command on any server or workstation with the Kafka CLI tools installed, or on the workload machine where Edge Delta is running, provided it has the CLI tools and network access to Kafka.
kafka-console-consumer --bootstrap-server kafka.mycompany.com:9094 --topic logs-production --from-beginning
This connects a consumer to Kafka and displays output from the topic.
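If the consumer shows no output, you can first confirm that the topic exists and the broker is reachable using the standard kafka-topics tool, with the broker address and topic name from the example above:
kafka-topics --bootstrap-server kafka.mycompany.com:9094 --describe --topic logs-production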
Local POC Example
You can run Kafka in a Docker container and push logs from a local Edge Delta instance as a quick POC.
Create a Kafka Cluster
To start, create a Docker Compose file for Apache Kafka and Zookeeper:
docker-compose.yml:
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
    ports:
      - "2181:2181"
  kafka:
    image: confluentinc/cp-kafka:latest
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
Next, create the Kafka cluster:
docker compose up -d
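You can confirm that both containers are up before continuing, using standard Docker commands:
docker compose ps
docker logs --tail 20 kafka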
Create a Topic
Shell into the Kafka container:
docker exec -it kafka bash
Note: replace kafka with the name of the Kafka container if it is different.
Once in the container shell, create the topic:
kafka-topics --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test-topic
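To verify the topic was created, you can describe it with the same standard tool:
kafka-topics --describe --bootstrap-server localhost:9092 --topic test-topic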
Create a Consumer
While in the container shell, you can create a consumer to see data coming out of the Kafka topic:
kafka-console-consumer --bootstrap-server localhost:9092 --topic test-topic --from-beginning
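Before involving Edge Delta, you can verify the consumer end to end by opening a second shell into the container and publishing a test message with the standard console producer; each line you type should appear in the consumer:
docker exec -it kafka bash
kafka-console-producer --bootstrap-server localhost:9092 --topic test-topic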
Deploy an Edge Delta Agent
The Kafka cluster is configured to expose an endpoint on the host machine at localhost:9092. Therefore, install an Edge Delta agent on the host (not in Docker).
This is a Linux or macOS example; run it in a fresh terminal:
ED_API_KEY=12345678987654321 bash -c "$(curl -L https://release.edgedelta.com/release/install.sh)"
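You can confirm the agent is running before configuring it. On Linux the installer typically registers a systemd service; the service name edgedelta is an assumption here, so adjust if your install differs:
sudo systemctl status edgedelta
ps aux | grep -i edgedelta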
Configure the Edge Delta Agent
A minimal configuration of the Kafka Destination node is required for a local POC:
nodes:
  - name: Kafka Destination
    type: kafka_output
    endpoint: localhost:9092
    topic: test-topic
You can add a demo input node to the Edge Delta pipeline, or pipe a workload's data to the Kafka destination. For example, link the demo source to the Kafka node:
- from: Demo Source
  to: Kafka Destination
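Putting it together, a minimal end-to-end pipeline might look like the following sketch. The demo source's parameters are an assumption; the demo_input type matches the sample output shown below, but check the demo node reference for options such as the sample data type:
nodes:
  - name: Demo Source
    type: demo_input
  - name: Kafka Destination
    type: kafka_output
    endpoint: localhost:9092
    topic: test-topic
links:
  - from: Demo Source
    to: Kafka Destination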

After a few minutes, the Kafka consumer in the container shell will begin to show output from the demo node. In this instance, the demo node outputs sample Palo Alto traffic, which is piped to the Kafka destination node, ingested by Kafka, and picked up by the consumer listening on test-topic.
{"_type":"log","timestamp":1738723983041,"body":"1,2025-02-05 02:53:03,2218251872,TRAFFIC,end,2,2025-02-05 02:53:32,215.210.87.87,142.62.183.51,124.69.150.181,85.50.249.53,rule-16,Glover6580,Grant6281,Benefitbuy,vsys-6,zone-36,zone-38,eth2,eth14,profile-22,3,7563359137,12,3596,48629,27328,54228,135243ab,udp,drop ICMP,85806,6271,7249,110,2025-02-05 02:53:45,25,,4,61,1,Niger,French Southern Territories,5,15,39,tcp-fin,19,17,5,26,virtual-system-10,device-12,,c8b0626e-a2c7-4295-a9db-b3e5c37cac77,7e63fc32-ff65-4f4d-b5df-75edabeb811c,446248314306958,5859226580934749,248,2025-02-05 02:55:47,PPTP,3,54,30,42,4a1f8799-82d9-4c81-b726-d62e7563a5ef,469,2,28,,cluster-6,branch,hub-spoke,humancompelling.org,dynamic-user-group-14,81.172.81.101,category-7,profile-61,some-model-3,\"Barchart\",MacOS,11,Garret Marquardt,3d:d6:9a:8b:a2:7b,category-7,profile-13,some-model-7,\"Code for America\",Linux,15.0.0,Shany Heller,47:a5:d8:84:8c:80,51355052,namespace-3,palo-alto-15219479,external-dynamic-list-2,external-dynamic-list-11,207659842,357505484,dynamic-address-group-7,dynamic-address-group-4,Crist5037,2025-02-05T02:53:13.039Z,,,email,business-systems,peer-to-peer,1,\"some,characteristic\",container-68992165,MediumVioletRedcat,0,1,0\n","resource":{"ed.conf.id":"12345678987654321","ed.demo":"Demo Source","ed.org.id":"1234567898765432112345678987654321","ed.source.name":"Demo Source","ed.source.type":"demo_input","ed.tag":"kafka-producer","host.ip":"192.168.0.105","host.name":"demotest","service.name":"Demo Source","src_type":"Demo"}}