Send Events from an Edge Delta Pipeline to Slack
Overview
You can configure Edge Delta to send alerts to Slack. They can originate from either the edge within a pipeline configuration, or they can originate from a centralized monitor. See Threshold-Based Alerts with Edge Delta Monitors for an overview of pipeline thresholds vs threshold monitors. Also see Send Events from Edge Delta Monitors to Slack for configuring monitors.
Prepare Slack
You need to create a Slack app and configure it with an incoming webhook to receive notifications from Edge Delta.
- Navigate to and log in to Your Apps on the Slack API website: https://api.slack.com/apps.
- Click Create New App.
- Select From scratch.
- Name the app and select your Slack workspace.
- Click Create App.
- Click Incoming Webhooks and select Activate Incoming Webhooks.
- Click Add New Webhook.
- Select a channel you want the app to post notifications to and click Allow.
- Copy the webhook URL.
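
Before wiring the webhook into a pipeline, you can optionally confirm it accepts messages. The snippet below is a minimal sketch assuming the Python requests library and a placeholder webhook URL; Slack incoming webhooks accept a JSON body, and the text field is the simplest payload.

```python
import requests

# Placeholder: replace with the webhook URL you copied in the previous step.
WEBHOOK_URL = "https://hooks.slack.com/services/<REDACTED>"

# Post a simple test message; Slack responds with HTTP 200 and the body "ok".
response = requests.post(WEBHOOK_URL, json={"text": "Edge Delta webhook test"})
response.raise_for_status()
print(response.text)
```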
Send Signals from the Edge
With the Slack app configured, you can trigger notifications from pipelines.
To send signals to Slack from a pipeline on the edge, you need to add a Slack destination node to your pipeline and connect it to a threshold node.
- Click Pipelines and select your pipeline.
- Click Edit Mode.
- Click Add Node.
- Select Slack Destination.
- Configure the form with the webhook URL (endpoint) you copied earlier when you created the Slack app. Alternatively, if you have configured an integration for Slack, select it in the From Integration field.

In this scenario, error logs are counted using an extract metric processor and an aggregate metric processor. See How to Extract and Aggregate Metrics with Edge Delta. A threshold of 5 is then set for the aggregated metric.

Here is the relevant YAML for this section of the pipeline:
links:
- from: loggen
  to: loggen_multiprocessor
- from: loggen_multiprocessor
  to: threshold_66ce
- from: threshold_66ce
  to: slack_output_f56f_multiprocessor
- from: slack_output_f56f_multiprocessor
  to: slack_output_f56f
nodes:
- name: loggen
  type: kubernetes_input
  include:
  - k8s.deployment.name=loggen,k8s.namespace.name=loggenlogs
- name: loggen_multiprocessor
  type: sequence
  processors:
  - type: extract_metric
    metadata: '{"id":"29TBvgHUwdRGv7zTrd8g3","type":"extract_metric","name":"Extract Metric"}'
    keep_item: true
    data_types:
    - log
    extract_metric_rules:
    - name: error-logs
      unit: "1"
      conditions:
      - IsMatch(body, "error")
      gauge:
        value: "1"
  - type: aggregate_metric
    metadata: '{"id":"C5uNP22I2QDFvdCdkkWz_","type":"aggregate_metric","name":"Aggregate Metric"}'
    data_types:
    - metric
    aggregate_metric_rules:
    - name: error-logs-per-minute
      conditions:
      - name == "error-logs"
      interval: 1m0s
      aggregation_type: count
      group_by:
      - resource
      - attributes
- name: threshold_66ce
  type: threshold
  user_description: Threshold
  condition: value > 5
  filter: item.name == "error-logs-per-minute"
- name: slack_output_f56f
  type: slack_output
  user_description: Slack Destination Basic
  endpoint: https://hooks.slack.com/services/<REDACTED>
  suppression_window: 20m0s
- name: slack_output_f56f_multiprocessor
  type: sequence
  user_description: Multi Processor
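
To make the flow of this configuration concrete, here is a minimal sketch (in Python, purely illustrative and not Edge Delta's implementation) of the logic the extract metric, aggregate metric, and threshold stages apply to each one-minute interval of logs.

```python
# Illustrative sketch of the extract -> aggregate -> threshold chain above.
# Names and structure are assumptions for explanation, not Edge Delta code.
def evaluate_interval(log_bodies: list[str], threshold: int = 5) -> bool:
    # extract_metric: emit an "error-logs" data point for each log whose body matches "error"
    error_points = [1 for body in log_bodies if "error" in body]
    # aggregate_metric: count the data points over the 1m interval ("error-logs-per-minute")
    error_logs_per_minute = len(error_points)
    # threshold: the condition "value > 5" decides whether a signal is emitted
    return error_logs_per_minute > threshold

# Example: 55 matching logs in one interval exceed the threshold, so a signal fires.
print(evaluate_interval(["2025-06-04 error: disk full"] * 55))  # True
```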
During the threshold evaluation interval, 55 such metrics were detected, exceeding the threshold of 5, so a signal was sent to the Slack destination:
{
"_type": "signal",
"timestamp": 1749030386195,
"resource": {
"container.id": "2296c093fbecee88cbc00a00c01ab4a1a1361c6ae0fb2f27e0077014d66fad29",
"container.image.name": "docker.io/userexample/imageexample:latest",
"ed.conf.id": "<redacted>",
"ed.domain": "k8s",
"ed.filepath": "/var/log/pods/loggenlogs_loggen-d94d75-ggkcr_bd79183f-caa1-42dd-9cbf-8d2e7bd9b01e/loggen/0.log",
"ed.org.id": "<redacted>",
"ed.source.name": "loggen",
"ed.source.type": "kubernetes_input",
"ed.tag": "slacker",
"host.ip": "172.19.0.3",
"host.name": "slacker-control-plane",
"k8s.container.name": "loggen",
"k8s.deployment.name": "loggen",
"k8s.namespace.name": "loggenlogs",
"k8s.node.name": "slacker-control-plane",
"k8s.pod.name": "loggen-d94d75-ggkcr",
"k8s.pod.uid": "bd79183f-caa1-42dd-9cbf-8d2e7bd9b01e",
"k8s.replicaset.name": "loggen-d94d75",
"service.name": "loggen"
},
"attributes": {},
"signal": {
"description": "error-logs-per-minute hit threshold -threshold-checker of filter: item.name == \"error-logs-per-minute\" and condition: value > 5 with value 55.00",
"name": "error-logs-per-minute",
"signal_id": "111022",
"threshold_condition": "value > 5",
"threshold_filter": "item.name == \"error-logs-per-minute\"",
"title": "Threshold -threshold-checker triggered",
"value": 55
}
}
In Slack, the message uses the resource and attributes data from the signal to create the content.
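
The exact formatting of the Slack message is handled by the Slack destination node, but as a rough illustration of how the signal's fields map into notification content, the hedged sketch below assembles a text message from a few resource and signal fields taken from the example above and posts it to an incoming webhook. The layout is an assumption for illustration only.

```python
import requests

# Placeholder webhook URL; signal_event mirrors a few fields from the example signal above.
WEBHOOK_URL = "https://hooks.slack.com/services/<REDACTED>"

signal_event = {
    "resource": {
        "k8s.namespace.name": "loggenlogs",
        "k8s.pod.name": "loggen-d94d75-ggkcr",
        "k8s.node.name": "slacker-control-plane",
    },
    "signal": {
        "title": "Threshold -threshold-checker triggered",
        "name": "error-logs-per-minute",
        "value": 55,
    },
}

# Build a simple text notification from the signal title, metric name, value, and resource fields.
sig = signal_event["signal"]
lines = [f"{sig['title']}: {sig['name']} = {sig['value']}"]
lines += [f"{key}: {value}" for key, value in signal_event["resource"].items()]

requests.post(WEBHOOK_URL, json={"text": "\n".join(lines)}).raise_for_status()
```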
