Edge Delta Route Node
Overview
The Route node evaluates log body and metadata conditions and directs logs to specific paths depending on whether a log meets those conditions. This allows telemetry data to be segmented within a pipeline based on its content or other attributes. For instance, it can route logs containing specific errors to dedicated processing nodes, or direct logs from a particular Kubernetes namespace to another node for specialized handling. Logs that don’t match any of the path patterns are routed to the unmatched path.
Conceptually, a Route node with only a single path and no links from the unmatched path will function similarly to a Regex Filter node. The benefits of a Route node over a Regex Filter node are:
- The ability to process telemetry data streams based on message items, such as routing Kubernetes namespace data based on values contained in the item["resource"]["k8s.namespace.name"] field
- The ability to define multiple matching criteria for distinct pipeline paths and destination nodes
- The ability to handle unmatched logs for further processing on another output path
- The ability to evaluate logs using either literal CEL item value comparisons or CEL macros
General best practices when choosing between a Regex Filter node and a Route node are:
- Use a Regex Filter node when you want to completely remove raw log data, such as “Drop all DEBUG logs from further processing”
- Use a Route node when you want to create separate paths for distinct log criteria, such as “Route all logs from the NGINX namespace into one set of processors, and all logs from the rest of my cluster into another set of processors.”
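As a sketch of that contrast (node names and the Regex Filter parameters shown here are illustrative; check the Regex Filter node documentation for its exact fields), the two approaches might look like:

```yaml
nodes:
  # Regex Filter: matching logs are dropped from further processing entirely
  - name: drop_debug
    type: regex_filter
    pattern: "DEBUG"

  # Route: matching logs continue down a dedicated path instead of being dropped
  - name: split_by_namespace
    type: route
    paths:
      - path: nginx
        condition: item["resource"]["k8s.namespace.name"] == "nginx"
```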
The routing is configured either using Pipeline Builder or by defining path parameters in the links section of the YAML.
For a detailed walkthrough, see the Route Logs in a Branched Pipeline page.
NOTE: The following examples use CEL macros for richer evaluation; however, a literal comparison against an individual CEL item, such as item["resource"]["k8s.namespace.name"] == "nginx", works as well.
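For example, a paths block mixing the two styles might look like this (path names are illustrative; note that CEL string literals must be quoted):

```yaml
paths:
  # Literal CEL comparison against an item field
  - path: nginx_namespace
    condition: item["resource"]["k8s.namespace.name"] == "nginx"
  # CEL macro evaluating the log body against a regex pattern
  - path: errors
    condition: regex_match(item["body"], "(?i)ERROR")
```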
Example Configuration 1
nodes:
  - name: route
    type: route
    paths:
      - path: log_to_patterns
        condition: regex_match(item["body"], "node1")
      - path: regex_filter
        condition: regex_match(item["body"], "node2")
      - path: log transform
        condition: regex_match(item["body"], "node4")
      - path: extract json
        condition: regex_match(item["body"], "node5")
      - path: mask
        condition: regex_match(item["body"], "node6")
      - path: log to metric
        condition: regex_match(item["body"], "node7")

links:
  - from: route
    path: log_to_patterns
    to: log_to_patterns
  - from: route
    path: regex_filter
    to: regex_filter_test
  - from: route
    path: log transform
    to: log_transform_test
  - from: route
    path: extract json
    to: extract_json_test
  - from: route
    path: mask
    to: mask_test
  - from: route
    path: log to metric
    to: log_to_metric_test
  - from: route
    path: unmatched
    to: route_fails
Suppose these two logs were sent to a pipeline with this route configuration:
{
  "timestamp": "2023-04-23T12:34:56.789Z",
  "logLevel": "ERROR",
  "serviceName": "AuthService",
  "nodeId": "node6",
  "message": "Login failed",
  "clientIP": "192.168.1.10",
  "username": "user123",
  "event": "login_attempt",
  "outcome": "failure"
}
{
  "timestamp": "2023-04-23T12:34:56.789Z",
  "logLevel": "ERROR",
  "serviceName": "AuthService",
  "nodeId": "node4",
  "message": "Login failed",
  "clientIP": "192.168.1.10",
  "username": "user123",
  "event": "login_attempt",
  "outcome": "failure"
}
The first log would match on node6 and be routed to the mask path, while the second log would match on node4 and be routed to the log transform path.
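By contrast, a hypothetical third log whose body contains none of the nodeN tokens, such as the one below, would satisfy no path condition and be delivered down the unmatched path:

```json
{
  "timestamp": "2023-04-23T12:34:56.789Z",
  "logLevel": "INFO",
  "serviceName": "AuthService",
  "nodeId": "node99",
  "message": "Login succeeded",
  "clientIP": "192.168.1.10",
  "username": "user123",
  "event": "login_attempt",
  "outcome": "success"
}
```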
Tip: You can use this condition to identify base64 encoded logs that need to be routed to a base64_decode node:
- path: base64
  condition: regex_match(item["body"], "^[A-Za-z0-9+/]+(?:={2}|={1})?$")
Example Configuration 2
- name: item_router
  type: route
  paths:
    - path: "pre-splunk"
      condition: regex_match(item["body"], "(?i)ERROR")
    - path: "ns=edgedelta"
      condition: item["resource"]["k8s.namespace.name"] == "edgedelta"

links:
  - from: source_node
    to: item_router
  - from: item_router
    path: pre-splunk
    to: next_node1
  - from: item_router
    path: ns=edgedelta
    to: next_node2
  - from: item_router
    path: unmatched
    to: other_output
Required Parameters
name
A descriptive name for the node. This is the name that appears in Visual Pipelines, and it can be used to reference the node elsewhere in the YAML. It must be unique across all nodes. It is a YAML list element, so it begins with a - and a space followed by the string. It is a required parameter for all nodes.
nodes:
  - name: <node name>
    type: <node type>
type: route
The type parameter specifies the type of node being configured. It is specified as a string from a closed list of node types. It is a required parameter.
nodes:
  - name: <node name>
    type: <node type>
paths
The paths parameter defines the paths and their expressions for matching logs. At least one path is required.
- path is the name of the sub-path. This name is referenced with the path parameter in the links section.
- condition is a Common Expression Language (CEL) expression that determines whether the log item is sent to this path. You can use CEL macros.
- name: <node name>
  type: route
  paths:
    - path: "pre_elastic"
      condition: regex_match(item["body"], "(?i)ERROR")
    - path: "ns=edgedelta"
      condition: item["resource"]["k8s.namespace.name"] == "edgedelta"
Optional parameters
exit_if_matched
The exit_if_matched parameter stops evaluation of further paths when a log matches the current path. It is specified as a Boolean and defaults to false. It is optional.
- name: <node name>
  type: route
  paths:
    - path: "<path name>"
      condition: <matching condition expression>
      exit_if_matched: true
    - path: "<path name>"
      condition: <matching condition expression>
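As a hypothetical sketch (node and path names are illustrative), exit_if_matched is useful when conditions overlap: without it, a matching log is copied to every matching path; with it, evaluation stops at the first match.

```yaml
- name: severity_router
  type: route
  paths:
    # A log containing FATAL matches both conditions below;
    # exit_if_matched stops evaluation here, so it is sent
    # only to the fatal path rather than duplicated to both
    - path: fatal
      condition: regex_match(item["body"], "(?i)FATAL")
      exit_if_matched: true
    - path: errors
      condition: regex_match(item["body"], "(?i)(fatal|error)")
```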