Edge Delta Lookup Processor
Overview
You can enrich data items dynamically using a lookup table. This is useful for enriching data based on multiple criteria. For example, you can enrich data items that contain codes with attributes that provide the code definitions, based on a table of all possible codes and their definitions. You can host the lookup table in Edge Delta or in your own location.
Prerequisites
JSON Log Parsing
When processing JSON-formatted logs, parse the JSON into attributes before lookup processors can access the fields. This ensures the lookup processor can find and match the fields specified in your configuration.
Add a Parse JSON processor before your lookup processor in the sequence:
- type: ottl_transform
  name: Parse JSON
  statements: |-
    set(cache["parsed-json"], ParseJSON(body))
    merge_maps(attributes, cache["parsed-json"], "upsert") where IsMap(attributes) and IsMap(cache["parsed-json"])
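Conceptually, the two OTTL statements above parse the JSON body and merge the result into the event's attributes. A rough Python sketch of that behavior (the `parse_json_body` helper and the event shape are illustrative, not agent internals):

```python
import json

def parse_json_body(event):
    """Merge fields from a JSON-formatted body into the event's
    attributes (upsert), mirroring ParseJSON + merge_maps."""
    try:
        parsed = json.loads(event["body"])
    except (ValueError, TypeError):
        return event  # leave non-JSON bodies untouched
    if isinstance(parsed, dict):
        event.setdefault("attributes", {}).update(parsed)
    return event

event = {"body": '{"service": "checkout", "code": "%FTD-1-104002"}', "attributes": {}}
parse_json_body(event)
# attributes now contain "service" and "code", available to a downstream lookup
```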
Field Availability
The lookup processor operates on logs that contain the fields specified in key_fields. Logs without these fields pass through unmodified to the next processor in the sequence. See Processors for more information about how processors handle data items in a multiprocessor stack.
Configuration
Suppose your logs contain FTD codes for errors. You can use a lookup table that provides an explanation of the code and the recommended action as attributes.
Consider this log:
<80>Apr 22 02:07:40 securegateway01 %FTD-1-104002: (Primary) Switching to STANDBY (cause: bad/incomplete config).
The following processor checks a lookup table for matching keys:

The following row is discovered in the table:
| FTD Code | Explanation | Recommended Action |
|---|---|---|
| %FTD-1-104002 | You have forced the failover pair to switch roles, either by entering the failover active command on the standby unit or the no failover active command on the active unit. | If the message occurs because of manual intervention, no action is required. Otherwise, use the cause reported by the secondary unit to verify the status of both units of the pair. |
The FTD Code value matches the %FTD-1-104002 code in the log body.
Logs containing this code are now populated with the additional attributes:

The processor YAML is as follows:
- name: Multi Processor
  type: sequence
  processors:
    - type: lookup
      metadata: '{"id":"Ayp5ZStWEwntpQjvJXA1-","type":"lookup","name":"Lookup"}'
      location_path: ed://ftd_code_explanation_action.csv
      reload_period: 10m0s
      match_mode: regex
      key_fields:
        - event_field: body
          lookup_field: FTD Code
      out_fields:
        - event_field: attributes["ftd_explanation"]
          lookup_field: Explanation
        - event_field: attributes["ftd_action"]
          lookup_field: Recommended Action
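To make the behavior concrete, here is a rough Python simulation of this regex-mode lookup. The `enrich` helper, the in-memory table, and the abbreviated Explanation/Recommended Action text are illustrative only; the agent's actual implementation may differ:

```python
import csv
import io
import re

# Hypothetical in-memory stand-in for ed://ftd_code_explanation_action.csv
TABLE_CSV = """\
FTD Code,Explanation,Recommended Action
%FTD-1-104002,You have forced the failover pair to switch roles.,Verify the status of both units of the pair.
"""
ROWS = list(csv.DictReader(io.StringIO(TABLE_CSV)))

def enrich(event):
    """Regex match mode: each lookup key is treated as a regex pattern
    and tested against the event's body; matches copy the out fields."""
    for row in ROWS:
        if re.search(row["FTD Code"], event["body"]):
            event["attributes"]["ftd_explanation"] = row["Explanation"]
            event["attributes"]["ftd_action"] = row["Recommended Action"]
    return event

event = {
    "body": "<80>Apr 22 02:07:40 securegateway01 %FTD-1-104002: (Primary) "
            "Switching to STANDBY (cause: bad/incomplete config).",
    "attributes": {},
}
enrich(event)
```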
Example: Using Different Match Modes
Here’s an example using the prefix match mode for categorizing error messages:
- type: lookup
  name: Error Category Lookup
  location_path: ed://error_categories.csv
  reload_period: 5m0s
  match_mode: prefix # Match log messages that start with lookup keys
  key_fields:
    - event_field: body
      lookup_field: error_prefix
  out_fields:
    - event_field: attributes["error_category"]
      lookup_field: category
    - event_field: attributes["severity"]
      lookup_field: severity_level
With a lookup table like:
error_prefix,category,severity_level
ERROR-4,Client Error,medium
ERROR-5,Server Error,high
WARN-,Warning,low
INFO-,Informational,info
This configuration would match:
- “ERROR-404 Not Found” → category: “Client Error”, severity: “medium”
- “ERROR-500 Internal Server Error” → category: “Server Error”, severity: “high”
- “WARN-001 High memory usage” → category: “Warning”, severity: “low”
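The prefix behavior above can be sketched in Python. The `categorize` helper is hypothetical, and this sketch returns the first matching row in table order, which may not mirror the agent's tie-breaking exactly:

```python
import csv
import io

# Same table as above, inlined for the example
TABLE = """\
error_prefix,category,severity_level
ERROR-4,Client Error,medium
ERROR-5,Server Error,high
WARN-,Warning,low
INFO-,Informational,info
"""
ROWS = list(csv.DictReader(io.StringIO(TABLE)))

def categorize(body):
    """Prefix match mode: enrich from the first row whose key the body starts with."""
    for row in ROWS:
        if body.startswith(row["error_prefix"]):
            return {"error_category": row["category"], "severity": row["severity_level"]}
    return {}  # no match: the log would pass through unmodified

categorize("ERROR-404 Not Found")
# → {'error_category': 'Client Error', 'severity': 'medium'}
```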
Options
Select a telemetry type
You can specify log, metric, trace, or all. It is specified using the interface, which generates a YAML list item for you under the data_types parameter. This defines the data item types against which the processor must operate. If data_types is not specified, the default value is all. It is optional.
It is defined in YAML as follows:
- name: multiprocessor
  type: sequence
  processors:
    - type: <processor type>
      data_types:
        - log
condition
The condition parameter contains a conditional phrase of an OTTL statement. It restricts operation of the processor to only data items where the condition is met. Those data items that do not match the condition are passed without processing. You configure it in the interface and an OTTL condition is generated. It is optional. You can select one of the following operators:
| Operator | Name | Description | Example |
|---|---|---|---|
| == | Equal to | Returns true if both values are exactly the same | attributes["status"] == "OK" |
| != | Not equal to | Returns true if the values are not the same | attributes["level"] != "debug" |
| > | Greater than | Returns true if the left value is greater than the right | attributes["duration_ms"] > 1000 |
| >= | Greater than or equal | Returns true if the left value is greater than or equal to the right | attributes["score"] >= 90 |
| < | Less than | Returns true if the left value is less than the right | attributes["load"] < 0.75 |
| <= | Less than or equal | Returns true if the left value is less than or equal to the right | attributes["retries"] <= 3 |
| matches | Regex match | Returns true if the string matches a regular expression | isMatch(attributes["name"], ".*\\.name$") |
It is defined in YAML as follows:
- name: _multiprocessor
  type: sequence
  processors:
    - type: <processor type>
      condition: attributes["request"]["path"] == "/json/view"
Location
You define the location of the lookup table. You can specify a lookup table hosted in Edge Delta, a file on the cluster, or Other for a URL. If you select an Edge Delta lookup table, you select it from a list. If you select File, you enter the filename and path. If you select Other, you specify the URL.
The tool populates the location_path parameter in the YAML. This field is mandatory, and the format is as follows depending on the location type:
"file://<path>"
"ed://<file name in ED stored lookup>"
"(http|https)://<URL to CSV>"
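The three formats can be distinguished by ordinary URL parsing. A small Python sketch (the `classify_location` helper is illustrative, not part of Edge Delta):

```python
from urllib.parse import urlparse

def classify_location(location_path):
    """Map a location_path to its source type based on the URL scheme."""
    scheme = urlparse(location_path).scheme
    if scheme == "file":
        return "local file"
    if scheme == "ed":
        return "Edge Delta hosted lookup"
    if scheme in ("http", "https"):
        return "remote CSV URL"
    raise ValueError(f"unsupported location scheme: {scheme!r}")

classify_location("ed://ftd_code_explanation_action.csv")
# → 'Edge Delta hosted lookup'
```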
Reload Period
This option is used to specify how often the lookup table is reloaded. It is defined as a duration and defaults to 5 minutes if not specified. The tool populates the reload_period
parameter in YAML.
Match mode
You can choose how to match the lookup key field. The tool populates the match_mode parameter with one of the following options:
- exact (default) - Matches when the event field value exactly equals the lookup field value
- regex - Matches when the lookup field contains a regex pattern that matches the event field
- contain - Matches when the event field value contains the lookup field value as a substring
- prefix - Matches when the event field value starts with the lookup field value
- suffix - Matches when the event field value ends with the lookup field value
Note: The lookup processor enriches logs where the specified event_field exists and matches the lookup criteria. Logs without matching fields pass through to the next processor unmodified.
Match Mode Examples
Consider a lookup table with a key field containing “ERROR-500”:
| Match Mode | Log Body | Matches? |
|---|---|---|
| exact | “ERROR-500” | ✓ Yes |
| exact | “ERROR-500 occurred” | ✗ No |
| contain | “ERROR-500 occurred in production” | ✓ Yes |
| prefix | “ERROR-500 internal server error” | ✓ Yes |
| prefix | “Server ERROR-500” | ✗ No |
| suffix | “Critical ERROR-500” | ✓ Yes |
| suffix | “ERROR-500 detected” | ✗ No |
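The match modes reduce to simple string and regex operations. A Python sketch of the table above (the `matches` helper is illustrative only):

```python
import re

def matches(event_value, lookup_value, mode="exact"):
    """Illustrative versions of the five match_mode options."""
    if mode == "exact":
        return event_value == lookup_value
    if mode == "regex":
        return re.search(lookup_value, event_value) is not None
    if mode == "contain":
        return lookup_value in event_value
    if mode == "prefix":
        return event_value.startswith(lookup_value)
    if mode == "suffix":
        return event_value.endswith(lookup_value)
    raise ValueError(f"unknown match_mode: {mode!r}")

matches("ERROR-500 occurred", "ERROR-500", "prefix")  # → True
matches("ERROR-500 occurred", "ERROR-500", "exact")   # → False
```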
key_fields
The key_fields are pairs that map event fields to lookup fields to find matches. For key_fields, the event_field specifies the key value in the log and binds it to the lookup_field. For each log, the node extracts the event_field value using the event field’s pattern and compares it to each value in lookup_field for a match.
See how to use lookup tables for information on how the key_fields bind a log field and a table field.
out_fields
The out_fields define mappings from the lookup table to event attributes for enrichment upon successful matches. They support a default_value to apply when there is no match, and an append_mode for when multiple rows are matched. For out_fields, there are two binding pairs: for each, a new attribute is created based on the event_field, and its value is extracted from the lookup_field for all rows matched by the key_fields parameter.
See how to use lookup tables for information on how the out_fields bind a log field and a table field.
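A rough Python sketch of how default_value and append_mode could behave (the `apply_out_fields` helper and the event/row shapes are hypothetical; the agent's exact semantics may differ):

```python
def apply_out_fields(event, matched_rows, out_fields):
    """Write lookup values to event attributes. default_value fills in
    when no row matched; append_mode collects values from every matched
    row instead of keeping just the first."""
    for spec in out_fields:
        target = spec["event_field"]
        values = [row[spec["lookup_field"]] for row in matched_rows]
        if not values:
            if "default_value" in spec:
                event["attributes"][target] = spec["default_value"]
        elif spec.get("append_mode"):
            event["attributes"][target] = values
        else:
            event["attributes"][target] = values[0]
    return event

# No match: the default_value is applied
evt = apply_out_fields({"attributes": {}}, [],
                       [{"event_field": "category", "lookup_field": "category",
                         "default_value": "unknown"}])
# evt["attributes"]["category"] == "unknown"
```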
Final
Determines whether successfully processed data items should continue through the remaining processors in the same processor stack. If final is set to true, data items output by this processor are not passed to subsequent processors within the node; they are instead emitted to downstream nodes in the pipeline (e.g., a destination). Failed items are always passed to the next processor, regardless of this setting.
The UI provides a slider to configure this setting. The default is false. It is defined in YAML as follows:
- name: multiprocessor
  type: sequence
  processors:
    - type: <processor type>
      final: true
Troubleshooting
Understanding Processor Behavior
The lookup processor follows standard multiprocessor logic: it processes logs that match its criteria and passes all logs (both processed and unprocessed) to the next processor in the sequence. If you set final: true, only unmatched logs continue to the next processor. See Processors for details on processor chaining behavior.
Common Scenarios
No enrichment occurring
Possible causes:
- JSON not parsed: For JSON logs, ensure you have a Parse JSON processor before the lookup processor
- Field not in attributes: Verify the field specified in event_field exists in your parsed attributes
- No matching values: Check that your lookup table contains the values present in your logs
- Case sensitivity: For exact matches, ensure case matches exactly (use the ignore_case option if needed)
Lookup table issues
Possible causes:
- Invalid CSV format: Ensure your lookup table is properly formatted CSV with headers
- Location path: Verify the location_path format (e.g., ed://filename.csv for Edge Delta hosted tables)
- Reload period: The table refreshes based on reload_period (default 5 minutes)
Testing Tips
- Use Live Capture to verify fields are properly extracted and available in attributes
- Test with simple exact matches first before trying complex match modes
- Allow 1-2 minutes for log indexing when validating enrichment in search results
- Send test logs with known lookup values to verify configuration
- Check agent logs to confirm the lookup table is loaded successfully
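Before uploading a table, a quick local sanity check can catch CSV formatting problems early. A Python sketch (`validate_lookup_csv` is a hypothetical helper, not an Edge Delta tool):

```python
import csv
import io

def validate_lookup_csv(text, required_columns):
    """Sanity-check a lookup CSV: header row present, required columns
    exist, and no required value is empty."""
    reader = csv.DictReader(io.StringIO(text))
    header = reader.fieldnames or []
    missing = [c for c in required_columns if c not in header]
    if missing:
        return False, f"missing columns: {missing}"
    for line_no, row in enumerate(reader, start=2):
        for col in required_columns:
            if not row.get(col):
                return False, f"empty {col!r} on line {line_no}"
    return True, "ok"

ok, msg = validate_lookup_csv(
    "error_prefix,category\nERROR-4,Client Error\n",
    ["error_prefix", "category"],
)
# ok == True, msg == "ok"
```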
See Also
- For an overview and to understand processor sequence flow, see Processors Overview
- To learn how to configure a processor, see Configure a Processor.
- For optimization strategies, see Best Practices for Edge Delta Processors.
- If you’re new to pipelines, start with the Pipeline Quickstart Overview or learn how to Configure a Pipeline.
- Looking to understand how processors interact with sources and destinations? Visit the Pipeline Overview.