Edge Delta Lookup Processor
Overview
You can enrich logs, metrics, and traces dynamically using a lookup table. This is useful for enriching data based on multiple criteria. For example, you can enrich data items that contain codes with attributes that provide the code definitions based on a table of all possible codes and their definitions. You can host the lookup table in Edge Delta or in your own location.
Prerequisites
JSON Log Parsing
When processing JSON-formatted logs, parse the JSON into attributes before the lookup processor runs. The lookup processor can only find and match fields that exist in the locations specified in your configuration.
Add a Parse JSON processor before your lookup processor in the sequence:
```yaml
- type: ottl_transform
  name: Parse JSON
  statements: |-
    set(cache["parsed-json"], ParseJSON(body))
    merge_maps(attributes, cache["parsed-json"], "upsert") where IsMap(attributes) and IsMap(cache["parsed-json"])
```
Field Availability
The lookup processor operates on data items (logs, metrics, or traces) that contain the fields specified in key_fields. Data items without these fields pass through unmodified to the next processor in the sequence. See Processors for more information about how processors handle data items in a multiprocessor stack.
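If you want the lookup to run only on items that definitely carry the key field, you can combine it with the condition option described under Options below. A minimal sketch, assuming a hypothetical attributes["error_code"] key field and table:

```yaml
- type: lookup
  name: Guarded Lookup
  location_path: ed://error_codes.csv
  # Run the lookup only on items that carry the key field (hypothetical field name)
  condition: attributes["error_code"] != nil
  key_fields:
    - event_field: attributes["error_code"]
      lookup_field: code
  out_fields:
    - event_field: attributes["error_description"]
      lookup_field: description
```

This is optional: items without the field already pass through unmodified, so the condition simply makes that intent explicit.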
Configuration
Suppose your logs contain FTD codes for errors. You can use a lookup table that provides an explanation of the code and the recommended action as attributes.
Consider this log:
```
<80>Apr 22 02:07:40 securegateway01 %FTD-1-104002: (Primary) Switching to STANDBY (cause: bad/incomplete config).
```
The lookup processor checks the lookup table for a matching key.
The following row is discovered in the table:
```csv
FTD Code,Explanation,Recommended Action
%FTD-1-104002,"You have forced the failover pair to switch roles either by entering the failover active command on the standby unit or the no failover active command on the active unit.","If the message occurs because of manual intervention, no action is required. Otherwise, use the cause reported by the secondary unit to verify the status of both units of the pair."
```
The FTD Code row matches the %FTD-1-104002 code in the log body.
Logs containing this code are then populated with the additional ftd_explanation and ftd_action attributes.
The processor YAML is as follows:
```yaml
- name: Multi Processor
  type: sequence
  processors:
    - type: lookup
      metadata: '{"id":"Ayp5ZStWEwntpQjvJXA1-","type":"lookup","name":"Lookup"}'
      location_path: ed://ftd_code_explanation_action.csv
      reload_period: 10m0s
      match_mode: regex
      key_fields:
        - event_field: body
          lookup_field: FTD Code
      out_fields:
        - event_field: attributes["ftd_explanation"]
          lookup_field: Explanation
        - event_field: attributes["ftd_action"]
          lookup_field: Recommended Action
```
Example: Using Different Match Modes
Here’s an example using the prefix match mode for categorizing error messages:
```yaml
- type: lookup
  name: Error Category Lookup
  location_path: ed://error_categories.csv
  reload_period: 5m0s
  match_mode: prefix # Match log messages that start with lookup keys
  key_fields:
    - event_field: body
      lookup_field: error_prefix
  out_fields:
    - event_field: attributes["error_category"]
      lookup_field: category
    - event_field: attributes["severity"]
      lookup_field: severity_level
```
With a lookup table like:
```csv
error_prefix,category,severity_level
ERROR-4,Client Error,medium
ERROR-5,Server Error,high
WARN-,Warning,low
INFO-,Informational,info
```
This configuration would match:
- “ERROR-404 Not Found” → category: “Client Error”, severity: “medium”
- “ERROR-500 Internal Server Error” → category: “Server Error”, severity: “high”
- “WARN-001 High memory usage” → category: “Warning”, severity: “low”
Example: Enriching Metrics with Error Codes
You can enrich metrics that contain error codes with human-readable descriptions:
```yaml
- type: lookup
  name: Error Code Enrichment
  data_types:
    - metric
  location_path: ed://error_codes.csv
  reload_period: 5m0s
  match_mode: exact
  key_fields:
    - event_field: attributes["error_code"]
      lookup_field: code
  out_fields:
    - event_field: attributes["error_description"]
      lookup_field: description
    - event_field: attributes["error_severity"]
      lookup_field: severity
```
With a lookup table like:
```csv
code,description,severity
E001,Connection timeout,high
E002,Authentication failed,critical
500,Internal server error,critical
```
Example: Using regex Match Mode
The regex match mode uses regular expressions for pattern matching. Use it when values follow patterns or you need flexible matching.
Lookup Table (ed://log_patterns.csv):
```csv
pattern,severity,category
.*ERROR.*,critical,error
.*WARN.*,warning,warning
.*INFO.*,info,informational
```
Configuration:
```yaml
- type: lookup
  name: Log Level Classification
  location_path: ed://log_patterns.csv
  match_mode: regex
  regex_option: first
  key_fields:
    - event_field: body
      lookup_field: pattern
  out_fields:
    - event_field: attributes["severity"]
      lookup_field: severity
    - event_field: attributes["category"]
      lookup_field: category
```
A log with body "2024-01-15 ERROR Connection failed" gets enriched with severity: "critical" and category: "error".
Example: Using contain Match Mode
The contain match mode checks whether the lookup field value is contained within the event field value. Use it for substring matching.
Lookup Table (ed://services.csv):
```csv
service_keyword,team,oncall_channel
payment,payments-team,#payments-oncall
auth,identity-team,#identity-oncall
order,commerce-team,#commerce-oncall
```
Configuration:
```yaml
- type: lookup
  name: Service Team Lookup
  location_path: ed://services.csv
  match_mode: contain
  match_option: first
  key_fields:
    - event_field: attributes["service_name"]
      lookup_field: service_keyword
  out_fields:
    - event_field: attributes["owning_team"]
      lookup_field: team
    - event_field: attributes["oncall_channel"]
      lookup_field: oncall_channel
```
A data item with service_name: "payment-gateway-v2" matches because it contains "payment", and gets enriched with owning_team: "payments-team".
Example: Using suffix Match Mode
The suffix match mode checks whether the event field value ends with the lookup field value. Use it for file extensions, domain suffixes, and similar cases.
Lookup Table (ed://file_types.csv):
```csv
extension,file_type,handler
.log,log_file,log_processor
.json,json_file,json_parser
.csv,csv_file,csv_parser
```
Configuration:
```yaml
- type: lookup
  name: File Type Classification
  location_path: ed://file_types.csv
  match_mode: suffix
  match_option: first
  key_fields:
    - event_field: attributes["filename"]
      lookup_field: extension
  out_fields:
    - event_field: attributes["file_type"]
      lookup_field: file_type
    - event_field: attributes["handler"]
      lookup_field: handler
```
A data item with filename: "application.log" matches because it ends with ".log", and gets enriched with file_type: "log_file".
Example: Using match_option all with append_mode
When you need to collect all matching values (useful with contain, prefix, or suffix modes), use match_option: all with append_mode: true.
Lookup Table (ed://tags.csv):
```csv
keyword,tag
error,needs-review
critical,high-priority
payment,billing-team
auth,security-team
```
Configuration:
```yaml
- type: lookup
  name: Multi-Tag Lookup
  location_path: ed://tags.csv
  match_mode: contain
  match_option: all
  key_fields:
    - event_field: body
      lookup_field: keyword
  out_fields:
    - event_field: attributes["tags"]
      lookup_field: tag
      append_mode: true
```
A log with body "critical error in payment service" matches three rows and gets tags: "needs-review,high-priority,billing-team" (comma-separated).
Options
Select a telemetry type
You can specify log, metric, trace, or all. It is specified using the interface, which generates a YAML list item for you under the data_types parameter. This defines the data item types against which the processor operates. If data_types is not specified, the default value is all. It is optional.
It is defined in YAML as follows:
```yaml
- name: multiprocessor
  type: sequence
  processors:
    - type: <processor type>
      data_types:
        - log
```
condition
The condition parameter contains a conditional phrase of an OTTL statement. It restricts operation of the processor to only data items where the condition is met. Data items that do not match the condition are passed on without processing. You configure it in the interface and an OTTL condition is generated. It is optional.
Important: All conditions must be written on a single line in YAML. Multi-line conditions are not supported.
Comparison Operators
| Operator | Name | Description | Example |
|---|---|---|---|
| `==` | Equal to | Returns true if both values are exactly the same | `attributes["status"] == "OK"` |
| `!=` | Not equal to | Returns true if the values are not the same | `attributes["level"] != "debug"` |
| `>` | Greater than | Returns true if the left value is greater than the right | `attributes["duration_ms"] > 1000` |
| `>=` | Greater than or equal | Returns true if the left value is greater than or equal to the right | `attributes["score"] >= 90` |
| `<` | Less than | Returns true if the left value is less than the right | `attributes["load"] < 0.75` |
| `<=` | Less than or equal | Returns true if the left value is less than or equal to the right | `attributes["retries"] <= 3` |
| `matches` | Regex match | Returns true if the string matches a regular expression (generates the IsMatch function) | `IsMatch(attributes["name"], ".*\\.log$")` |
Logical Operators
Important: Use lowercase `and`, `or`, `not`; uppercase operators will cause errors.
| Operator | Description | Example |
|---|---|---|
| `and` | Both conditions must be true | `attributes["level"] == "ERROR" and attributes["status"] >= 500` |
| `or` | At least one condition must be true | `attributes["log_type"] == "TRAFFIC" or attributes["log_type"] == "THREAT"` |
| `not` | Negates the condition | `not regex_match(attributes["path"], "^/health")` |
Functions
| Function | Description | Example |
|---|---|---|
| `regex_match` | Returns true if the string matches the pattern | `regex_match(attributes["message"], "ERROR\|FATAL")` |
| `IsMatch` | Alternative regex function (the UI generates this from the "matches" operator) | `IsMatch(attributes["name"], ".*\\.log$")` |
Field Existence Checks
| Check | Description | Example |
|---|---|---|
| `!= nil` | Field exists (not null) | `attributes["user_id"] != nil` |
| `== nil` | Field doesn't exist | `attributes["optional_field"] == nil` |
| `!= ""` | Field is not an empty string | `attributes["message"] != ""` |
Common Examples
```yaml
- name: _multiprocessor
  type: sequence
  processors:
    - type: <processor type>
      # Simple equality check
      condition: attributes["request"]["path"] == "/json/view"
    - type: <processor type>
      # Multiple values with OR
      condition: attributes["log_type"] == "TRAFFIC" or attributes["log_type"] == "THREAT"
    - type: <processor type>
      # Excluding multiple values (NOT equal to multiple values)
      condition: attributes["log_type"] != "TRAFFIC" and attributes["log_type"] != "THREAT"
    - type: <processor type>
      # Complex condition with AND/OR/NOT
      condition: (attributes["level"] == "ERROR" or attributes["level"] == "FATAL") and attributes["env"] != "test"
    - type: <processor type>
      # Field existence and value check
      condition: attributes["user_id"] != nil and attributes["user_id"] != ""
    - type: <processor type>
      # Regex matching using regex_match
      condition: regex_match(attributes["path"], "^/api/") and not regex_match(attributes["path"], "^/api/health")
    - type: <processor type>
      # Regex matching using IsMatch
      condition: IsMatch(attributes["message"], "ERROR|WARNING") and attributes["env"] == "production"
```
Common Mistakes to Avoid
```yaml
# WRONG - Cannot use OR/AND with values directly
condition: attributes["log_type"] != "TRAFFIC" OR "THREAT"

# CORRECT - Must repeat the full comparison
condition: attributes["log_type"] != "TRAFFIC" and attributes["log_type"] != "THREAT"

# WRONG - Uppercase operators
condition: attributes["status"] == "error" AND attributes["level"] == "critical"

# CORRECT - Lowercase operators
condition: attributes["status"] == "error" and attributes["level"] == "critical"

# WRONG - Multi-line conditions
condition: |
  attributes["level"] == "ERROR" and
  attributes["status"] >= 500

# CORRECT - Single line (even if long)
condition: attributes["level"] == "ERROR" and attributes["status"] >= 500
```
Location
You define the location of the lookup table. You can specify a lookup table hosted in Edge Delta, a file on the cluster, or Other for a URL. If you select an Edge Delta lookup table, you select it from a list. If you select File, you enter the filename and path. For Other, you specify the URL.
The tool populates the location_path parameter in the YAML. This field is mandatory, and the format depends on the location type:
- `file://<path>`
- `ed://<file name in ED stored lookup>`
- `(http|https)://<URL to CSV>`
Reload Period
This option specifies how often the lookup table is reloaded. It is defined as a duration and defaults to 5 minutes if not specified. The tool populates the reload_period parameter in the YAML.
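For example, a minimal sketch (hypothetical table and column names) that reloads the table every 15 minutes:

```yaml
- type: lookup
  location_path: ed://service_owners.csv
  reload_period: 15m0s # re-fetch the table every 15 minutes
  key_fields:
    - event_field: attributes["service"]
      lookup_field: service
  out_fields:
    - event_field: attributes["owner"]
      lookup_field: owner
```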
Match mode
You can choose how to match the lookup key field. The tool populates the match_mode parameter with one of the following options:
- `exact` (default): Matches when the event field value exactly equals the lookup field value
- `regex`: Matches when the lookup field contains a regex pattern that matches the event field
- `contain`: Matches when the event field value contains the lookup field value as a substring
- `prefix`: Matches when the event field value starts with the lookup field value
- `suffix`: Matches when the event field value ends with the lookup field value
Note: The lookup processor enriches data items (logs, metrics, or traces) where the specified event_field exists and matches the lookup criteria. Data items without matching fields pass through to the next processor unmodified.
regex_option
The regex_option parameter controls how many matches are returned when using regex match mode.
| Value | Description |
|---|---|
| `first` | (Default) Stop after finding the first matching row |
| `all` | Return all matching rows. Use with `append_mode: true` on out_fields to collect multiple values |
match_option
The match_option parameter controls how many matches are returned when using contain, prefix, or suffix match modes.
| Value | Description |
|---|---|
| `first` | (Default) Stop after finding the first matching row |
| `all` | Return all matching rows. Use with `append_mode: true` on out_fields to collect multiple values |
Note: For regex match mode, use regex_option instead. The match_option parameter only applies to contain, prefix, and suffix modes.
ignore_case
When enabled, the lookup matching becomes case-insensitive. This option is available when using exact, contain, prefix, or suffix match modes (not available for regex mode).
| Value | Description |
|---|---|
| `false` | (Default) Case-sensitive matching |
| `true` | Case-insensitive matching |
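A minimal sketch (hypothetical table and field names) that matches service names regardless of case:

```yaml
- type: lookup
  name: Case-Insensitive Lookup
  location_path: ed://service_owners.csv
  match_mode: exact
  ignore_case: true # "payment", "Payment", and "PAYMENT" all match the same row
  key_fields:
    - event_field: attributes["service"]
      lookup_field: service
  out_fields:
    - event_field: attributes["owner"]
      lookup_field: owner
```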
Match Mode Examples
Consider a lookup table with a key field containing “ERROR-500”:
| Match Mode | Event Field Value | Matches? |
|---|---|---|
| exact | “ERROR-500” | Yes |
| exact | “ERROR-500 occurred” | No |
| contain | “ERROR-500 occurred in production” | Yes |
| prefix | “ERROR-500 internal server error” | Yes |
| prefix | “Server ERROR-500” | No |
| suffix | “Critical ERROR-500” | Yes |
| suffix | “ERROR-500 detected” | No |
key_fields
The key_fields are pairs that bind an event field to a lookup table column in order to find matches. For each data item, the processor extracts the event_field value and compares it against the values in the bound lookup_field column to find a match.
See how to use lookup tables for information on how the key_fields bind a data item field and a table field.
Multiple key_fields (compound matching)
You can specify multiple key_fields to create compound match conditions. When multiple keys are specified, all keys must match for a lookup row to be selected (AND logic).
The following example matches rows where both region AND service fields match:
```yaml
key_fields:
  - event_field: attributes["region"]
    lookup_field: Region
  - event_field: attributes["service"]
    lookup_field: Service
```
In this configuration, a lookup row is only matched if both the region AND service fields match their corresponding lookup table columns.
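For illustration, a hypothetical lookup table this configuration could match against:

```csv
Region,Service,Owner,Tier
us-east-1,checkout,team-a,gold
us-east-1,search,team-b,silver
eu-west-1,checkout,team-c,gold
```

A data item with region us-east-1 and service checkout matches only the first row; a data item with region us-east-1 and an unlisted service matches nothing and passes through unmodified.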
out_fields
The out_fields define mappings from lookup table to event attributes for enrichment upon successful matches.
Each out_fields entry is a binding pair: a new attribute is created based on the event_field, and its value is taken from the lookup_field, for all rows matched by the key_fields parameter.
See how to use lookup tables for information on how the out_fields bind a data item field and a table field.
default_value
The default_value parameter specifies a fallback value to use when no matching row is found in the lookup table. If omitted and no match occurs, the field is not added to the data item.
```yaml
out_fields:
  - event_field: attributes["timezone"]
    lookup_field: TimeZone
    default_value: UTC
```
append_mode
The append_mode parameter determines how multiple matched values are handled. When set to true, the processor concatenates all matching values with commas. Use this with regex_option: all or match_option: all to collect values from multiple matching rows.
| Value | Description |
|---|---|
| `false` | (Default) Use the value from the first matching row |
| `true` | Concatenate values from all matching rows, separated by commas |
```yaml
out_fields:
  - event_field: attributes["matched_hosts"]
    lookup_field: Host
    append_mode: true
```
Final
Determines whether successfully processed data items should continue through the remaining processors in the same processor stack. If final is set to true, data items output by this processor are not passed to subsequent processors within the node—they are instead emitted to downstream nodes in the pipeline (e.g., a destination). Failed items are always passed to the next processor, regardless of this setting.
The UI provides a slider to configure this setting. The default is false. It is defined in YAML as follows:
```yaml
- name: multiprocessor
  type: sequence
  processors:
    - type: <processor type>
      final: true
```
Troubleshooting
Understanding Processor Behavior
The lookup processor follows standard multiprocessor logic: it processes data items that match its criteria and passes all data items (both processed and unprocessed) to the next processor in the sequence. If you set final: true, only unmatched data items continue to the next processor. See Processors for details on processor chaining behavior.
Common Scenarios
No enrichment occurring
Possible causes:
- JSON not parsed: For JSON logs, ensure you have a Parse JSON processor before the lookup processor
- Field not in attributes: Verify the field specified in `event_field` exists in your parsed attributes
- No matching values: Check that your lookup table contains the values present in your logs
- Case sensitivity: For exact matches, ensure case matches exactly (use the `ignore_case` option if needed)
Lookup table issues
Possible causes:
- Invalid CSV format: Ensure your lookup table is properly formatted CSV with headers
- Location path: Verify the `location_path` format (e.g., `ed://filename.csv` for Edge Delta hosted tables)
- Reload period: The table refreshes based on `reload_period` (default 5 minutes)
Testing Tips
- Use Live Capture to verify fields are properly extracted and available in attributes
- Test with simple exact matches first before trying complex match modes
- Allow 1-2 minutes for log indexing when validating enrichment in search results
- Send test logs with known lookup values to verify configuration
- Check agent logs to confirm the lookup table is loaded successfully
See Also
- For an overview and to understand processor sequence flow, see Processors Overview
- To learn how to configure a processor, see Configure a Processor.
- For optimization strategies, see Best Practices for Edge Delta Processors.
- If you’re new to pipelines, start with the Pipeline Quickstart Overview or learn how to Configure a Pipeline.
- Looking to understand how processors interact with sources and destinations? Visit the Pipeline Overview.