Edge Delta Parse CSV Processor
The Parse CSV processor parses a CSV row field and a header field into a structured object that maps header names to values. The processor performs a conditional transformation based on whether the target field is already a map: if it is, the processor merges the parsed values into it; if it is not, the processor replaces the field entirely with the parsed object.
Note: The CSV data must be valid. For example, if the header and data rows have different numbers of fields, the processor will not parse the data.
Configuration
Consider this log:
{
  "_type": "log",
  "timestamp": 1745295299953,
  "body": "{\"csv\": \"XYZ789,user2,logout,failure\", \"csv_headers\": \"Session_ID,User_ID,Event_Type,Event_Details\"}",
  "resource": {
    ...
  },
  "attributes": {
    "csv": "XYZ789,user2,logout,failure",
    "csv_headers": "Session_ID,User_ID,Event_Type,Event_Details"
  }
}
The following configuration parses the header and CSV row attribute fields into a structured object:

This is the YAML version:
- name: Multi Processor_fa8d
  type: sequence
  processors:
  - type: ottl_transform
    metadata: '{"id":"a056ZYQrDcwFT1iLvwZWk","type":"parse-csv","name":"Parse CSV"}'
    statements: |-
      merge_maps(attributes["parsed"], ParseCSV(attributes["csv"], attributes["csv_headers"], ",", ",", "lazyQuotes"), "upsert") where IsMap(attributes["parsed"])
      set(attributes["parsed"], ParseCSV(attributes["csv"], attributes["csv_headers"], ",", ",", "lazyQuotes")) where not IsMap(attributes["parsed"])
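In the generated statements, the ParseCSV arguments correspond to the processor options described later on this page. A commented sketch of the call shape (spacing and comments added for readability only):

ParseCSV(
  attributes["csv"],          # Parse from: the field containing the CSV row
  attributes["csv_headers"],  # Header: the field (or static string) containing the headers
  ",",                        # Delimiter used between the CSV values
  ",",                        # Header Delimiter used between the header values
  "lazyQuotes"                # Mode: strict, lazyQuotes, or ignoreQuotes
)

The merge_maps call uses the "upsert" strategy, so when the target is already a map, matching keys are overwritten and new keys are added.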
The resulting log now contains the parsed attribute:
{
  "_type": "log",
  "timestamp": 1745295299953,
  "body": "{\"csv\": \"XYZ789,user2,logout,failure\", \"csv_headers\": \"Session_ID,User_ID,Event_Type,Event_Details\"}",
  "resource": {
    ...
  },
  "attributes": {
    "csv": "XYZ789,user2,logout,failure",
    "csv_headers": "Session_ID,User_ID,Event_Type,Event_Details",
    "parsed": {
      "Event_Details": "failure",
      "Event_Type": "logout",
      "Session_ID": "XYZ789",
      "User_ID": "user2"
    }
  }
}
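Because the first statement merges with the "upsert" strategy, a pre-existing parsed map is extended rather than replaced. For example, if attributes["parsed"] had already contained a hypothetical field such as:

"parsed": {
  "Region": "eu"
}

the result would keep Region alongside the four CSV fields, with any colliding keys overwritten by the newly parsed values.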
Options
Select a telemetry type
You can specify log, metric, trace, or all. It is specified using the interface, which generates a YAML list item for you under the data_types parameter. This defines the data item types against which the processor operates. If data_types is not specified, the default value is all. It is optional.
It is defined in YAML as follows:
- name: multiprocessor
  type: sequence
  processors:
  - type: <processor type>
    data_types:
    - log
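To operate on more than one telemetry type, list each type (an illustrative variant, not generated from the example above):

- name: multiprocessor
  type: sequence
  processors:
  - type: <processor type>
    data_types:
    - log
    - metric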
condition
The condition parameter contains a conditional phrase of an OTTL statement. It restricts operation of the processor to only data items where the condition is met. Data items that do not match the condition are passed on without processing. You configure it in the interface, and an OTTL condition is generated. It is optional.
Important: All conditions must be written on a single line in YAML. Multi-line conditions are not supported.
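For this processor, a typical guard is to attempt parsing only when the source fields exist. A hypothetical condition reusing the attribute names from the example log above:

- type: ottl_transform
  # Only attempt CSV parsing when both source fields are present
  condition: attributes["csv"] != nil and attributes["csv_headers"] != nil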
Comparison Operators
Operator | Name | Description | Example |
---|---|---|---|
`==` | Equal to | Returns true if both values are exactly the same | `attributes["status"] == "OK"` |
`!=` | Not equal to | Returns true if the values are not the same | `attributes["level"] != "debug"` |
`>` | Greater than | Returns true if the left value is greater than the right | `attributes["duration_ms"] > 1000` |
`>=` | Greater than or equal | Returns true if the left value is greater than or equal to the right | `attributes["score"] >= 90` |
`<` | Less than | Returns true if the left value is less than the right | `attributes["load"] < 0.75` |
`<=` | Less than or equal | Returns true if the left value is less than or equal to the right | `attributes["retries"] <= 3` |
`matches` | Regex match | Returns true if the string matches a regular expression (generates the IsMatch function) | `IsMatch(attributes["name"], ".*\\.log$")` |
Logical Operators
Important: Use lowercase `and`, `or`, `not`. Uppercase operators will cause errors!
Operator | Description | Example |
---|---|---|
`and` | Both conditions must be true | `attributes["level"] == "ERROR" and attributes["status"] >= 500` |
`or` | At least one condition must be true | `attributes["log_type"] == "TRAFFIC" or attributes["log_type"] == "THREAT"` |
`not` | Negates the condition | `not regex_match(attributes["path"], "^/health")` |
Functions
Function | Description | Example |
---|---|---|
`regex_match` | Returns true if the string matches the pattern | `regex_match(attributes["message"], "ERROR\|FATAL")` |
`IsMatch` | Alternative regex function (the UI generates this from the "matches" operator) | `IsMatch(attributes["name"], ".*\\.log$")` |
Field Existence Checks
Check | Description | Example |
---|---|---|
`!= nil` | Field exists (not null) | `attributes["user_id"] != nil` |
`== nil` | Field doesn't exist | `attributes["optional_field"] == nil` |
`!= ""` | Field is not an empty string | `attributes["message"] != ""` |
Common Examples
- name: _multiprocessor
  type: sequence
  processors:
  - type: <processor type>
    # Simple equality check
    condition: attributes["request"]["path"] == "/json/view"
  - type: <processor type>
    # Multiple values with OR
    condition: attributes["log_type"] == "TRAFFIC" or attributes["log_type"] == "THREAT"
  - type: <processor type>
    # Excluding multiple values (NOT equal to multiple values)
    condition: attributes["log_type"] != "TRAFFIC" and attributes["log_type"] != "THREAT"
  - type: <processor type>
    # Complex condition with AND/OR/NOT
    condition: (attributes["level"] == "ERROR" or attributes["level"] == "FATAL") and attributes["env"] != "test"
  - type: <processor type>
    # Field existence and value check
    condition: attributes["user_id"] != nil and attributes["user_id"] != ""
  - type: <processor type>
    # Regex matching using regex_match
    condition: regex_match(attributes["path"], "^/api/") and not regex_match(attributes["path"], "^/api/health")
  - type: <processor type>
    # Regex matching using IsMatch
    condition: IsMatch(attributes["message"], "ERROR|WARNING") and attributes["env"] == "production"
Common Mistakes to Avoid
# WRONG - Cannot use OR/AND with values directly
condition: attributes["log_type"] != "TRAFFIC" OR "THREAT"

# CORRECT - Must repeat the full comparison
condition: attributes["log_type"] != "TRAFFIC" and attributes["log_type"] != "THREAT"

# WRONG - Uppercase operators
condition: attributes["status"] == "error" AND attributes["level"] == "critical"

# CORRECT - Lowercase operators
condition: attributes["status"] == "error" and attributes["level"] == "critical"

# WRONG - Multi-line conditions
condition: |
  attributes["level"] == "ERROR" and
  attributes["status"] >= 500

# CORRECT - Single line (even if long)
condition: attributes["level"] == "ERROR" and attributes["status"] >= 500
Parse from
Specify the field containing the CSV data.
Assign to
Specify the field where you want the parsed object to be saved.
Header source
Choose between defining the headers in the configuration (Static string) or specifying a field in the data item that contains the headers (Path).
Header
Specify the field containing the headers if Path is selected as the Header source. Otherwise, specify the headers as a comma-separated string.
Delimiter
You can specify which character is used in the source data item to separate the CSV values. It uses a comma by default.
Header Delimiter
You can specify which character is used in the source data item to separate the Header values. It uses a comma by default.
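For example, with Static string selected as the Header source and semicolon-separated data, the generated statement would take roughly this shape (an illustrative sketch; the header names are reused from the example above):

set(attributes["parsed"], ParseCSV(attributes["csv"], "Session_ID;User_ID;Event_Type;Event_Details", ";", ";", "strict")) where not IsMap(attributes["parsed"])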
Mode
The mode option sets how strictly to interpret the CSV fields. You can choose from one of the following:
Option | Description |
---|---|
Strict | Enforces proper CSV format with matching quotes. |
Lazy quotes | Allows malformed or mismatched quotes; parses leniently. |
Ignore quotes | Treats quotes as normal characters; no special parsing. |
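To illustrate the difference with a hypothetical row: given the headers Name,Role and the data row "Smith, Jane",admin, both Strict and Lazy quotes treat the quoted field as a single value (Name = Smith, Jane and Role = admin). Ignore quotes treats the quote characters as ordinary text, so the embedded comma splits the row into three fields, which no longer match the two headers and the row fails to parse.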
Final
Determines whether successfully processed data items should continue through the remaining processors in the same processor stack. If final is set to true, data items output by this processor are not passed to subsequent processors within the node; they are instead emitted to downstream nodes in the pipeline (e.g., a destination). Failed items are always passed to the next processor, regardless of this setting.
The UI provides a slider to configure this setting. The default is false. It is defined in YAML as follows:
- name: multiprocessor
  type: sequence
  processors:
  - type: <processor type>
    final: true
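As a sketch of the resulting behavior (processor types are placeholders): in the stack below, items successfully processed by the first processor skip the second processor entirely, while items that fail in the first processor still reach it.

- name: multiprocessor
  type: sequence
  processors:
  - type: <processor type>
    final: true
  - type: <processor type> # receives only items the first processor did not successfully process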
Keep original telemetry item
Controls whether the original, unmodified telemetry item is preserved after processing. If keep_item is set to true, the processor emits both:
- The original telemetry item (e.g., a log), and
- Any new item generated by the processor (e.g., a metric extracted from the log)

Both items are passed to the next processor in the stack unless final is also set.
Interaction with final
If final: true is enabled, any successfully processed data items, whether original, newly created, or both, exit the processor stack or node immediately. No subsequent processors within the same node are evaluated, although downstream processing elsewhere in the pipeline continues. This means:
- If keep_item: true and final: true, both the original and processed items bypass the remaining processors in the current node and are forwarded to downstream nodes (such as destinations).
- If keep_item: false and final: true, only the processed item continues beyond this processor, skipping subsequent processors in the stack, and the original item is discarded.
Note: If the data item fails to be processed, final has no effect; the item continues through the remaining processors in the node regardless of the keep_item setting.
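For successfully processed items, the combinations can be summarized as follows:

keep_item | final | Result |
---|---|---|
true | true | Original and processed items both skip the remaining processors in the node |
true | false | Original and processed items both continue to the next processor in the stack |
false | true | Only the processed item continues, skipping the remaining processors in the node |
false | false | Only the processed item continues to the next processor in the stack |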
The app provides a slider to configure keep_item. The default is false.
- name: ed_gateway_output_a3fa_multiprocessor
  type: sequence
  processors:
  - type: <processor_type>
    keep_item: true
    final: true
See Also
- For an overview and to understand processor sequence flow, see Processors Overview
- To learn how to configure a processor, see Configure a Processor.
- For optimization strategies, see Best Practices for Edge Delta Processors.
- If you’re new to pipelines, start with the Pipeline Quickstart Overview or learn how to Configure a Pipeline.
- Looking to understand how processors interact with sources and destinations? Visit the Pipeline Overview.