Edge Delta Parse Regex Processor

The Edge Delta parse regex processor parses a field into structured data based on a regex pattern.

Overview

The processor performs a conditional transformation based on whether the attributes field is a map. If attributes is already a map, the processor merges the parsed body into it. If it is not a map, the processor replaces it entirely with the parsed body.
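As a rough sketch (in Python, not Edge Delta's actual implementation), the branch behaves like this, where `parsed` stands in for the named captures returned by ExtractPatterns:

```python
def apply_parsed(attributes, parsed):
    """Illustrative sketch of the processor's conditional transformation."""
    if isinstance(attributes, dict):   # IsMap(attributes): merge with upsert
        merged = dict(attributes)
        merged.update(parsed)          # parsed values win on key conflicts
        return merged
    return parsed                      # not a map: replace attributes entirely
```

The "upsert" merge means existing keys in attributes are overwritten when the parsed body contains the same key.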

For detailed instructions on how to use multiprocessors, see Use Multiprocessors.

Configuration

Consider this log body:

Item info: 45678-AB-xyz

The following configuration parses it using a pattern with named capture groups:
- name: Multi Processor_fa8d
  type: sequence
  processors:
  - type: ottl_transform
    metadata: '{"id":"I8VTOEkTHhWVyYLdu2eMj","type":"parse-regex","name":"Parse Regex"}'
    statements: |-
      merge_maps(attributes, ExtractPatterns(body, "(?P<sample>\\d+)-(?P<batch>[A-Z]{2})-(?P<code>[a-z]+)"), "upsert") where IsMap(attributes)
      set(attributes, ExtractPatterns(body, "(?P<sample>\\d+)-(?P<batch>[A-Z]{2})-(?P<code>[a-z]+)")) where not IsMap(attributes)      

The output data item contains the fields parsed from the body using the named captures:

{
  "_type": "log",
  "timestamp": 1745303825609,
  "body": "Item info: 45678-AB-xyz",
  "resource": {
...
  },
  "attributes": {
    "batch": "AB",
    "code": "xyz",
    "sample": "45678"
  }
}
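The extraction itself can be reproduced with an ordinary regex engine. This Python analog of ExtractPatterns (an illustration, not the agent's code) yields the same attributes as the output above:

```python
import re

# Same pattern as in the OTTL statement, written as a raw string so the
# backslashes need no extra escaping.
pattern = r"(?P<sample>\d+)-(?P<batch>[A-Z]{2})-(?P<code>[a-z]+)"

match = re.search(pattern, "Item info: 45678-AB-xyz")
print(match.groupdict())  # {'sample': '45678', 'batch': 'AB', 'code': 'xyz'}
```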

Options

Select a telemetry type

You can specify log, metric, trace, or all. You select the type in the interface, which generates a YAML list item for you under the data_types parameter. This parameter defines the data item types the processor operates on. It is optional; if data_types is not specified, the default value is all.

It is defined in YAML as follows:

- name: multiprocessor
  type: sequence
  processors:
  - type: <processor type>
    data_types:
    - log

condition

The condition parameter contains a conditional phrase of an OTTL statement. It restricts the processor to data items that match the condition; data items that do not match are passed through without processing. You configure it in the interface, and an OTTL condition is generated for you. It is optional. You can select one of the following operators:

| Operator | Name | Description | Example |
|----------|------|-------------|---------|
| == | Equal to | Returns true if both values are exactly the same | attributes["status"] == "OK" |
| != | Not equal to | Returns true if the values are not the same | attributes["level"] != "debug" |
| > | Greater than | Returns true if the left value is greater than the right | attributes["duration_ms"] > 1000 |
| >= | Greater than or equal | Returns true if the left value is greater than or equal to the right | attributes["score"] >= 90 |
| < | Less than | Returns true if the left value is less than the right | attributes["load"] < 0.75 |
| <= | Less than or equal | Returns true if the left value is less than or equal to the right | attributes["retries"] <= 3 |
| matches | Regex match | Returns true if the string matches a regular expression | IsMatch(attributes["name"], ".*\\.name$") |
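These checks correspond to ordinary comparisons. A Python analog (illustrative only; OTTL evaluates conditions inside the agent) of a few operators from the table:

```python
import re

attributes = {"status": "OK", "duration_ms": 1500, "name": "service.name"}

print(attributes["status"] == "OK")                      # == operator
print(attributes["duration_ms"] > 1000)                  # > operator
print(bool(re.match(r".*\.name$", attributes["name"])))  # regex-match analog
```

All three checks print True for this sample data item.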

It is defined in YAML as follows:

- name: _multiprocessor
  type: sequence
  processors:
  - type: <processor type>
    condition: attributes["request"]["path"] == "/json/view"

Parse from

Specify the field containing the source data.

Assign to

Specify the field where you want the parsed object to be saved.

Regex pattern

Specify the pattern used to match the value. The named capture groups in the pattern become the field names.

Note the escaping the tool adds to the pattern: for example, (?P<sample>\d+) becomes (?P<sample>\\d+).
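The doubled backslash exists only in the quoted source text; the pattern the regex engine actually sees has a single backslash. Python string literals behave the same way, as this small sketch shows:

```python
# What appears in the quoted YAML/OTTL source vs. the actual pattern text.
escaped_literal = "(?P<sample>\\d+)"  # escaped form, as written in the config
raw_form = r"(?P<sample>\d+)"         # the pattern the regex engine receives
print(escaped_literal == raw_form)    # True
```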

Final

The final parameter specifies whether successfully processed data items should continue to subsequent processors within the same multiprocessor node. Data items that fail processing are passed to the next processor in the node regardless of this setting. You set it with the slider in the tool, which writes it to the YAML as a Boolean. It is optional, and the default is false.

It is defined in YAML as follows:

- name: multiprocessor
  type: sequence
  processors:
  - type: <processor type>
    final: true

Keep original telemetry item

This option controls whether the original, unmodified data item is retained after it is processed. For example, you can keep the original log as well as any metrics generated by an extract metric processor. Keeping the original increases your data volume.