Regex Processors
Processors Recap

You can configure a processor to perform log-to-metric conversions on incoming raw log data. Once configured, the processor populates the Anomalies and Insights pages as well as the Metrics view. See Processors Overview for more information about processors. Edge Delta has a number of processor types, one of which is the regex processor.


Regex Processors

A regex processor uses a Golang regex pattern to match data in log events. Each processor handles the matches it detects using its configured logic, for example, to calculate averages or to ignore out-of-threshold matches. The default enabled_stats are count and anomaly1 for occurrence captures; for numeric captures, they are count, min, max, avg, and anomaly1. You can create different types of regex processors by using different regex patterns:

  • Simple Keyword - checks log event streams for basic regex matches to generate count metrics.
  • Dimension Counter - checks log event streams for regex in named capture groups to generate count metrics grouped by dimension.
  • Numeric Capture - checks log event streams for numeric regex in capture groups to generate count metrics for events that carry performance indicators.
  • Dimension Numeric Capture - checks log event streams for specific numerical fields, such as latency, per unique dimension, such as api_path, to generate count metrics.

To manage a regex processor, you configure the agent yaml with processor parameters in a regexes section.

processors:
   regexes:
    - name: <processor_name>
      pattern: <regex_pattern>
      <optional_parameter>: <parameter_value>

See Regex Processor Parameters for the list of all regex processor parameters, and see Create and Manage a Processor for more information on how to configure the agent yaml.


Regex Processor Examples

The following regex processors are configured by default. Each processor generates metrics (such as counts) for log entries matching a particular keyword, such as "error", using the pattern parameter. Each processor excludes any results that do not meet a 95% certainty threshold that the event is an anomaly, configured with the anomaly_probability_percentage trigger. To determine the anomaly certainty score, each processor looks back at the previous 12 hours, set with the retention parameter:

processors:
  regexes:
    - name: error-monitoring
      pattern: (?i)error
      trigger_thresholds:
        anomaly_probability_percentage: 95
      retention: 12h0m0s
    - name: exception-monitoring
      pattern: (?i)exception
      trigger_thresholds:
        anomaly_probability_percentage: 95
      retention: 12h0m0s
    - name: failure-monitoring
      pattern: (?i)fail
      trigger_thresholds:
        anomaly_probability_percentage: 95
      retention: 12h0m0s
    - name: negative-sentiment-monitoring
      pattern: (?i)(exception|fail|timeout|broken|caught|denied|abort|insufficient|killed|killing|malformed|unsuccessful|outofmemory|panic|undefined)
      trigger_thresholds:
        anomaly_probability_percentage: 95
      retention: 12h0m0s
      

As an example of output, the first processor will generate the following metrics:
error_monitoring.count
error_monitoring.anomaly1
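The occurrence counting behind the .count metric can be sketched in Python. This is an illustrative sketch only, not the agent's implementation, and the log lines are invented:

```python
import re

# Hypothetical sketch: count log lines matching the default
# error-monitoring pattern to produce one .count value per interval.
pattern = re.compile(r"(?i)error")  # (?i) makes the match case-insensitive

logs = [
    "2023-01-01 INFO request served",
    "2023-01-01 ERROR upstream timed out",
    "2023-01-01 WARN retrying",
    "2023-01-01 Error: connection reset",
]

# The occurrence metric for the interval is the number of matching lines.
error_monitoring_count = sum(1 for line in logs if pattern.search(line))
print(error_monitoring_count)  # 2
```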

The following examples illustrate other processor configurations you can add.

Simple Keyword Match

In this example there are two processors. The first, named error-regex, matches variations of the word error. It examines a rolling previous 4-hour period, specified with the retention parameter, and rolls up statistics into 2-minute reporting intervals, configured using the interval parameter. If the upper_limit_per_interval threshold is met for five consecutive intervals, an alert is triggered. This behavior is defined with the consecutive trigger. This processor will not expose metrics for Prometheus on a metrics endpoint, even if rule_metrics_prom_stats_enabled is set to true in the agent_settings section. The second processor, named severity_high, will match and count messages only where the logic configured in the filter named extract_severity allows. This is configured using the filters parameter.

processors:
  regexes:
    - name: "error-regex"
      pattern: "error|ERROR|problem|ERR|Err"
      interval: 2m
      retention: 4h
      anomaly_confidence_period: 1h
      anomaly_tolerance: 0.2
      only_report_nonzeros: true
      description: "Counts of messages including error per 2 minutes."
      trigger_thresholds:
        upper_limit_per_interval: 250 
        consecutive: 5 
      disable_reporting_in_prometheus: true
    - name: "severity_high" 
      pattern: "HIGH|high"
      filters:
        - extract_severity 

As an example of output, the first processor will generate the following metrics:
error_regex.count
error_regex.anomaly1
The second processor will generate the following metrics:
severity_high.count
severity_high.anomaly1

Dimension Counter

In the following example, there are three processors.

The first processor, named http-method matches a capture group called "method" (in the pattern parameter). In turn, the "method" dimension configures the processor to count instances of each HTTP method.

The count and anomalymin stats are configured using enabled_stats. The anomalymin metric reduces alert noise by taking a min of anomaly1, which is emitted by default, and anomaly2.

Seeing method events is normal so a trigger_threshold is configured to detect if there are fewer than two for the past interval. This is configured with the lower_limit_per_interval parameter. The filters parameter configures the http-method processor to only search for matches using the logic configured in the filter named - info rather than the whole log.

The second processor is named http-single. It is also a dimension counter but the dimensions_as_attributes parameter is set to true, causing it to pivot the metrics by serving the dimensions as attributes.

The third processor is named http-group. It is also a dimension counter with the dimensions_as_attributes parameter set to true, but it includes the dimensions_groups parameter to group attributes for metrics.

processors:
  regexes:
    - name: "http-method"
      pattern: "] \"(?P<method>\\w+)"
      dimensions: ["method"]
      enabled_stats: ["count", "anomalymin"]
      trigger_thresholds:
        lower_limit_per_interval: 2 
      filters:
        - info
    - name: "http-single"
      pattern: "] \"(?P<method>\\w+) (?P<uri>\\S*) (?P<httpversion>\\S*)\" (?P<code>\\d+)"
      dimensions: ["method", "httpversion", "code"]
      dimensions_as_attributes: true
    - name: "http-group"
      pattern: "] \"(?P<method>\\w+) (?P<httpversion>\\S*)\" (?P<code>\\d+)"
      dimensions: ["method", "httpversion", "code"]
      dimensions_as_attributes: true
      dimensions_groups:
        - selected_dimensions: ["method", "code"]
        - selected_dimensions: ["method", "httpversion"]

The first processor will generate an occurrence count and anomalymin for each HTTP method:
http_method_get.count
http_method_get.anomalymin
http_method_post.count
http_method_post.anomalymin etc.
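The per-method occurrence counting and metric naming above can be sketched in Python. This is an illustrative sketch, not the agent's implementation; the log lines are invented:

```python
import re
from collections import Counter

# Hypothetical sketch: count occurrences per value of the "method" capture
# group and build the per-dimension metric names shown above (dimension
# values appear lowercased in the metric name).
pattern = re.compile(r'] "(?P<method>\w+)')

logs = [
    '[ts] "GET /a HTTP/1.1" 200',
    '[ts] "POST /b HTTP/1.1" 201',
    '[ts] "GET /c HTTP/1.1" 200',
]

counts = Counter(m.group("method").lower()
                 for m in map(pattern.search, logs) if m)
for value, n in sorted(counts.items()):
    print(f"http_method_{value}.count = {n}")
```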

The second processor will generate metrics with dimensions as attributes such as this example:

http.count     1   {method="get"}
http.anomaly1  25  {method="get"} 
http.count     1   {method="post"}
http.anomaly1  25  {method="post"} 
http.count     2   {httpversion="1.1"}
http.anomaly1  25  {httpversion="1.1"}
http.count     2   {code="200"} 
http.anomaly1  25  {code="200"}

The third processor will generate metrics with grouped dimension attributes such as this example:

http_group.count      1     {method="get", code="200"}
http_group.anomaly1   25    {method="get", code="200"}
http_group.count      1     {method="post", code="200"}
http_group.anomaly1   25    {method="post", code="200"}
http_group.count      1     {method="get", httpversion="1.1"}
http_group.anomaly1   25    {method="get", httpversion="1.1"}
http_group.count      1     {method="post", httpversion="1.1"}
http_group.anomaly1   25    {method="post", httpversion="1.1"}
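The grouped-attribute rollup above can be sketched in Python. This is an illustrative sketch under the assumption that each selected_dimensions entry produces one attribute combination per matching event; it is not the agent's implementation:

```python
import re
from collections import Counter

# Hypothetical sketch of dimensions_groups: each group of selected
# dimensions yields its own attribute combination and count.
pattern = re.compile(r'] "(?P<method>\w+) (?P<httpversion>\S*)" (?P<code>\d+)')
dimensions_groups = [["method", "code"], ["method", "httpversion"]]

logs = [
    '127.0.0.1 - - [ts] "GET HTTP/1.1" 200',
    '127.0.0.1 - - [ts] "POST HTTP/1.1" 200',
]

counts = Counter()
for line in logs:
    m = pattern.search(line)
    if not m:
        continue
    for group in dimensions_groups:
        key = tuple((dim, m.group(dim)) for dim in group)
        counts[key] += 1

for key, n in sorted(counts.items()):
    attrs = ", ".join(f'{d}="{v}"' for d, v in key)
    print(f"http_group.count  {n}  {{{attrs}}}")
```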

Numeric Capture

In the following example there are two processors.

The first processor, named flog matches to messages with a status code and it reports their response sizes. After getting the response size, the processor divides it by 1000 before reporting the metric. This logic is specified using the value_adjustment_rules parameter.

The anomaly probability percentage trigger_threshold is set to a very low 1. This means that most events will be matched even if they are unlikely to be anomalies.

The second processor, named http-response-size, is a simpler version with a single unnamed capture group in its pattern. It also has a very low anomaly probability percentage.

processors:
  regexes:
    - name: "flog"  
      pattern: " (?P<statuscode>\\d+) (?P<responsesize>\\d+)$" 
      value_adjustment_rules:
        responsesize:
          operator: "/"
          operand: 1000.0 
      trigger_thresholds:
        anomaly_probability_percentage: 1
    - name: "http-response-size"  
      pattern: " (\\d+)$"
      trigger_thresholds:
        anomaly_probability_percentage: 1

The first processor will generate the following metrics
flog_statuscode_responsesize.count
flog_statuscode_responsesize.min
flog_statuscode_responsesize.max
flog_statuscode_responsesize.avg
flog_statuscode_responsesize.anomaly1

The second processor will generate the following metrics
http_response_size.count
http_response_size.min
http_response_size.max
http_response_size.avg
http_response_size.anomaly1
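The numeric capture and value adjustment above can be sketched in Python. This is an illustrative sketch only (invented log lines, simplified stat calculation), not the agent's implementation:

```python
import re

# Hypothetical sketch: extract the numeric responsesize capture, apply the
# value_adjustment_rules division by 1000, then compute the default numeric
# stats (count, min, max, avg) over one interval.
pattern = re.compile(r" (?P<statuscode>\d+) (?P<responsesize>\d+)$")

logs = [
    'GET /index "-" 200 2048',
    'GET /missing "-" 404 512',
    'GET /index "-" 200 4096',
]

sizes = []
for line in logs:
    m = pattern.search(line)
    if m:
        # value_adjustment_rules: operator "/", operand 1000.0
        sizes.append(int(m.group("responsesize")) / 1000.0)

stats = {
    "count": len(sizes),
    "min": min(sizes),
    "max": max(sizes),
    "avg": sum(sizes) / len(sizes),
}
print(stats)
```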

Dimension Numeric Capture

The following example contains a processor named http-request-latencies. It will generate numeric statistics for each HTTP method.

The interval is specified as 1m so it will capture values for 1 minute before calculating metrics. The retention parameter is set to 1 hour, lower than the default 3 hours. This will make the processor more sensitive to spikes in metric values.

Intervals with no events will be excluded from calculations such as average and standard deviation. This is configured with skip_empty_intervals. The anomaly probability percentage is set to a very low 1 to match most events even if they are unlikely to be anomalies.

processors:
  regexes:
    - name: "http-request-latencies"
      pattern: "] \"(?P<method>\\w+) took (?P<latency>\\d+) ms"
      dimensions: ["method"]
      interval: 1m
      retention: 1h
      skip_empty_intervals: true
      trigger_thresholds:
        anomaly_probability_percentage: 1

In this example, suppose the following logs were fed into the processor:

"GetAlbums took 12 ms"
"GetRecords took 16 ms"

Metrics are displayed in the following format:
{processor_name}_{dimension_name}_{dimension_value}_{numeric_capture_group_name}.{stat_type}
The agent will generate the following metrics:

  • http_request_latencies_method_getalbums_latency.count
  • http_request_latencies_method_getalbums_latency.avg
  • http_request_latencies_method_getalbums_latency.min
  • http_request_latencies_method_getalbums_latency.max
  • http_request_latencies_method_getalbums_latency.anomaly1
  • http_request_latencies_method_getrecords_latency.count
  • http_request_latencies_method_getrecords_latency.avg
  • http_request_latencies_method_getrecords_latency.min
  • http_request_latencies_method_getrecords_latency.max
  • http_request_latencies_method_getrecords_latency.anomaly1

For each distinct dimension (getalbums and getrecords), numeric statistics are calculated and reported with a metric name that contains the dimension.
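The metric-name construction above can be sketched in Python. This is an illustrative sketch of the naming format only, not the agent's implementation; the helper function is hypothetical:

```python
import re

# Hypothetical sketch of the metric-name format:
# {processor_name}_{dimension_name}_{dimension_value}_{capture_group}.{stat}
pattern = re.compile(r"(?P<method>\w+) took (?P<latency>\d+) ?ms")

def metric_names(processor, dim, dim_value, numeric_group, stats):
    # Dimension values are lowercased when embedded in the metric name.
    base = f"{processor}_{dim}_{dim_value.lower()}_{numeric_group}"
    return [f"{base}.{stat}" for stat in stats]

m = pattern.search("GetAlbums took 12 ms")
names = metric_names("http_request_latencies", "method", m.group("method"),
                     "latency", ["count", "avg", "min", "max", "anomaly1"])
print(names[0])  # http_request_latencies_method_getalbums_latency.count
```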

If the example had dimensions_as_attributes: true, then the metric name would not have been altered for each dimension value. Instead, the dimension value would be added as an attribute. In this case, the following metrics would have been generated:

name: http_latency.count, attributes: {method="GetAlbums"}
name: http_latency.avg, attributes: {method="GetAlbums"}
name: http_latency.min, attributes: {method="GetAlbums"}
name: http_latency.max, attributes: {method="GetAlbums"}
name: http_latency.anomaly1, attributes: {method="GetAlbums"}
name: http_latency.count, attributes: {method="GetRecords"}
name: http_latency.avg, attributes: {method="GetRecords"}
name: http_latency.min, attributes: {method="GetRecords"}
name: http_latency.max, attributes: {method="GetRecords"}
name: http_latency.anomaly1, attributes: {method="GetRecords"}

Dimension Numeric Capture with dimensions_groups

The following example configures a regex processor named apidata. It will generate numeric duration averages for the ostype and service dimensions. It outputs dimensions as attributes and uses dimensions_groups to group the service and ostype attributes. For dimensions_groups, the numeric dimension must be specified, and there can be only one numeric dimension per regex processor.

processors:
  regexes:
    - name: apidata
      pattern: ostype=(?P<ostype>\w+).+?service=(?P<service>.+?)\sduration=(?P<duration>\d+)
      dimensions: ['ostype','service']
      numeric_dimension: "duration"
      dimensions_as_attributes: true
      enabled_stats: ["avg"]
      dimensions_groups:
        - selected_dimensions: ["service","ostype"] 

If this example configuration was fed the following log, the metrics generated would be as follows:

2022-08-20 08:21:14.288134 response=201 loglevel=INFO ostype=Unix service=one-packaging-ui source=syslog-test duration=41 svcTime=59128524

Metric:

apidata.avg   41  {"service":"one-packaging-ui source=syslog-test", "ostype":"Unix"}
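The grouped average above can be sketched in Python. This is an illustrative sketch, not the agent's implementation; note how the lazy service capture also swallows "source=syslog-test", which is why that text appears in the attribute value:

```python
import re
from collections import defaultdict

# Hypothetical sketch: group the numeric_dimension (duration) by the
# selected_dimensions (service, ostype) and report only avg, per
# enabled_stats.
pattern = re.compile(
    r"ostype=(?P<ostype>\w+).+?service=(?P<service>.+?)\sduration=(?P<duration>\d+)"
)

log = ("2022-08-20 08:21:14.288134 response=201 loglevel=INFO ostype=Unix "
       "service=one-packaging-ui source=syslog-test duration=41 svcTime=59128524")

groups = defaultdict(list)
m = pattern.search(log)
if m:
    key = (m.group("service"), m.group("ostype"))
    groups[key].append(int(m.group("duration")))

for (service, ostype), values in groups.items():
    avg = sum(values) / len(values)
    print(f'apidata.avg  {avg:g}  {{"service":"{service}", "ostype":"{ostype}"}}')
```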

Dimensions as Attributes exposed to Prometheus

The following example configures a regex processor to expose metrics with dimensions as attributes in the Prometheus format. See the Prometheus Integration guide for more information.

processors:
  regexes:
    - name: http
      pattern: "http request to (?P<destination>\\S*) method (?P<method>\\w+) status code (?P<code>\\d+)"
      dimensions: ["destination", "method", "code"]
      dimensions_as_attributes: true

This configuration will expose the following metrics to Prometheus:

http_destination
http_method
http_code

Prometheus can also scrape regex processors with multiple dimensions:

processors:
  regexes:
    - name: http
      pattern: "http request to (?P<destination>\\S*) method (?P<method>\\w+) status code (?P<code>\\d+)"
      dimensions: ["destination", "method", "code"]
      dimensions_as_attributes: true
      dimensions_groups: 
        - selected_dimensions: ["code", "destination"] 
        - selected_dimensions: ["code", "method"]

This configuration will expose the following metrics to Prometheus:

http_code_destination
http_code_method

Regex Processor Parameters

The following parameters can be used in a regex processor.

Note:

See the compatibility table to find out which parameters won't work with certain regex processor types.

Required Parameters

name (required)

The name parameter specifies a name for the processor. You refer to this name elsewhere, for example, to refer to a specific processor in a workflow. Names must be unique within the processors section. It is a yaml list element, so it begins with a - and a space followed by the string. A name is a required parameter for a regex processor.

processors:
  regexes:
    - name: <processor-name>

See the example implementation in a dimension counter processor.

pattern (required)

The pattern parameter specifies a Golang regex pattern that the regex processor will look for. It is a string that should be wrapped in quotes to handle escapes. A pattern is a required parameter for a regex processor.

processors:
  regexes:
    - name: <processor_name>
      pattern: "<regex_pattern>"

See the example implementation in a dimension counter processor.



Note
If you wrap a regex pattern in quotes in a YAML file, then you must escape each backslash with a second backslash.

In the following example, both patterns are the same:

  • pattern: \d{9}
  • pattern: "\\d{9}"
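The same escaping rule applies in any language with string escapes. As an analogy, the Python sketch below shows that a doubled-backslash string literal and a raw pattern compile to the same regex (this mirrors the YAML behavior, it is not Edge Delta code):

```python
import re

# "\\d{9}" in a quoted string literal and r"\d{9}" as a raw pattern both
# yield the regex \d{9}, just as the two YAML forms above are equivalent.
quoted = re.compile("\\d{9}")
plain = re.compile(r"\d{9}")

print(quoted.pattern == plain.pattern)        # True
print(bool(plain.fullmatch("123456789")))     # True
```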


Optional Parameters

anomaly_confidence_period

The anomaly_confidence_period parameter specifies a grace period after a processor starts, during which anomaly scores are not calculated. A processor starts when the agent is new or when the agent restarts after its configuration is updated. Anomaly scores are zero while baselines are established. The default value is 30m. It is specified in the Golang duration format.

processors:
  regexes:
    - name: <processor_name>
      pattern: <regex_pattern>
      anomaly_confidence_period: <duration>

See the example implementation in a simple keyword match processor.

anomaly_tolerance

The anomaly_tolerance parameter controls how anomaly scores handle edge cases where the standard deviation is small. When the tolerance is non-zero, anomaly scores better handle these cases. The default value is 0.01. It is specified as a floating point number.

processors:
  regexes:
    - name: <name>
      pattern: <regex_pattern>
      anomaly_tolerance: <tolerance>

See the example implementation in a simple keyword match processor.

description

The description parameter describes a processor's function. It is specified as a string value.

processors:
  regexes:
    - name: <processor_name>
      pattern: <regex_pattern>
      description: <processor_description>

See the example implementation in a simple keyword match processor.

dimensions

The dimensions parameter specifies one or more capture groups from the pattern parameter to use as a dynamic dimension for grouping results. A numeric capture processor can’t use the dimensions parameter. It is specified as an array of strings.

processors:
  regexes:
    - name: <processor_name>
      pattern: <regex_pattern>
      dimensions: ["<capture_group1>", "<capture_group2>"]

See the example implementations for dimension numeric capture and dimension counter processors.

dimensions_as_attributes

The dimensions_as_attributes parameter determines how to send dimension key-value pairs. Using true sends them as attributes, while false appends them to the metric name. It is specified as a boolean value with true/on/yes or false/off/no.

processors:
  regexes:
    - name: <processor_name>
      pattern: <regex_pattern>
      dimensions: <capture_group>
      dimensions_as_attributes: <boolean_value>

See the example implementations for dimension numeric capture and dimension counter processors.

dimensions_groups

The dimensions_groups parameter specifies which dimension key-value pairs to group together. You can only use this parameter if dimensions_as_attributes is set to true. Dimension groups are specified as list elements, each with an array of dimension name strings. If you use it in a Dimension Numeric Capture type regex processor, only one numeric dimension can be defined per processor.

processors:
  regexes:
    - name: <processor_name>
      pattern: <regex_pattern>
      dimensions: ["<dimension1>", "<dimension2>", "<dimension3>"]
      numeric_dimension: "<dimension3>"
      dimensions_as_attributes: true
      dimensions_groups:
        - selected_dimensions: ["<dimension1>", "<dimension2>"]
Note:

Dimension values must not contain regex special characters such as | or . for a grouped dimension regex processor to work in the aggregator agent. This limitation is due to the reverse extraction of dimension values from the metric name's string representation in an aggregator agent, e.g. http_group_method_get_code_200.

See the example implementation for a dimension counter processor.

disable_reporting_in_prometheus

The disable_reporting_in_prometheus parameter disables reporting to Prometheus for a specific processor, even if rule_metrics_prom_stats_enabled is set to true in the agent_settings section. The parameter is set with a boolean true or false. It is an optional parameter.

processors:
  regexes:
    - name: <processor_name>
      pattern: <regex_pattern>
      disable_reporting_in_prometheus: <true|false>

enabled_stats

The enabled_stats parameter defines what data to generate when a regex rule finds a match. The following stats are available:

  • count - the number of instances matched
  • min - the smallest matching value
  • max - the largest matching value
  • avg - the average (mean) matching value
  • anomaly1 - Edge Delta anomaly score 1
  • anomaly2 - Edge Delta anomaly score 2
  • anomalymin - the min of anomaly1 and anomaly2 to reduce alert noise.

The default enabled_stats are count and anomaly1 for occurrence captures; and count, min, max, avg, and anomaly1 for numeric captures. You specify enabled_stats as an array of strings.

processors:
  regexes:
    - name: <processor_name>
      pattern: <regex_pattern>
      enabled_stats: ["<stat_name>", "<stat_name>"]
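Filtering the emitted stats down to the configured list can be sketched in Python. This is a hypothetical illustration; in particular, the anomaly scores here are placeholder inputs, since the agent computes its own scores:

```python
# Hypothetical sketch: derive only the stats listed in enabled_stats from one
# interval's numeric matches. anomaly1/anomaly2 are placeholders; anomalymin
# is the min of the two, which reduces alert noise.
def compute_stats(values, enabled_stats, anomaly1=0.0, anomaly2=0.0):
    all_stats = {
        "count": len(values),
        "min": min(values) if values else 0,
        "max": max(values) if values else 0,
        "avg": sum(values) / len(values) if values else 0,
        "anomaly1": anomaly1,
        "anomaly2": anomaly2,
        "anomalymin": min(anomaly1, anomaly2),
    }
    return {name: all_stats[name] for name in enabled_stats}

print(compute_stats([3, 5, 10], ["count", "anomalymin"],
                    anomaly1=40, anomaly2=15))
# {'count': 3, 'anomalymin': 15}
```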

filters

The filters parameter refers to a defined filter that has been configured in the filters section of the agent yaml. The filter contains logic that defines where in the log to apply the processor. All other data is ignored by the processor. You can use a filter to prevent the processor from processing portions of a log that contain sensitive data. Filters are a yaml list element so they begin with a - and a space. They are defined with a string that matches a filter name.

processors:
  regexes:
    - name: <processor_name>
      pattern: <regex_pattern>
      filters:
        - <filter_reference>

See the example implementation in a simple keyword match processor.

interval

The interval parameter specifies the reporting interval for the statistics that a regex processor will generate. A processor will collect values for the duration of the interval before calculating metrics such as the average. The default is 1 minute. It is specified in the Golang duration format.

processors:
  regexes:
    - name: <processor_name>
      pattern: <regex_pattern>
      interval: <duration>
Note:

The cluster processor uses a parameter called reporting_frequency which is equivalent to interval in a regex processor.

See the example implementation in a simple keyword match processor.

only_report_nonzeros

The only_report_nonzeros parameter configures whether only non-zero stats should be reported. Excluding stats with a zero value changes the metric averages. The default value is false for processors. It is specified as a boolean value with true/on/yes or false/off/no.

processors:
  regexes:
    - name: <processor_name>
      pattern: <regex_pattern>
      only_report_nonzeros: <boolean_value>

See the example implementation in a simple keyword match processor.

retention

The retention parameter specifies how far back to look when the regex processor generates anomaly scores. A short retention period will be more sensitive to spikes in metric values. The default for a regex processor is 3 hours. It is specified as a Golang duration.

processors:
  regexes:
    - name: <processor_name>
      pattern: <regex_pattern>
      retention: <duration>

See the example implementation in a dimension numeric capture processor.

skip_empty_intervals

The skip_empty_intervals parameter configures the regex processor to skip the intervals that have no events so the overall average or standard deviation is not affected. The default is true. It is specified as a boolean value with true/on/yes or false/off/no.

processors:
  regexes:
    - name: <processor_name>
      pattern: <regex_pattern>
      skip_empty_intervals: <boolean_value>

See the example implementation in a dimension numeric capture processor.

trigger_thresholds

The trigger_thresholds parameter is a dictionary type that can specify certain child parameters with specific combinations of thresholds. When a threshold is reached a trigger destination (specified in the corresponding workflow) is notified.

processors:
  <processor type>:
    - name: <processor_name>
      pattern: <regex_pattern> 
      trigger_thresholds:
        <trigger_threshold_parameter>: <integer>

The following thresholds can be configured for regex processors:

anomaly_probability_percentage

The anomaly_probability_percentage parameter sets the threshold for a trigger based on the Edge Delta agent’s confidence that an event is an anomaly. The range is 0-100 where 100 is the highest confidence that an event is an anomaly. There is no default value. It is configured as an integer. See the example implementation in a dimension numeric capture processor.


upper_limit_per_interval

The upper_limit_per_interval parameter sets the maximum number of events within the reporting interval. A higher occurrence would trigger a notification for too many events. It is configured as an integer. See the example implementation in a simple keyword match processor.


lower_limit_per_interval

The lower_limit_per_interval parameter sets the minimum number of events within the reporting interval. A lower occurrence would trigger a notification for not enough events. It is configured as an integer. See the example implementation in a dimension counter processor.


consecutive

The consecutive parameter sets the minimum number of times a threshold must be triggered before an alert is issued. It requires another trigger_threshold parameter to be set for the processor. The default is zero. It is configured as an integer. See the example implementation in a simple keyword match processor.
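The interaction between consecutive and another threshold such as upper_limit_per_interval can be sketched in Python. This is an illustrative sketch of the described behavior, not the agent's implementation:

```python
# Hypothetical sketch of the consecutive trigger: alert only after the
# upper limit is exceeded in N intervals in a row; a below-limit interval
# resets the streak.
def should_alert(interval_counts, upper_limit, consecutive):
    streak = 0
    for count in interval_counts:
        streak = streak + 1 if count > upper_limit else 0
        if streak >= consecutive:
            return True
    return False

print(should_alert([300, 310, 320, 330, 340], 250, 5))  # True
print(should_alert([300, 310, 100, 330, 340], 250, 5))  # False
```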

value_adjustment_rules

The value_adjustment_rules parameter defines a mathematical operation to perform on a value detected by the regex. The value can be multiplied or divided. The parameter contains three nested elements: the capture group name from the regex pattern, an operator, and an operand. The value name is a string, the operator is a string of either / or * wrapped in quotes to escape the divide symbol, and the operand is an integer or floating point number. The capture group must be a numerical value and the operand cannot be zero. This parameter is useful for metric conversions.

processors:
  regexes:
    - name: <processor_name>  
      pattern: <regex_pattern> 
      value_adjustment_rules:
        <value_name>:
          operator: "</|*>"
          operand: <number> 

See the example implementation for a numeric capture processor.
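The adjustment logic can be sketched in Python. This is an illustrative sketch of the rule structure described above, not the agent's implementation:

```python
# Hypothetical sketch of value_adjustment_rules: apply the configured
# operator and operand to a captured numeric value before reporting it.
def adjust(value, rules, capture_name):
    rule = rules.get(capture_name)
    if rule is None:
        return value
    if rule["operand"] == 0:
        raise ValueError("operand cannot be zero")
    if rule["operator"] == "/":
        return value / rule["operand"]
    if rule["operator"] == "*":
        return value * rule["operand"]
    raise ValueError("operator must be '/' or '*'")

rules = {"responsesize": {"operator": "/", "operand": 1000.0}}
print(adjust(2048, rules, "responsesize"))  # 2.048
```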


Parameter Compatibility

Some parameters are not available depending on the type of regex pattern or metrics being returned by a processor.

Parameter                  | Simple Keyword | Dimension Counter | Numeric Capture | Dimension Numeric Capture
name                       | Required       | Required          | Required        | Required
pattern                    | Required       | Required          | Required        | Required
anomaly_confidence_period  | Optional       | Optional          | Optional        | Optional
anomaly_tolerance          | Optional       | Optional          | Optional        | Optional
description                | Optional       | Optional          | Optional        | Optional
dimensions                 | Optional       | Optional          | Not Applicable  | Optional
dimensions_as_attributes   | Optional       | Optional          | Not Applicable  | Optional
dimensions_groups          | Optional       | Not Applicable    | Not Applicable  | Optional
enabled_stats              | Optional       | Optional          | Optional        | Optional
filters                    | Optional       | Optional          | Optional        | Optional
interval                   | Optional       | Optional          | Optional        | Optional
only_report_nonzeros       | Optional       | Optional          | Optional        | Optional
retention                  | Optional       | Optional          | Optional        | Optional
skip_empty_intervals       | Optional       | Optional          | Optional        | Optional
trigger_thresholds         | Optional       | Optional          | Optional        | Optional
value_adjustment_rules     | Not Applicable | Not Applicable    | Optional        | Not Applicable
