Cisco SFTD Pack
Edge Delta Pipeline Pack for Cisco Secure Firewall Threat Defense (SFTD)
Overview
This pipeline processes Cisco SFTD logs, extracting and transforming information for better analysis and monitoring. It involves parsing the logs, filtering them based on certain criteria, and enhancing them with additional information.
Pack Description
1. Data Ingestion
The data flow starts with the Input node, which serves as the entry point for logs into this pack.
2. Multi-Processor Node
Logs are processed by the Multi Processor node, a Multiprocessor node that executes a sequence of operations to transform the log data:
2.1 Parse FTD Code
The first operation utilizes a Parse Regex Processor to extract patterns from logs to retrieve the ftd_code.
- type: ottl_transform
  metadata: '{"id":"QrJFHBqwPgjSQKgcYqkpu","type":"parse-regex","name":"Parse FTD code"}'
  data_types:
    - log
  statements: |-
    merge_maps(attributes, ExtractPatterns(body, "%FTD-\S*-(?P<ftd_code>\d+):"), "upsert") where IsMap(attributes)
    set(attributes, ExtractPatterns(body, "%FTD-\S*-(?P<ftd_code>\d+):")) where not IsMap(attributes)
The first statement utilizes a regex pattern in the ExtractPatterns function to search within the body of the log message for a specific pattern that matches %FTD-\S*-(?P<ftd_code>\d+):. This pattern is designed to detect strings starting with %FTD-, followed by any non-space characters (\S*), then a dash, and finally capturing one or more digits as ftd_code using a named capture group. The result of this extraction is a map that captures these patterns. Subsequently, the IsMap(attributes) function checks whether attributes is already a map (a dictionary-like structure). If true, the merge_maps function is called, which integrates this newly extracted map into the existing attributes map with an “upsert” operation. This means that if a key already exists in attributes, its value is updated; otherwise, the new key-value pair is inserted. This ensures that the new log data is seamlessly merged with the existing structure.
The second statement again uses ExtractPatterns with the same regex to extract the ftd_code from the body of the log messages. However, it uses the set function to replace the contents of attributes with this newly extracted map of attributes when attributes is not already a map structure, as determined by not IsMap(attributes). This means that if attributes is initially not a dictionary-like structure, it will become one, populated solely with the ftd_code and any other extracted values from the pattern. The strategy here ensures that non-map attributes receive a structured update, transforming them into a map based on new extraction results.
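The merge_maps/set pair above can be approximated in Python, with re.search and a named capture group standing in for OTTL's ExtractPatterns (a sketch, not the OTTL runtime):

```python
import re

# Regex equivalent of the OTTL pattern: capture the numeric message ID
# that follows the %FTD-<severity>- prefix.
FTD_CODE_RE = re.compile(r"%FTD-\S*-(?P<ftd_code>\d+):")

def parse_ftd_code(body: str, attributes):
    """Mirror the merge_maps / set pair of OTTL statements."""
    match = FTD_CODE_RE.search(body)
    extracted = match.groupdict() if match else {}
    if isinstance(attributes, dict):
        # merge_maps(..., "upsert"): update existing keys, insert new ones
        attributes.update(extracted)
        return attributes
    # set(attributes, ...): replace a non-map value with the extracted map
    return extracted

print(parse_ftd_code("%FTD-3-313008: Denied IPv6-ICMP type=134", {}))
```

Against the first sample line at the bottom of this page, this yields a map containing ftd_code "313008".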
2.2 Filter No FTD Code
The next step involves a Filter Processor to eliminate any logs that don’t contain an FTD code.
- type: ottl_filter
  metadata: '{"id":"H5deS0wvGroNU1ZYcpdvQ","type":"filter","name":"Filter no FTD code"}'
  condition: attributes["ftd_code"] == nil
  data_types:
    - log
  filter_mode: exclude
2.3 Lookup FTD Codes to Drop
This step uses a Lookup Processor to check a local file for FTD codes that should be excluded; it is currently disabled.
- type: lookup
  metadata: '{"id":"uA9Mfy_Z0abLxI8Syp01Y","type":"lookup","name":"Lookup FTD codes to drop"}'
  disabled: true
  data_types:
    - log
  location_path: ed://ftd_drops.csv
  key_fields:
    - event_field: attributes["ftd_code"]
      lookup_field: ftd_code
  out_fields:
    - event_field: attributes["ftd_drop"]
      lookup_field: ftd_code
The processor is currently disabled, as noted by the disabled: true flag, meaning it won’t actively participate in the data processing until enabled. It operates on data classified as type log, indicating its role in transforming log entries.
Central to this processor’s functionality is the location_path, which specifies the path to the lookup table stored under ed://ftd_drops.csv. This file serves as a reference to identify logs that should potentially be excluded based on their ftd_code. The key_fields parameter binds the event_field (in this case, attributes["ftd_code"]) to the lookup_field within the CSV, which is also labeled ftd_code. This binding allows the lookup processor to identify matched entries between the logs and the table. Once a match is found, out_fields defines how the data is enriched. Here, the processor maps the ftd_code from the CSV to attributes["ftd_drop"] in the log, which would be used by subsequent processes to determine if a log should be dropped. Effectively, this configuration sets up a system for reducing noise by potentially filtering out logs with certain FTD codes, thus focusing on the more relevant data entries for analysis.
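The key_fields/out_fields mechanics can be sketched in Python. The CSV contents below are hypothetical stand-ins for ftd_drops.csv (the real file lists whatever codes you choose to drop):

```python
import csv, io

# Hypothetical contents of ftd_drops.csv; only the ftd_code column matters here.
FTD_DROPS_CSV = "ftd_code\n106023\n302013\n"

# Build the lookup table keyed on the lookup_field (ftd_code).
drops = {row["ftd_code"] for row in csv.DictReader(io.StringIO(FTD_DROPS_CSV))}

def apply_drop_lookup(attributes: dict) -> dict:
    # key_fields: match attributes["ftd_code"] against the table's ftd_code column.
    code = attributes.get("ftd_code")
    if code in drops:
        # out_fields: copy the matched ftd_code into attributes["ftd_drop"].
        attributes["ftd_drop"] = code
    return attributes

print(apply_drop_lookup({"ftd_code": "302013"}))  # ftd_drop is set
print(apply_drop_lookup({"ftd_code": "313008"}))  # no match, attributes unchanged
```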
2.4 Drop by FTD Code
Another Filter Processor, also disabled by default, excludes logs whose ftd_drop attribute is set.
- type: ottl_filter
  metadata: '{"id":"ZYKQdpGBRj2C1ZtqbJDx3","type":"filter","name":"Drop by FTD code"}'
  condition: attributes["ftd_drop"] != nil
  disabled: true
  data_types:
    - log
  filter_mode: exclude
2.5 Add Vendor Product
An Add Field Processor enriches logs with additional vendor and product details.
- type: ottl_transform
  metadata: '{"id":"XUf2YmjtV-ZnYjP6wCrWG","type":"add-field","name":"Add vendor product"}'
  data_types:
    - log
  statements: |-
    set(attributes["product"], "FTD")
    set(attributes["vendor"], "Cisco")
    set(attributes["vendor_product"], "Cisco FTD")
2.6 Extract and Parse Timestamp
This step uses a Parse Grok Processor to extract timestamps, followed by a Parse Timestamp Processor to standardize their format.
- type: ottl_transform
  metadata: '{"id":"DLIOorvzHDC47hQaatMjd","type":"parse-grok","name":"Extract Timestamp"}'
  data_types:
    - log
  statements: |-
    merge_maps(attributes, ExtractGrokPatterns(body, "(?<timestamp>%{MONTH} %{MONTHDAY} %{YEAR} %{TIME})"), "upsert") where IsMap(attributes)
    set(attributes, ExtractGrokPatterns(body, "(?<timestamp>%{MONTH} %{MONTHDAY} %{YEAR} %{TIME})")) where not IsMap(attributes)
In the first OTTL statement, the configuration leverages the ExtractGrokPatterns function to parse the body of the log message. This pattern is designed to identify and capture timestamp information from the log message, such as month, day, year, and time, which are then grouped under the label timestamp. The merge_maps ensures that the parsed timestamp is merged into the existing attributes map, with the “upsert” directive updating the key if it exists or inserting a new one if it does not. This operation is conditional on attributes being a map, allowing seamless integration of the timestamp into the existing attribute structure.
In contrast, the second statement is applied when attributes is not already a map. This statement sets attributes directly to the outcome of ExtractGrokPatterns, utilizing the same Grok pattern to capture timestamp details. The captured timestamp effectively replaces the existing attributes data structure with a new map populated with the extracted timestamp, ensuring logs without an existing map structure are standardized by the same parsing logic.
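The Grok pattern can be approximated with a plain regex in Python. This is a simplified sketch: the real %{MONTH}, %{TIME}, etc. sub-patterns also emit their own capture groups, which is why step 2.10 later deletes keys such as MONTH, TIME, and YEAR:

```python
import re

# Rough equivalent of (?<timestamp>%{MONTH} %{MONTHDAY} %{YEAR} %{TIME}).
TIMESTAMP_RE = re.compile(
    r"(?P<timestamp>"
    r"(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) "  # %{MONTH}
    r"\d{2} "              # %{MONTHDAY}
    r"\d{4} "              # %{YEAR}
    r"\d{2}:\d{2}:\d{2})"  # %{TIME}
)

body = "May 06 2025 13:32:24 SNL-FTD-VPN-A01 : %FTD-3-313008: Denied IPv6-ICMP"
print(TIMESTAMP_RE.search(body).group("timestamp"))  # May 06 2025 13:32:24
```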
- type: ottl_transform
  metadata: '{"id":"d5mOENwvune0dVDc6rAcp","type":"parse-timestamp","name":"Parse Timestamp"}'
  condition: attributes["timestamp"] != nil
  data_types:
    - log
  statements: set(timestamp, UnixMilli(Time(attributes["timestamp"], "Jan 02 2006 15:04:05")))
The OTTL statement is a transformation that runs once a timestamp has already been extracted and stored as attributes["timestamp"]. The statement converts the extracted timestamp from its textual format into a Unix time representation in milliseconds. The Time function interprets the string according to the reference layout "Jan 02 2006 15:04:05" to ensure accurate parsing of the date and time, while UnixMilli converts the result into Unix epoch milliseconds.
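The same conversion can be sketched in Python; the Go-style reference layout "Jan 02 2006 15:04:05" corresponds to strptime's "%b %d %Y %H:%M:%S". UTC is assumed here purely for illustration, since the sample log lines carry no time zone:

```python
from datetime import datetime, timezone

def to_unix_milli(ts: str) -> int:
    """Sketch of UnixMilli(Time(ts, "Jan 02 2006 15:04:05")), assuming UTC."""
    dt = datetime.strptime(ts, "%b %d %Y %H:%M:%S").replace(tzinfo=timezone.utc)
    return int(dt.timestamp() * 1000)

print(to_unix_milli("May 06 2025 13:32:24"))
```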
2.7 Lookup Regex by FTD Code
Another Lookup Processor determines the parsing patterns based on ftd_code.
- type: lookup
  metadata: '{"id":"HYLvyh1s01mV5s7MN3XnF","type":"lookup","name":"Lookup regex by FTD code"}'
  data_types:
    - log
  location_path: ed://ftd_parsing.csv
  key_fields:
    - event_field: attributes["ftd_code"]
      lookup_field: ftd_code
  out_fields:
    - event_field: attributes["regex"]
      lookup_field: regex
The key_fields parameter is crucial for this lookup processor, connecting the log’s attribute, attributes["ftd_code"], to the ftd_code field within the lookup table. This binding allows the lookup processor to search for matches between each log’s FTD code and those listed in the table. Once a match is identified, the processor uses the out_fields parameter to enrich the log data. Specifically, it binds the regex field from the CSV to attributes["regex"] within the log, effectively assigning the matched regex pattern to the log entry. This process enables subsequent stages of the data pipeline to apply tailored parsing or filtering operations based on the regex patterns provided.
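The lookup-then-apply flow can be sketched in Python. The table row below is a hypothetical stand-in for ftd_parsing.csv (the pack ships its own regex per FTD code), using the Land Attack sample line from the bottom of this page:

```python
import re

# Hypothetical row from ftd_parsing.csv: each FTD code maps to the regex
# used to parse that message type.
PARSING_TABLE = {
    "106017": r"Deny IP due to (?P<reason>.+?) from (?P<src_ip>\S+) to (?P<dest_ip>\S+)",
}

def lookup_and_parse(attributes: dict, body: str) -> dict:
    # out_fields: store the matched regex in attributes["regex"] ...
    regex = PARSING_TABLE.get(attributes.get("ftd_code"))
    if regex is not None:
        attributes["regex"] = regex
        # ... which the next step (2.8) applies to the body.
        m = re.search(regex, body)
        if m:
            attributes.update(m.groupdict())
    return attributes

attrs = lookup_and_parse(
    {"ftd_code": "106017"},
    "%FTD-2-106017: Deny IP due to Land Attack from 10.123.123.123 to 10.123.123.123",
)
print(attrs["reason"], attrs["src_ip"], attrs["dest_ip"])
```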
2.8 Parse by Regex and Body Parsing
This step uses a Parse Regex Processor to extract data via regex, followed by a Parse Key Value Processor for finer granularity.
- type: ottl_transform
  metadata: '{"id":"k3UvSYjQSRamFX2unSueU","type":"ottl_transform","name":"Parse by regex"}'
  data_types:
    - log
  statements: |-
    replace_pattern(body, ".*?%FTD-", "%FTD-")
    merge_maps(attributes, EDXExtractPatterns(body, attributes["regex"]), "upsert") where attributes["regex"] != nil
    set(attributes["43000x"], EDXExtractPatterns(body, "%FTD-.*43000[12345]: (?<body>.*)"))
The first statement is a transformation that modifies the body of the log message. It searches for any sequence of characters preceding the pattern %FTD-, denoted by .*?, and replaces it with %FTD-. The use of .*? acts as a non-greedy modifier, ensuring that the shortest matching sequence is selected. This replacement makes the log entry consistent, ensuring that any extraneous characters before the %FTD- pattern are removed, which can simplify subsequent parsing processes and ensure that the log adheres to a standard format, focusing specifically on the details immediately following %FTD-.
The second statement dynamically applies regex extraction from the log body using a regex pattern stored in attributes["regex"]. The EDXExtractPatterns function processes this regex to extract key-value pairs from the log body. These extracted pairs are then merged into the existing attributes, with the “upsert” operation updating any existing keys or inserting new ones as necessary. This conditional operation where attributes["regex"] != nil ensures the transformation is performed only when a valid regex pattern is available, allowing for flexible, pattern-driven data extraction that enriches log attributes with parsed data elements.
The third statement specifically targets log messages containing subcodes within the range 430001 to 430005. Using EDXExtractPatterns, this statement captures everything following these subcodes into a named group, body, and assigns this extracted value to attributes["43000x"]. By capturing and storing this part of the log, this transformation isolates critical pieces of information related to specific subcodes, allowing further downstream processing or analysis to occur on these targeted log components.
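The first and third statements can be sketched in Python. The 430003 sample line and its key:value payload below are illustrative, not taken from the pack:

```python
import re

def normalize_and_extract(body: str, attributes: dict):
    # replace_pattern(body, ".*?%FTD-", "%FTD-"): strip everything (syslog
    # timestamp, hostname, ...) before the first %FTD- marker.
    body = re.sub(r".*?%FTD-", "%FTD-", body, count=1)
    # Capture the remainder of 43000[1-5] messages into attributes["43000x"].
    m = re.search(r"%FTD-.*43000[12345]: (?P<body>.*)", body)
    if m:
        attributes["43000x"] = m.groupdict()
    return body, attributes

# Hypothetical 430003 message with a key: value, key: value payload.
body, attrs = normalize_and_extract(
    "May 06 2025 13:32:25 host : %FTD-6-430003: AccessControlRuleAction: Allow, SrcIP: 10.0.0.1",
    {},
)
print(body)
print(attrs["43000x"]["body"])
```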
- type: ottl_transform
  metadata: '{"id":"NekW9KrFLww7ZOOTy9wko","type":"parse-key-value","name":"Parse 43000x body"}'
  condition: attributes["43000x"]["body"] != nil
  data_types:
    - log
  statements: |-
    merge_maps(attributes, ParseKeyValue(attributes["43000x"]["body"], ":", ","), "upsert") where IsMap(attributes)
    set(attributes, ParseKeyValue(attributes["43000x"]["body"], ":", ",")) where not IsMap(attributes)
The first statement is executed when attributes is confirmed to be a map. Here, the ParseKeyValue function interprets the attributes["43000x"]["body"] string to extract key-value pairs using : as a delimiter between keys and values, and , as a delimiter between pairs. The function returns a map of extracted key-value pairs. The merge_maps function then integrates these extracted pairs into the existing attributes map. The “upsert” operation ensures that if a key from the extracted pairs already exists in attributes, its value gets updated; if not, the new key-value pair gets inserted.
The second statement takes place when attributes is not initially a map. The operation directly sets attributes to the map result of ParseKeyValue. This transformation replaces any previous data in attributes with the extracted key-value pairs from attributes["43000x"]["body"]. By ensuring that attributes becomes a structured map filled with parsed data, this statement standardizes logs that initially lacked a map-like attribute structure.
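A rough Python equivalent of ParseKeyValue with ":" as the key-value delimiter and "," as the pair delimiter (a sketch; the OTTL function also handles quoting and edge cases not shown here):

```python
def parse_key_value(text: str, kv_delim: str = ":", pair_delim: str = ",") -> dict:
    """Split on pair_delim, then on the first kv_delim in each pair,
    trimming surrounding whitespace."""
    result = {}
    for pair in text.split(pair_delim):
        if kv_delim in pair:
            key, value = pair.split(kv_delim, 1)
            result[key.strip()] = value.strip()
    return result

print(parse_key_value("AccessControlRuleAction: Allow, SrcIP: 10.0.0.1"))
```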
2.9 Rename Attribute Keys
A Custom Processor is used to rename specific attribute fields to standardize naming conventions.
- type: ottl_transform
  metadata: '{"id":"RvnYOpuPN_CFkHfy5ZC8j","type":"ottl_transform","name":"Rename attribute keys"}'
  data_types:
    - log
  statements: edx_map_keys(attributes, ["DstIP", "DstPort", "Protocol", "SrcIP", "SrcPort"], ["dest_ip", "dest_port", "protocol", "src_ip", "src_port"])
In this statement, edx_map_keys targets the attributes data structure, typically a map of key-value pairs being processed within a log entry. It systematically remaps the keys listed in the first array to the corresponding keys in the second array. Effectively, this means that any instances of “DstIP” in the attributes map will be renamed to “dest_ip”, “DstPort” to “dest_port”, and so forth.
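A sketch of what edx_map_keys does to the attributes map (assuming, as the statement implies, that keys outside the list are left untouched):

```python
def map_keys(attributes: dict, old_keys: list, new_keys: list) -> dict:
    """Rename each old key to its new name in place; other keys are untouched."""
    for old, new in zip(old_keys, new_keys):
        if old in attributes:
            attributes[new] = attributes.pop(old)
    return attributes

attrs = {"DstIP": "10.0.0.2", "DstPort": "443", "SrcIP": "10.0.0.1", "other": "x"}
print(map_keys(attrs,
               ["DstIP", "DstPort", "Protocol", "SrcIP", "SrcPort"],
               ["dest_ip", "dest_port", "protocol", "src_ip", "src_port"]))
```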
2.10 Delete Unnecessary Fields
A Delete Field Processor is employed to clean up attributes by removing unnecessary fields, focusing on pertinent data.
- type: ottl_transform
  metadata: '{"id":"rRJPuizMaHxefNBv8tGXt","type":"delete-field","name":"Delete Field"}'
  data_types:
    - log
  statements: |-
    delete_key(attributes, "43000x")
    delete_key(attributes, "HOUR")
    delete_key(attributes, "MINUTE")
    delete_key(attributes, "MONTH")
    delete_key(attributes, "MONTHDAY")
    delete_key(attributes, "TIME")
    delete_key(attributes, "SECOND")
    delete_key(attributes, "YEAR")
    delete_key(attributes, "timestamp")
    delete_key(attributes, "ftd_code")
    delete_key(attributes, "regex")
    delete_key(attributes, "ftd_drop")
3. Data Output
The processed logs exit the pack via the Output node, routing them to downstream systems for further analysis or storage.
Sample Input
May 06 2025 13:32:24 SNL-FTD-VPN-A01 : %FTD-3-313008: Denied IPv6-ICMP type=134, code=0 from fe80::1ff:fe23:4567:890a on interface ISP1
May 06 2025 13:32:23 localhost CiscoFTD[999]: %FTD-6-305011: Built dynamic TCP translation from inside:172.31.98.44/1772 to outside:100.66.98.44/8256
May 06 2025 13:32:22 SNL-FTD-VPN-A01 : %FTD-2-106017: Deny IP due to Land Attack from 10.123.123.123 to 10.123.123.123
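Putting the main steps together, here is a Python sketch of how the first sample line above flows through the pack. UTC is assumed for the timestamp, and timestamp_ms is an illustrative key name (the pack sets the event's own timestamp field rather than an attribute):

```python
import re
from datetime import datetime, timezone

LINE = ("May 06 2025 13:32:24 SNL-FTD-VPN-A01 : %FTD-3-313008: "
        "Denied IPv6-ICMP type=134, code=0 from fe80::1ff:fe23:4567:890a on interface ISP1")

attributes = {}
# Step 2.1: extract the FTD code.
m = re.search(r"%FTD-\S*-(?P<ftd_code>\d+):", LINE)
attributes.update(m.groupdict())
# Step 2.6: extract and parse the timestamp.
ts = re.match(r"\w{3} \d{2} \d{4} \d{2}:\d{2}:\d{2}", LINE).group(0)
dt = datetime.strptime(ts, "%b %d %Y %H:%M:%S").replace(tzinfo=timezone.utc)
attributes["timestamp_ms"] = int(dt.timestamp() * 1000)
# Step 2.5: vendor enrichment.
attributes.update({"product": "FTD", "vendor": "Cisco", "vendor_product": "Cisco FTD"})
print(attributes["ftd_code"], attributes["timestamp_ms"])
```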