Edge Delta Minio Output

Archive data in Minio.

Overview

This output type sends logs to a Minio endpoint.

Example

- name: my-minio
  type: minio
  access_key: my_access_key_123
  secret_key: my_secret_key_123
  endpoint: play.minio.com:9000
  bucket: ed-test-bucket-minio
  disable_ssl: true
  s3_force_path_style: true
  encoding: parquet
  compression: zstd

Parameters

name

Required

Enter a descriptive name for the output or integration.

For outputs, this name will be used to map this destination to a workflow.

name: my-minio
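
For example, a workflow can then refer to this output by its name. The following is a minimal sketch, assuming a v0-style agent configuration; the workflow name and input label are placeholders:

workflows:
  minio-archive-workflow:
    input_labels:
      - system_logs
    destinations:
      - my-minio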

integration_name

Optional

This parameter refers to the organization-level integration created in the Integrations page.

If you need to add multiple instances of the same integration to the config, you can give each instance a custom name via the name parameter. In that case, use that name to refer to the specific instance of the destination in workflows.

integration_name: orgs-minio
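
For example, two instances of the same organization-level integration could be added as follows (a sketch; both name values are placeholders):

- name: orgs-minio-primary
  integration_name: orgs-minio
- name: orgs-minio-backup
  integration_name: orgs-minio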

type

Required

Enter minio.

type: minio

endpoint

Required

Enter the Minio endpoint.

endpoint: play.minio.com:9000

bucket

Required

Enter the Minio bucket to send the archived logs to.

bucket: ed-test-bucket-minio

access_key

Required

Enter the access key that has permissions to upload files to the specified bucket.

access_key: my_access_key_123

secret_key

Required

Enter the secret key associated with the specified access key.

secret_key: my_secret_key_123
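
If your agent version supports environment-variable templating in configuration values, the keys can be referenced instead of hard-coded; a sketch, assuming the Env helper is available and using placeholder variable names:

access_key: '{{ Env "ED_MINIO_ACCESS_KEY" }}'
secret_key: '{{ Env "ED_MINIO_SECRET_KEY" }}'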

compression

Optional

Enter a compression type for archiving purposes.

You can enter gzip, zstd, snappy, or uncompressed.

compression: gzip

encoding

Optional

Enter an encoding type for archiving purposes.

You can enter json or parquet.

encoding: parquet 
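
The encoding type works together with the compression parameter; for example, the pairing below (taken from the example at the top of this page) produces zstd-compressed parquet output:

encoding: parquet
compression: zstd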

disable_ssl

Optional

You can disable the SSL requirement when logs are pushed to the Minio endpoint.

disable_ssl: true

s3_force_path_style

Optional

You can force the archive destination to use the {endpoint}/{bucket} format (path-style addressing) instead of the {bucket}.{endpoint} format (virtual-hosted-style addressing) when accessing buckets.

s3_force_path_style: true
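
As an illustration, with the sample endpoint and bucket above, the two addressing styles resolve to the following request targets:

{endpoint}/{bucket}:  play.minio.com:9000/ed-test-bucket-minio
{bucket}.{endpoint}:  ed-test-bucket-minio.play.minio.com:9000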

use_native_compression

Optional

Enter true or false to compress parquet-encoded data using parquet's native, internal compression instead of compressing the entire file.

Because compression is applied inside the parquet file, the file's metadata is not compressed.

This option can be useful with big data cloud applications, such as AWS Athena and Google BigQuery.

Note: To use this parameter, you must set the encoding parameter to parquet.

use_native_compression: true
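
Putting the related parameters together, a natively compressed parquet archive could be configured as follows (a sketch based on the example values above):

encoding: parquet
compression: zstd
use_native_compression: true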

buffer_ttl

Optional

Enter a length of time to retry failed streaming data.

After this length of time is reached, the failed streaming data will no longer be retried.

buffer_ttl: 2h

buffer_path

Optional

Enter a folder path to temporarily store failed streaming data.

The failed streaming data will be retried until the data reaches its destinations or until the Buffer TTL value is reached.

If you enter a path that does not exist, then the agent will create the directories as needed.

buffer_path: /var/log/edgedelta/pushbuffer

buffer_max_bytesize

Optional

Enter the maximum size of failed streaming data that you want to retry.

If the failed streaming data is larger than this size, then it will not be retried.

buffer_max_bytesize: 100MB
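
Taken together, the buffering parameters fit into a complete output definition like the following (a sketch that combines the example values from this page):

- name: my-minio
  type: minio
  access_key: my_access_key_123
  secret_key: my_secret_key_123
  endpoint: play.minio.com:9000
  bucket: ed-test-bucket-minio
  encoding: parquet
  compression: zstd
  buffer_ttl: 2h
  buffer_path: /var/log/edgedelta/pushbuffer
  buffer_max_bytesize: 100MB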