Edge Delta Zenko CloudServer Output

Archive data in Zenko CloudServer.


Overview

This output type sends logs to a Zenko CloudServer endpoint.

Example

- name: my-zenko-cloudserver
  type: zenko
  endpoint: https://XXXXXXXXXX.sandbox.zenko.io
  bucket: ed-test-bucket-zenko
  access_key: my_access_key_123
  secret_key: my_secret_key_123

Parameters

name

Required

Enter a descriptive name for the output or integration.

For outputs, this name will be used to map this destination to a workflow.

name: my-zenko-cloudserver

integration_name

Optional

This parameter refers to the organization-level integration created in the Integrations page.

If you need to add multiple instances of the same integration to the config, you can give each instance a custom name via the name parameter. In that case, use the name to refer to the specific instance of the destination in workflows.

integration_name: orgs-zenkcs
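
As a sketch, a config that reuses the same organization-level integration for two destinations might look like the following. The instance names and buckets are illustrative, not values from this document:

- name: zenko-archive-a
  integration_name: orgs-zenkcs
  type: zenko
  bucket: ed-archive-bucket-a
- name: zenko-archive-b
  integration_name: orgs-zenkcs
  type: zenko
  bucket: ed-archive-bucket-b

Workflows would then reference zenko-archive-a or zenko-archive-b to select a specific destination.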

type

Required

Enter zenko.

type: zenko

endpoint

Required

Enter the Zenko endpoint.

endpoint: https://XXXXXXXXXX.sandbox.zenko.io

bucket

Required

Enter the Zenko bucket to which the archived logs should be sent.

bucket: ed-test-bucket-zenko

access_key

Required

Enter the access key that has permissions to upload files to the specified bucket.

access_key: my_access_key_123

secret_key

Required

Enter the secret key associated with the specified access key.

secret_key: my_secret_key_123

compression

Optional

Enter a compression type for archiving purposes.

You can enter gzip, zstd, snappy, or uncompressed.

compression: gzip

encoding

Optional

Enter an encoding type for archiving purposes.

You can enter json or parquet.

encoding: parquet

use_native_compression

Optional

Enter true or false to compress parquet-encoded data.

This option will not compress metadata.

This option can be useful with big data cloud applications, such as AWS Athena and Google BigQuery.

Note To use this parameter, you must set the encoding parameter to parquet.

use_native_compression: true
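
Putting the archiving parameters together, a destination that writes natively compressed parquet files (useful for query engines such as AWS Athena or Google BigQuery) might look like the following sketch; the name, endpoint, bucket, and credentials are the placeholder values used throughout this document:

- name: my-zenko-cloudserver
  type: zenko
  endpoint: https://XXXXXXXXXX.sandbox.zenko.io
  bucket: ed-test-bucket-zenko
  access_key: my_access_key_123
  secret_key: my_secret_key_123
  encoding: parquet
  compression: gzip
  use_native_compression: true

Note that use_native_compression only takes effect because encoding is set to parquet.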

buffer_ttl

Optional

Enter the length of time to retry failed streaming data.

After this length of time elapses, the failed streaming data will no longer be retried.

buffer_ttl: 2h

buffer_path

Optional

Enter a folder path to temporarily store failed streaming data.

The failed streaming data will be retried until the data reaches its destinations or until the Buffer TTL value is reached.

If you enter a path that does not exist, then the agent will create directories, as needed.

buffer_path: /var/log/edgedelta/pushbuffer/

buffer_max_bytesize

Optional

Enter the maximum size of failed streaming data that you want to retry.

If the failed streaming data is larger than this size, then it will not be retried.

buffer_max_bytesize: 100MB
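
The three buffer parameters are typically set together on a destination. As a sketch, using the sample values from the sections above, a destination with retry buffering configured might end with:

- name: my-zenko-cloudserver
  type: zenko
  endpoint: https://XXXXXXXXXX.sandbox.zenko.io
  bucket: ed-test-bucket-zenko
  access_key: my_access_key_123
  secret_key: my_secret_key_123
  buffer_ttl: 2h
  buffer_path: /var/log/edgedelta/pushbuffer/
  buffer_max_bytesize: 100MB

With this configuration, failed uploads are stored under buffer_path and retried until they succeed, until 2 hours pass, or until the buffered data exceeds 100MB.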