Using Secrets for Credentials
Overview
Edge Delta’s secrets management feature allows you to securely store sensitive credentials (such as API keys, passwords, and access tokens) at the pipeline level. Secrets are encrypted at rest and can be referenced in both source and destination node configurations using a special syntax.
Key Features:
- Encrypted Storage: Secrets are encrypted and stored with an edk1_ prefix, and are never exposed in plaintext via API responses or agent logs
- Per-Pipeline Scope: Secrets are configured individually for each pipeline
- Reference Syntax: Use '{{ SECRET secret-name }}' to reference secrets in configuration fields (a brief example follows this list)
- Runtime Resolution: Secret values are resolved at runtime and passed directly to source and destination integrations
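For instance, a secret named datadog-api-key can be referenced from a Datadog destination's api_key field (the full node configuration appears in the examples later in this guide):
api_key: '{{ SECRET datadog-api-key }}'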
Use Cases
Secrets are ideal for storing:
- Cloud Provider Credentials: AWS Access Keys, GCP service account keys, Azure credentials
- API Tokens: Datadog API keys, Splunk HEC tokens, authentication tokens
- Database Passwords: PostgreSQL passwords, MongoDB credentials
- TLS Certificates: Client certificates, private keys
- HTTP Authentication: Basic auth credentials, bearer tokens
Configure Secrets in the GUI
Step 1: Navigate to Pipeline Settings
- Log in to the Edge Delta web application
- Navigate to Pipelines in the left sidebar
- Select the pipeline where you want to configure secrets
- Click the Settings tab
- Open the Secrets section
Step 2: Create a Secret
- Click Add Secret
- Provide the following information:
  - Secret Name: A unique identifier for the secret (e.g., aws-access-key, datadog-api-token)
  - Value: The sensitive credential value, entered in plaintext (it is encrypted upon saving)
  - Description: A description that helps identify the secret's purpose
- Click Save
The secret value is immediately encrypted and stored with an edk1_ prefix. The original plaintext value is never retrievable after creation.
Step 3: Reference the Secret in Configuration
Once created, you can reference the secret in any source or destination node field that supports secrets using the syntax:
'{{ SECRET secret-name }}'
Important: The secret reference must be enclosed in single quotes.
How Secrets Appear in Pipeline YAML
When you view or export your pipeline configuration as YAML, secrets are represented in two places:
1. Secret References in Nodes
In source or destination nodes, secret references appear as template strings:
- name: s3_output
type: s3_output
region: us-west-2
bucket: my-bucket
aws_key_id: '{{ SECRET aws-access-key }}'
aws_sec_key: '{{ SECRET aws-secret-key }}'
The agent resolves these references at runtime to the actual secret values.
2. Encrypted Secret Values
At the bottom of the YAML file, there is a secrets section containing the encrypted values:
secrets:
aws-access-key:
value: edk1_co6MwY568hTS1o+Exs2A1taIzsD38dmH0hTJ0MCHnd6P9pPUd3GUg4jV04zSnMzBes6sn5c+0Z6uleDthe2806czc3D6cjrT9s2KeJTQnMqXes6IihmXht6906aTwNbK8dmHyMrBwI568hHAwhTQy4GHbP==
description: AWS Access Key ID for S3 destination
aws-secret-key:
value: edk1_co6MwY568hTS1o+WwM2XwNGIzsD38dmH0hTJ0MCHnd6bfeHb1kT1dor6cjcWiN681kT4xjfVh3bWljnsiP2PckPGiN60ypz4ck6hwNyR0tCU+urcwogTik2hfZ6Ml3kD/M+hyhaXcOns9ugwhkSKeiDs/6GQiMnQ1hT7mI5J8h+Kw3CHnd6BwMPE0MnR8i+=
description: AWS Secret Access Key for S3 destination
Key points about the secrets section:
- The value field contains the encrypted secret, prefixed with edk1_ (indicating the encryption version)
- These encrypted values are safe to store in version control or share
- The agent decrypts these values at runtime when resolving {{ SECRET secret-name }} references
- You cannot derive the original plaintext secret from these encrypted values
- When you update a secret in the GUI, only this encrypted value changes
Note: The secrets section is automatically managed by the Edge Delta platform. You should not manually edit these encrypted values.
Examples
Example 1: AWS S3 Destination with Secret Credentials
Secrets Configuration:
- Secret Name: aws-access-key
- Secret Name: aws-secret-key
S3 Destination Node:
- name: s3_destination
type: s3_output
region: us-west-2
bucket: my-logs-bucket
aws_key_id: '{{ SECRET aws-access-key }}'
aws_sec_key: '{{ SECRET aws-secret-key }}'
Example 2: HTTP Destination with Bearer Token
Secrets Configuration:
- Secret Name: api-bearer-token
HTTP Destination Node:
- name: http_destination
type: http_output
endpoint: https://api.example.com/logs
headers:
Authorization: 'Bearer {{ SECRET api-bearer-token }}'
Example 3: Datadog with API Key
Secrets Configuration:
- Secret Name: datadog-api-key
Datadog Destination Node:
- name: datadog_destination
type: datadog_output
api_key: '{{ SECRET datadog-api-key }}'
site: datadoghq.com
Example 4: Kafka Source with SASL Password
Secrets Configuration:
- Secret Name: kafka-sasl-password
Kafka Input Node:
- name: kafka_source
type: kafka_input
brokers:
- kafka.example.com:9092
topics:
- logs-topic
sasl:
enabled: true
mechanism: PLAIN
username: kafka-user
password: '{{ SECRET kafka-sasl-password }}'
Security Considerations
Encryption and Storage
- Secrets are encrypted at rest using AES-256 encryption
- Encrypted values are prefixed with edk1_ to indicate the encryption version
- Secrets are stored per-pipeline and are not accessible across pipelines
Access Control
- Only users with pipeline edit permissions can view or manage secrets
- Secret values cannot be retrieved after creation (only updated or deleted)
- API responses return encrypted values with the edk1_ prefix, never plaintext
Runtime Behavior
- Secret values are resolved at runtime when the agent loads the configuration
- Resolved values are passed directly to source and destination integrations
- No plaintext secret values appear in agent logs (even at DEBUG level)
Best Practices
- Use Secrets for All Sensitive Data: Never hardcode credentials directly in configuration
- Descriptive Naming: Use clear, descriptive names for secrets (e.g., prod-aws-access-key, kafka-prod-password); see the sketch after this list
- Regular Rotation: Update secret values regularly, following your security policies
- Minimal Scope: Create separate secrets for different integrations rather than reusing the same credentials
- Audit Trail: Monitor pipeline configuration changes to track secret updates
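As an illustration of descriptive naming and minimal scope, the exported secrets section of a pipeline configured this way (the section itself is generated by the platform, as noted above) might look like the following. The secret names are hypothetical and the encrypted values are placeholders:
secrets:
  prod-aws-access-key:
    value: edk1_<encrypted-value>
    description: AWS Access Key ID used only by the production S3 destination
  kafka-prod-password:
    value: edk1_<encrypted-value>
    description: SASL password used only by the production Kafka source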
Updating Secrets
To update a secret value:
- Navigate to the pipeline’s Secrets section
- Select the secret you want to update
- Enter the new secret value
- Click Save or Update
The agent will automatically receive the updated configuration within 1-2 minutes (for coordinator mode deployments) and begin using the new credential value.
Deleting Secrets
To delete a secret:
- Navigate to the pipeline’s Secrets section
- Select the secret you want to delete
- Click Delete or the delete icon
- Confirm the deletion
Warning: Deleting a secret that is referenced in a source or destination node will cause authentication failures. Ensure you update or remove the secret references before deleting.
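For example, if an S3 destination still references an aws-access-key secret that you plan to delete, update the node first. A minimal sketch, assuming you are switching to replacement secrets named new-aws-access-key and new-aws-secret-key (hypothetical names):
- name: s3_destination
  type: s3_output
  region: us-west-2
  bucket: my-logs-bucket
  aws_key_id: '{{ SECRET new-aws-access-key }}'   # previously '{{ SECRET aws-access-key }}'
  aws_sec_key: '{{ SECRET new-aws-secret-key }}'  # previously '{{ SECRET aws-secret-key }}'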
Troubleshooting
Secret Reference Not Resolving
Symptom: Source or destination shows authentication errors or the literal string '{{ SECRET secret-name }}' appears in logs.
Resolution:
- Verify the secret name matches exactly (case-sensitive)
- Ensure the secret reference is enclosed in single quotes: '{{ SECRET secret-name }}' (see the example after this list)
- Check that the secret exists in the correct pipeline
- Verify the agent has received the updated configuration (check coordinator logs or agent reload)
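A common cause is a missing pair of single quotes. Without quotes, YAML parses the braces as flow-mapping syntax rather than a plain string, so the field never contains the reference text the agent expects. Using the Datadog api_key field as an illustration:
# Incorrect: the braces are parsed as YAML flow mappings, not a string
api_key: {{ SECRET datadog-api-key }}

# Correct: the entire reference is enclosed in single quotes
api_key: '{{ SECRET datadog-api-key }}'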
Authentication Failures After Secret Update
Symptom: Source or destination authentication fails after updating a secret value.
Resolution:
- Verify the new secret value is correct
- Check the integration’s authentication requirements (format, encoding, etc.)
- Review agent logs for specific error messages from the source or destination
- Ensure the agent has reloaded the configuration (may take 1-2 minutes in coordinator mode)
Secret Not Available in Dropdown
Symptom: Secret doesn’t appear in the GUI dropdown when configuring a source or destination field.
Resolution:
- Ensure the secret was created and saved successfully
- Refresh the browser page
- Verify you’re viewing the correct pipeline
- Check that you have sufficient permissions to view secrets
Alternative Authentication Methods
While secrets are the recommended approach for managing credentials in Edge Delta, some sources and destinations support alternative authentication methods:
IAM Roles (AWS)
For AWS integrations running in AWS environments (EKS, EC2), you can use IAM Roles for Service Accounts (IRSA) or instance profiles instead of static credentials.
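For instance, when the agent runs on EKS with IRSA or on EC2 with an instance profile, you could omit the static key fields from the S3 destination entirely. This is a sketch based on the s3_output fields shown earlier, and it assumes the node falls back to the credentials supplied by the environment when no keys are configured:
- name: s3_destination
  type: s3_output
  region: us-west-2
  bucket: my-logs-bucket
  # aws_key_id and aws_sec_key omitted; credentials are expected to come from
  # the IAM role attached to the service account or instance (assumption)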
Environment Variables
Some integrations can authenticate using environment variables set in the agent’s runtime environment.
Credential Files
Certain integrations support loading credentials from mounted files (e.g., AWS credentials file, GCP service account JSON).
Refer to the specific source or destination node documentation for supported authentication methods.