Edge Delta Agent Helm Values

Values for Installing Edge Delta using Helm.

The following Helm values can be customized:

Agent pullPolicy

Variable: agentProps.image.pullPolicy

Description: The agentProps.image.pullPolicy value defines the conditions under which the agent container image should be pulled from a registry. Values can be:

  • Always: The image will be pulled every time the pod starts. This ensures that you always use the latest version of the image even if it’s already present on the node.
  • IfNotPresent: The image will be pulled only if it is not already present on the node. This can reduce network bandwidth and speed up deployments for images that don’t change frequently.
  • Never: The image will never be pulled, and you rely on the image being pre-installed on the node.

The default value is IfNotPresent.

Example:

 --set agentProps.image.pullPolicy=Always

You can describe the pod to confirm the value was applied:

kubectl describe pod <pod-name> 

Annotations

Variable: annotations

Description: The annotations value enables you to add custom annotations to the pods or other Kubernetes objects created by the Helm chart. Annotations can provide metadata that can be used by various tools and processes within the Kubernetes ecosystem.

Example:

--set annotations.example\.com/annotation="my-value"

You can run this command to confirm the annotations:

kubectl describe pods -l app.kubernetes.io/name=edgedelta -n edgedelta
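The same annotation can also be defined in a values file, where the dot in the key needs no escaping (example.com/annotation is a placeholder key mirroring the --set example above):

```yaml
annotations:
  example.com/annotation: "my-value"
```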

API Key

Variable: apiKey or secretApiKey

Description: The apiKey value is a plaintext key used to access the Pipeline configuration in Edge Delta, while secretApiKey alters the Kubernetes Secret name and key. To provide a Pipeline ID to the Fleet, use either apiKey or a Kubernetes Secret, but not both. By default, ed-api-key is used for both the Secret’s name and its key.

Note: Passing the key in plain text using apiKey is not recommended for production due to security concerns. See an example of using a secrets management tool.

Example: This command creates a Kubernetes secret in the edgedelta namespace, with ed-api-key as the secret’s name and key, and 12345678987654321 as the secret’s value.

helm upgrade edgedelta edgedelta/edgedelta -i --version v1.17.0 --set secretApiKey.value=12345678987654321 -n edgedelta --create-namespace

You can run this command to retrieve the secret value:

kubectl get secret -n edgedelta ed-api-key -o jsonpath="{.data['ed-api-key']}" | base64 --decode
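Equivalently, the Secret name, key, and value can be set in a values file; the name and key shown here are the defaults stated above:

```yaml
secretApiKey:
  name: ed-api-key   # Secret object name (default)
  key: ed-api-key    # key within the Secret (default)
  value: "12345678987654321"
```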

Deployment - Autoscaling and Replicas

The Deployment - Autoscaling section encompasses various configurations for enabling and fine-tuning the Horizontal Pod Autoscaler (HPA) behavior in an Edge Delta deployment. Autoscaling allows your deployment to dynamically adjust the number of replica processor pods based on resource utilization metrics such as CPU and memory usage, or custom metrics. The provided example demonstrates a comprehensive setup that integrates several autoscaling parameters.

Note: Changing the deployment kind to Deployment is required for enabling Horizontal Pod Autoscaling (HPA). The default DaemonSet kind does not support dynamically scaling in and out based on resource utilization, because its primary goal is to maintain one pod per node rather than to react to fluctuating load.

Note: The Rollup and Compactor Agents have their own autoscaling parameters.

To install Edge Delta with deployment autoscaling:

helm upgrade edgedelta edgedelta/edgedelta -i --version v1.17.0 \
  --set secretApiKey.value=12345678987654321 \
  --set deployment.kind=Deployment \
  --set deployment.autoscaling.enabled=true \
  --set deployment.autoscaling.minReplicas=1 \
  --set deployment.autoscaling.maxReplicas=5 \
  --set deployment.autoscaling.targetForCPUUtilizationPercentage=80 \
  --set deployment.autoscaling.behavior.scaleDown.stabilizationWindowSeconds=300 \
  -n edgedelta --create-namespace

This command installs the Edge Delta agent with a Horizontal Pod Autoscaler (HPA) enabled, setting the minimum replica count to 1 and the maximum to 5. The HPA is configured to scale based on CPU utilization, targeting an 80% average usage. Additionally, it applies a 300-second stabilization window to prevent frequent scaling down due to temporary spikes.

To verify autoscaling:

kubectl get hpa -n edgedelta
kubectl describe hpa -n edgedelta
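The same flags can be expressed in a values file passed via -f values.yaml (the API key settings are omitted here for brevity):

```yaml
deployment:
  kind: Deployment            # HPA requires Deployment, not DaemonSet
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 5
    targetForCPUUtilizationPercentage: 80
    behavior:
      scaleDown:
        stabilizationWindowSeconds: 300
```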

Each part of the autoscaling value is discussed next:

Deployment - Autoscaling - Enabled

Variable: deployment.autoscaling.enabled

Description: Enables the creation of a Horizontal Pod Autoscaler (HPA) for processor agents within the Edge Delta deployment. When set to true, the HPA is configured to manage the scaling of pods based on resource utilization.

Example:

--set deployment.autoscaling.enabled=true

Deployment - Autoscaling - External

Variable: deployment.autoscaling.external

Description: Set to true if using an external autoscaler like KEDA (Kubernetes Event-driven Autoscaling). This setting allows integrating with external scaling mechanisms outside of the standard HPA.

Example:

--set deployment.autoscaling.external=false

Deployment - Autoscaling - Min Replicas

Variable: deployment.autoscaling.minReplicas

Description: Specifies the minimum number of replica pods for the deployment. This ensures that the deployment maintains at least this number of replicas at all times.

Example:

--set deployment.autoscaling.minReplicas=1

Deployment - Autoscaling - Max Replicas

Variable: deployment.autoscaling.maxReplicas

Description: Specifies the maximum number of replica pods for the deployment. The HPA will not scale the deployment above this number of replicas.

Example:

--set deployment.autoscaling.maxReplicas=5

Deployment - Autoscaling - Target CPU Utilization Percentage

Variable: deployment.autoscaling.targetForCPUUtilizationPercentage

Description: Defines the target average CPU utilization percentage for the HPA to maintain across the pods of the deployment. The HPA will scale up or down to meet this target when the CPU usage crosses the specified threshold.

Example:

--set deployment.autoscaling.targetForCPUUtilizationPercentage=80

Deployment - Autoscaling - Target Memory Utilization Percentage

Variable: deployment.autoscaling.targetForMemoryUtilizationPercentage

Description: Defines the target average memory utilization percentage for the HPA to maintain across the pods of the deployment. The HPA will scale up or down to meet this target when the memory usage crosses the specified threshold.

Example:

--set deployment.autoscaling.targetForMemoryUtilizationPercentage=80

Deployment - Autoscaling - Custom Metric

Variable: deployment.autoscaling.customMetric

Description: Allows the use of custom metrics for autoscaling targets. This section can be used to configure other metric types beyond CPU and memory for the HPA to evaluate.

Example:

--set deployment.autoscaling.customMetric={type: "Pods", pods: {metric: {name: "packets-per-second"}, target: {type: "AverageValue", averageValue: "1k"}}}
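Because inline maps in --set are easy for the shell to mangle, a values file is often clearer for this setting. A sketch mirroring the --set example above:

```yaml
deployment:
  autoscaling:
    customMetric:
      type: Pods
      pods:
        metric:
          name: packets-per-second   # example custom metric name
        target:
          type: AverageValue
          averageValue: "1k"
```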

Deployment - Autoscaling - Behavior - Scale Down Stabilization Window Seconds

Variable: deployment.autoscaling.behavior.scaleDown.stabilizationWindowSeconds

Description: Configures the stabilization window in seconds for scaling down the pods via the Horizontal Pod Autoscaler (HPA). During this window, the HPA delays scaling down actions to avoid unnecessary scale-downs due to temporary spikes in metrics.

Example:

--set deployment.autoscaling.behavior.scaleDown.stabilizationWindowSeconds=300

Deployment - Kind

Variable: deployment.kind

Description: The deployment.kind parameter defines how the Processing Agents within the Edge Delta fleet are managed. This parameter can be either DaemonSet for deploying on each node or Deployment for a scalable set of agent pods. The default is to deploy the processor agent as a DaemonSet. See Installing as a Deployment.

Example:

helm upgrade edgedelta edgedelta/edgedelta -i --version v1.17.0 \
  --set secretApiKey.value=12345678987654321 \
  --set deployment.kind=Deployment \
  -n edgedelta --create-namespace

To verify:

kubectl get deployment -n edgedelta
kubectl get daemonset -n edgedelta

Note: Separate configuration sections and Helm values are provided for Rollup and Compactor Agents.

Deployment - Replicas

Variable: deployment.replicas

Description: Specifies the number of pods for the Processor Agents when deployment.kind is set to Deployment. This setting is mutually exclusive with autoscaling, which means it will not apply if deployment.autoscaling.enabled is set to true. The deployment.replicas parameter is especially useful for scenarios where a fixed number of Processor Agents is required, avoiding the complexity of autoscaling. This can simplify resource allocation and predictability in environments where the load is consistent and predictable.

Example:

--set deployment.replicas=3

To install Edge Delta Processor Agents with a static number of 3 replicas:

helm upgrade edgedelta edgedelta/edgedelta -i --version v1.17.0 \
  --set secretApiKey.value=12345678987654321 \
  --set deployment.kind=Deployment \
  --set deployment.replicas=3 \
  --set deployment.autoscaling.enabled=false \
  -n edgedelta --create-namespace

To verify the number of replicas:

kubectl get deployment -n edgedelta
kubectl describe deployment edgedelta -n edgedelta

Deployment - Topology Spread Constraints

The topologySpreadConstraints parameter allows you to define how Edge Delta Processor Agent pods should be distributed across different topology domains (e.g., zones, regions) within a Kubernetes cluster. This configuration ensures better availability and resiliency of the deployed agents by spreading them across various failure domains. The provided example demonstrates a comprehensive setup using several topology spread constraint parameters.

Note: The Rollup and Compactor Agents have their own topology spread constraint parameters.

To install Edge Delta with topology spread constraints ensuring that Processor Agent pods are spread across zones:

helm upgrade edgedelta edgedelta/edgedelta -i --version v1.17.0 \
  --set secretApiKey.value=12345678987654321 \
  --set deployment.kind=Deployment \
  --set deployment.topologySpreadConstraints\[0\].maxSkew=1 \
  --set deployment.topologySpreadConstraints\[0\].topologyKey=topology.kubernetes.io/zone \
  --set deployment.topologySpreadConstraints\[0\].whenUnsatisfiable=ScheduleAnyway \
  --set deployment.topologySpreadConstraints\[0\].labelSelector.matchLabels.'edgedelta\/agent-type'=processor \
  -n edgedelta --create-namespace

Note: The square brackets ([0]) in the helm command can cause shell interpretation issues.

Escaping the brackets with backslashes (\) prevents the shell from interpreting them, ensuring the command is passed correctly to Helm. For example, --set deployment.topologySpreadConstraints[0].maxSkew=1 becomes --set deployment.topologySpreadConstraints\[0\].maxSkew=1.

Using a values file makes the Helm command cleaner and avoids issues with special characters interpretation by the shell. This approach is highly recommended for complex configurations and for maintaining clear and reusable deployment settings:

deployment:
  kind: Deployment
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          edgedelta/agent-type: processor

Either form configures the deployment to spread the pods across different zones as evenly as possible, with a maxSkew of 1.

To verify the topology spread constraints:

kubectl get deployment -n edgedelta
kubectl describe deployment edgedelta -n edgedelta

Each part of the topologySpreadConstraints value is discussed next:

Deployment - Topology Spread Constraints - Max Skew

Variable: deployment.topologySpreadConstraints[0].maxSkew

Description: This field specifies the maximum allowed difference in the number of pods across the specified topologies. It defines the degree of imbalance that is acceptable between nodes or other topology domains, based on the topologyKey provided. Setting maxSkew to a smaller number forces a more even distribution of pods.

Example:

--set deployment.topologySpreadConstraints\[0\].maxSkew=1

This sets the maxSkew to 1, aiming for a tight distribution of pods across zones.

Deployment - Topology Spread Constraints - Topology Key

Variable: deployment.topologySpreadConstraints[0].topologyKey

Description: This field identifies the key that the system evaluates when determining how to categorize nodes into topologies. Commonly used keys include topology.kubernetes.io/zone for spreading pods across physical zones or topology.kubernetes.io/region for larger geographical areas.

Example:

--set deployment.topologySpreadConstraints\[0\].topologyKey=topology.kubernetes.io/zone

This specifies that the nodes’ physical zones determine the domains over which the pods should be spread.

Deployment - Topology Spread Constraints - When Unsatisfiable

Variable: deployment.topologySpreadConstraints[0].whenUnsatisfiable

Description: This field determines what action to take if the pods cannot be spread as per the maxSkew definition. It manages the scheduling policy when it’s not possible to satisfy the skew criteria. Options include:

  • DoNotSchedule — prevents the scheduler from scheduling the pod if doing so would violate the topology’s spread constraint.
  • ScheduleAnyway — allows the scheduler to schedule the pods even if the spread constraints cannot be fully satisfied, thus, it prioritizes getting pods running over maintaining the spread constraint strictly.

Example:

--set deployment.topologySpreadConstraints\[0\].whenUnsatisfiable=ScheduleAnyway

This setting ensures that pods will still be scheduled even if the maxSkew cannot be exactly met, thus preventing pod scheduling failures due to strict spread constraints.

Deployment - Topology Spread Constraints - Label Selector Match Labels

Variable: deployment.topologySpreadConstraints[0].labelSelector.matchLabels

Description: This configuration selects a subset of pods based on their labels. It is a set of key-value pairs matched against the labels attached to objects such as pods, and it determines which pods are counted in the topology spread calculation. By configuring matchLabels, you can apply the topology spread rules selectively, affecting only the pods whose labels match the workload characteristics you define.

Example:

--set deployment.topologySpreadConstraints\[0\].labelSelector.matchLabels.'edgedelta\/agent-type'=processor

This applies the constraint only to the pods that have a label edgedelta/agent-type=processor, ensuring only processor type agents are spread across the nodes as per the defined criteria.

Rollup Agents Configuration

Enabling Rollup Agents

Variable: rollUpProps.enabled

Description: Enables the Rollup Agents within the Edge Delta deployment. Rollup agents are responsible for aggregating and rolling up telemetry data and are crucial for efficient data handling and analytics. Rollup agents are enabled by default.

Example:

--set rollUpProps.enabled=true

Rollup pullPolicy

Variable: rollUpProps.image.pullPolicy

Description: The rollUpProps.image.pullPolicy value defines the conditions under which the rollup container image should be pulled from a registry. Values can be:

  • Always: The image will be pulled every time the pod starts. This ensures that you always use the latest version of the image even if it’s already present on the node.
  • IfNotPresent: The image will be pulled only if it is not already present on the node. This can reduce network bandwidth and speed up deployments for images that don’t change frequently.
  • Never: The image will never be pulled, and you rely on the image being pre-installed on the node.

The default value is IfNotPresent.

Example:

 --set rollUpProps.image.pullPolicy=Always

You can describe the pod to confirm the value was applied:

kubectl describe pod <pod-name> 

Port Configuration

Variable: rollUpProps.port

Description: Specifies the port on which the Rollup Agents listen.

Example:

--set rollUpProps.port=9200

Replica Configuration

Variable: rollUpProps.replicas

Description: Specifies the number of Rollup Agents to deploy. This parameter is mutually exclusive with autoscaling.

Example:

--set rollUpProps.replicas=2

Autoscaling Enabled Configuration

Variable: rollUpProps.autoscaling.enabled

Description: Enables Horizontal Pod Autoscaling (HPA) for Rollup Agents, allowing the number of pods to adjust based on resource usage.

Example:

--set rollUpProps.autoscaling.enabled=false

Memory Limit Configuration

Variable: rollUpProps.goMemLimit

Description: Specifies the memory limit for the Rollup Agents.

Example:

--set rollUpProps.goMemLimit=900MiB

Resource Requests and Limits

Variable: rollUpProps.resources

Description: Specifies the resource requests and limits for the Rollup Agents.

Example:

--set rollUpProps.resources.limits.cpu=1000m \
--set rollUpProps.resources.limits.memory=1Gi \
--set rollUpProps.resources.requests.cpu=200m \
--set rollUpProps.resources.requests.memory=256Mi
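The Rollup settings from this section can be collected into a single values-file fragment, using the example values shown above:

```yaml
rollUpProps:
  enabled: true
  port: 9200
  replicas: 2          # mutually exclusive with autoscaling
  autoscaling:
    enabled: false
  goMemLimit: 900MiB
  resources:
    limits:
      cpu: 1000m
      memory: 1Gi
    requests:
      cpu: 200m
      memory: 256Mi
```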

Compactor Agents Configuration

Enabling Compactor Agents

Variable: compactorProps.enabled

Description: Enables the Compactor Agents within the Edge Delta deployment.

Example:

--set compactorProps.enabled=true

Compactor pullPolicy

Variable: compactorProps.image.pullPolicy

Description: The compactorProps.image.pullPolicy value defines the conditions under which the Compactor Agent container image should be pulled from a registry. Values can be:

  • Always: The image will be pulled every time the pod starts. This ensures that you always use the latest version of the image even if it’s already present on the node.
  • IfNotPresent: The image will be pulled only if it is not already present on the node. This can reduce network bandwidth and speed up deployments for images that don’t change frequently.
  • Never: The image will never be pulled, and you rely on the image being pre-installed on the node.

The default value is IfNotPresent.

Example:

 --set compactorProps.image.pullPolicy=Always

You can describe the pod to confirm the value was applied:

kubectl describe pod <pod-name> 

Port Configuration

Variable: compactorProps.port

Description: Specifies the port on which the Compactor Agents listen.

Example:

--set compactorProps.port=9199

Persistent Volume Configuration

Variable: compactorProps.usePVC

Description: Enabling compactorProps.usePVC (true) configures the Edge Delta Compactor Agent to use a Persistent Volume Claim (PVC) for persisting Compactor data before it is flushed downstream. This ensures that the compactor can reliably store data. If not set (default is false), the compactor will not use a PVC for data storage.

Example:

--set compactorProps.usePVC=true

To check for the existence of the Persistent Volume Claim in the namespace:

kubectl get pods -n edgedelta
kubectl describe pod <compactor-pod-name> -n edgedelta
kubectl get pvc -n edgedelta

Autoscaling Enabled Configuration

Variable: compactorProps.autoscaling.enabled

Description: Enables Horizontal Pod Autoscaling (HPA) for Compactor Agents.

Example:

--set compactorProps.autoscaling.enabled=false

Memory Limit Configuration

Variable: compactorProps.goMemLimit

Description: Specifies the memory limit for the Compactor Agents.

Example:

--set compactorProps.goMemLimit=""

Resource Requests and Limits

Variable: compactorProps.resources

Description: Specifies the resource requests and limits for the Compactor Agents.

Example:

--set compactorProps.resources.limits.cpu=2000m \
--set compactorProps.resources.limits.memory=2Gi \
--set compactorProps.resources.requests.cpu=200m \
--set compactorProps.resources.requests.memory=300Mi
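Likewise, the Compactor settings from this section can be combined into one values-file fragment, using the example values shown above:

```yaml
compactorProps:
  enabled: true
  port: 9199
  usePVC: true         # persist compactor data via a PVC before flushing downstream
  resources:
    limits:
      cpu: 2000m
      memory: 2Gi
    requests:
      cpu: 200m
      memory: 300Mi
```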

Docker Container Properties

Variable: dockerContainerProps

Description: This helm value configures the path to Docker container logs on a Kubernetes node. It is used by Edge Delta agents for self-discovery, enabling them to access and analyze Docker container logs.

Example:

--set dockerContainerProps.hostPath="/var/lib/docker/containers"

To confirm the existence of the specified path mount in the agent’s pod:

kubectl get pods -n edgedelta

Retrieve the name of one of the Edge Delta agent pods and substitute it into the following command:

kubectl describe pod <edgedelta-agent-pod-name> -n edgedelta

Look under the Mounts section for the specified path /var/lib/docker/containers.

Edge Delta Custom Tags

Variable: edCustomTags

Description: Custom tags are pipe (|) delimited key:value pairs that are attached to all outgoing data from Edge Delta agents to their configured destinations. These tags can, for example, provide valuable metadata about the data’s origin, such as the cluster name, cloud provider, and region.

Example:

--set edCustomTags="cluster:prod_us_west_2_cluster|provider:aws|region:us_west_2"

To confirm that custom tags have been applied, get the list of pods in the edgedelta namespace:

kubectl get pods -n edgedelta

Describe one of the Edge Delta pods to check for the custom tags:

kubectl describe pod <edge-delta-pod-name> -n edgedelta

Check the Environment section within the edgedelta-agent container:

ED_CUSTOM_TAGS: cluster:prod_us_west_2_cluster|provider:aws|region:us_west_2

Verify logs in Edge Delta: Check the logs in Edge Delta and confirm that they contain the following attributes:

{
  "cluster": "prod_us_west_2_cluster",
  "provider": "aws",
  "region": "us_west_2"
}
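You can sanity-check the pipe-delimited format locally before deploying; this sketch simply splits the example tag string into one key:value pair per line:

```shell
# Split the pipe-delimited custom tags into one key:value pair per line.
ED_CUSTOM_TAGS="cluster:prod_us_west_2_cluster|provider:aws|region:us_west_2"
echo "$ED_CUSTOM_TAGS" | tr '|' '\n'
```

Each resulting line should be a well-formed key:value pair; a malformed pair will not produce the expected attribute on outgoing data.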

Edge Delta Skip TLS Verify

Variable: edSkipTlsVerify

Description: Ignores SSL/TLS certificate errors when providing a client certificate and key directly. This can be useful in environments that use self-signed certificates or where certificate verification fails for other reasons.

Example:

--set edSkipTlsVerify=true

Edge Delta Suppression Mode

Variable: edSuppressionMode

Description: The edSuppressionMode Helm value configures version 2 (deprecated) Edge Delta agents to suppress new issue notifications if similar issues have already been reported by the same or other agents.

Edge Delta Tag Override

Variable: edTagOverride

Description: Specifies a fleet tag that is different from the one configured in the Web App. Use this option to deploy two fleets with the same Pipeline configuration. A best practice is to share Pipeline components using packs, rather than duplicating a pipeline across multiple fleets.

Example:

--set edTagOverride=<new name>

Edge Delta Workflow Prefixes

Variable: edWorkflowPrefixes

Description: A colon-separated list of workflow prefixes; all workflows whose names match a prefix are enabled in version 2 (deprecated) agents. When neither edWorkflows nor edWorkflowPrefixes is configured, all workflows are enabled.

Example: "billing:error"

Edge Delta Workflows

Variable: edWorkflows

Description: A colon-separated list of workflow names to enable in version 2 (deprecated) agents. When neither edWorkflows nor edWorkflowPrefixes is configured, all workflows are enabled.

Example: "billing-workflow:error-workflow"

HTTP Proxy

Variable: httpProxy

Description: The httpProxy Helm value specifies an HTTP proxy server that the Edge Delta agents use for routing outbound HTTP traffic. This setting is useful in environments where direct access to external endpoints is restricted and traffic must pass through an internal proxy for monitoring, security, or policy enforcement.

Example: In a production environment, you might have a corporate proxy server that controls and monitors outgoing HTTP requests. Configure httpProxy with the address of that proxy. The following example assumes the corporate proxy is hosted at http://corp-proxy.example.com:8080.

--set httpProxy="http://corp-proxy.example.com:8080"

HTTP Recorder Properties - Enabled

Variable: httpRecorderProps.enabled

Description: Enables httpRecorder, a frontend layer that can consume logs over both HTTP and TCP. It is deployed as a sidecar for each Fleet. It writes incoming logs to the filesystem (persisted via a PVC), and the Fleet reads them from there.

Example: httpRecorderProps.enabled=false (for disabled)

HTTP Recorder Properties - Image

Variable: httpRecorderProps.image

Description: Specify the httpRecorder image and version tag.

Example: httpRecorderProps.image="gcr.io/edgedelta/httprecorder:latest"

HTTP Recorder Properties - Ingress

Variable: httpRecorderProps.ingress

Description: Configures ingress for httpRecorder if the Kubernetes cluster already has nginx and cert-manager installed. Without ingress enabled, you can send logs directly within the cluster using http://ed-httprecorder-svc.{namespace}.svc.cluster.local:8080

Example: httpRecorderProps.ingress={class: nginx}

HTTP Recorder Properties - Port

Variable: httpRecorderProps.port

Description: Specify the httpRecorder port.

Example: httpRecorderProps.port=8080

HTTPS Proxy

Variable: httpsProxy

Description: Address to route the Fleet’s outbound traffic through an HTTPS internal proxy.

Example: "https://127.0.0.1:3128"

Image

Variable: image

Description: The Fleet’s Docker image. Optionally override the image tag, which defaults to the chart appVersion.

Example: image="edgedelta/agent:latest"

No Proxy

Variable: noProxy

Description: Disables the proxy for requests that hit a specific destination.

Example: "https://your-endpoint.com"
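The three proxy settings (httpProxy, httpsProxy, and noProxy) can be set together in a values file, shown here with the example values from their respective sections:

```yaml
httpProxy: "http://corp-proxy.example.com:8080"
httpsProxy: "https://127.0.0.1:3128"
noProxy: "https://your-endpoint.com"   # destinations that bypass the proxy
```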

Node Selector

Variable: nodeSelector

Description: This is a way to specify on which nodes a pod should be scheduled, based on labels on nodes. With nodeSelector: {}, no node selector is set, so the pod can be scheduled on any available node that matches other criteria.
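For example, to restrict agent pods to Linux nodes using the well-known kubernetes.io/os node label (adjust the label to match your cluster's node labels):

```yaml
nodeSelector:
  kubernetes.io/os: linux
```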

Persisting Cursor Properties - Container Mount Path

Variable: persistingCursorProps.containerMountPath

Description: The container mount path to keep the persisting cursor state.

Example: /var/lib/edgedelta

Persisting Cursor Properties - Enabled

Variable: persistingCursorProps.enabled

Description: Enables or disables the persistent cursor feature.

Example: persistingCursorProps.enabled=false

Persisting Cursor Properties - Host Mount Path

Variable: persistingCursorProps.hostMountPath

Description: The host mount path to keep the persisting cursor state.

Example: /var/lib/edgedelta

Priority Class Name

Variable: priorityClassName

Description: This value can specify the priority of the pods. Higher priority pods can potentially preempt lower priority pods in times of resource scarcity.
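For example, referencing a PriorityClass that already exists in the cluster (high-priority is a placeholder name you would create separately):

```yaml
priorityClassName: high-priority
```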

Profiler Port

Variable: profilerPort

Description: Specify the port to use if you install Edge Delta with a profiler to monitor CPU and memory statistics. Alternatively, you can use Prometheus with its dedicated endpoint.

Example: profilerPort=6060

Prom Port

Variable: promPort

Description: Specify the metrics endpoint port number that Prometheus can use to scrape metrics.

Example: promPort=8087

Resources - Limits CPU

Variable: resources.limits.cpu

Description: The maximum CPU usage limit for the fleet pods.

Example: resources.limits.cpu=1000m

Resources - Limits Memory

Variable: resources.limits.memory

Description: The maximum memory usage limit for the fleet pods.

Example: resources.limits.memory=2048Mi

Resources - Requests CPU

Variable: resources.requests.cpu

Description: The minimum requested CPU for the fleet pods.

Example: resources.requests.cpu=200m

Resources - Requests Memory

Variable: resources.requests.memory

Description: The minimum requested memory for the fleet pods.

Example: resources.requests.memory=256Mi

Secret API Key - Key

Variable: secretApiKey.key

Description: The key to use for the key/value pair stored in a Kubernetes Secret when secretApiKey.value is passed in.

Example: secretApiKey.key='ed-api-key, username, password'

Secret API Key - Name

Variable: secretApiKey.name

Description: The name to use for the Kubernetes secret object when the secretApiKey.value is passed in.

Example: secretApiKey.name='ed-api-key'

Secret API Key - Value

Variable: secretApiKey.value

Description: The value part of a key/value pair that is saved in a Kubernetes Secret. Passing in this parameter saves it in the Secret rather than the values file, using the name and key specified by secretApiKey.name and secretApiKey.key. Use either apiKey or secretApiKey.value, not both, to provide a Pipeline ID to the Fleet.

Example: secretApiKey.value='1a2b3c4d5e6f7g8h9i'

Service Monitor

Variable: serviceMonitor

Description: Enable service monitor to scrape Prometheus metrics from Fleets.

Store Port

Variable: storePort

Description: A port number to expose fleet metrics storage.

Example: storePort=6062

Tolerations

Variable: tolerations

Description: Tolerations allow a pod to be scheduled on nodes with matching taints, ensuring that pods land on appropriate nodes. The default chart setting, tolerations: {}, sets no tolerations, so the pods may not be schedulable on tainted nodes unless tolerations are defined. For example, if certain nodes are dedicated to specific purposes or have specific hardware, administrators may taint them, and only pods with matching tolerations will be scheduled there.
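For example, to let the agent pods run on nodes tainted with dedicated=logging:NoSchedule (a hypothetical taint), a values file could define:

```yaml
tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "logging"
    effect: "NoSchedule"
```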

Update Strategy

Variable: updateStrategy

Description: This dictates how updates to the application are rolled out. The updateStrategy.type: RollingUpdate strategy means that updates will roll out one pod at a time, rather than taking the entire application down and updating all at once. This provides high availability during updates. Specifically, updateStrategy.rollingUpdate.maxUnavailable: 1 means that during the update, at most one pod can be unavailable.
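The strategy described above corresponds to this values-file fragment:

```yaml
updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1   # at most one pod down at a time during updates
```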

Volume Mounts

Variable: volumeMounts

Description: Specify where to mount the volumes listed in the volumes parameter into containers.

Example:

.spec.containers[*].volumeMounts:
  - mountPath: /cache
    name: cache-volume

Volumes

Variable: volumes

Description: Specify the volumes to make available to a pod. This includes the volume type such as ConfigMap or emptyDir.

Example:

.spec.volumes:
  - name: cache-volume
    emptyDir:
      sizeLimit: 500Mi
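In a values file, the two parameters pair up by the volume name (using the cache-volume example above):

```yaml
volumes:
  - name: cache-volume
    emptyDir:
      sizeLimit: 500Mi
volumeMounts:
  - name: cache-volume   # must match a volume name above
    mountPath: /cache
```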