Install Edge Delta as a Deployment
The recommended installation method for Edge Delta processing agents is a DaemonSet, which provides comprehensive Kubernetes support. However, you may prefer to deploy Edge Delta as a Deployment. The Deployment option is designed for central processing: it receives data from multiple upstream sources and performs all pipeline operations, such as transformation, aggregation, and filtering.
We strongly recommend the DaemonSet architecture, pushing processing to the edge whenever possible, since it is more efficient, easier to manage, and provides the full feature set of the Edge Delta agent.
To deploy Edge Delta as a Deployment, set the deployment.kind variable to Deployment (add --set deployment.kind=Deployment to the installation command, or set it in the values.yaml file). To specify the number of replicas, use the deployment.replicas variable (an integer value without quotes).
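If you prefer to keep these settings in values.yaml instead of passing --set flags, a minimal sketch follows. It assumes the values.yaml keys mirror the --set paths used in this guide; verify the key names against your version of the Edge Delta Helm chart before applying.

cat > values.yaml <<'EOF'
# Sketch only: key layout assumed from the --set paths shown in this guide.
secretApiKey:
  value: "123456789"   # your Edge Delta pipeline API key
deployment:
  kind: Deployment     # DaemonSet (default) or Deployment
  replicas: 2          # integer value without quotes
EOF

helm install edgedelta edgedelta/edgedelta -n edgedelta --create-namespace -f values.yaml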
NOTE: It is not possible to use helm upgrade to change the deployment.kind from a DaemonSet to a Deployment, or vice versa. If you are moving from one deployment.kind to another, you must first completely remove the existing Edge Delta agent installation using helm delete edgedelta -n edgedelta before running a clean Helm install with the new deployment type.
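For example, moving an existing installation from a DaemonSet to a Deployment would look roughly like this (a sketch; adjust the release name, namespace, and API key to your environment):

# Remove the existing release first; helm upgrade cannot switch deployment.kind.
helm delete edgedelta -n edgedelta

# Clean install with the new deployment type.
helm install edgedelta edgedelta/edgedelta -n edgedelta --create-namespace \
  --set secretApiKey.value=123456789 \
  --set deployment.kind=Deployment \
  --set deployment.replicas=2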
Install with a fixed replica count
Choose this option when you know the exact number of pods needed to handle the incoming data processing load.
helm install edgedelta edgedelta/edgedelta -n edgedelta --create-namespace \
--set secretApiKey.value=123456789 \
--set deployment.kind=Deployment \
--set deployment.replicas=2
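After the install completes, you can confirm that the requested number of agent pods is running (a quick sanity check; pod names depend on the chart's naming convention):

kubectl get pods -n edgedelta
# Expect 2 Edge Delta agent pods in Running state for deployment.replicas=2.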
Install with autoscaling enabled
This is the recommended option for most workloads because it scales the agents automatically based on demand. By default, autoscaling is driven by a CPU utilization threshold.
helm install edgedelta edgedelta/edgedelta -n edgedelta --create-namespace \
--set secretApiKey.value=123456789 \
--set deployment.kind=Deployment \
--set deployment.autoscaling.enabled=true \
--set deployment.autoscaling.minReplica=2 \
--set deployment.autoscaling.maxReplica=20
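With deployment.autoscaling.enabled=true, the chart is expected to create a HorizontalPodAutoscaler for the agent Deployment. You can verify it and watch the replica count with kubectl (the object name depends on the chart and release name):

kubectl get hpa -n edgedelta
# TARGETS shows current vs. target CPU utilization; REPLICAS should stay
# between minReplica (2) and maxReplica (20) as the workload changes.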
Install with a load balancer for the incoming data
When the pipeline listens on a port, Deployment agents should be placed behind a load balancer. If the pipeline uses multiple ports, add more ports to values.yaml via ports[1], ports[2], and so on (see the values.yaml sketch after the command below).
helm install edgedelta edgedelta/edgedelta -n edgedelta --create-namespace \
--set secretApiKey.value=123456789 \
--set deployment.kind=Deployment \
--set deployment.autoscaling.enabled=true \
--set deployment.autoscaling.minReplica=2 \
--set deployment.autoscaling.maxReplica=20 \
--set ports[0].name=processor-port \
--set ports[0].protocol=TCP \
--set ports[0].port=4547 \
--set ports[0].exposeInHost=false \
--set pushService.type=LoadBalancer
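If your pipeline listens on more than one port, the values.yaml form is usually easier to maintain than a long list of --set flags. The sketch below assumes the keys mirror the flags above; the second port is purely illustrative:

cat > values.yaml <<'EOF'
deployment:
  kind: Deployment
  autoscaling:
    enabled: true
    minReplica: 2
    maxReplica: 20
ports:
  - name: processor-port      # matches ports[0] in the command above
    protocol: TCP
    port: 4547
    exposeInHost: false
  - name: second-input-port   # hypothetical additional listener (ports[1])
    protocol: TCP
    port: 4548
    exposeInHost: false
pushService:
  type: LoadBalancer
EOF

Pass the file to the install command with -f values.yaml, alongside the secretApiKey setting.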
Select a load balancer that you are confident in operating in a highly available mode. We recommend a managed load balancer such as AWS ALB, Azure Load Balancer, or Google Cloud Network Load Balancer; these managed load balancers are highly available and integrate seamlessly with your cloud environment.
For more information on Kubernetes load balancers, please refer to the Kubernetes Load Balancer Documentation.
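Once the chart has created the LoadBalancer-type push service, you can look up the external address that upstream sources should send data to (the service name depends on the chart and release name):

kubectl get svc -n edgedelta
# The EXTERNAL-IP (or hostname) of the LoadBalancer-type service is the endpoint
# upstream sources should target, on the port configured above (4547).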
Restrictions
Some features available with the DaemonSet installation are not available for Deployments:
- Collecting Kubernetes logs (indirect collection is possible via other collectors, including Edge Delta, which can push data to Deployment installations)
- Collecting Kubernetes metrics
- Collecting Kubernetes traffic metrics
These features are not possible because Edge Delta needs a one-to-one correspondence with each node in the cluster to collect complete data without duplication, and mechanisms such as topology spread constraints are hard to configure with a Deployment. Therefore, these features are not yet fully supported for Deployments.