Install the Edge Delta Agent with kubectl

Installing the Edge Delta Fleet using kubectl.

Overview

The Edge Delta Fleet can be installed in a Kubernetes environment using kubectl. By default, it is installed as a DaemonSet, which runs a pod on every node. It analyzes logs from each pod on every node as well as metrics from the cluster, and streams them to the configured destinations.

Install with Default Settings

You can install Edge Delta using kubectl without changing any default settings.

Install an Edge Delta Fleet

Use the Kubernetes template option while following these steps:

  1. Click Pipelines.
  2. Click New Fleet.
  3. Select Edge Fleet and click Continue.
  4. Select the appropriate template and click Continue.
  5. Specify a name to identify the Fleet.
  6. Click Generate Config.
  7. Execute the installation commands; they include the unique ID for the Fleet (see the sketch after this list).
  8. Expand the namespaces and select the input sources you want to monitor.
  9. Select the Destination Outputs you want to send processed data to, such as the Edge Delta Observability Platform.
  10. Click Continue.
  11. Click View Dashboard.
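
The generated commands are unique to your Fleet, so use the exact commands shown after you click Generate Config. They typically create the edgedelta namespace, store your API key in the ed-api-key secret, and apply the agent manifest; the manifest URL and API key below are placeholders:

kubectl create namespace edgedelta
kubectl create secret generic ed-api-key \
  --namespace=edgedelta --from-literal=ed-api-key=<YOUR_API_KEY>
kubectl apply -f <MANIFEST_URL_FROM_GENERATED_CONFIG>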

Install with Custom Settings

You can create your own custom manifest. To start, download the default manifest and add custom variables to it. Then apply the local file. In this example, the custom-agent.yml file in the current folder is applied:

kubectl apply -f custom-agent.yml
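
After the manifest is applied, you can confirm that the agent pods are running in the edgedelta namespace:

kubectl get pods -n edgedelta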

mountPath

For custom Kubernetes deployments, you may need to update the mountPath to match the actual path of the container log folder. Some Kubernetes distributions use /docker/containers instead of the standard /var/lib/docker/containers. In these cases, update the mountPath in the manifest file (edgedelta-agent.yml) to match the actual path.
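
For example, assuming a distribution that writes container logs to /docker/containers, both the hostPath volume and its mount in the DaemonSet should point to that path (the volume name below is illustrative; match the one in your manifest):

        volumeMounts:
          - name: containerlogs
            mountPath: /docker/containers
            readOnly: true
      volumes:
        - name: containerlogs
          hostPath:
            path: /docker/containers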

SELinux

If you are running an SELinux-enforced Kubernetes cluster, you need to add the following securityContext configuration to the edgedelta-agent.yml manifest in the DaemonSet section. This update runs agent pods in privileged mode to allow them to collect logs from other pods.

securityContext:
  runAsUser: 0
  privileged: true
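
For reference, the block sits under the agent container spec in the DaemonSet (the container name below is illustrative; match your manifest):

      containers:
        - name: edgedelta-agent   # name is illustrative
          securityContext:
            runAsUser: 0
            privileged: true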

OpenShift

In an OpenShift cluster, you also need to run the following commands to allow agent pods to run in privileged mode:

oc adm policy add-scc-to-user privileged system:serviceaccount:edgedelta:edgedelta
oc patch namespace edgedelta -p \
'{"metadata": {"annotations": {"openshift.io/node-selector": ""}}}'

Run on Specific Nodes

To run the Fleet on specific nodes in your cluster, add a nodeSelector or nodeAffinity section to your pod configuration. For example, if the desired nodes are labeled logging=edgedelta, then adding the following nodeSelector restricts the Fleet pods to nodes that have the logging=edgedelta label.

spec:
  nodeSelector:
    logging: edgedelta
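
The same restriction can also be expressed with nodeAffinity (a sketch using the same logging=edgedelta label):

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: logging
                operator: In
                values:
                  - edgedelta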

To learn more, see this article.

In-Cluster Data Destinations

Edge Delta pods run in a dedicated edgedelta namespace. If you want to configure an output destination that resides within your Kubernetes cluster, then you must set a resolvable service endpoint in your Pipeline configuration. For example, if you have an elasticsearch-master Elasticsearch service in the elasticsearch namespace with port 9200 in your cluster-domain.example cluster, then you need to specify the Elastic output address as http://elasticsearch-master.elasticsearch.svc.cluster-domain.example:9200. To learn more, see this article.
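
If you are unsure whether the endpoint resolves, you can test it from inside the cluster with a temporary pod (a sketch; the hostname matches the Elasticsearch example above):

kubectl run dns-test --rm -it --image=busybox --restart=Never -- \
  nslookup elasticsearch-master.elasticsearch.svc.cluster-domain.example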

Run with Persistent Volume

A persistent volume claim (PVC) allows the Compactor to persist its data prior to flushing it downstream. Without a persistent volume and claim, the agent relies only on system memory. PVC is disabled by default. Enabling it improves reliability at the cost of a larger agent footprint. To run with a PVC:

  • Add an environment variable named ED_COMPACTOR_DATA_DIR to the edgedelta-compactor StatefulSet
  • Add a volumeMounts entry named compactor-data to the edgedelta-compactor StatefulSet
  • Add a volumeClaimTemplates section to the edgedelta-compactor StatefulSet

For example, uncomment the following parameters in the default manifest to enable PVC:

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: edgedelta-compactor
  namespace: edgedelta
  labels:
    app.kubernetes.io/name: edgedelta
    app.kubernetes.io/instance: edgedelta
    edgedelta/agent-type: compactor
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  podManagementPolicy: OrderedReady
  selector:
    matchLabels:
      app.kubernetes.io/name: edgedelta
      app.kubernetes.io/instance: edgedelta
  serviceName: ed-compactor-svc
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: edgedelta
        app.kubernetes.io/instance: edgedelta
        edgedelta/agent-type: compactor
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: edgedelta
      containers:
      - name: edgedelta-compactor
        image: gcr.io/edgedelta/agent:v0.1.101-rc.1
        ports:
          - name: compactor
            containerPort: 9199
        env:
          - name: ED_AGENT_MODE
            value: compactor
          - name: ED_COMPACTOR_PORT
            value: "9199"
          # - name: ED_COMPACTOR_DATA_DIR
          #   value: /var/edgedelta-compactor
          - name: ED_HOST_OVERRIDE
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: ED_API_KEY
            valueFrom:
              secretKeyRef:
                name: ed-api-key
                key: ed-api-key
          - name: ED_TRACE_FILES
            value: ""
        resources:
            limits:
              cpu: 2000m
              memory: 2000Mi
            requests:
              cpu: 200m
              memory: 300Mi
        imagePullPolicy: Always
        volumeMounts:
          # - name: compactor-data
          #   mountPath: /var/edgedelta-compactor
      terminationGracePeriodSeconds: 60
      volumes:      
  # volumeClaimTemplates:
  # - metadata:
  #     name: compactor-data
  #   spec:
  #     accessModes: [ "ReadWriteOnce" ]
  #     resources:
  #       requests:
  #         storage: 30Gi
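
Once the PVC parameters are uncommented, apply the updated manifest and verify that the claim is created and bound (a quick check; the file name assumes the edgedelta-agent.yml manifest referenced above):

kubectl apply -f edgedelta-agent.yml
kubectl get pvc -n edgedelta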