GitOps Principles Deployment of Edge Delta

Install Edge Delta declaratively using continuous deployment, locally hosted configurations, managed secrets, and a privately hosted container image.

Overview

In this tutorial you will install the Edge Delta agent on a Kubernetes cluster using Continuous Deployment (CD) with GitOps principles. To do this, you will configure CD tooling (Argo CD) to synchronize your cluster with the manifests on your configuration repository.

Later, when you make updates to the configuration repository, for example by changing the agent version or updating the configuration, they are automatically applied to the cluster by your CD tooling. The tooling polls the configuration repository regularly; if there is a mismatch between the configuration (desired state) and the cluster (current state), the configuration is pulled and applied to the cluster. This enables cluster self-healing.

In addition, in this tutorial you will save a dependent image (the Edge Delta agent) in your own image repository. Keeping your images and configurations isolated gives you control over changes. With this workflow, change control on the configuration and image repositories translates into change control on your cluster. You can also review changes to images and configurations using code diffs before applying them to your image repo, configuration repo, and, in turn, your cluster. This also makes maintaining custom settings easier, because you can isolate all your customizations from vendor updates to the default manifests.

Finally, this tutorial covers secret management.

Process Overview

Complete the following steps. You can use the tutorial that follows as a guide.

  1. Create the Pipeline ID secret
  2. Create the Organization ID secret
  3. Store the Edge Delta image.
  4. Create the Edge Delta configMap
  5. Create the Edge Delta agent manifest
  6. Create application definitions
  7. Push changes to the configuration repo
  8. Create and execute the app-of-apps definition

Install Edge Delta Declaratively

The following tutorial uses Argo CD to install the Edge Delta agent using public images and the Bitnami secrets manager called Sealed Secrets.

Argo CD is one of several applications that can monitor a target state expressed declaratively in a Git repo and pull changes into the deployed environment when the repo is updated. This approach leverages common code repository management workflows that are likely already in place in your organization. Using a common workflow aligns management of the Edge Delta agent with existing identity and access management, secret management, and version control. It also enables update strategies such as Blue-Green and Canary updates.

High Level Architecture

Kubernetes application definitions, application configurations, and environment configurations are stored in Git repositories that are protected with appropriate information security controls. While Argo CD polls the repo regularly for changes, a webhook can publish a change event to trigger polling. Argo CD performs automatic deployments based on the contents of the repository, which can be tracked by branch, tag, or version to match your organization's deployment practices. For each component that needs to be deployed, Argo CD is configured with an Argo CD application definition. This definition in turn specifies the location of the component's manifest.

The following example illustrates how to deploy the Edge Delta agent and manage its configuration using GitHub and Argo CD. The agent is defined with a Kubernetes application definition and the configuration is deployed as a ConfigMap. The Pipeline ID is also stored in the GitHub repository, but it is encrypted using Bitnami kubeseal along with the Bitnami Sealed Secrets controller. When it is deployed, the controller decrypts it into a secret. Argo CD is configured with a private SSH key to enable access to the GitHub repo. The app-of-apps pattern is used to deploy all the Argo CD application definitions stored in the Argo CD application definition folder.

The GitHub repo will be structured as follows. Indented bullets indicate the subfolder structure:

An argocd_apps folder will contain ArgoCD application definitions for each component. An application_manifests folder contains subfolders for each application. Within each of those folders will be a manifest.

  • argocd_apps
    • api_secret_app.yml
    • edgedelta_agent_config_app.yml
    • edgedelta_app.yml
    • orgidapp.yml
    • sealed_secrets.yml
  • application_manifests
    • apikey
      • apisecret.yml
    • appconfig
      • config.yml
    • edgedelta
      • edmanifest.yml
    • orgid
      • orgid.yml

Prerequisites:

To follow along with this tutorial there are a few prerequisites:

  • A private GitHub repo cloned on the local machine.
  • The repo folder structure defined as per the architecture overview
  • An SSH key for the repo added to the ssh-agent (in this tutorial the private key is in id_ed25519 and it does not have a passphrase); see the example after this list.
  • An image repository such as Docker Hub.
  • An Edge Delta Account.
  • Homebrew package manager installed.
  • Docker installed.
  • Kind installed.
  • Helm installed.
  • kubectl installed.

Create a Cluster

Use kind to create a cluster.

kind create cluster --name localimage

Install kubeseal and the Secrets Controller

You need Sealed Secrets to generate the secrets. It consists of kubeseal on the local host and the secrets controller in the cluster.

brew install kubeseal
helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
helm repo update
helm install sealed-secrets -n kube-system --set-string fullnameOverride=sealed-secrets-controller sealed-secrets/sealed-secrets 

In a production environment you may wish to install the chart from a private registry.
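
For example, a sketch of installing the same chart from a hypothetical private OCI registry (registry.example.com and the chart path are placeholders; this assumes your registry supports OCI charts):

helm install sealed-secrets -n kube-system \
  --set-string fullnameOverride=sealed-secrets-controller \
  oci://registry.example.com/charts/sealed-secrets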

Create an Edge Delta Agent

Select the Kubernetes template option while following these steps:

  1. Click Pipelines.
  2. Click New Fleet.
  3. Select Edge Fleet.
  4. Optionally, expand Advanced Fleet Configuration and select a pipeline with a configuration you want to duplicate.
  5. Click Continue.
  6. Select the appropriate template and click Continue.
  7. Specify a name to identify the Fleet.
  8. Click Generate Config.
  9. Execute the installation commands; they include the unique ID for the Fleet.
  10. Expand the namespaces and select the input sources you want to monitor.
  11. Select the Destination Outputs you want to send processed data to, such as the Edge Delta Observability Platform.
  12. Click Continue.
  13. Click View Dashboard.

Create a Demo Source Node

Create a demo source node and connect it to the mask_ssn node, using the following parameters. A sketch of the resulting pipeline YAML appears after this list.

  • name: demo_input
  • speed: 1s
  • error_interval: 1ms
  • error_count: 20
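
A minimal sketch of how this node and its link appear in the pipeline YAML, using the values listed above (the YAML generated by the Edge Delta App may differ slightly):

nodes:
- name: demo_input
  type: demo_input
  speed: 1s
  error_interval: 1ms
  error_count: 20

links:
- from: demo_input
  to: mask_ssn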

Copy the Pipeline ID, Config YAML, and Organization ID

You need the Pipeline ID to create a sealed secret, so copy the key for the agent you configured in the previous step. Also copy the Pipeline configuration YAML and your Organization ID for use in later steps.

  1. In the Edge Delta App, click Pipelines.
  2. Select the Fleet and click View/Edit Pipeline.
  3. Click Edit YAML and copy the Pipeline ID for use in a later step.
  4. Copy the configuration YAML and paste it into a temporary file called config.yml in the application_manifests folder for use in a later step.
  5. Click Admin - My Organization and copy your organization ID for use in a later step.

Create the Pipeline ID Secret

  1. Open your cloned repository, for example using VS Code.
  2. Navigate to the application_manifests/apikey folder in the cloned repo.
  3. Use Kubeseal to create the sealed secret for the Edge Delta API in the application_manifests/apikey folder, replacing 123456789 with the Edge Delta Pipeline ID you copied earlier:
kubectl --namespace edgedelta \
    create secret \
    generic ed-api-key \
    --dry-run=client \
    --from-literal ed-api-key="123456789" \
    --output yaml \
    | kubeseal \
    | tee apisecret.yml

This command creates the API secret key using kubeseal. It saves the sealed secret as application_manifests/apikey/apisecret.yml. This path will be referenced by the Argo CD configuration file that you create later.

Create the organization ID sealed secret

  1. Navigate to the application_manifests/orgid folder.
  2. Use kubeseal to create the sealed secret for your Edge Delta Organization ID in the orgid folder, replacing 123456789 with your ID:
kubectl --namespace edgedelta \
    create secret \
    generic ed-org-id \
    --dry-run=client \
    --from-literal ed-org-id="123456789" \
    --output yaml \
    | kubeseal \
    | tee orgid.yml

This command creates the Organization ID sealed secret using kubeseal. It saves the sealed secret in application_manifests/orgid/orgid.yml. This path will be referenced by the Argo CD configuration file that you create later.

Back Up the Encryption Key

Back up the Sealed-Secrets main encryption key that was used to encrypt the API and Organization ID secrets. It should not be stored with the sealed secrets on GitHub. When you create the “Production” cluster, or any new cluster that will use the API and Organization ID keys you created, you will apply the main key before installing (or allowing Argo CD to install) Sealed-Secrets.

  1. Navigate out of your cloned repo to a local private folder.
  2. Copy the encryption key and save it as a file called main.key:
kubectl get secret -n kube-system -l sealedsecrets.bitnami.com/sealed-secrets-key -o yaml >main.key

Create the Edge Delta Agent Manifest

  1. Open this YAML in a browser: https://raw.githubusercontent.com/edgedelta/k8s/master/edgedelta-agent-k8s-from-helm.yaml
  2. Copy the current Edge Delta Agent manifest.

The current manifest location is also listed in the Kubernetes agent installation instructions, in the kubectl apply -f step.

  3. Paste the agent manifest into a file called edmanifest.yml in the application_manifests/edgedelta folder. Alternatively, download the manifest directly from the command line, as shown below.
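
As an alternative to copying from the browser, a sketch for downloading the manifest directly (run from the root of the cloned repo; same URL as above):

curl -o application_manifests/edgedelta/edmanifest.yml https://raw.githubusercontent.com/edgedelta/k8s/master/edgedelta-agent-k8s-from-helm.yaml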

Create an Image Store Containing the Edge Delta Images

  1. Create an image store and ensure you have access to it. In this example, a Docker Hub account is used.
  2. Examine the edmanifest.yml file from the previous step and note the image repo location for the agent and the Compactor Agent as well as the version.
      - name: edgedelta-agent
        image: gcr.io/edgedelta/agent:v0.1.95
  3. Pull the specific Edge Delta agent image and version from the public image store.
docker pull gcr.io/edgedelta/agent:v0.1.95
  4. Tag the local image with your image repo URL and the tag.
docker tag gcr.io/edgedelta/agent:v0.1.95 <account/reponame>:v0.1.95
  5. Push the local image to your store using the tag.
docker push <account/reponame>:v0.1.95
  6. Clean up the local image.
docker image rm -f gcr.io/edgedelta/agent:v0.1.95

Create the Edge Delta ConfigMap definition

  1. Navigate to the application_manifests folder.
  2. Use this command to convert the temporary config.yml file into a ConfigMap. The new ConfigMap file will also be called config.yml, but it will be saved in the appconfig folder. Call the ConfigMap edgedelta-agent-config and specify the edgedelta namespace:
kubectl create configmap edgedelta-agent-config --from-file=config.yml -n edgedelta --dry-run=client -o yaml  > appconfig/config.yml

At the time of writing the configMap is as follows:

apiVersion: v1
data:
  config.yml: |
    version: v3

    settings:
      tag: local_images
      log:
        level: info
      archive_flush_interval: 1m0s
      archive_max_byte_limit: 16MB

    links:
    - from: kubernetes_logs
      to: mask_ssn
    - from: ed_component_health
      to: ed_health
    - from: ed_node_health
      to: ed_health
    - from: ed_agent_stats
      to: ed_metrics
    - from: ed_pipeline_io_stats
      to: ed_metrics
    - from: ed_k8s_metrics_input
      to: ed_metrics
    - from: k8s_traffic_input
      to: ed_metrics
    - from: ed_system_stats
      to: ed_metrics
    - from: mask_ssn
      to: drop_trace_level
    - from: mask_ssn
      to: error_monitoring
    - from: mask_ssn
      to: exception_monitoring
    - from: mask_ssn
      to: log_to_patterns
    - from: mask_ssn
      to: negative_sentiment_monitoring
    - from: drop_trace_level
      to: ed_archive
    - from: error_monitoring
      to: ed_metrics
    - from: exception_monitoring
      to: ed_metrics
    - from: negative_sentiment_monitoring
      to: ed_metrics
    - from: log_to_patterns
      to: ed_patterns
    - from: demo_input
      to: mask_ssn

    nodes:
    - name: kubernetes_logs
      type: kubernetes_input
      include:
      - k8s.namespace.name=.*
      exclude:
      - k8s.namespace.name=kube-system
      - k8s.namespace.name=kube-public
      - k8s.namespace.name=kube-node-lease
      - k8s.pod.name=edgedelta
      - k8s.pod.name=prometheus
      - k8s.pod.name=promtail
      - k8s.pod.name=node-exporter
    - name: ed_component_health
      type: ed_component_health_input
    - name: ed_node_health
      type: ed_node_health_input
    - name: ed_agent_stats
      type: ed_agent_stats_input
    - name: ed_pipeline_io_stats
      type: ed_pipeline_io_stats_input
    - name: ed_k8s_metrics_input
      type: ed_k8s_metrics_input
    - name: k8s_traffic_input
      type: k8s_traffic_input
    - name: ed_system_stats
      type: ed_system_stats_input
    - name: mask_ssn
      type: mask
      pattern: \d{3}\-\d{2}-\d{4}
      mask: REDACTED
    - name: drop_trace_level
      type: regex_filter
      pattern: TRACE
      negate: true
    - name: error_monitoring
      type: log_to_metric
      pattern: (?i)error
    - name: exception_monitoring
      type: log_to_metric
      pattern: (?i)exception
    - name: negative_sentiment_monitoring
      type: log_to_metric
      pattern: (?i)(exception|fail|timeout|broken|caught|denied|abort|insufficient|killed|killing|malformed|unsuccessful|outofmemory|panic|undefined)
    - name: log_to_patterns
      type: log_to_pattern
      reporting_frequency: 1m0s
    - name: ed_archive
      type: ed_archive_output
    - name: ed_metrics
      type: ed_metrics_output
    - name: ed_health
      type: ed_health_output
    - name: ed_patterns
      type: ed_patterns_output
    - name: demo_input
      type: demo_input
      speed: 100ms
      error_interval: 1m0s
      error_count: 20    
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: edgedelta-agent-config
  namespace: edgedelta
  3. Delete the temporary configuration file application_manifests/config.yml.

Do not delete application_manifests/appconfig/config.yml.

Customize the Edge Delta Agent Manifest

Open edmanifest.yml and make the following changes to the file to ensure it will work with the ConfigMap and image store:

  1. Configure the agent not to download a configuration from the Edge Delta back end by adding the ED_SKIP_CONF_DOWNLOAD environment variable with a value of "1":
spec:
  template: 
    spec:
      containers: 
      - name: edgedelta-agent
        env:
          - name: ED_SKIP_CONF_DOWNLOAD
            value: "1"
  2. Create the ED_ORG_ID environment variable with a secretKeyRef of ed-org-id if it is missing:
spec:
  template: 
    spec:
      containers: 
      - name: edgedelta-agent
        env:
          - name: ED_ORG_ID
            valueFrom:
              secretKeyRef:
                key: ed-org-id
                name: ed-org-id
  3. Add commands to specify the location of the mounted configuration:
spec:
  template: 
    spec:
      containers: 
      - name: edgedelta-agent
        command:
          - /edgedelta/edgedelta
          - -c
          - /config/config.yml

The configuration file name is config.yml to align with the ConfigMap configured in the previous step.

  4. Add a volumeMount called edconfig with a mountPath of /config/. This mounts the ConfigMap data in the directory specified in mountPath, which ensures that updates made to the ConfigMap are applied automatically without restarting the pod.
spec:
  template: 
    spec:
      containers: 
      - name: edgedelta-agent
        volumeMounts:
          - name: edconfig
            mountPath: /config/
  5. Add a volume called edconfig that references the edgedelta-agent-config ConfigMap:
spec:
  template: 
    spec:
      volumes:
        - name: edconfig
          configMap:
            name: edgedelta-agent-config
  6. Update the image locations for the agent and the compactor:
apiVersion: apps/v1
kind: DaemonSet
spec:
  template:
    spec:
      containers:
      - name: edgedelta-agent
        image: docker.io/account/reponame:v0.1.95
apiVersion: apps/v1
kind: StatefulSet
spec:
  template:
    spec:
      serviceAccountName: edgedelta
      containers:
      - name: edgedelta-compactor
        image: docker.io/account/reponame:v0.1.95

The final result at the time of writing:

apiVersion: v1
kind: Namespace
metadata:
  name: edgedelta
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: edgedelta
  namespace: edgedelta
  annotations:
    prometheus.io/scrape: "true"
  labels:
    k8s-app: edgedelta-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: edgedelta-logging
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: edgedelta-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: edgedelta
      hostPID: true
      hostNetwork: true
      containers:
      - name: edgedelta-agent
        image: docker.io/account/reponame:v0.1.95
        env:
          - name: ED_SKIP_CONF_DOWNLOAD
            value: "1"
          - name: ED_ORG_ID
            valueFrom:
              secretKeyRef:
                key: ed-org-id
                name: ed-org-id
          - name: ED_API_KEY
            valueFrom:
              secretKeyRef:
                key: ed-api-key
                name: ed-api-key
          - name: ED_HOST_OVERRIDE
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: ED_LEADER_ELECTION_ENABLED
            value: "1"
          - name: TRACER_SERVER_PORT
            value: "9595"
          - name: ED_ENABLE_TRAFFIC_TRACER
            value: "1"
          - name: ED_SERVICE_DNS_REQUIRED
            value: "1"
          - name: ED_COMPACT_SERVICE_ENDPOINT
            value: ed-compactor-svc.edgedelta.svc.cluster.local:9199
        command:
          - /edgedelta/edgedelta
          - -c
          - /config/config.yml          
        resources:
          limits:
            memory: 2048Mi
          requests:
            cpu: 200m
            memory: 256Mi
        volumeMounts:
          - name: edconfig
            mountPath: /config/          
          - name: varlog
            mountPath: /var/log
            readOnly: true
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
            readOnly: true
          - name: persisting-cursor-storage
            mountPath: /var/lib/edgedelta
          - name: cgroup
            mountPath: /sys/fs/cgroup
          - name: debugfs
            mountPath: /sys/kernel/debug
          - name: netns
            mountPath: /var/run/netns
          - name: proc
            mountPath: /proc
        securityContext:
          privileged: true
      terminationGracePeriodSeconds: 10
      volumes:
        - name: edconfig
          configMap:
            name: edgedelta-agent-config        
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: persisting-cursor-storage
          hostPath:
            path: /var/lib/edgedelta
            type: DirectoryOrCreate
        - name: cgroup
          hostPath:
            path: /sys/fs/cgroup
        - name: debugfs
          hostPath:
            path: /sys/kernel/debug
        - name: netns
          hostPath:
            path: /var/run/netns
        - name: proc
          hostPath:
            path: /proc
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: edgedelta
subjects:
- kind: ServiceAccount
  name: edgedelta
  namespace: edgedelta
roleRef:
  kind: ClusterRole
  name: edgedelta
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: edgedelta
  labels:
    k8s-app: edgedelta-logging
rules:
- apiGroups: [""] 
  resources:
  - namespaces
  - pods
  - events
  - nodes
  - nodes/metrics
  - services
  verbs:
  - get
  - watch
  - list
- apiGroups: [""] 
  resources:
  - events
  verbs:
  - create
- apiGroups: ["apps"]
  resources:
  - daemonsets
  - deployments
  - replicasets
  - statefulsets
  verbs:
  - watch
  - list
- apiGroups: ["batch"]
  resources:
  - jobs
  - cronjobs
  verbs:
  - watch
  - list
- apiGroups: ["coordination.k8s.io"]
  resources:
  - leases
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
- apiGroups: ["metrics.k8s.io"]
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: edgedelta
  namespace: edgedelta
  labels:
    k8s-app: edgedelta-logging
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: edgedelta-compactor
  namespace: edgedelta
  labels:
    k8s-app: edgedelta-compactor
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: edgedelta-compactor
  serviceName: ed-compactor-svc
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: edgedelta-compactor
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: edgedelta
      containers:
      - name: edgedelta-compactor
        image: docker.io/account/reponame:v0.1.95
        ports:
          - name: compactor
            containerPort: 9199
        env:
          - name: ED_AGENT_MODE
            value: compactor
          - name: ED_COMPACTOR_PORT
            value: "9199"
          - name: ED_COMPACTOR_DATA_DIR
            value: /var/edgedelta-compactor
          - name: ED_HOST_OVERRIDE
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: ED_API_KEY
            valueFrom:
              secretKeyRef:
                name: ed-api-key
                key: ed-api-key
          - name: ED_TRACE_FILES
            value: ""
        resources:
            limits:
              cpu: 2000m
              memory: 2000Mi
            requests:
              cpu: 1000m
              memory: 1000Mi
        imagePullPolicy: Always
        volumeMounts:
          - name: compactor-data
            mountPath: /var/edgedelta-compactor
      terminationGracePeriodSeconds: 60
      volumes:
  volumeClaimTemplates:
  - metadata:
      name: compactor-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 30Gi
---
kind: Service
apiVersion: v1
metadata:
  name: ed-compactor-svc
  namespace: edgedelta
spec:
  selector:
    k8s-app: edgedelta-compactor
  ports:
    - port: 9199
      name: compactor-port

Create Argo CD application definitions

Next you create the Argo CD application definitions, which point to the manifests and configure the Argo CD settings. They set Argo CD to prune and self-heal resources. This removes resources that no longer have manifests in the configuration repo (prune), and it re-applies configuration repo manifests if conflicting changes are made directly to the cluster (selfHeal).

Note how the resource paths align with the folder structure configured so far.

  1. Populate the following files in the argocd_apps folder.
  • api_secret_app.yml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: apisecret
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: 'git@github.com:account/reponame.git'
    path: application_manifests/apikey
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: edgedelta
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
    automated:
      prune: true
      selfHeal: true
  • edgedelta_agent_config_app.yml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: edconfig
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: 'git@github.com:account/reponame.git'
    path: application_manifests/appconfig
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: edgedelta
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
    automated:
      prune: true
      selfHeal: true
  • orgidapp.yml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: orgsecret
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: 'git@github.com:account/reponame.git'
    path: application_manifests/orgid
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: edgedelta
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
    automated:
      prune: true
      selfHeal: true
  • edgedelta_app.yml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: edgedelta
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@github.com:account/reponame.git'
    path: application_manifests/edgedelta
  destination:
    server: "https://kubernetes.default.svc"
    namespace: edgedelta
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
  2. Create an application definition for the Sealed-Secrets Controller helm chart from Bitnami; at the time of writing, the version is 2.10.0. This definition includes the helm parameter override required by the Sealed-Secrets installation instructions.
  • sealed_secrets.yml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sealed-secrets
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: 'https://bitnami-labs.github.io/sealed-secrets'
    targetRevision: 2.10.0
    helm:
      parameters:
        - name: fullnameOverride
          value: sealed-secrets-controller
    chart: sealed-secrets
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: kube-system
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
    automated:
      selfHeal: true     

In a production environment you may wish to install the chart from a private registry.

Configure the Repo Location

So far, some of the Argo CD application definitions contain references to a placeholder repo:

repoURL: 'git@github.com:account/reponame.git'

Find & Replace all instances of this reference with your actual repo’s SSH URL.

Push changes to the private repo

Add, commit and push the changes to your repo.
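
A minimal sketch (the commit message is illustrative):

git add .
git commit -m "Add Edge Delta manifests and Argo CD application definitions"
git push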

Create a new Cluster

  1. Delete the cluster that was used to create the sealed secrets. A new cluster will be used to demonstrate how a production cluster would be set up with Argo CD and Sealed Secrets.
kind delete cluster --name localimage
  2. Create a new cluster:
kind create cluster

Deploy the Main Encryption Key

Deploy the main encryption key that was backed up earlier. This will be used by Sealed Secrets to decrypt the secrets.

  1. Navigate to the main.key file’s location (it should not be in the cloned repo; it should be held in a secrets management tool, separate from the API and Organization ID secrets).
  2. Deploy the main.key manifest:
kubectl apply -f main.key

Install Argo CD

Next, install Argo CD in the cluster.

kubectl create namespace argocd 
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

In a production environment you may wish to install Argo CD with a private manifest from a private image store, similarly to how the Edge Delta agent image is handled in previous steps.

Port Forward Argo CD to the localhost

Check that the pods are running, then open a new terminal and port forward the service to your localhost.

kubectl get pods -n argocd
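
If you prefer to block until the pods are ready rather than polling, a sketch using kubectl wait:

kubectl wait --for=condition=Ready pods --all -n argocd --timeout=300s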

In a new terminal:

kubectl port-forward svc/argocd-server -n argocd 8080:443

Log into Argo CD

In a terminal that isn’t being used for port forwarding, get the default Argo CD password.

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo

Copy the password and use it to log into Argo CD using the admin account:

argocd login localhost:8080
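
Because the port-forwarded endpoint uses a self-signed certificate, you may need to skip TLS verification and pass the credentials explicitly; a sketch using standard argocd CLI flags (replace the password placeholder with the value you copied):

argocd login localhost:8080 --username admin --password <password-from-previous-step> --insecure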

Configure SSH keys

Provide Argo CD with your private key so it can access your repo. Replace the GitHub location with the SSH address of your own repo, and specify the file containing the SSH key for your repo if it isn’t id_ed25519.

argocd repo add git@github.com:account/reponame.git --ssh-private-key-path ~/.ssh/id_ed25519

Create the app-of-apps Definition

  1. Create a file named app-of-apps.yml and paste the following contents. It should not be in the argocd_apps folder. Replace the repoURL with the SSH address of your own repo.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app-of-apps
spec:
  destination:
    name: ''
    namespace: argocd
    server: 'https://kubernetes.default.svc'
  source:
    path: argocd_apps
    repoURL: 'git@github.com:account/reponame.git'
    targetRevision: HEAD
  sources: []
  project: default
  syncPolicy:
    syncOptions: []
  2. Deploy the app-of-apps application in the argocd namespace:
kubectl apply -n argocd -f app-of-apps.yml

Sync the app-of-apps application

Use the argocd app sync command to synchronize the app-of-apps application with its configured repo.

argocd app sync app-of-apps 

This causes Argo CD to launch each application defined in the argocd_apps folder. In turn, these applications deploy all the manifests, including the Sealed Secrets controller, the secrets, the Pipeline configuration, and the agent itself.
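
To check progress, a sketch using standard argocd CLI commands (the application names match the definitions created earlier):

argocd app list
argocd app get edgedelta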

You can log into the Argo CD GUI (localhost:8080) using the admin account and the password you got in a previous step to see the deployed applications.

After a few minutes, you can log into Edge Delta and see the output of the demo node start to populate the interface.

Because the cluster is not deployed in a cloud-hosted environment, you might not see some of the metrics and workloads populated in the interface.

Update the Demo Node Configuration

To update a configuration, for example to change the demo parameters in the ConfigMap or to configure an input, simply make a change to the config.yml file and push the change to the repo.

  1. Open config.yml
  2. Update the demo_input node (a sketch of an edited node appears after this list).
  3. Push changes to the repo.
  4. The edconfig Argo CD application will show as out of sync. You can wait for it to synchronize or click SYNC to synchronize it manually.
  5. You can also view the live manifest in the Argo CD GUI to ensure it has been updated.
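
For example, a minimal sketch of an edited demo_input node inside the ConfigMap at application_manifests/appconfig/config.yml (the values shown are illustrative; keep the indentation used inside the config.yml data block):

    - name: demo_input
      type: demo_input
      speed: 500ms
      error_interval: 30s
      error_count: 10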