Install Linux Agent

Overview

You can use this document to learn how to install the Edge Delta agent for your Linux-based operating system.

Note

Before you deploy the agent, we recommend that you review the Review Agent Requirements document.


Step 1: Create a Configuration and Download the Agent

  1. In the Edge Delta App, on the left-side navigation, click Data Pipeline, and then click Agent Settings.
  2. Click Create Configuration.
  3. Select Linux.
  4. Click Save.
  5. In the table, locate the newly created configuration, click the corresponding vertical green ellipsis, and then click Deploy Instructions.
  6. Click Linux.
  7. In the window that appears, copy the command.
    • This window also displays your API key. Copy this key for a later step.

Step 2: Install the Agent

There are two ways to install the agent:

  • Option 1: Standard Installation
    • With this option, you will install the agent via cURL in a bash script.
    • This installation is the standard and recommended way to install the agent.
  • Option 2: Offline Installation
    • With this option, you will not use cURL in a bash script on your production environment.
    • You can use this installation method if you have security concerns.

Option 1: Standard Installation

Open a terminal and, using sudo, paste the command that you copied from the Edge Delta Deploy Instructions window. The command includes your API key; in the following example, the key is 12345.

sudo ED_API_KEY=12345 bash -c "$(curl -L https://release.edgedelta.com/release/install.sh)"

The installation process will deploy Edge Delta into the /opt/edgedelta/agent/ path. Additionally, the edgedelta system service will start automatically with the default configuration.
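
To confirm that the service is running, you can check it with systemd; this is a minimal check, assuming a systemd-based host and the default service name edgedelta:

# Show the current status of the edgedelta service (systemd assumed).
sudo systemctl status edgedelta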

Note

The ED_ENV_VARS special variable can be used in the installation command to pass one or more persistent environment variables to the agent, which will run as the system service. To view a full list of variables that the agent supports, see Review Environment Variables for Agent Installation.

sudo ED_API_KEY=12345 \
ED_ENV_VARS="MY_VAR1=MY_VALUE_1,MY_VAR2=MY_VALUE_2" \
bash -c "$(curl -L https://release.edgedelta.com/release/install.sh)"

Note

The https://release.edgedelta.com/release/install.sh release package:

  • Detects your architecture and operating system, and then
  • Chooses and downloads the latest version of the agent self-extracting script, which includes the content to be extracted at the end of the script.

The script's content and extractable scripts are available for inspection at https://release.edgedelta.com/release/install.sh.

To verify the package's integrity, the script header includes the extraction commands and a checksum of the content.

The script will fail if the content has been tampered with.

  • For example, the v0.1.19/edgedelta-linux-amd64.sh header includes:
    • CRCsum="1944320463"
    • MD5="a98b537444f18d97a06b428b9cb223ce"

If the package has not been tampered with, then the script will:

  • Extract the agent into a temporary directory, then
  • Set the apikey file with the given ED_API_KEY environment variable, and then
  • Run unix_install.sh.
    • This command will copy the content to /opt/edgedelta/agent/ and then run the following commands to install edgedelta as a system service and start the service:
      • ./edgedelta -s install
      • ./edgedelta -s start
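
If you prefer to inspect the installer before executing it, you can download it first and review it locally; this is a minimal sketch in which the local filename edgedelta-install.sh is illustrative:

# Download the installer without running it, then review its header and contents.
curl -L https://release.edgedelta.com/release/install.sh -o edgedelta-install.sh
less edgedelta-install.sh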

Option 2: Offline Installation

  1. Follow the steps outlined in Option 1 on a non-production machine with the same architecture and OS as the target production machine.
  2. Use the following command to compress the agent folder:
    sudo tar -czvf agent_archive.tgz /opt/edgedelta
    
  3. Copy agent_archive.tgz to the target machine via SSH or other means (see the example after this list).
  4. Use the following command to extract the archive under /opt/edgedelta:
    sudo tar -xzvf agent_archive.tgz -C /
    
  5. Use the following commands to install and start the service:
    cd /opt/edgedelta/agent/
    
    sudo ./edgedelta -s install
    
    sudo ./edgedelta -s start
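
For step 3 above, one way to copy the archive to the target machine is scp; this is a minimal sketch in which the user, host, and destination path are placeholders:

# Copy the compressed agent folder to the target host over SSH (placeholders shown).
scp agent_archive.tgz user@target-host:/tmp/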
    

(Optional) Step 3: Set Memory Limits for the Agent

Use the command below to set physical and virtual memory limits.

Before you begin

Before you set memory limits, consider the following statements:

  • This action will create a backup file (.bak) of the edgedelta.service file, in case you need to revert your changes.
  • If the edgedelta service reaches its memory limits, then the edgedelta service will restart.
    • If you notice that the service restarts frequently, then you may want to increase the limits. If you do not want to change your memory limits, then you can enable memory profiling and contact support@edgedelta.com with a heapdump from the profiler.
    • To learn more, see Troubleshoot Memory Limits.
  1. Review the following sample command, where memory limits are set to 500MB for physical and 3GB for virtual:
    sudo sed -i.bak 's/^\[Service\]/\[Service\]\nMemoryLimit=500M\nMemorySwapMax=3G/g' /etc/systemd/system/edgedelta.service
    
  2. To verify that the limits are enabled, run the following commands.
    • memory.limit_in_bytes displays the physical memory limit.
    • memory.memsw.limit_in_bytes displays the virtual memory limit.
    cat /sys/fs/cgroup/memory/system.slice/edgedelta.service/memory.limit_in_bytes
    cat /sys/fs/cgroup/memory/system.slice/edgedelta.service/memory.memsw.limit_in_bytes
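
If the values above still reflect the old limits, the edited unit file usually needs to be reloaded and the service restarted before the new limits apply; this is a minimal sketch, assuming systemd:

# Reload systemd unit files and restart the edgedelta service so the limits take effect.
sudo systemctl daemon-reload
sudo systemctl restart edgedelta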

See Linux Agent Troubleshooting for information about memory limit issues. 

Review Example Configuration

The following example shows a default configuration that can be deployed as-is.

You can comment (or uncomment) parameters as needed, as well as populate the appropriate values, to create your desired configuration.

#This is a sample edgedelta agent config. 
version: v2

#Global settings to apply to the agent
agent_settings:
  tag: linux_onboarding
  log:
    level: info
  anomaly_capture_size: 1000
  anomaly_confidence_period: 30m

#Inputs define which datasets to monitor (files, containers, syslog ports, windows events, etc.)
inputs:
  system_stats:
    labels: "system_stats"
  files:
    - labels: "system_logs, auth"
      path: "/var/log/auth.log"
    - labels: "system_logs, syslog"
      path: "/var/log/syslog"
    - labels: "system_logs, secure"
      path: "/var/log/secure"
    - labels: "system_logs, messages"
      path: "/var/log/messages"
  #ports:
  # - labels: "syslog_ports"
  #   protocol: tcp
  #   port: 1514

#Outputs define destinations to send both streaming data, and trigger data (alerts/automation/ticketing)
outputs:
  #Streams define destinations to send "streaming data" such as statistics, anomaly captures, etc. (Splunk, Sumo Logic, New Relic, Datadog, InfluxDB, etc.)
  streams:
    ##Sumo Logic Example
    #- name: sumo-logic-integration
    #  type: sumologic
    #  endpoint: "<ADD SUMO LOGIC HTTPS ENDPOINT>"

    #Splunk Example
    #- name: splunk-integration
    #  type: splunk
    #  endpoint: "<ADD SPLUNK HEC ENDPOINT>"
    #  token: "<ADD SPLUNK TOKEN>"

    ##Datadog Example
    #- name: datadog-integration
    #  type: datadog
    #  api_key: "<ADD DATADOG API KEY>"

    ##New Relic Example
    #- name: new-relic-integration
    #   type: newrelic
    #   endpoint: "<ADD NEW RELIC API KEY>"

    ##Influxdb Example
    #- name: influxdb-integration
    #  type: influxdb
    #  endpoint: "<ADD INFLUXDB ENDPOINT>"
    #  port: <ADD PORT>
    #  features: all
    #  tls:
    #    disable_verify: true
    #  token: "<ADD JWT TOKEN>"
    #  db: "<ADD INFLUX DATABASE>"

  ##Triggers define destinations for alerts/automation (Slack, PagerDuty, ServiceNow, etc)
  triggers:
    ##Slack Example
    #- name: slack-integration
    #  type: slack
    #  endpoint: "<ADD SLACK WEBHOOK/APP ENDPOINT>"


#Processors define analytics and statistics to apply to specific datasets
processors:
  cluster:
    name: clustering
    num_of_clusters: 50          # keep track of only top 50 and bottom 50 clusters
    samples_per_cluster: 2       # keep last 2 messages of each cluster
    reporting_frequency: 30s     # report cluster samples every 30 seconds

#Regexes define specific keywords and patterns for matching, aggregation, statistics, etc. 
  regexes:
    - name: "auth_failed"
      pattern: "\\b(?:[Aa]uthentication failure|FAILED SU|input_userauth_request: invalid user|Invalid user|Failed publickey|Failed password)\\b"
      trigger_thresholds:
        anomaly_probability_percentage: 95

    - name: "auth_success"
      pattern: "\\b(?:su:|sudo:|sshd:|sshd\\[|pam_unix).*(?:\\b[Aa]ccepted|session opened|to\\b.*\\bon)\\b"
      trigger_thresholds:
        anomaly_probability_percentage: 95

    - name: "auth_root"
      pattern: "\\b(?:sudo|root|su)\\b"
      trigger_thresholds:
        anomaly_probability_percentage: 95

    - name: "auth_su_attempt"
      pattern: "\\b(?:su:|su\\[).*(?:[Aa]uthentication failure|FAILED SU|input_userauth_request: invalid user|Invalid user|Failed publickey|Failed password)\\b"
      trigger_thresholds:
        anomaly_probability_percentage: 95

    - name: "system_start"
      pattern: "\\bInitializing cgroup subsys cpuset\\b"
      trigger_thresholds:
        anomaly_probability_percentage: 95

    - name: "system_stop"
      pattern: "\\b(?:exiting|exited|terminating|terminated|shutting)\\b"
      trigger_thresholds:
        anomaly_probability_percentage: 95

    - name: "error-check"
      pattern: "error|ERROR|problem|ERR|Err"
      trigger_thresholds:
        anomaly_probability_percentage: 95

#Workflows define the mapping between input sources, which processors to apply, and which destinations to send the streams/triggers to
workflows:
  system_stats_workflow:
    input_labels:
      - system_stats

  example_workflow:
    input_labels:
      - system_logs
    processors:
      - clustering
      - auth_failed
      - auth_success
      - auth_root
      - auth_su_attempt
      - system_start
      - system_stop
      - error-check
    destinations:
      #- streaming_destination_a    #Replace with configured streaming destination
      #- streaming_destination_b    #Replace with configured streaming destination
      #- trigger_destination_a      #Replace with configured trigger destination
      #- trigger_destination_b      #Replace with configured trigger destination

See Linux Agent Troubleshooting for information about resolving common issues. 

View Your Agent Version

  1. In the Edge Delta App, on the left-side navigation, click Data Pipeline, and then click Pipeline Status.
  2. Navigate to the Active Agents table.
  3. Review the Agent Version column for your corresponding agent.

Upgrade the Agent

To upgrade the agent, run the same installation command that you originally used to deploy the agent.

This action will reinstall the agent, which causes it to restart.

The upgrade process will take 30 seconds or less to complete.

To locate the installation command:

  1. In the Edge Delta App, on the left-side navigation, click Data Pipeline, and then click Agent Settings.
  2. Locate the desired agent configuration, then under Actions, click the vertical ellipsis, and then click Deploy Instructions.
  3. Click Linux.
  4. Copy and run the command on your command line.
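
For example, if you originally used the standard installation, re-running the same command upgrades the agent in place (the API key below is the placeholder 12345):

sudo ED_API_KEY=12345 bash -c "$(curl -L https://release.edgedelta.com/release/install.sh)"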

Uninstall the Agent

To uninstall the agent, run the following script as the root user:

#!/bin/bash
set -uex

BITS=$(getconf LONG_BIT)
if [ "$BITS" ==  "32" ]; then
  echo "This script does not support 32 bit OS. Contact info@edgedelta.com"
fi

echo "Removing agent service"
sudo /opt/edgedelta/agent/edgedelta -s uninstall 

echo "Removing agent folder"
sudo rm -rf /opt/edgedelta/agent/

echo "Done"
