Edge Delta Cloud Pipelines
Overview
Cloud pipelines are Edge Delta agents that are hosted in cloud infrastructure owned and managed by the Edge Delta team.
For alternative deployment options, see Third Party Agents (below) or Agentless integrations.
You might decide to use Cloud pipelines in the following scenarios:
- data sources are serverless workloads such as AWS Lambda functions or events generated from Amazon Kinesis.
- lightweight edge environments host thin data producers, such as Internet of Things devices.
- you do not want to take on resource management associated with hosting an additional workload.
- security limitations exist for deploying a pipeline in your environment.

You can create, configure, and remove pipeline configurations using the Edge Delta interface. Each Cloud pipeline exposes an HTTP, HTTPS, or gRPCS endpoint so you can push data to it.
Push or Pull Data Inputs
Cloud pipelines support both push and pull data input integrations. To push data to a Cloud pipeline, configure your sources, such as an OTEL Collector, a CDN, or Amazon Kinesis, to send data to the pipeline endpoint. To pull data into a Cloud pipeline, add a source node, such as an HTTP pull node configured with your data source's endpoint, to the Cloud pipeline's configuration.
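As an illustrative sketch only, a pull-style source added to a Cloud pipeline configuration might look like the following. The node type and field names below are placeholders, not the exact Edge Delta schema; check the node reference in the pipeline builder for the real options:

```yaml
nodes:
  # Hypothetical HTTP pull source. The node type, endpoint,
  # and interval fields shown here are illustrative placeholders.
  - name: my_http_pull
    type: http_pull_input              # assumed node type
    endpoint: https://api.example.com/v1/logs
    pull_interval: 30s                 # assumed polling interval field
```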
Managing a Cloud Pipeline
Click Pipelines and select Cloud in the Filter by list to view existing Cloud pipelines.

Creating a Cloud Pipeline
- Click Pipelines.
- Click New Pipeline.
- Select Cloud.
- Specify a name to identify the pipeline.
- Select Compute Units based on your estimated traffic volume. This sets the maximum bandwidth the pipeline can handle before signalling an error. The number of compute units used per hour counts towards your plan usage.
- Click Deploy Cloud Pipeline.

The new Cloud pipeline is added to the pipelines table.
Note: The endpoint is listed on the pipeline’s settings section. Click the kebab icon and select Settings.
Compute Units
You can change a Cloud pipeline's resource capacity setting, known as compute units. This is the maximum bandwidth the pipeline can handle before signalling an error.
Bear in mind that the number of compute units you use per hour contributes to your plan usage in the form of compute units allocated per day. For example, running one Cloud pipeline with one compute unit (a maximum of 12MB per second, or 1TB per day), and another Cloud pipeline with three compute units (36MB per second, or 3TB per day), results in a daily usage of 4 compute units. In addition, data flowing from your workloads to your Cloud pipelines contributes to your Cloud Ingress plan allocation.
Edit Cloud Pipeline Resources
- Select the cloud pipeline on the Pipelines page.
- Click the kebab icon and select Settings.

The Edit Cloud Pipeline Settings page opens, where you can change the resource settings and view the endpoints.

The Agent Version list shows the most recent stable versions and the most recent release candidate (containing rc). Choose the latest stable version. If it doesn't work for your configuration, contact Edge Delta support to experiment with the release candidate.
Delete a Cloud Pipeline
To delete a Cloud pipeline:
- Select it on the Pipelines page.
- Click the kebab icon and select Delete.
Suspend a Cloud Pipeline
You can suspend a Cloud pipeline to pause its resource consumption while saving its configuration:
- Select the pipeline you want to suspend on the Pipelines page.
- Click the kebab icon and select Suspend.

You can resume a suspended Cloud pipeline using the Resume option.

Third Party Agents
A Cloud pipeline can collect data from third party agents. In this scenario you do not need to install an Edge Delta agent in your environment; instead, you point your existing agent, such as an OTEL Collector, at an Edge Delta Cloud pipeline. On the Cloud pipeline, you configure an OTLP input node.
OTEL Collector
The OTLP source node consumes data items directly from OTLP configured data sources. The node is configured with the port that the agent should listen on.
Configure OTLP
To configure the OTLP source node, you must obtain the port number from the OTLP configuration:
- Instrumentation Libraries: When using the OpenTelemetry SDKs, the port used to emit OTLP logs is part of the exporter configuration. The endpoint (which includes the host and port) is set when setting up the OpenTelemetry exporter within your application code. See Instrument Code using OpenTelemetry.
- OpenTelemetry Collector: The port number on which the collector should send outgoing OTLP data is specified in the exporter section.
- Zero-Code Instrumentation Agents: Similar to the instrumentation libraries, auto-instrumentation agents are configured to send data to a specified endpoint. This configuration includes the port number to which OTLP logs will be sent. See Instrument Code using OpenTelemetry.
- Sidecars: In a Kubernetes environment, a sidecar that runs an instance of the OpenTelemetry Collector is set up using a configuration file, in which you can find the port for the OTLP receiver and exporter.
- Log Routers and Forwarders: Log routers and forwarders may have plugins or output configurations that support OTLP. Within these configurations, the endpoint to which logs are sent in OTLP format, including the port, is defined.
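For example, a sidecar Collector's OTLP receiver section declares the ports it listens on; the conventional OTLP defaults are 4317 for gRPC and 4318 for HTTP:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # default OTLP gRPC port
      http:
        endpoint: 0.0.0.0:4318   # default OTLP HTTP port
```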
Example Collector Configuration (Cloud Pipeline)
gRPC (Cloud Pipeline)
If you are sending OTEL telemetry from the collector to a cloud pipeline, you update the Collector configuration with exporters pointing to the cloud pipeline endpoints. You use secure TLS and port 443:
```yaml
exporters:
  otlp/ed-data-supply_trace:
    endpoint: '12345678-1a2b-3c4d-5e6f-7890ghijklmn-grpc-us-west2-cf.aws.edgedelta.com:443'
    tls:
      insecure: false
  otlp/ed-data-supply_metric:
    endpoint: '12345678-1a2b-3c4d-5e6f-7890ghijklmn-grpc-us-west2-cf.aws.edgedelta.com:443'
    tls:
      insecure: false
  otlp/ed-data-supply_log:
    endpoint: '12345678-1a2b-3c4d-5e6f-7890ghijklmn-grpc-us-west2-cf.aws.edgedelta.com:443'
    tls:
      insecure: false
```
Replace the endpoint with one provided in your Cloud pipeline settings. Include the port number, but no route is required. Do not include the grpcs:// scheme.
Then update the Collector's service pipelines to use the new exporters:
```yaml
service:
  extensions:
    - health_check
  pipelines:
    logs:
      exporters:
        # ...
        - otlp/ed-data-supply_log
        # ...
    metrics:
      exporters:
        # ...
        - otlp/ed-data-supply_metric
        # ...
    traces:
      exporters:
        # ...
        - otlp/ed-data-supply_trace
        # ...
```
The Cloud pipeline contains an OTLP input node by default, which does not need to be adjusted for this gRPC configuration:
```yaml
- name: otlp_input
  type: otlp_input
  port: 4317
  protocol: grpc
```
HTTP (Cloud Pipeline)
To send OTLP telemetry to an Edge Delta Cloud pipeline over HTTP, configure otlphttp exporters and disable compression. Use secure TLS and port 443 for HTTPS:
```yaml
exporters:
  otlphttp/ed-data-supply_trace:
    endpoint: 'https://12345678-1a2b-3c4d-5e6f-7890ghijklmn-http-us-west2-cf.aws.edgedelta.com:443'
    compression: none
    tls:
      insecure: false
  otlphttp/ed-data-supply_metric:
    endpoint: 'https://12345678-1a2b-3c4d-5e6f-7890ghijklmn-http-us-west2-cf.aws.edgedelta.com:443'
    compression: none
    tls:
      insecure: false
  otlphttp/ed-data-supply_log:
    endpoint: 'https://12345678-1a2b-3c4d-5e6f-7890ghijklmn-http-us-west2-cf.aws.edgedelta.com:443'
    compression: none
    tls:
      insecure: false
```
Replace the endpoint with one provided in your Cloud pipeline settings. Include the port number at the end, but no route is required. Unlike gRPC, you include the https:// scheme.
Then update the Collector's service pipelines to use the new exporters:
```yaml
service:
  extensions:
    - health_check
  pipelines:
    logs:
      exporters:
        # ...
        - otlphttp/ed-data-supply_log
        # ...
    metrics:
      exporters:
        # ...
        - otlphttp/ed-data-supply_metric
        # ...
    traces:
      exporters:
        # ...
        - otlphttp/ed-data-supply_trace
        # ...
```
The Cloud pipeline contains an HTTP input node by default, which you need to delete. Replace it with an OTLP input node listening on port 80 for HTTP traffic:
```yaml
- name: otlp_input_80
  type: otlp_input
  port: 80
  protocol: http
- name: otlp_input
  type: otlp_input
  port: 4317
  protocol: grpc
```
Note: You may also need to include an unused gRPC OTLP node to pass configuration validation.