HTTP(S) Connector

Configure the HTTP(S) connector to receive logs, metrics, and telemetry data pushed to HTTP/HTTPS webhook endpoints from applications and services.


Overview

The HTTP(S) connector receives telemetry data, logs, metrics, and events pushed to HTTP/HTTPS endpoints. Edge Delta acts as an HTTP server, accepting data from webhooks, custom applications, monitoring tools, and third-party services that publish data via HTTP POST requests. Content streams into Edge Delta Pipelines for analysis by AI teammates through the Edge Delta MCP connector.

The connector supports various data formats (JSON, text, compressed), configurable authentication (Bearer token, Basic auth, custom headers), TLS/SSL encryption, path-based routing, and automatic metadata extraction from HTTP requests.

When you add this streaming connector, it appears as an HTTP source in your selected pipeline. AI teammates access this data by querying the Edge Delta backend with the Edge Delta MCP connector.

Add the HTTP(S) Connector

To add the HTTP(S) connector, you configure Edge Delta to listen on a specified port and IP address, define path-based routing patterns, and set up authentication to secure the endpoint.

Prerequisites

Before configuring the connector, ensure you have:

  • Edge Delta agent deployed with network access to receive inbound HTTP/HTTPS traffic
  • Firewall rules configured to allow inbound traffic on the designated port
  • Identified port number and authentication method for the endpoint

Configuration Steps

  1. Navigate to AI Team > Connectors in the Edge Delta application
  2. Find the HTTP(S) connector in Streaming Connectors
  3. Click the connector card
  4. Configure the Port number to listen on
  5. Set the Listen address (default: 0.0.0.0)
  6. Configure Read Timeout for incoming connections
  7. Optionally configure Advanced Settings for path filtering, compression, TLS, or authentication
  8. Select a target environment
  9. Click Save

The connector now listens for HTTP POST requests and streams content.

HTTP(S) connector configuration showing port, authentication, and advanced settings

Configuration Options

Connector Name

Name to identify this HTTP(S) connector instance.

Port

Port number to listen on for incoming HTTP/HTTPS requests.

Format: Integer between 1 and 65535

Examples:

  • 3421 - Common example HTTP port (used throughout this page)
  • 8443 - Common alternative HTTPS port
  • 9090 - Custom port

Note: Ports below 1024 require elevated privileges. Use ports 1024+ for non-root deployments.

Listen

IP address the HTTP server will bind to.

Format: Valid IPv4 address

Examples:

  • 0.0.0.0 - All interfaces (default)
  • 192.168.1.100 - Specific interface
  • 127.0.0.1 - Localhost only (testing)

Read Timeout

Maximum time to wait for incoming data on established connections.

Format: Duration with unit

Default: 1 minute

Examples:

  • 30s - 30 seconds for fast timeout
  • 1m - 1 minute (default)
  • 2m - 2 minutes for large payloads

Advanced Settings

Included Paths

Filter incoming requests by URL path using Golang regular expression patterns. Only matching requests are accepted.

Format: Golang regex pattern

Default: .* (accept all paths)

Examples:

  • /v1/.* - All paths under /v1
  • /v1/(logs|metrics|events) - Specific endpoints only
  • /api/.* - All API paths

Use Cases:

  • Organize different data types on different paths
  • Accept only versioned API endpoints
  • Separate production and development traffic
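Included Paths patterns are Golang (RE2) regular expressions; the examples above are also valid in Python's `re` module, so you can sanity-check a pattern before deploying it. A quick sketch (the paths are hypothetical, and the connector's exact anchoring semantics may differ, so always verify against a running agent):

```python
import re

# Hypothetical Included Paths pattern: accept only versioned telemetry endpoints
pattern = re.compile(r"/v1/(logs|metrics|events)")

test_paths = [
    "/v1/logs",     # accepted
    "/v1/metrics",  # accepted
    "/v2/logs",     # rejected: wrong version prefix
    "/healthz",     # rejected: not under /v1
]

for path in test_paths:
    accepted = bool(pattern.match(path))
    print(f"{path}: {'accepted' if accepted else 'rejected'}")
```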

Compression

Compression format expected for incoming data.

Values: None, gzip, zstd, snappy

Default: None (uncompressed)

Use Cases:

  • gzip - Most common; typically 60-80% bandwidth reduction for text payloads
  • zstd - High compression ratio
  • snappy - Fast compression/decompression
  • None - No compression (default)
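As an illustration, a client gzips its JSON payload and advertises the encoding in the request headers; the sketch below uses this page's example host and port, which you should replace with your own values:

```python
import gzip
import json
import urllib.request

payload = {"level": "ERROR", "message": "Timeout", "service": "checkout"}
body = gzip.compress(json.dumps(payload).encode("utf-8"))

# Hypothetical endpoint; replace host/port with your connector's values
req = urllib.request.Request(
    "http://edge-delta-host:3421/v1/logs",
    data=body,
    headers={
        "Content-Type": "application/json",
        "Content-Encoding": "gzip",  # must match the connector's Compression setting
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment once the endpoint is reachable
```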

TLS

Optional TLS/SSL configuration for HTTPS encryption.

Configuration Options:

  • Ignore Certificate Check: Disables SSL/TLS certificate verification. Use with caution in testing environments only.
  • CA File: Absolute file path to the CA certificate for SSL/TLS connections
  • CA Path: Absolute path where CA certificate files are located
  • CRT File: Absolute path to the SSL/TLS certificate file
  • Key File: Absolute path to the private key file
  • Key Password: Optional password for the key file
  • Client Auth Type: Client authentication type. Default is noclientcert.
  • Minimum Version: Minimum TLS version. Default is TLSv1_2.
  • Maximum Version: Maximum TLS version allowed for connections

When to Enable:

  • Public endpoints exposed to the internet
  • Compliance requirements for encryption
  • Sensitive data transmission
  • Production deployments

Authentication

Authentication settings control access to the HTTP endpoint.

Strategy Options:

  • Bearer Token: Requires Authorization: Bearer <token> header
    • Secret: Token value clients must provide
  • Basic Auth: Standard HTTP Basic Authentication
    • User Name: Username for authentication
    • Password: Password for authentication
  • None: No authentication (use only in secure/isolated networks)

Examples:

  • Bearer token for API integration
  • Basic auth for simple username/password
  • None for VPC-internal endpoints with network isolation

Security Note: Always use authentication for public endpoints. Combine with TLS for production.
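The two header formats the connector checks can be sketched as follows (the token and credentials are placeholders, not real values):

```python
import base64

# Bearer token: the Authorization header must carry the configured Secret
token = "YOUR_TOKEN"  # placeholder
bearer_headers = {"Authorization": f"Bearer {token}"}

# Basic auth: base64("username:password"), per RFC 7617
username, password = "ingest", "s3cret"  # placeholders
credentials = base64.b64encode(f"{username}:{password}".encode()).decode()
basic_headers = {"Authorization": f"Basic {credentials}"}

print(bearer_headers)
print(basic_headers)
```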

Metadata Level

Defines which detected resources and attributes are added to each data item as it is ingested by Edge Delta. You can select:

  • Required Only: Includes the minimum resources and attributes Edge Delta needs to operate.
  • Default: Includes the required resources and attributes plus those selected by Edge Delta.
  • High: Includes the required resources and attributes along with a larger selection of common optional fields.
  • Custom: Lets you choose which attributes and resources to include. The required fields are selected by default and can’t be unchecked.

Based on your selection in the GUI, the source_metadata YAML is populated as two dictionaries (resource_attributes and attributes) with Boolean values.

See Choose Data Item Metadata for more information on selecting metadata.

HTTP(S)-specific metadata included:

  • server.port - Port number server is listening on
  • http.route - URL path of the request
  • http.scheme - HTTP or HTTPS
  • http.method - HTTP method (POST, GET, PUT)
  • http.request.method - Detailed request method

Rate Limit

The rate_limit parameter enables you to control data ingestion based on system resource usage. This advanced setting helps prevent source nodes from overwhelming the agent by automatically throttling or stopping data collection when CPU or memory thresholds are exceeded.

Use rate limiting to prevent runaway log collection from overwhelming the agent in high-volume sources, protect agent stability in resource-constrained environments with limited CPU/memory, automatically throttle during bursty traffic patterns, and ensure fair resource allocation across source nodes in multi-tenant deployments.

When rate limiting triggers, pull-based sources (File, S3, HTTP Pull) stop fetching new data, push-based sources (HTTP, TCP, UDP, OTLP) reject incoming data, and stream-based sources (Kafka, Pub/Sub) pause consumption. Rate limiting operates at the source node level, where each source with rate limiting enabled independently monitors and enforces its own thresholds.

Configuration Steps:

  1. Click Add New in the Rate Limit section
  2. Click Add New for Evaluation Policy
  3. Select Policy Type:
  • CPU Usage: Monitors CPU consumption and rate limits when usage exceeds defined thresholds. Use for CPU-intensive sources like file parsing or complex transformations.
  • Memory Usage: Monitors memory consumption and rate limits when usage exceeds defined thresholds. Use for memory-intensive sources like large message buffers or caching.
  • AND (composite): Combines multiple sub-policies with AND logic. All sub-policies must be true simultaneously to trigger rate limiting. Use when you want conservative rate limiting (both CPU and memory must be high).
  • OR (composite): Combines multiple sub-policies with OR logic. Any sub-policy can trigger rate limiting. Use when you want aggressive rate limiting (either CPU or memory being high triggers).
  4. Select Evaluation Mode. Choose how the policy behaves when thresholds are exceeded:
  • Enforce (default): Actively applies rate limiting when thresholds are met. Pull-based sources (File, S3, HTTP Pull) stop fetching new data, push-based sources (HTTP, TCP, UDP, OTLP) reject incoming data, and stream-based sources (Kafka, Pub/Sub) pause consumption. Use in production to protect agent resources.
  • Monitor: Logs when rate limiting would occur without actually limiting data flow. Use for testing thresholds before enforcing them in production.
  • Passthrough: Disables rate limiting entirely while keeping the configuration in place. Use to temporarily disable rate limiting without removing configuration.
  5. Set Absolute Limits and Relative Limits (for CPU Usage and Memory Usage policies)

Note: If you specify both absolute and relative limits, the system evaluates both conditions and rate limiting triggers when either condition is met (OR logic). For example, if you set absolute limit to 1.0 CPU cores and relative limit to 50%, rate limiting triggers when the source uses either 1 full core OR 50% of available CPU, whichever happens first.

  • For CPU Absolute Limits: Enter value in full core units:

    • 0.1 = one-tenth of a CPU core
    • 0.5 = half a CPU core
    • 1.0 = one full CPU core
    • 2.0 = two full CPU cores
  • For CPU Relative Limits: Enter percentage of total available CPU (0-100):

    • 50 = 50% of available CPU
    • 75 = 75% of available CPU
    • 85 = 85% of available CPU
  • For Memory Absolute Limits: Enter value in bytes:

    • 104857600 = 100Mi (100 × 1024 × 1024)
    • 536870912 = 512Mi (512 × 1024 × 1024)
    • 1073741824 = 1Gi (1 × 1024 × 1024 × 1024)
  • For Memory Relative Limits: Enter percentage of total available memory (0-100)

    • 60 = 60% of available memory
    • 75 = 75% of available memory
    • 80 = 80% of available memory
  6. Set Refresh Interval (for CPU Usage and Memory Usage policies). Specify how frequently the system checks resource usage:
  • Recommended Values:
    • 10s to 30s for most use cases
    • 5s to 10s for high-volume sources requiring quick response
    • 1m or higher for stable, low-volume sources

The system fetches current CPU/memory usage at the specified refresh interval and uses that value for evaluation until the next refresh. Shorter intervals provide more responsive rate limiting but incur slightly higher overhead, while longer intervals are more efficient but slower to react to sudden resource spikes.

The GUI generates YAML as follows:

# Simple CPU-based rate limiting
nodes:
  - name: <node name>
    type: <node type>
    rate_limit:
      evaluation_policy:
        policy_type: cpu_usage
        evaluation_mode: enforce
        absolute_limit: 0.5  # Limit to half a CPU core
        refresh_interval: 10s
# Simple memory-based rate limiting
nodes:
  - name: <node name>
    type: <node type>
    rate_limit:
      evaluation_policy:
        policy_type: memory_usage
        evaluation_mode: enforce
        absolute_limit: 536870912  # 512Mi in bytes
        refresh_interval: 30s
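The absolute/relative threshold rule described above (either limit exceeding triggers, OR logic) can be sketched in Python. This is an illustration of the evaluation rule only, not Edge Delta's implementation:

```python
def should_rate_limit(usage, total, absolute_limit=None, relative_limit=None):
    """Return True when either configured limit is exceeded (OR logic).

    usage and total are in the same units as absolute_limit (cores or bytes);
    relative_limit is a percentage (0-100) of total.
    """
    if absolute_limit is not None and usage >= absolute_limit:
        return True
    if relative_limit is not None and total > 0:
        if (usage / total) * 100 >= relative_limit:
            return True
    return False

# Example: absolute limit 1.0 core, relative limit 50%, on a 4-core host
print(should_rate_limit(1.2, 4, absolute_limit=1.0, relative_limit=50))   # True (absolute hit)
print(should_rate_limit(0.8, 4, absolute_limit=1.0, relative_limit=50))   # False (20% < 50%)
print(should_rate_limit(2.5, 4, absolute_limit=None, relative_limit=50))  # True (62.5% >= 50%)
```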

Composite Policies (AND / OR)

When using AND or OR policy types, you define sub-policies instead of limits. Sub-policies must be siblings (at the same level)—do not nest sub-policies within other sub-policies. Each sub-policy is independently evaluated, and the parent policy’s evaluation mode applies to the composite result.

  • AND Logic: All sub-policies must evaluate to true at the same time to trigger rate limiting. Use when you want conservative rate limiting (limit only when CPU AND memory are both high).
  • OR Logic: Any sub-policy evaluating to true triggers rate limiting. Use when you want aggressive protection (limit when either CPU OR memory is high).

Configuration Steps:

  1. Select AND (composite) or OR (composite) as the Policy Type
  2. Choose the Evaluation Mode (typically Enforce)
  3. Click Add New under Sub-Policies to add the first condition
  4. Configure the first sub-policy by selecting policy type (CPU Usage or Memory Usage), selecting evaluation mode, setting absolute and/or relative limits, and setting refresh interval
  5. In the parent policy (not within the child), click Add New again to add a sibling sub-policy
  6. Configure additional sub-policies following the same pattern

The GUI generates YAML as follows:

# AND composite policy - both CPU AND memory must exceed limits
nodes:
  - name: <node name>
    type: <node type>
    rate_limit:
      evaluation_policy:
        policy_type: and
        evaluation_mode: enforce
        sub_policies:
          # First sub-policy (sibling)
          - policy_type: cpu_usage
            evaluation_mode: enforce
            absolute_limit: 0.75  # Limit to 75% of one core
            refresh_interval: 15s
          # Second sub-policy (sibling)
          - policy_type: memory_usage
            evaluation_mode: enforce
            absolute_limit: 1073741824  # 1Gi in bytes
            refresh_interval: 15s
# OR composite policy - either CPU OR memory can trigger
nodes:
  - name: <node name>
    type: <node type>
    rate_limit:
      evaluation_policy:
        policy_type: or
        evaluation_mode: enforce
        sub_policies:
          - policy_type: cpu_usage
            evaluation_mode: enforce
            relative_limit: 85  # 85% of available CPU
            refresh_interval: 20s
          - policy_type: memory_usage
            evaluation_mode: enforce
            relative_limit: 80  # 80% of available memory
            refresh_interval: 20s
# Monitor mode for testing thresholds
nodes:
  - name: <node name>
    type: <node type>
    rate_limit:
      evaluation_policy:
        policy_type: memory_usage
        evaluation_mode: monitor  # Only logs, doesn't limit
        relative_limit: 70  # Test at 70% before enforcing
        refresh_interval: 30s
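Composite evaluation reduces to combining the sibling sub-policy results: all must hold for AND, any suffices for OR. A minimal sketch of that rule (again an illustration, not Edge Delta's code):

```python
def evaluate_composite(policy_type, sub_policy_results):
    """Combine sibling sub-policy booleans with AND/OR semantics."""
    if policy_type == "and":
        return all(sub_policy_results)  # conservative: every condition must hold
    if policy_type == "or":
        return any(sub_policy_results)  # aggressive: any condition triggers
    raise ValueError(f"unknown policy type: {policy_type}")

# CPU exceeded, memory not: AND holds back, OR triggers
print(evaluate_composite("and", [True, False]))  # False
print(evaluate_composite("or", [True, False]))   # True
```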

Target Environments

Select the Edge Delta pipeline (environment) where you want to deploy this connector.

How to Use the HTTP(S) Connector

The HTTP(S) connector integrates seamlessly with AI Team, enabling analysis of webhook events and HTTP-pushed data. AI teammates automatically leverage the ingested data based on the queries they receive and the context of the conversation.

Use Case: Webhook Event Processing

Receive webhook notifications from third-party services like payment processors, order management systems, or inventory platforms. AI teammates analyze patterns, detect anomalies, and correlate events across multiple services. When combined with PagerDuty alerts, teammates automatically query recent webhook events during incident investigation to identify which external service triggered the issue and correlate with internal system behavior.

Configuration: Port 8443 with HTTPS enabled, path filter /v1/webhooks/.*, Bearer token authentication.

Use Case: Serverless Function Logging

Accept structured logs from AWS Lambda, Azure Functions, or other serverless platforms via simple HTTP POST requests. AI teammates analyze logs to identify performance bottlenecks, memory issues, and execution failures. This is valuable when investigating function failures—teammates can correlate error patterns with execution duration and memory usage to recommend configuration optimizations.

Configuration: Port 3421, path filter /v1/logs, gzip compression enabled, no auth (VPC-internal).

Use Case: Custom Application Metrics

Collect custom business metrics (conversion rates, cart abandonment, user engagement) that applications send via HTTP. AI teammates correlate business metrics with operational telemetry to identify how application performance affects business outcomes. When integrated with Jira, teammates can automatically document business impact by combining metric trends with error rates.

Configuration: Port 3421, path filter /v1/metrics, Basic authentication, no compression for small payloads.

Troubleshooting

Connection refused errors: Verify Edge Delta is listening on configured port (netstat -tuln | grep 3421). Check firewall rules allow inbound traffic.

Request timeout errors: Increase Read Timeout setting for large payloads or slow networks. Check network latency with ping/traceroute.

401 Unauthorized errors: Verify authentication credentials exactly match configuration. Check header format for Bearer tokens (Authorization: Bearer <token>).

SSL/TLS handshake failures: Ensure TLS is enabled with valid certificate and private key. Verify certificate matches hostname. Check certificate hasn’t expired.

404 Not Found on valid endpoint: Check Included Paths regex pattern matches request path. Test regex with online validator. Verify configuration deployed to all agents.

Data sent but not appearing: Verify JSON payload is well-formed. Ensure Content-Type: application/json header set. Check for pipeline filters dropping data.

Compression errors: Ensure Compression setting matches client’s compression format. Verify Content-Encoding header set correctly. Test decompression manually.

High latency: Check network round-trip time. Enable compression for large payloads. Monitor Edge Delta agent resource usage (CPU, memory).

Too many connections: Configure HTTP client to use connection pooling and keep-alive. Reuse client objects across requests. Check file descriptor limits.
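Reusing one client across requests can be sketched with `requests.Session`, which pools connections and keeps them alive instead of opening a new TCP connection per request (the URL is this page's hypothetical endpoint):

```python
import requests

# One Session reuses TCP connections (keep-alive) across requests,
# avoiding file-descriptor exhaustion under load
session = requests.Session()
session.headers.update({"Content-Type": "application/json"})

def send_log(event: dict) -> int:
    # Hypothetical endpoint; replace host and port with your connector's values
    resp = session.post("http://edge-delta-host:3421/v1/logs", json=event, timeout=5)
    return resp.status_code
```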

Sending Data Examples

Python

import requests

url = "http://edge-delta-host:3421/v1/logs"
log_event = {
    "timestamp": "2024-10-01T14:30:00Z",
    "level": "ERROR",
    "message": "Payment processing failed",
    "service": "payment-service"
}

response = requests.post(url, json=log_event)
print(f"Status: {response.status_code}")

cURL

# With authentication
curl -X POST https://edge-delta-host:8443/v1/events \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -d '{"event":"user_login","timestamp":"2024-10-01T14:30:00Z"}'

# Compressed
echo '{"level":"ERROR","message":"Timeout"}' | gzip | \
  curl -X POST http://edge-delta-host:3421/v1/logs \
    -H "Content-Encoding: gzip" \
    --data-binary @-

JavaScript/Node.js

const axios = require('axios');

const logEvent = {
    timestamp: new Date().toISOString(),
    level: "ERROR",
    message: "Database timeout"
};

axios.post('http://edge-delta-host:3421/v1/logs', logEvent)
    .then(res => console.log(`Sent: ${res.status}`))
    .catch(err => console.error(`Failed: ${err.message}`));

Next Steps

For additional help, visit AI Team Support.