Service Exposure
This page covers Helm values for exposing Edge Delta agent ports and creating Kubernetes Services for data ingestion.
Container ports
Variable: `ports`
Description: Defines additional container ports that the agent listens on for incoming data. Use this when the pipeline includes an input node (such as OTLP) that accepts external traffic. Each entry specifies a port name, protocol, and port number.
Example:
```yaml
ports:
  - name: otlp-grpc
    protocol: TCP
    port: 4317
  - name: otlp-http
    protocol: TCP
    port: 4318
```
The same configuration using inline flags:
```shell
--set ports[0].name=otlp-grpc \
--set ports[0].protocol=TCP \
--set ports[0].port=4317
```
Host exposure
Variable: `ports[].exposeInHost`
Description: When set to `true`, the container port binds directly to the Kubernetes node's network interface via `hostPort`. This is valid only for DaemonSet deployments. Use it when clients send data directly to nodes and cannot use DNS-based service discovery. For most scenarios, prefer Service-based exposure with `pushService`.
Warning: Using `hostPort` can cause port conflicts and limits pod scheduling flexibility. Only use it when DaemonSet agents must receive traffic directly on nodes.
Example:
```yaml
ports:
  - name: otlp-grpc
    protocol: TCP
    port: 4317
    exposeInHost: true
```
The same configuration using inline flags:
```shell
--set ports[0].exposeInHost=true
```
Push Service
Variable: `pushService`
Description: Creates a Kubernetes Service (`ed-data-supply-svc`) that routes traffic to Edge Delta agent pods when ports are defined. Configure the Service type to control how agents are reachable. This variable is available in the main edgedelta chart only (not the edgedelta-gateway chart).
| Variable | Default | Description |
|---|---|---|
| `pushService.type` | `ClusterIP` | Service type: `ClusterIP`, `NodePort`, or `LoadBalancer` |
| `pushService.annotations` | `{}` | Annotations for the Service (for example, cloud load balancer configuration) |
| `pushService.loadBalancerIP` | `""` | Static IP for the `LoadBalancer` type |
| `pushService.clusterIP` | `""` | Static cluster IP (rarely needed) |
| `pushService.sessionAffinity` | `""` | Session affinity setting (`ClientIP` or empty) |
| `pushService.sessionAffinityTimeout` | `10800` | Timeout in seconds for session affinity |
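For example, to pin each client to a single agent pod for the duration of a session, the two affinity values from the table can be combined; a sketch using the chart's default timeout:

```yaml
pushService:
  type: ClusterIP
  sessionAffinity: ClientIP
  sessionAffinityTimeout: 10800  # seconds
```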
Example with LoadBalancer:
```yaml
ports:
  - name: otlp-grpc
    protocol: TCP
    port: 4317

pushService:
  type: LoadBalancer
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```
Example with NodePort:
```shell
--set pushService.type=NodePort
```
For a full load balancer deployment example, see Install as a Deployment.
Gateway chart service exposure
The edgedelta-gateway chart does not include pushService. To expose a gateway pipeline, configure the ports value and create a Kubernetes Service manually.
Configure container ports in the gateway Helm values:
```yaml
ports:
  - name: otlp-grpc
    protocol: TCP
    port: 4317
```
Create a Kubernetes Service targeting the gateway pods:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: edgedelta-gateway-otlp
  namespace: edgedelta
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: edgedelta
    edgedelta/agent-type: processor
  ports:
    - name: otlp-grpc
      port: 4317
      targetPort: 4317
      protocol: TCP
```
Apply the Service:
```shell
kubectl apply -f gateway-service.yaml
```
Note: Replace the `selector` labels to match your Helm release. Check the labels on your gateway pods with `kubectl get pods -n edgedelta --show-labels`.
Choosing a Service type
Select a Service type based on your network architecture and access requirements:
| Service Type | Access Scope | Best For |
|---|---|---|
| `ClusterIP` | Within cluster only | Other in-cluster workloads sending OTLP to Edge Delta |
| `NodePort` | Node IP + port (range 30000-32767) | VPC peering, internal networks, environments without cloud load balancers |
| `LoadBalancer` | External via cloud load balancer | Production external ingestion, cross-VPC traffic |
Note: If you need to preserve client source IPs, set `externalTrafficPolicy: Local` on the Service. This routes traffic only to pods on the receiving node, which may cause uneven distribution if pods are not spread across all nodes.
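On a manually created Service (such as the gateway Service earlier on this page), `externalTrafficPolicy` is a top-level `spec` field; a minimal sketch:

```yaml
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local  # route only to pods on the node that received the traffic
```

The field applies only to `NodePort` and `LoadBalancer` Services.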
Network security
The Edge Delta Helm chart includes Cilium network policy support for controlling egress traffic. For clusters using other CNIs (Calico, Weave, Flannel), you can create a standard Kubernetes NetworkPolicy to restrict which sources can send data to your agents:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: edgedelta-otlp-ingress
  namespace: edgedelta
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: edgedelta
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.0.0.0/8  # Adjust to your VPC CIDR
      ports:
        - port: 4317
          protocol: TCP
        - port: 4318
          protocol: TCP
```
Adjust the CIDR range to match your environment, or use a `namespaceSelector` and `podSelector` to allow traffic from specific workloads.
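To scope ingress by workload instead of CIDR, the `from` clause in the policy above can use selector-based rules. The namespace and pod labels below are hypothetical; substitute the labels of your sending workloads:

```yaml
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: apps  # hypothetical source namespace
        podSelector:
          matchLabels:
            app: otel-sender  # hypothetical client label
    ports:
      - port: 4317
        protocol: TCP
```

Because the `namespaceSelector` and `podSelector` appear in the same `from` entry, both must match for traffic to be allowed.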
Verifying service exposure
After configuring ports and Services, verify the setup:
```shell
kubectl get svc -n edgedelta
kubectl get endpoints -n edgedelta
```
Check container ports in the pod spec:
```shell
kubectl get pods -n edgedelta -o jsonpath='{.items[0].spec.containers[0].ports}'
```
For NodePort Services, identify the assigned port:
```shell
kubectl get svc ed-data-supply-svc -n edgedelta -o jsonpath='{.spec.ports[*].nodePort}'
```
To test OTLP connectivity from within the cluster, determine the service endpoint and send a test span:
```shell
OTLP_ENDPOINT=$(kubectl get svc ed-data-supply-svc -n edgedelta -o jsonpath='{.spec.clusterIP}'):4317
```
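One way to send a test span from inside the cluster is a short-lived pod running `telemetrygen`, the OpenTelemetry Collector contrib traffic generator. The image path and service DNS name below are assumptions; verify both for your environment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: otlp-test
  namespace: edgedelta
spec:
  restartPolicy: Never
  containers:
    - name: telemetrygen
      # Assumed upstream image path; pin a specific tag in practice.
      image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:latest
      args:
        - traces
        - --otlp-endpoint=ed-data-supply-svc.edgedelta.svc.cluster.local:4317
        - --otlp-insecure
        - --traces=1
```

Remove the pod with `kubectl delete pod otlp-test -n edgedelta` once it completes.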
Check agent logs for received data:
```shell
kubectl logs -n edgedelta -l app.kubernetes.io/name=edgedelta --tail=50 | grep -i otlp
```
Related content
- Install as a Deployment for load balancer examples
- Networking for proxy and network policy configuration
- Data Privacy and Compliance for security considerations
- Integrate Node, Coordinator, and Gateway Pipelines for cross-cluster connectivity