Fairly new to Kubernetes in general, and to Azure Kubernetes Service (AKS) as well. I have a single cluster with a telemetry asterix adapter service/pod that is designed to ingest UDP data from ADS-B sensors via a public IP circuit. I created a public IP and a LoadBalancer Service on my cluster in the same namespace using a generic YAML provided by Microsoft (modified slightly for this project's requirements) and deployed it. YAML is posted below.
I am able to ping the public IP generated via the YAML, and the circuit with the ADS-B sensor has been set up via the contractor using the provided IP, but I am not seeing any packets in the logs for my telemetry asterix adapter pod. The Service uses port 1025 with a target port of 6000, and 6000 is the port the telemetry asterix adapter listens on via Netty UDP. I believe the connection between the LoadBalancer Service and that pod is established by the selector in the YAML.
Is there something that I am missing? Since I can ping the IP but see nothing in the logs, I assume the LoadBalancer Service is not actually connected to the desired pod.
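One check I am aware of: if the selector matched, the Service should list the pod IP as an endpoint, e.g.:
kubectl get endpoints telemetry-asterix-adapter-svc -n utm
# the ENDPOINTS column should show 10.64.82.134:6000 if the selector matched the pod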
kind: Service
apiVersion: v1
metadata:
  name: telemetry-asterix-adapter-svc
  namespace: utm
  uid: fac3e2f1-50e1-49f3-9624-2b49fe5bec39
  resourceVersion: '15394560'
  creationTimestamp: '2025-03-04T18:39:58Z'
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"telemetry-asterix-adapter-svc","namespace":"utm"},"spec":{"loadBalancerSourceRanges":["71.###.###.###/32","71.###.###.###/32"],"ports":[{"port":1025,"protocol":"UDP","targetPort":6000}],"selector":{"app":"telemetry-asterix-adapter"},"type":"LoadBalancer"}}
  finalizers:
    - service.kubernetes.io/load-balancer-cleanup
  managedFields:
    - manager: cloud-controller-manager
      operation: Update
      apiVersion: v1
      time: '2025-03-11T16:08:39Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            .: {}
            v:"service.kubernetes.io/load-balancer-cleanup": {}
        f:status:
          f:loadBalancer:
            f:ingress: {}
      subresource: status
    - manager: kubectl-client-side-apply
      operation: Update
      apiVersion: v1
      time: '2025-03-11T20:13:05Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:spec:
          f:allocateLoadBalancerNodePorts: {}
          f:externalTrafficPolicy: {}
          f:internalTrafficPolicy: {}
          f:loadBalancerSourceRanges: {}
          f:ports:
            .: {}
            k:{"port":1025,"protocol":"UDP"}:
              .: {}
              f:port: {}
              f:protocol: {}
              f:targetPort: {}
          f:selector: {}
          f:sessionAffinity: {}
          f:type: {}
spec:
  ports:
    - protocol: UDP
      port: 1025
      targetPort: 6000
      nodePort: 31780
  selector:
    app: telemetry-asterix-adapter
  clusterIP: 10.0.203.107
  clusterIPs:
    - 10.0.203.107
  type: LoadBalancer
  sessionAffinity: None
  loadBalancerSourceRanges:
    - 71.###.###.###/32
    - 71.###.###.###/32
  externalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  allocateLoadBalancerNodePorts: true
  internalTrafficPolicy: Cluster
status:
  loadBalancer:
    ingress:
      - ip: 62.##.##.###
        ipMode: VIP
Pod description (from kubectl describe pod):
Name:             telemetry-asterix-adapter-f8bb6f48d-2mqf6
Namespace:        utm
Priority:         0
Service Account:  default
Node:             aks-nodepool1-25615987-vmss000001/10.64.80.12
Start Time:       Thu, 13 Mar 2025 13:09:14 +0000
Labels:           app=telemetry-asterix-adapter
                  pod-template-hash=f8bb6f48d
Annotations:      kubectl.kubernetes.io/restartedAt: 2025-03-13T13:09:13Z
Status:           Running
IP:               10.64.82.134
IPs:
  IP:           10.64.82.134
Controlled By:  ReplicaSet/telemetry-asterix-adapter-f8bb6f48d
Containers:
  telemetry-asterix-adapter:
    Container ID:   containerd://88a01df213e0ec4732dee857798f61d73e9296b9f24ab4b1f61d7a6425c75e93
    Image:          crfusademousgv634.azurecr.us/utm-services/telemetry-asterix:3.5.0
    Image ID:       crfusademousgv634.azurecr.us/utm-services/telemetry-asterix@sha256:4c44d3b8946c6cecaa28d6637104b3f336776a4062f372a33a53238cec3a132f
    Ports:          6000/UDP, 8080/TCP
    Host Ports:     0/UDP, 0/TCP
    State:          Running
      Started:      Thu, 13 Mar 2025 13:09:15 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  512Mi
    Requests:
      memory:  512Mi
    Environment Variables from:
      telemetry-asterix-adapter  ConfigMap  Optional: false
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g6r4q (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  kube-api-access-g6r4q:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
Logs from the pod that should be ingesting the UDP data:
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::                (v2.7.9)
2025-03-13 13:09:20.002 INFO 1 --- [ main] c.f.s.t.a.AsterixAdapterApp : Starting AsterixAdapterApp v3.5.0 using Java 11.0.16 on telemetry-asterix-adapter-f8bb6f48d-2mqf6 with PID 1 (/opt/adapter/adapter.jar started by ? in /opt/adapter)
2025-03-13 13:09:20.018 DEBUG 1 --- [ main] c.f.s.t.a.AsterixAdapterApp : Running with Spring Boot v2.7.9, Spring v5.3.25
2025-03-13 13:09:20.019 INFO 1 --- [ main] c.f.s.t.a.AsterixAdapterApp : No active profile set, falling back to 1 default profile: "default"
2025-03-13 13:09:26.636 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2025-03-13 13:09:26.682 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2025-03-13 13:09:26.683 INFO 1 --- [ main] .apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.71]
2025-03-13 13:09:26.968 INFO 1 --- [ main] a.c.c.C.[.[.[/telemetry-asterix-adapter] : Initializing Spring embedded WebApplicationContext
2025-03-13 13:09:26.969 INFO 1 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 6760 ms
2025-03-13 13:09:28.505 INFO 1 --- [ main] c.f.s.t.asterixadapter.grpc.GrpcClient : Create gRPC client at address: telemetry-manager-ng.utm.svc.cluster.local:8081
2025-03-13 13:09:38.298 INFO 1 --- [ main] o.a.c.c.s.CamelHttpTransportServlet : Initialized CamelHttpTransportServlet[name=CamelServlet, contextPath=/telemetry-asterix-adapter]
2025-03-13 13:09:38.304 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path '/telemetry-asterix-adapter'
2025-03-13 13:09:40.204 INFO 1 --- [ main] o.a.c.component.netty.NettyComponent : Creating shared NettyConsumerExecutorGroup with 3 threads
2025-03-13 13:09:40.551 INFO 1 --- [ main] c.n.SingleUDPNettyServerBootstrapFactory : ConnectionlessBootstrap binding to 0.0.0.0:6000
2025-03-13 13:09:40.837 INFO 1 --- [ main] o.a.camel.component.netty.NettyConsumer : Netty consumer bound to: 0.0.0.0:6000
2025-03-13 13:09:40.841 INFO 1 --- [ main] o.a.c.impl.engine.AbstractCamelContext : Routes startup (total:2 started:2)
2025-03-13 13:09:40.841 INFO 1 --- [ main] o.a.c.impl.engine.AbstractCamelContext : Started route1 (netty://UDP://0.0.0.0:6000)
2025-03-13 13:09:40.841 INFO 1 --- [ main] o.a.c.impl.engine.AbstractCamelContext : Started route2 (rest://post:telemetry)
2025-03-13 13:09:40.841 INFO 1 --- [ main] o.a.c.impl.engine.AbstractCamelContext : Apache Camel 3.14.1 (camel-1) started in 2s483ms (build:178ms init:1s560ms start:745ms)
2025-03-13 13:09:40.980 INFO 1 --- [ main] c.f.s.t.a.AsterixAdapterApp : Started AsterixAdapterApp in 23.126 seconds (JVM running for 25.782)
I have tried modifying the YAML and updating the LoadBalancer Service, removing the whitelist on the source IPs, sending test UDP packets from another device, and some modifications of the NSGs...
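For reference, the kind of test packet I mean can be sent with netcat from one of the whitelisted source IPs (a generic example; any UDP sender works):
# send one UDP datagram to the load balancer's public IP on the Service port
echo "test packet" | nc -u 62.##.##.### 1025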
I am expecting to see at least some data in the pod logs showing incoming UDP packets.
Comment: You can check the below possibilities to resolve the issue: traffic between the load balancer and the telemetry pod should not be blocked; the load balancer should be correctly configured for the target port (6000); check the telemetry asterix adapter pod for any errors, which might indicate why packets are not being received; ensure the port (1025) and the target port (6000) are enabled; since you're using Netty UDP, make sure that the UDP traffic is correctly routed; and double-check whether the YAML manifest has any typos. – Venkat V, Mar 27 at 12:10
1 Answer
Azure Load Balancer does not support UDP health probes natively. Without a TCP port for the health check, the backend pool may be marked unhealthy even though the pod is up, and the load balancer will not forward traffic to it. UDP is also connectionless, so debugging requires low-level inspection or packet logging. The issue here is that the pod never actually received the UDP packets, for the reason above.
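For the packet-logging part, a quick way to confirm whether any UDP datagrams reach the pod at all is to attach an ephemeral debug container and run tcpdump inside the pod's network namespace (a sketch; it assumes ephemeral containers are enabled on the cluster and uses the nicolaka/netshoot image):
# capture UDP traffic on port 6000 from inside the pod's network namespace
kubectl debug -n utm -it telemetry-asterix-adapter-f8bb6f48d-2mqf6 --image=nicolaka/netshoot -- tcpdump -ni any udp port 6000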
To overcome this, and to expose a UDP service behind the AKS load balancer, you can expose a TCP port (like 8080, which your pod already declares) alongside the UDP port. This gives the Azure Load Balancer a port it can health-probe, so it considers the backend healthy. Your actual UDP-based app still binds to 6000 as usual; the TCP port (even if otherwise unused by your app) just ensures Azure forwards traffic to the pod. In the pod spec:
ports:
  - containerPort: 6000
    protocol: UDP
  - containerPort: 8080
    protocol: TCP
The LoadBalancer Service YAML should then expose both the UDP and the TCP port:
apiVersion: v1
kind: Service
metadata:
  name: telemetry-asterix-adapter-svc
  namespace: utm
spec:
  type: LoadBalancer
  selector:
    app: telemetry-asterix-adapter
  externalTrafficPolicy: Cluster
  ports:
    - name: udp-port
      protocol: UDP
      port: 1025
      targetPort: 6000
    - name: health-port
      protocol: TCP
      port: 8080
      targetPort: 8080
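After applying the Service, you can check that a TCP health probe now exists on the AKS-managed load balancer (a sketch; it assumes the default AKS load balancer name "kubernetes" and that you know the node resource group, usually named MC_<rg>_<cluster>_<region>):
# list the health probes configured on the AKS load balancer
az network lb probe list --resource-group <node-resource-group> --lb-name kubernetes -o table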
Since the Netty logs may not show raw UDP activity easily, you can validate the path end to end using a simple Alpine pod running socat:
apiVersion: v1
kind: Pod
metadata:
  name: udp-echo-server
  namespace: udp-test
  labels:
    app: udp-echo
spec:
  containers:
    - name: udp-echo
      image: alpine
      command: ["/bin/sh"]
      args: ["-c", "apk add --no-cache socat && socat -v UDP-RECV:6000 STDOUT"]
      ports:
        - containerPort: 6000
          protocol: UDP
        - containerPort: 8080
          protocol: TCP
Then expose it with a LoadBalancer Service (sketched below).
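A minimal Service for the echo pod might look like this (the name udp-echo-svc is illustrative, and the udp-test namespace must exist first; the ports mirror the real Service so the test exercises the same path):
apiVersion: v1
kind: Service
metadata:
  name: udp-echo-svc   # illustrative name
  namespace: udp-test
spec:
  type: LoadBalancer
  selector:
    app: udp-echo
  ports:
    - name: udp-port
      protocol: UDP
      port: 1025
      targetPort: 6000
    - name: health-port
      protocol: TCP
      port: 8080
      targetPort: 8080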
Then test the UDP ingestion as below:
kubectl run udp-client --rm -it --image=busybox --restart=Never --namespace=udp-test -- /bin/sh
# then, from inside the busybox shell:
echo "hello after socat fix" | nc -u <LB_PUBLIC_IP> 1025
You can confirm the message arrived in the pod logs using:
kubectl logs -n udp-test udp-echo-server
If the message shows up there, the path through the load balancer is working.
Once you add the TCP port for the health probe, UDP packets will start flowing and your application will receive them without any other change, as shown in the example above.
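The same checks can then be run against the real adapter (assuming the Deployment is named telemetry-asterix-adapter, per the ReplicaSet shown in the question):
# watch the adapter logs for incoming UDP data
kubectl logs -f -n utm deploy/telemetry-asterix-adapter
# confirm the Service exposes both ports and lists the pod as an endpoint
kubectl get svc,endpoints -n utm telemetry-asterix-adapter-svc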