
How to access internal load balancer DNS within the VPC in AWS


I deployed an internal nginx-ingress-controller in my EKS cluster, with nodes deployed on a private network.

controller:
  ingressClassByName: true
  ingressClassResource:
    name: nginx-ingress-controller
    enabled: true
    default: false
    controllerValue: "k8s.io/ingress-nginx-internal"
  kind: DaemonSet
  service:
    type: LoadBalancer
    external:
      enabled: false
    externalTrafficPolicy: Local
    internal:
      enabled: true
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
        service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
        service.beta.kubernetes.io/aws-load-balancer-target-type: "ip"
        service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
        service.beta.kubernetes.io/aws-load-balancer-name: "k8s-nlb"
        service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
        service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-123, subnet-456"

I can see the load balancer as well:

➜  ~ kubectl get svc -n ingress-nginx
NAME                                         TYPE           CLUSTER-IP       EXTERNAL-IP                                                                        PORT(S)                      AGE
ingress-nginx-private-controller-admission   ClusterIP      172.20.213.239   <none>                                                                             443/TCP                      10m
ingress-nginx-private-controller-internal    LoadBalancer   172.20.21.126    ad6cf7cdec1a148dfa354e73edba86c8-d17c74cfbc4dbba0.elb.eu-central-1.amazonaws   80:30516/TCP,443:31361/TCP   10m

I created a sample pod in the default namespace. When I curl the internal load balancer from it, the DNS name resolves to private IPs, but the connection to port 80 times out.

curl -iv ad6cf7cdec1a148dfa354e73edba86c8-d17c74cfbc4dbba0.elb.eu-central-1.amazonaws
* Host ad6cf7cdec1a148dfa354e73edba86c8-d17c74cfbc4dbba0.elb.eu-central-1.amazonaws:80 was resolved.
* IPv6: (none)
* IPv4: 10.12.4.100, 10.12.5.26, 10.12.6.75
*   Trying 10.12.4.100:80...
* connect to 10.12.4.100 port 80 from 10.12.5.175 port 33858 failed: Operation timed out
*   Trying 10.12.5.26:80...
* ipv4 connect timeout after 85044ms, move on!
*   Trying 10.12.6.75:80...
* Connection timed out after 300006 milliseconds
* closing connection #0
curl: (28) Connection timed out after 300006 milliseconds

Port 80 can be accessed when I port-forward the service like this:

➜  ~ sudo kubectl port-forward svc/ingress-nginx-private-controller-internal -n ingress-nginx 80:80
Password:
Forwarding from 127.0.0.1:80 -> 80
Forwarding from [::1]:80 -> 80
Handling connection for 80
➜  ~ curl -iv localhost
* Host localhost:80 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
*   Trying [::1]:80...
* Connected to localhost (::1) port 80
> GET / HTTP/1.1
> Host: localhost
> User-Agent: curl/8.7.1
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 404 Not Found
HTTP/1.1 404 Not Found
< Date: Wed, 19 Mar 2025 04:43:40 GMT
Date: Wed, 19 Mar 2025 04:43:40 GMT
< Content-Type: text/html
Content-Type: text/html
< Content-Length: 146
Content-Length: 146
< Connection: keep-alive
Connection: keep-alive
<

<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host localhost left intact

Can anyone please tell me what I am missing, and how to access the internal load balancer from any IP in the VPC?


asked Mar 19 at 4:46 by user3398900

1 Answer


This could be caused by multiple things, but since you are facing a timeout error, you can start by checking the following configurations.

  1. Verify Security Groups Attached to the Load Balancer
 aws ec2 describe-security-groups --group-ids <your-security-group-id>

Make sure it has inbound rules allowing traffic on ports 80 and 443 from within the VPC CIDR range (IPv4 or IPv6).
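For example, rules like the following would allow HTTP and HTTPS from the VPC; the security group ID and the CIDR are placeholders, so substitute your own values:

# Allow HTTP and HTTPS from the VPC CIDR (adjust the SG ID and CIDR to your environment)
aws ec2 authorize-security-group-ingress --group-id <your-security-group-id> --protocol tcp --port 80 --cidr 10.12.0.0/16
aws ec2 authorize-security-group-ingress --group-id <your-security-group-id> --protocol tcp --port 443 --cidr 10.12.0.0/16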

  2. Check the Subnets Used by the Load Balancer
aws ec2 describe-subnets --subnet-ids subnet-123 subnet-456

Look for MapPublicIpOnLaunch: false and PrivateDnsNameOptions to confirm these subnets are truly private, and ensure they are correctly configured to route traffic internally.
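To verify internal routing, you can also inspect the route tables associated with those subnets (the subnet IDs mirror the placeholders above):

# Show the route tables associated with the load balancer subnets
aws ec2 describe-route-tables --filters "Name=association.subnet-id,Values=subnet-123,subnet-456"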

  3. Check Kubernetes Network Policies

If you have NetworkPolicies enabled, they may be blocking traffic between the load balancer and the ingress controller pods. List your network policies:

kubectl get networkpolicies -A

Allow traffic to the ingress-nginx pods on ports 80 and 443.
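As a rough sketch, a policy like the one below would admit that traffic. The namespace and pod labels assume the defaults used by the ingress-nginx Helm chart, and the policy name is hypothetical; adjust both to match your release:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-nginx-http        # hypothetical name
  namespace: ingress-nginx
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx   # assumes default chart labels
  policyTypes:
    - Ingress
  ingress:
    - ports:                             # allow ingress from any source on 80/443
        - protocol: TCP
          port: 80
        - protocol: TCP
          port: 443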

Adding security groups to the NLB:

controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"   # Keep NLB
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
      service.beta.kubernetes.io/aws-load-balancer-target-type: "instance"  # Switch from "ip" to "instance"
      service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-xxxxxxxxxxxx" # Add your security group
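After updating the service, one way to confirm the change took effect is to check that the NLB targets are registered and healthy; the ARNs below are placeholders:

# Find the target groups behind the NLB, then check target health
aws elbv2 describe-target-groups --load-balancer-arn <nlb-arn> --query "TargetGroups[].TargetGroupArn"
aws elbv2 describe-target-health --target-group-arn <target-group-arn>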