
azure - Terraform Kubernetes Provider Error: "dial tcp 127.0.0.1:80: connect: connection refused" - Stack Overflow


I'm using Terraform to deploy an Azure Kubernetes Service (AKS) cluster and configure role-based access control (RBAC). However, I'm encountering an error when trying to create Kubernetes namespaces with the Terraform kubernetes provider.

Terraform Configuration:

I have the following setup:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.98.0"
    }
  }
}

provider "azurerm" {
  features {}
}

provider "kubernetes" {
  config_path = "~/.kube/config"
}

resource "null_resource" "validate_cluster" {
  provisioner "local-exec" {
    command = <<EOT
az aks get-credentials --resource-group ${var.resource_group_name} --name ${var.aks_cluster_name} --overwrite-existing
kubectl get nodes
EOT
  }
  depends_on = [var.aks_cluster_name]
}

resource "kubernetes_namespace" "dev" {
  metadata {
    name = "dev"
  }
  depends_on = [null_resource.validate_cluster]
}

Error Message:

When running terraform apply, I get the following error:

Error: Post "http://localhost/api/v1/namespaces": dial tcp 127.0.0.1:80: connect: connection refused
│
│   with module.rbac.kubernetes_namespace.dev,
│   on modules/rbac/main.tf line 13, in resource "kubernetes_namespace" "dev":
│   13: resource "kubernetes_namespace" "dev" {

What I've Tried:

  • Verified that kubectl is working by running kubectl get nodes after manually running az aks get-credentials. This works fine.
  • Checked that ~/.kube/config exists and contains the correct cluster information.
  • Ensured that the AKS cluster is up and running.

Any insights or solutions would be greatly appreciated!


asked Mar 19 at 12:58 by pavan trivedi
  • In the source code for the hashicorp/kubernetes provider I see that it emits an internal log line [DEBUG] Using kubeconfig: followed by the finalized kubeconfig path while it's initializing. Can you repeat your experiment with the environment variable TF_LOG=DEBUG set, and then search the verbose log output for "Using kubeconfig" and update your question to include all of the log lines you find like that? (Or if you don't find any, share that too!) – Martin Atkins Commented Mar 19 at 18:03
  • When you do that, I suggest running terraform plan -out=tfplan and terraform apply tfplan separately so that you can clearly see which log output comes from the plan phase and what comes from the apply phase, since both of them will separately initialize this provider and so each should generate its own version of this log line. If there is a difference between the log lines in the two phases then that might help explain the problem. – Martin Atkins Commented Mar 19 at 18:06
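For reference, the debugging run suggested in these comments might look like the following; the log file names here are just examples:

# Capture verbose logs for plan and apply separately, then look for the
# kubeconfig path the provider reports while initializing.
export TF_LOG=DEBUG

TF_LOG_PATH=plan.log terraform plan -out=tfplan
TF_LOG_PATH=apply.log terraform apply tfplan

grep "Using kubeconfig" plan.log apply.log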

2 Answers


To configure the Terraform Kubernetes provider correctly, I suggest using this config:

data "azurerm_kubernetes_cluster" "aks" {
# you can add the line below if you created the AKS cluster previously in your terraform code
#  depends_on          = [module.aks_module_name]
  name                = "your cluster name"
  resource_group_name = "you cluster rg"
}

provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.default.kube_config.0.host
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
}

Just for info, the data block lets you fetch the AKS config dynamically from Azure, which is better than putting the kube_config in a file.
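If the cluster is created by a resource in the same configuration (as in the second answer below), you can also point the provider at the resource's own kube_config outputs instead of a data source. A minimal sketch, assuming the resource is named azurerm_kubernetes_cluster.aks:

# Sketch only: assumes the cluster is managed in this configuration
# as azurerm_kubernetes_cluster.aks.
provider "kubernetes" {
  host                   = azurerm_kubernetes_cluster.aks.kube_config.0.host
  client_certificate     = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
}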

Terraform Kubernetes Provider connection refused while using terraform

In general, this issue seems to occur when Terraform's Kubernetes provider is unable to connect to the AKS API server.

This usually happens because of an incorrect Kubernetes provider configuration, missing authentication, or kubectl context issues, often because the provider is initialized before the cluster and its credentials have been fully provisioned.

As you shared, the configuration you are using is:

provider "kubernetes" {
  config_path = "~/.kube/config"
}

This relies on the kubeconfig at the specified path, which may not exist or be up to date when Terraform initializes the provider; if the provider cannot load it, it falls back to the default localhost endpoint, which is exactly what the error shows. When I tried the same approach, I ran into the same issue you are facing.

This type of issue is common when the cluster is created and its Kubernetes resources are managed in the same configuration while the provider reads its credentials from a file on disk.

Instead, make sure the Kubernetes provider uses the AKS credentials directly.

Configuration:

resource "azurerm_kubernetes_cluster" "aks" {
  name                = var.aks_cluster_name
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "aks-demo"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_DS2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  role_based_access_control_enabled = true
}

data "azurerm_kubernetes_cluster" "aks" {
  name                = var.aks_cluster_name
  resource_group_name = var.resource_group_name
}

provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.aks.kube_config.0.host
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
}

resource "kubernetes_namespace" "dev" {
  metadata {
    name = "dev"
  }
  depends_on = [azurerm_kubernetes_cluster.aks]
}
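If you pin provider versions as in the question's terraform block, the kubernetes provider can be declared alongside azurerm. A minimal sketch; the kubernetes version constraint is illustrative and not from the original post:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.98.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      # Illustrative constraint; pin the version you have tested.
      version = ">= 2.0"
    }
  }
}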


Refer:

https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#credentials-config

Get "http://localhost/api/v1/namespaces/***/***/***": dial tcp 127.0.0.1:80: connect: connection refused answered by Rémy Ntshaykolo
