How to automatically authenticate against a kubernetes cluster after creating it with terraform in Azure?

Problem description

I am trying to create a kubernetes cluster, a namespace and secrets via terraform. The cluster is created successfully, but the resources built on top of the cluster fail to be created.

This is the error message thrown by terraform after the kubernetes cluster has been created, when the namespace is about to be created:

azurerm_kubernetes_cluster_node_pool.mypool: Creation complete after 6m4s [id=/subscriptions/aaabcde1-abcd-abcd-abcd-aaaaaaabdce/resourcegroups/myrg/providers/Microsoft.ContainerService/managedClusters/my-aks/agentPools/win]
Error: Post https://my-aks-abcde123.hcp.australiaeast.azmk8s.io:443/api/v1/namespaces: dial tcp: lookup my-aks-abcde123.hcp.australiaeast.azmk8s.io on 10.128.10.5:53: no such host

  on mytf.tf line 114, in resource "kubernetes_namespace" "my":
 114: resource "kubernetes_namespace" "my" {

Manual workaround:

I can resolve this by manually authenticating against the kubernetes cluster via the command line and then applying the outstanding terraform changes with another terraform apply:

az aks get-credentials -g myrg -n my-aks --overwrite-existing

Automated workaround attempt:

My attempt to automate this authentication step failed. I tried a local-exec provisioner inside the definition of the kubernetes cluster, without success:

resource "azurerm_kubernetes_cluster" "myCluster" {
  name                = "my-aks"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "my-aks"
  network_profile {
    network_plugin      = "azure"
  }

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_B2s"
  }
  service_principal {
    client_id     = azuread_service_principal.tfapp.application_id
    client_secret = azuread_service_principal_password.tfapp.value
  }
  tags = {
    Environment = "demo"
  }
  windows_profile {
    admin_username = "myself"
    admin_password = random_string.password.result
  }
  provisioner "local-exec" {
    command="az aks get-credentials -g myrg -n my-aks --overwrite-existing"
  }
}

This is an example of a resource that fails to be created:

resource "kubernetes_namespace" "my" {
  metadata {
    name = "my-namespace"
  }
}

Is there a way to fully automate the creation of my resources, including those that build on the kubernetes cluster, without manual authentication?

Recommended answer

For your requirements, I think you can separate the creation of the AKS cluster from the creation of the resources inside the AKS cluster.

When creating the AKS cluster, you just need to put the local-exec provisioner in a null_resource like this:

resource "null_resource" "example" {
  provisioner "local-exec" {
    command="az aks get-credentials -g ${azurerm_resource_group.rg.name} -n my-aks --overwrite-existing"
  }
}

When the AKS cluster creation is finished, you then create your namespace through Terraform again.
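
For that second apply, the kubernetes provider can simply read the kubeconfig context that az aks get-credentials wrote in the previous step. A minimal sketch, assuming the default kubeconfig location (the config_path value is an assumption, not something taken from the question):

provider "kubernetes" {
  # Assumes the context written by "az aks get-credentials" in the null_resource step.
  config_path = "~/.kube/config"
}

resource "kubernetes_namespace" "my" {
  metadata {
    name = "my-namespace"
  }
}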

In this way, you do not need to authenticate manually; just execute the Terraform code.
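
If you would rather keep everything in a single configuration, a commonly used alternative is to point the kubernetes provider at the cluster's kube_config output instead of the local kubeconfig file, so no az aks get-credentials step is needed at all. A sketch of that pattern, reusing the azurerm_kubernetes_cluster.myCluster resource from the question:

provider "kubernetes" {
  # Credentials come straight from the AKS resource, so no local kubeconfig is required.
  host                   = azurerm_kubernetes_cluster.myCluster.kube_config.0.host
  client_certificate     = base64decode(azurerm_kubernetes_cluster.myCluster.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.myCluster.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.myCluster.kube_config.0.cluster_ca_certificate)
}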
