Question
I'm looking at setting up a cluster of GPU nodes on AKS. In order to avoid installing the nvidia device daemonset manually, apparently I can register for the GPUDedicatedVHDPreview feature and send UseGPUDedicatedVHD=true with AKS custom headers (https://docs.microsoft.com/en-us/azure/aks/gpu-cluster).
I understand how to do this on the command line, but I don't understand how I can do it using the azurerm_kubernetes_cluster Terraform provider.
Is it possible?
Answer
It looks like, at the time of writing, this isn't possible yet, as indicated by this open issue: https://github.com/terraform-providers/terraform-provider-azurerm/issues/6793
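For reference, the command-line route the question alludes to looks roughly like this. This is a sketch based on the linked Microsoft docs, not a tested recipe: exact flag support can vary by CLI version, and the resource group, cluster, and node pool names are placeholders.

```shell
# Register the preview feature (the aks-preview CLI extension may be required)
az feature register --namespace "Microsoft.ContainerService" --name "GPUDedicatedVHDPreview"

# Once the feature shows as "Registered", refresh the resource provider
az provider register --namespace Microsoft.ContainerService

# Add a GPU node pool, passing the custom header so AKS provisions nodes
# from the GPU-dedicated VHD (nvidia device plugin preinstalled).
# "myResourceGroup", "myAKSCluster", and "gpunp" are placeholder names.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name gpunp \
    --node-count 1 \
    --node-vm-size Standard_NC6 \
    --aks-custom-headers UseGPUDedicatedVHD=true
```

Until the provider gains an argument for these headers, one workaround is to drive the az CLI from Terraform via a null_resource with a local-exec provisioner, at the cost of Terraform not tracking the resulting node pool in its state.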