This article looks at how to deploy Google Cloud Functions in CI/CD without redeploying unchanged Cloud Functions, in order to avoid hitting the quota.

Problem description

Cloud Build has a create quota of 30. If we have more than 30 Cloud Functions, this quota can easily be reached. Is there a way to deploy more than 30 Cloud Functions, preferably one that is smart enough not to redeploy unmodified Cloud Functions?

Recommended answer

Following our conversation in the GCP community Slack channel, here is an idea with a small example. The example depicts one Cloud Function but can easily be extended to an arbitrary set of Cloud Functions (see the for_each sketch at the end of this answer).

Bear in mind - this is not my invention - plenty of examples can be found on the internet.

The CI/CD uses Terraform inside Cloud Build (simply speaking, the Cloud Build YAML file contains 'terraform init' and 'terraform apply'). Thus, a push (or pull request) triggers a Cloud Build job, which executes Terraform.
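
For reference, a minimal cloudbuild.yaml for such a pipeline could look like the sketch below. This is an assumption-based illustration rather than part of the original answer: the hashicorp/terraform image tag and the -auto-approve flag are placeholder choices.

# Hypothetical cloudbuild.yaml - a minimal sketch of the pipeline described above.
# The image tag is an assumption; backend and credential setup are out of scope here.
steps:
  # Initialise Terraform (the backend configuration is expected to live in the repo)
  - id: terraform-init
    name: hashicorp/terraform:1.5
    args: ['init']

  # Apply the configuration; -auto-approve is needed for a non-interactive CI run
  - id: terraform-apply
    name: hashicorp/terraform:1.5
    args: ['apply', '-auto-approve']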

In the scope of this question, the Terraform script should have 4 elements:

1/ A name for the zip archive with the Cloud Function code, as it should appear in the GCS bucket:

locals {
  # The object name embeds the SHA of the zip archive, so it only changes when the function source changes
  cf_zip_archive_name = "cf-some-prefix-${data.archive_file.cf_source_zip.output_sha}.zip"
}

2/ A zip archive:

data "archive_file" "cf_source_zip" {
  type        = "zip"
  source_dir  = "${path.module}/..<<path + directory to the CF code>>"
  output_path = "${path.module}/tmp/some-name.zip"
}

3/ A GCS object in a bucket (under the assumption that the bucket already exists, or is created outside the scope of this question):

resource "google_storage_bucket_object" "cf_source_zip" {
  name         = local.cf_zip_archive_name
  source       = data.archive_file.cf_source_zip.output_path
  content_type = "application/zip"
  bucket       = google_storage_bucket.cf_source_archive_bucket.name
}

4/ A Cloud Function (only 2 parameters are shown):

resource "google_cloudfunctions_function" "sf_integrations" {

  source_archive_bucket = google_storage_bucket.cf_source_archive_bucket.name
  source_archive_object = google_storage_bucket_object.cf_source_zip.name

}
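
For completeness, the same resource with the omitted arguments filled in might look like the sketch below. The function name, runtime, entry point and HTTP trigger are assumptions chosen only for illustration; they are not part of the original answer.

resource "google_cloudfunctions_function" "sf_integrations" {
  name         = "sf-integrations"   # hypothetical function name
  runtime      = "python39"          # hypothetical runtime
  entry_point  = "handler"           # hypothetical entry point
  trigger_http = true                # hypothetical trigger type

  # The two parameters that matter for this question:
  source_archive_bucket = google_storage_bucket.cf_source_archive_bucket.name
  source_archive_object = google_storage_bucket_object.cf_source_zip.name
}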

How it all works together =>

When Terraform is triggered, the zip file is created in case the Cloud Function code has been modified. The SHA hash of the zip file is different (if the code has been modified). Thus, the local variable with the GCS object name gets a different value. This means the zip file is uploaded to the GCS bucket under a new name. As the source code object now has a new name (source_archive_object = google_storage_bucket_object.cf_source_zip.name), Terraform finds out that the Cloud Function has to be redeployed (because the state file still has the old name of the archive object). The Cloud Function is redeployed.

On the other hand, if the code is not modified, the name source_archive_object = google_storage_bucket_object.cf_source_zip.name is not modified either, so Terraform does not deploy anything.

Obviously, if other parameters are modified - the redeployment goes ahead anyway.
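
As noted at the top of the answer, the same pattern extends to an arbitrary set of Cloud Functions. One way to do that is with for_each, as in the hedged sketch below, which would replace the single-function blocks above. The var.functions map, the directory layout, the runtime and the trigger type are assumptions for illustration only.

variable "functions" {
  # Hypothetical map: function name => entry point in its source
  type = map(string)
  # e.g. { "fn-a" = "handler_a", "fn-b" = "handler_b" }
}

data "archive_file" "cf_source_zip" {
  for_each    = var.functions
  type        = "zip"
  source_dir  = "${path.module}/../functions/${each.key}"   # assumed directory layout
  output_path = "${path.module}/tmp/${each.key}.zip"
}

resource "google_storage_bucket_object" "cf_source_zip" {
  for_each     = var.functions
  # Per-function object name that still embeds the archive SHA
  name         = "cf-${each.key}-${data.archive_file.cf_source_zip[each.key].output_sha}.zip"
  source       = data.archive_file.cf_source_zip[each.key].output_path
  content_type = "application/zip"
  bucket       = google_storage_bucket.cf_source_archive_bucket.name
}

resource "google_cloudfunctions_function" "cf" {
  for_each     = var.functions
  name         = each.key
  runtime      = "python39"          # assumption
  entry_point  = each.value
  trigger_http = true                # assumption

  source_archive_bucket = google_storage_bucket.cf_source_archive_bucket.name
  source_archive_object = google_storage_bucket_object.cf_source_zip[each.key].name
}

Only the functions whose source actually changed get new object names, so only those are redeployed on each run - which is exactly what keeps the number of deployments, and the associated quota usage, down.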
