Problem description
I've been trying to solve this mystery for a few days now, but no joy. Basically, Terraform cannot assume the role and fails with:
Initializing the backend...
2019/10/28 09:13:09 [DEBUG] New state was assigned lineage "136dca1a-b46b-1e64-0ef2-efd6799b4ebc"
2019/10/28 09:13:09 [INFO] Setting AWS metadata API timeout to 100ms
2019/10/28 09:13:09 [INFO] Ignoring AWS metadata API endpoint at default location as it doesn't return any instance-id
2019/10/28 09:13:09 [INFO] AWS Auth provider used: "SharedCredentialsProvider"
2019/10/28 09:13:09 [INFO] Attempting to AssumeRole arn:aws:iam::72xxxxxxxxxx:role/terraform-admin-np (SessionName: "terra_cnp", ExternalId: "", Policy: "")
Error: The role "arn:aws:iam::72xxxxxxxxxx:role/terraform-admin-np" cannot be assumed.
There are a number of possible causes of this - the most common are:
* The credentials used in order to assume the role are invalid
* The credentials do not have appropriate permission to assume the role
* The role ARN is not valid
In AWS:
I have a role terraform-admin-np with two AWS managed policies attached: AmazonS3FullAccess
& AdministratorAccess
and this trust relationship:
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::72xxxxxxxxxx:root"
},
"Action": "sts:AssumeRole"
}
]
}
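To rule out a broken trust relationship, the policy actually attached to the role can be inspected from the CLI; a quick sanity check, assuming the same local-tgw profile used later in the question:
# aws --profile local-tgw iam get-role --role-name terraform-admin-np --query 'Role.AssumeRolePolicyDocument'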
Then I have a user with this policy document attached:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "TfFullAccessSts",
"Effect": "Allow",
"Action": [
"sts:AssumeRole",
"sts:DecodeAuthorizationMessage",
"sts:AssumeRoleWithSAML",
"sts:AssumeRoleWithWebIdentity"
],
"Resource": "*"
},
{
"Sid": "TfFullAccessAll",
"Effect": "Allow",
"Action": "*",
"Resource": [
"*",
"arn:aws:ec2:region:account:network-interface/*"
]
}
]
}
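Before blaming the role, it is also worth confirming that the profile's credentials are valid at all; this check (assuming the local-tgw profile) prints the user ARN they resolve to:
# aws --profile local-tgw sts get-caller-identity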
and an S3 bucket txxxxxxxxxxxxxxte with this policy document attached:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "TFStateListBucket",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::72xxxxxxxxxx:root"
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::txxxxxxxxxxxxxxte"
},
{
"Sid": "TFStateGetPutObject",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::72xxxxxxxxxx:root"
},
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Resource": "arn:aws:s3:::txxxxxxxxxxxxxxte/*"
}
]
}
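The bucket policy can be exercised directly too; a listing attempt with the same profile (assumed here) confirms s3:ListBucket works before Terraform ever touches the backend:
# aws --profile local-tgw s3 ls s3://txxxxxxxxxxxxxxte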
In Terraform:
Snippet from provider.tf:
###---- Default Backend and Provider config values -----------###
terraform {
required_version = ">= 0.12"
backend "s3" {
encrypt = true
}
}
provider "aws" {
region = var.region
version = "~> 2.20"
profile = var.profile
assume_role {
role_arn = var.role_arn
session_name = var.session_name
}
}
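The variables referenced in that snippet are not shown in the question; a minimal variables.tf that would make it self-contained could look like this (the declarations are assumed, only the names come from the snippet above):
variable "region" {
  type = string
}

variable "profile" {
  type = string
}

variable "role_arn" {
  type = string
}

variable "session_name" {
  type = string
}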
Snippet from the tgw_cnp.tfvars backend config:
## S3 backend config
key = "backend/tgw_cnp_state"
bucket = "txxxxxxxxxxxxxxte"
region = "us-east-2"
profile = "local-tgw"
role_arn = "arn:aws:iam::72xxxxxxxxxx:role/terraform-admin-np"
session_name = "terra_cnp"
Then running it this way:
TF_LOG=debug terraform init -backend-config=tgw_cnp.tfvars
With this, I can assume the role using the AWS CLI without any issue:
# aws --profile local-tgw sts assume-role --role-arn "arn:aws:iam::72xxxxxxxxxx:role/terraform-admin-np" --role-session-name AWSCLI
{
"Credentials": {
"AccessKeyId": "AXXXXXXXXXXXXXXXXXXA",
"SecretAccessKey": "UixxxxxxxxxxxxxxxxxxxxxxxxxxxxMt",
"SessionToken": "FQoGZXIvYXdzEJb//////////wEaD......./5LFwNWf6riiNw9vtBQ==",
"Expiration": "2019-10-28T13:39:41Z"
},
"AssumedRoleUser": {
"AssumedRoleId": "AROA2P7ZON5TSWMOBQEBC:AWSCLI",
"Arn": "arn:aws:sts::72xxxxxxxxxx:assumed-role/terraform-admin-np/AWSCLI"
}
}
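The temporary credentials returned there can also be exported to prove they work outside Terraform (a sketch; the placeholders stand in for the truncated values in the output above):
# export AWS_ACCESS_KEY_ID=<AccessKeyId from the output>
# export AWS_SECRET_ACCESS_KEY=<SecretAccessKey from the output>
# export AWS_SESSION_TOKEN=<SessionToken from the output>
# aws sts get-caller-identity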
but Terraform fails with the above error. Any idea what I'm doing wrong?
Answer
Okay, answering my own question... it works now. I had made a silly mistake: the region in tgw_cnp.tfvars was wrong, and I kept missing it. With the AWS CLI I didn't have to specify the region (it was taken from the profile), so assuming the role worked without any issue; but in TF I specified the region explicitly, the value was wrong, and hence the failure. The suggestions in the error message were a bit misleading.
I can confirm the above config works fine. It's all good now.
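Since the root cause was a wrong region value, one cheap way to catch it up front (assuming the bucket and profile from the question) is to ask S3 where the bucket actually lives; the LocationConstraint in the response should match the region in the tfvars file:
# aws --profile local-tgw s3api get-bucket-location --bucket txxxxxxxxxxxxxxte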