Question
I have a kubectl Job that is invalid. While debugging it I extracted it to a YAML file, and I can see this:
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: 2020-03-19T21:40:11Z
  labels:
    app: vault-unseal-app
    job-name: vault-unseal-vault-unseal-1584654000
  name: vault-unseal-vault-unseal-1584654000
  namespace: infrastructure
  ownerReferences:
  - apiVersion: batch/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: CronJob
    name: vault-unseal-vault-unseal
    uid: c9965fdb-4fbb-11e9-80d7-061cf1426d5a
  resourceVersion: "163413544"
  selfLink: /apis/batch/v1/namespaces/infrastructure/jobs/vault-unseal-vault-unseal-1584654000
  uid: 35e63c20-6a2a-11ea-b577-069afd6d30d4
spec:
  backoffLimit: 0
  completions: 1
  parallelism: 1
  selector:
    matchLabels:
      app: vault-unseal-app
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: vault-unseal-app
        job-name: vault-unseal-vault-unseal-1584654000
    spec:
      containers:
      - env:
        - name: VAULT_ADDR
          value: http://vault-vault:8200
        - name: VAULT_SKIP_VERIFY
          value: "1"
        - name: VAULT_TOKEN
          valueFrom:
            secretKeyRef:
              key: vault_token
              name: vault-unseal-vault-unseal
        - name: VAULT_UNSEAL_KEY_0
          valueFrom:
            secretKeyRef:
              key: unseal_key_0
              name: vault-unseal-vault-unseal
        - name: VAULT_UNSEAL_KEY_1
          valueFrom:
            secretKeyRef:
              key: unseal_key_1
              name: vault-unseal-vault-unseal
        - name: VAULT_UNSEAL_KEY_2
          valueFrom:
            secretKeyRef:
              key: unseal_key_2
              name: vault-unseal-vault-unseal
        - name: VAULT_UNSEAL_KEY_3
          valueFrom:
            secretKeyRef:
              key: unseal_key_3
              name: vault-unseal-vault-unseal
        - name: VAULT_UNSEAL_KEY_4
          valueFrom:
            secretKeyRef:
              key: unseal_key_4
              name: vault-unseal-vault-unseal
        image: blockloop/vault-unseal
        imagePullPolicy: Always
        name: vault-unseal
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      nodeSelector:
        nodePool: ci
      restartPolicy: OnFailure
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 5
status:
  conditions:
  - lastProbeTime: 2020-03-19T21:49:11Z
    lastTransitionTime: 2020-03-19T21:49:11Z
    message: Job has reached the specified backoff limit
    reason: BackoffLimitExceeded
    status: "True"
    type: Failed
  failed: 1
  startTime: 2020-03-19T21:40:11Z
When I run kubectl create -f my_file.yaml, I get this error:
The Job "vault-unseal-vault-unseal-1584654000" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"controller-uid":"35262878-07bb-11eb-9b2c-0abca2a23428", "app":"vault-unseal-app"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: `selector` not auto-generated
Can someone suggest how to fix this?
Update:
After removing .spec.selector and testing again, I get the error: error: jobs.batch "vault-unseal-vault-unseal-1584654000" is invalid
This is how my config looks without .spec.selector:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"creationTimestamp":"2020-03-19T21:40:11Z","labels":{"controller-uid":"35e63c20-6a2a-11ea-b577-069afd6d30d4","job-name":"vault-unseal-vault-unseal-1584654000"},"name":"vault-unseal-vault-unseal-1584654000","namespace":"infrastructure","ownerReferences":[{"apiVersion":"batch/v1beta1","blockOwnerDeletion":true,"controller":true,"kind":"CronJob","name":"vault-unseal-vault-unseal","uid":"c9965fdb-4fbb-11e9-80d7-061cf1426d5a"}],"resourceVersion":"163427805","selfLink":"/apis/batch/v1/namespaces/infrastructure/jobs/vault-unseal-vault-unseal-1584654000","uid":"35e63c20-6a2a-11ea-b577-069afd6d30d4"},"spec":{"backoffLimit":20,"completions":1,"parallelism":1,"selector":{"matchLabels":{"controller-uid":"35e63c20-6a2a-11ea-b577-069afd6d30d4"}},"template":{"metadata":{"creationTimestamp":null,"labels":{"controller-uid":"35e63c20-6a2a-11ea-b577-069afd6d30d4","job-name":"vault-unseal-vault-unseal-1584654000"}},"spec":{"containers":[{"env":[{"name":"VAULT_ADDR","value":"http://vault-vault:8200"},{"name":"VAULT_SKIP_VERIFY","value":"1"},{"name":"VAULT_TOKEN","valueFrom":{"secretKeyRef":{"key":"vault_token","name":"vault-unseal-vault-unseal"}}},{"name":"VAULT_UNSEAL_KEY_0","valueFrom":{"secretKeyRef":{"key":"unseal_key_0","name":"vault-unseal-vault-unseal"}}},{"name":"VAULT_UNSEAL_KEY_1","valueFrom":{"secretKeyRef":{"key":"unseal_key_1","name":"vault-unseal-vault-unseal"}}},{"name":"VAULT_UNSEAL_KEY_2","valueFrom":{"secretKeyRef":{"key":"unseal_key_2","name":"vault-unseal-vault-unseal"}}},{"name":"VAULT_UNSEAL_KEY_3","valueFrom":{"secretKeyRef":{"key":"unseal_key_3","name":"vault-unseal-vault-unseal"}}},{"name":"VAULT_UNSEAL_KEY_4","valueFrom":{"secretKeyRef":{"key":"unseal_key_4","name":"vault-unseal-vault-unseal"}}}],"image":"blockloop/vault-unseal","imagePullPolicy":"Always","name":"vault-unseal","resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","nodeSelector":{"nodePool":"devs"},"restartPolicy":"OnFailure","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":5}}},"status":{"conditions":[{"lastProbeTime":"2020-03-19T21:49:11Z","lastTransitionTime":"2020-03-19T21:49:11Z","message":"Job has reached the specified backoff limit","reason":"BackoffLimitExceeded","status":"True","type":"Failed"}],"failed":1,"startTime":"2020-03-19T21:40:11Z"}}
  creationTimestamp: 2020-03-19T21:40:11Z
  labels:
    controller-uid: 35e63c20-6a2a-11ea-b577-069afd6d30d4
    job-name: vault-unseal-vault-unseal-1584654000
  name: vault-unseal-vault-unseal-1584654000
  namespace: infrastructure
  ownerReferences:
  - apiVersion: batch/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: CronJob
    name: vault-unseal-vault-unseal
    uid: c9965fdb-4fbb-11e9-80d7-061cf1426d5a
  resourceVersion: "163442526"
  selfLink: /apis/batch/v1/namespaces/infrastructure/jobs/vault-unseal-vault-unseal-1584654000
  uid: 35e63c20-6a2a-11ea-b577-069afd6d30d4
spec:
  backoffLimit: 100
  completions: 1
  parallelism: 1
  template:
    metadata:
      creationTimestamp: null
      labels:
        controller-uid: 35e63c20-6a2a-11ea-b577-069afd6d30d4
        job-name: vault-unseal-vault-unseal-1584654000
    spec:
      containers:
      - env:
        - name: VAULT_ADDR
          value: http://vault-vault:8200
        - name: VAULT_SKIP_VERIFY
          value: "1"
        - name: VAULT_TOKEN
          valueFrom:
            secretKeyRef:
              key: vault_token
              name: vault-unseal-vault-unseal
        - name: VAULT_UNSEAL_KEY_0
          valueFrom:
            secretKeyRef:
              key: unseal_key_0
              name: vault-unseal-vault-unseal
        - name: VAULT_UNSEAL_KEY_1
          valueFrom:
            secretKeyRef:
              key: unseal_key_1
              name: vault-unseal-vault-unseal
        - name: VAULT_UNSEAL_KEY_2
          valueFrom:
            secretKeyRef:
              key: unseal_key_2
              name: vault-unseal-vault-unseal
        - name: VAULT_UNSEAL_KEY_3
          valueFrom:
            secretKeyRef:
              key: unseal_key_3
              name: vault-unseal-vault-unseal
        - name: VAULT_UNSEAL_KEY_4
          valueFrom:
            secretKeyRef:
              key: unseal_key_4
              name: vault-unseal-vault-unseal
        image: blockloop/vault-unseal
        imagePullPolicy: Always
        name: vault-unseal
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      nodeSelector:
        nodePool: devs
      restartPolicy: OnFailure
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 5
status:
  conditions:
  - lastProbeTime: 2020-03-19T21:49:11Z
    lastTransitionTime: 2020-03-19T21:49:11Z
    message: Job has reached the specified backoff limit
    reason: BackoffLimitExceeded
    status: "True"
    type: Failed
  failed: 1
  startTime: 2020-03-19T21:40:11Z
Answer
It looks like you are not using the selector that the system auto-generates for you by default. Bear in mind that the recommended option when creating a Job is NOT to fill in selector: doing so makes it more likely that you create duplicate labels + selectors. You should therefore rely on the auto-generated ones, which guarantee uniqueness and free you from managing them manually.
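In practice this means stripping every server-generated field from the exported manifest before re-creating the Job: drop spec.selector, the controller-uid labels, the whole status section, and the server-managed metadata (uid, resourceVersion, selfLink, creationTimestamp, ownerReferences, and the last-applied-configuration annotation). A trimmed sketch of the manifest above, keeping only the user-settable fields (the name vault-unseal-manual is an assumption; any name not already taken by an existing Job works):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: vault-unseal-manual      # assumed new name; the old Job's name stays taken until it is deleted
  namespace: infrastructure
spec:
  backoffLimit: 0
  completions: 1
  parallelism: 1
  # no spec.selector and no controller-uid label: the API server generates both
  template:
    metadata:
      labels:
        app: vault-unseal-app
    spec:
      containers:
      - name: vault-unseal
        image: blockloop/vault-unseal
        imagePullPolicy: Always
        env:
        - name: VAULT_ADDR
          value: http://vault-vault:8200
        # ...remaining env entries unchanged from the original spec...
      restartPolicy: OnFailure
```

With the selector omitted, the API server adds a unique controller-uid label and matching selector on admission, which is exactly what the `selector` not auto-generated error was complaining about.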
The official docs explain this in more detail with an example.
If you want to use a manual selector, you need to set .spec.manualSelector: true in the Job's spec. This way the API server will not generate the labels automatically and you will be able to set them yourself.
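A minimal sketch of what that opt-in looks like, reusing the labels from the manifest above (the Job name is hypothetical, and note this path makes you responsible for keeping the selector unique across Jobs):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: vault-unseal-manual-selector   # hypothetical name
  namespace: infrastructure
spec:
  manualSelector: true        # tell the API server you own the selector
  selector:
    matchLabels:
      app: vault-unseal-app   # must match the pod template labels below
  template:
    metadata:
      labels:
        app: vault-unseal-app
    spec:
      containers:
      - name: vault-unseal
        image: blockloop/vault-unseal
      restartPolicy: OnFailure
```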
Remember that spec.completions, spec.selector and spec.template are immutable fields and cannot be updated. To change any of them you need to create a new Job.
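Since this Job was spawned by a CronJob, one way to re-run it without hand-editing the exported YAML at all is to delete the failed Job and create a fresh one from the CronJob's own template (the new name vault-unseal-rerun is an assumption):

```shell
# Delete the failed Job; its name is otherwise still taken
kubectl delete job vault-unseal-vault-unseal-1584654000 -n infrastructure

# Create a new Job from the parent CronJob's template;
# the server auto-generates the selector and controller-uid label
kubectl create job vault-unseal-rerun \
  --from=cronjob/vault-unseal-vault-unseal -n infrastructure
```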
The official docs on Writing a Job spec will help you understand what should and what shouldn't be put into the Job spec. Note that, as explained above, it is advised not to specify the pod selector / labels, in order to avoid creating duplicate labels + selectors.