Problem Description
This is what I keep getting:
[root@centos-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-server-h6nw8 1/1 Running 0 1h
nfs-web-07rxz 0/1 CrashLoopBackOff 8 16m
nfs-web-fdr9h 0/1 CrashLoopBackOff 8 16m
Below is the output of kubectl describe pods:
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
16m 16m 1 {default-scheduler } Normal Scheduled Successfully assigned nfs-web-fdr9h to centos-minion-2
16m 16m 1 {kubelet centos-minion-2} spec.containers{web} Normal Created Created container with docker id 495fcbb06836
16m 16m 1 {kubelet centos-minion-2} spec.containers{web} Normal Started Started container with docker id 495fcbb06836
16m 16m 1 {kubelet centos-minion-2} spec.containers{web} Normal Started Started container with docker id d56f34ae4e8f
16m 16m 1 {kubelet centos-minion-2} spec.containers{web} Normal Created Created container with docker id d56f34ae4e8f
16m 16m 2 {kubelet centos-minion-2} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "web" with CrashLoopBackOff: "Back-off 10s restarting failed container=web pod=nfs-web-fdr9h_default(461c937d-d870-11e6-98de-005056040cc2)"
I have two pods: nfs-web-07rxz and nfs-web-fdr9h, but if I do kubectl logs nfs-web-07rxz, or use the -p option, I don't see any log in either pod.
[root@centos-master ~]# kubectl logs nfs-web-07rxz -p
[root@centos-master ~]# kubectl logs nfs-web-07rxz
This is my replicationController yaml file:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web
spec:
  replicas: 2
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
      - name: web
        image: eso-cmbu-docker.artifactory.eng.vmware.com/demo-container:demo-version3.0
        ports:
        - name: web
          containerPort: 80
        securityContext:
          privileged: true
My Docker image was made from this simple Dockerfile:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y nginx
RUN apt-get install -y nfs-common
I am running my kubernetes cluster on CentOS-1611, kube version:
[root@centos-master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0", GitCommit:"86dc49aa137175378ac7fba7751c3d3e7f18e5fc", GitTreeState:"clean", BuildDate:"2016-12-15T16:57:18Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0", GitCommit:"86dc49aa137175378ac7fba7751c3d3e7f18e5fc", GitTreeState:"clean", BuildDate:"2016-12-15T16:57:18Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
If I run the docker image with docker run, I am able to run it without any issue; it is only through kubernetes that I get the crash.
Can someone help me out? How can I debug this without seeing any logs?
Accepted Answer
As @Sukumar commented, you need to have your Dockerfile specify a Command to run, or have your ReplicationController specify a command.
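For the second option, the command can go in the pod template of the ReplicationController. A minimal sketch based on the yaml above, assuming nginx is the service the container is meant to run:

```yaml
containers:
- name: web
  image: eso-cmbu-docker.artifactory.eng.vmware.com/demo-container:demo-version3.0
  # Run nginx in the foreground so the container has a long-running
  # process; otherwise the container exits and the pod crash-loops.
  command: ["nginx", "-g", "daemon off;"]
  ports:
  - name: web
    containerPort: 80
```

`command` here overrides the image's ENTRYPOINT, so it works even though the Dockerfile itself defines no command.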
The pod is crashing because it starts up and then immediately exits, so Kubernetes restarts it and the cycle continues.
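For the first option, a sketch of the Dockerfile above with a CMD added (assuming nginx is the intended service):

```dockerfile
FROM ubuntu
RUN apt-get update && apt-get install -y nginx nfs-common
# Keep nginx in the foreground: without a long-running foreground
# process the container exits as soon as it starts, which is exactly
# the CrashLoopBackOff behaviour described above.
CMD ["nginx", "-g", "daemon off;"]
```

With a foreground process defined, `kubectl logs` will also start showing output, since the container now lives long enough to produce any.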