This article describes how to resolve kubelet failing to get cgroup stats for the docker and kubelet services.

Problem description

I'm running kubernetes on bare-metal Debian (3 masters, 2 workers, PoC for now). I followed k8s-the-hard-way, and I'm running into the following problem on my kubelet:

And I have the same message for kubelet.service.

I have some files about those cgroups:

$ ls /sys/fs/cgroup/systemd/system.slice/docker.service
cgroup.clone_children  cgroup.procs  notify_on_release  tasks

$ ls /sys/fs/cgroup/systemd/system.slice/kubelet.service/
cgroup.clone_children  cgroup.procs  notify_on_release  tasks
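
As a quick cross-check of which cgroup the daemons actually run in, /proc/<pid>/cgroup can be read directly. This is only a sketch, not part of the original question: it assumes pidof is available and that the Docker daemon process is named dockerd or docker depending on the Docker version; for the name=systemd hierarchy, each daemon's line should end with a path like /system.slice/kubelet.service.

$ cat /proc/$(pidof kubelet)/cgroup                     # kubelet's cgroup per controller
$ cat /proc/$(pidof dockerd || pidof docker)/cgroup     # Docker daemon; binary name varies by version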

cAdvisor tells me:

$ curl http://127.0.0.1:4194/validate
cAdvisor version:

OS version: Debian GNU/Linux 8 (jessie)

Kernel version: [Supported and recommended]
    Kernel version is 3.16.0-4-amd64. Versions >= 2.6 are supported. 3.0+ are recommended.


Cgroup setup: [Supported and recommended]
    Available cgroups: map[cpu:1 memory:1 freezer:1 net_prio:1 cpuset:1 cpuacct:1 devices:1 net_cls:1 blkio:1 perf_event:1]
    Following cgroups are required: [cpu cpuacct]
    Following other cgroups are recommended: [memory blkio cpuset devices freezer]
    Hierarchical memory accounting enabled. Reported memory usage includes memory used by child containers.


Cgroup mount setup: [Supported and recommended]
    Cgroups are mounted at /sys/fs/cgroup.
    Cgroup mount directories: blkio cpu cpu,cpuacct cpuacct cpuset devices freezer memory net_cls net_cls,net_prio net_prio perf_event systemd
    Any cgroup mount point that is detectible and accessible is supported. /sys/fs/cgroup is recommended as a standard location.
    Cgroup mounts:
    cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
    cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
    cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
    cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
    cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
    cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
    cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
    cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
    cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0


Managed containers:
    /kubepods/burstable/pod76099b4b-af57-11e7-9b82-fa163ea0076a
    /kubepods/besteffort/pod6ed4ee49-af53-11e7-9b82-fa163ea0076a/f9da6bf60a186c47bd704bbe3cc18b25d07d4e7034d185341a090dc3519c047a
            Namespace: docker
            Aliases:
                    k8s_tiller_tiller-deploy-cffb976df-5s6np_kube-system_6ed4ee49-af53-11e7-9b82-fa163ea0076a_1
                    f9da6bf60a186c47bd704bbe3cc18b25d07d4e7034d185341a090dc3519c047a
    /kubepods/burstable/pod76099b4b-af57-11e7-9b82-fa163ea0076a/956911118c342375abfb7a07ec3bb37451bbc64a1e141321b6284cf5049e385f

EDIT

Disabling the cadvisor port on kubelet (--cadvisor-port=0) doesn't fix that.

Recommended answer

Try to start kubelet with

--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice

I'm using this solution on RHEL7 with Kubelet 1.8.0 and Docker 1.12.
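
For reference, a minimal sketch of where those flags could go in the kubelet unit file on a k8s-the-hard-way style install; the unit path, binary path, and the placeholder for the remaining flags are assumptions and should be adapted to your setup:

# /etc/systemd/system/kubelet.service (excerpt; path assumed, keep all of your existing flags)
[Service]
ExecStart=/usr/local/bin/kubelet \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  ... (the rest of your existing kubelet flags continue here)

$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet

The /systemd/system.slice value corresponds to the name=systemd hierarchy shown in the cgroup listing above, which is where systemd places docker.service and kubelet.service on this setup.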
