Memory limit on a Docker container not working

This article describes how to handle the case where a memory limit set on a Docker container does not seem to take effect. It should be a useful reference for anyone facing the same problem; follow along below to learn more!

Problem description

I'm running the latest version of Docker on top of Ubuntu 13.04:

root@docker:~# docker version
Client version: 0.6.6
Go version (client): go1.2rc3
Git commit (client): 6d42040
Server version: 0.6.6
Git commit (server): 6d42040
Go version (server): go1.2rc3
Last stable version: 0.6.6

But when I start the container:

root@docker:~# docker run -m=1524288 -i  -t ubuntu /bin/bash
root@7b09f638871a:/# free -m
             total       used       free     shared    buffers     cached
Mem:          1992        608       1383          0         30        341
-/+ buffers/cache:        237       1755
Swap:         2047          0       2047

I don't see any limiting from any kind, and my kernel has cgroups memory limit enabled:

kernel /boot/vmlinuz-3.8.0-33-generic ro console=tty0 root=/dev/xvda1 cgroup_enable=memory swapaccount=1
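
(To verify that the memory controller really is enabled after booting with these parameters, one can check /proc/cgroups and the mounted hierarchies; a minimal sketch, output columns may vary slightly by kernel:)

grep memory /proc/cgroups            # last column ("enabled") should be 1
mount | grep cgroup | grep memory    # the memory hierarchy should be mounted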

What obvious thing am I missing here?

Solution

free won't show the limit, because it is enforced via cgroups and is not visible to the container's userland tools. Instead, on the host (outside the container), you can check it through sysfs in the memory cgroup hierarchy:

vagrant@precise64:~$ docker run -m=524288 -d  -t busybox sleep 3600
f03a017b174f
vagrant@precise64:~$ cat /sys/fs/cgroup/memory/lxc/f03a017b174ff1022e0f46bc1b307658c2d96ffef1dd97e7c1929a4ca61ab80f//memory.limit_in_bytes
524288
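
The same cgroup directory also exposes live accounting, so current usage and limit-hit counts can be read alongside the limit itself. A minimal sketch, assuming the same lxc cgroup-v1 layout as above and reusing the full container ID printed by docker run:

CID=f03a017b174ff1022e0f46bc1b307658c2d96ffef1dd97e7c1929a4ca61ab80f
CG=/sys/fs/cgroup/memory/lxc/$CID
cat $CG/memory.limit_in_bytes    # the configured limit, in bytes
cat $CG/memory.usage_in_bytes    # current memory usage, in bytes
cat $CG/memory.failcnt           # how many times the limit has been hit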

To see the container run out of memory, run something that uses more memory than you allocated, e.g.:

docker run -m=524288 -d -p 8000:8000 -t ubuntu:12.10  /usr/bin/python3 -m http.server
8480df1d2d5d
vagrant@precise64:~$ docker ps | grep 0f742445f839
vagrant@precise64:~$ docker ps -a | grep 0f742445f839
0f742445f839        ubuntu:12.10        /usr/bin/python3 -m    16 seconds ago       Exit 137                                blue_pig
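
Exit status 137 is 128 + 9, i.e. the process died from SIGKILL, which is what the kernel OOM killer sends. On newer Docker clients the exit code can also be read back with docker inspect; a sketch (the template flag may not exist on a client as old as 0.6.6, so the raw JSON is grepped as a fallback):

# 137 = 128 + SIGKILL(9): the kernel OOM killer terminated the process
docker inspect -f '{{.State.ExitCode}}' 0f742445f839
# fallback for old clients without template support:
docker inspect 0f742445f839 | grep -i exitcode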

In dmesg you should see the container and process killed:

[  583.447974] Pid: 1954, comm: python3 Tainted: GF          O 3.8.0-33-generic #48~precise1-Ubuntu
[  583.447980] Call Trace:
[  583.447998]  [<ffffffff816df13a>] dump_header+0x83/0xbb
[  583.448108]  [<ffffffff816df1c7>] oom_kill_process.part.6+0x55/0x2cf
[  583.448124]  [<ffffffff81067265>] ? has_ns_capability_noaudit+0x15/0x20
[  583.448137]  [<ffffffff81191cc1>] ? mem_cgroup_iter+0x1b1/0x200
[  583.448150]  [<ffffffff8113893d>] oom_kill_process+0x4d/0x50
[  583.448171]  [<ffffffff816e1cf5>] mem_cgroup_out_of_memory+0x1f6/0x241
[  583.448187]  [<ffffffff816e1e7f>] mem_cgroup_handle_oom+0x13f/0x24a
[  583.448200]  [<ffffffff8119000d>] ? mem_cgroup_margin+0xad/0xb0
[  583.448212]  [<ffffffff811949d0>] ? mem_cgroup_charge_common+0xa0/0xa0
[  583.448224]  [<ffffffff81193ff3>] mem_cgroup_do_charge+0x143/0x170
[  583.448236]  [<ffffffff81194125>] __mem_cgroup_try_charge+0x105/0x350
[  583.448249]  [<ffffffff81194987>] mem_cgroup_charge_common+0x57/0xa0
[  583.448261]  [<ffffffff8119517a>] mem_cgroup_newpage_charge+0x2a/0x30
[  583.448275]  [<ffffffff8115b4d3>] do_anonymous_page.isra.35+0xa3/0x2f0
[  583.448288]  [<ffffffff8115f759>] handle_pte_fault+0x209/0x230
[  583.448301]  [<ffffffff81160bb0>] handle_mm_fault+0x2a0/0x3e0
[  583.448320]  [<ffffffff816f844f>] __do_page_fault+0x1af/0x560
[  583.448341]  [<ffffffffa02b0a80>] ? vfsub_read_u+0x30/0x40 [aufs]
[  583.448358]  [<ffffffffa02ba3a7>] ? aufs_read+0x107/0x140 [aufs]
[  583.448371]  [<ffffffff8119bb50>] ? vfs_read+0xb0/0x180
[  583.448384]  [<ffffffff816f880e>] do_page_fault+0xe/0x10
[  583.448396]  [<ffffffff816f4bd8>] page_fault+0x28/0x30
[  583.448405] Task in /lxc/0f742445f8397ee7928c56bcd5c05ac29dcc6747c6d1c3bdda80d8e688fae949 killed as a result of limit of /lxc/0f742445f8397ee7928c56bcd5c05ac29dcc6747c6d1c3bdda80d8e688fae949
[  583.448412] memory: usage 416kB, limit 512kB, failcnt 342
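
Since this kernel boots with swapaccount=1, the same cgroup directory also carries a combined memory+swap counter, so a runaway container can be stopped from spilling into swap as well. A cgroup-v1 sketch, reusing the $CID variable from the earlier snippet; the value must be at least memory.limit_in_bytes, and writing it requires root on the host:

# cap memory + swap together; must be >= memory.limit_in_bytes
echo 1048576 > /sys/fs/cgroup/memory/lxc/$CID/memory.memsw.limit_in_bytes
cat /sys/fs/cgroup/memory/lxc/$CID/memory.memsw.usage_in_bytes   # combined usage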

That concludes this article on a memory limit on a Docker container not working. We hope the answer above is helpful, and thank you for your support!
