Question
I have a mongod server running. Each day I execute mongodump to take a backup. The problem is that mongodump consumes a lot of resources and slows down the server (which, by the way, is already running some other heavy tasks).
My goal is to somehow limit the mongodump command, which is called from a shell script.
Thanks.
Answer
You should use cgroups. Mount points and details differ between distros and kernels. E.g. Debian 7.0 with the stock kernel doesn't mount cgroupfs by default and has the memory subsystem disabled (folks advise rebooting with cgroup_enable=memory), while openSUSE 13.1 ships with all of that out of the box (mostly due to systemd).
So first of all, create the mount points and mount cgroupfs, if your distro hasn't already done so:
mkdir /sys/fs/cgroup/cpu
mount -t cgroup -o cpuacct,cpu cgroup /sys/fs/cgroup/cpu
mkdir /sys/fs/cgroup/memory
mount -t cgroup -o memory cgroup /sys/fs/cgroup/memory
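If you are not sure whether your distro already mounts these, you can check first with plain shell tools (nothing cgroup-specific assumed here):
# list already-mounted cgroup hierarchies
mount -t cgroup
# or inspect the kernel's mount table directly
grep cgroup /proc/mounts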
Create a cgroup:
mkdir /sys/fs/cgroup/cpu/shell
mkdir /sys/fs/cgroup/memory/shell
Set up the cgroup. I decided to alter the cpu shares. The default value is 1024, so setting it to 128 limits the cgroup to about 11% of all CPU resources when there are competitors. If there are still free CPU resources, they will be given to mongodump. You may also use cpuset to limit the number of cores available to it (a sketch follows the commands below).
echo 128 > /sys/fs/cgroup/cpu/shell/cpu.shares
echo 50331648 > /sys/fs/cgroup/memory/shell/memory.limit_in_bytes
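For completeness, a sketch of the cpuset variant mentioned above. The mount point and group name mirror the examples here and are just placeholders; note that both cpuset.cpus and cpuset.mems must be set before any task can join the group:
mkdir /sys/fs/cgroup/cpuset
mount -t cgroup -o cpuset cgroup /sys/fs/cgroup/cpuset
mkdir /sys/fs/cgroup/cpuset/shell
# restrict the group to CPU core 0 and memory node 0
echo 0 > /sys/fs/cgroup/cpuset/shell/cpuset.cpus
echo 0 > /sys/fs/cgroup/cpuset/shell/cpuset.mems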
Now add the PIDs to the cgroup; this will also affect all of their children.
echo 13065 > /sys/fs/cgroup/cpu/shell/tasks
echo 13065 > /sys/fs/cgroup/memory/shell/tasks
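The PID 13065 above is just whatever process you want to confine. If you'd rather not look up mongodump's PID by hand, one option is to add the calling shell itself (PID $$) to the cgroup, so that mongodump inherits the membership as a child process. A sketch, with the dump path being only a placeholder:
# put the current shell into both groups; children inherit the cgroup
echo $$ > /sys/fs/cgroup/cpu/shell/tasks
echo $$ > /sys/fs/cgroup/memory/shell/tasks
mongodump --out /backup/$(date +%F)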
I ran a couple of tests. A Python process that tried to allocate a bunch of memory was killed by the OOM killer:
myaut@zenbook:~$ python -c 'l = range(3000000)'
Killed
I also ran four infinite loops outside the cgroup and a fifth one inside it. As expected, the loop running in the cgroup got only about 45% of the CPU time, while the rest of them got 355% (I have 4 cores).
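If you want to reproduce that experiment, a rough sketch of one such loop (the other four are started the same way, just without the echo into tasks):
# start a busy loop and move it into the limited group
while :; do :; done &
echo $! > /sys/fs/cgroup/cpu/shell/tasks
# then watch the per-process CPU usage
top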
Note that none of these changes survive a reboot!
You may add this code to the script that runs mongodump, or use some permanent solution.
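As a rough sketch of such a wrapper script (the group names, limits, and dump path are the ones used above and are only examples):
#!/bin/sh
# create the cgroups if they don't exist yet
CG_CPU=/sys/fs/cgroup/cpu/shell
CG_MEM=/sys/fs/cgroup/memory/shell
mkdir -p "$CG_CPU" "$CG_MEM"
# apply the limits: ~11% CPU weight under contention, 48 MiB of RAM
echo 128 > "$CG_CPU/cpu.shares"
echo 50331648 > "$CG_MEM/memory.limit_in_bytes"
# put this shell (and therefore mongodump) into both groups
echo $$ > "$CG_CPU/tasks"
echo $$ > "$CG_MEM/tasks"
exec mongodump --out /backup/$(date +%F)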