Problem description
I'm running a web server that handles many thousands of concurrent WebSocket connections. To make this possible on Debian Linux (my base image is google/debian:wheezy, running on GCE), where the default number of open files is set to 1000, I usually just raise the ulimit to the desired number (64,000).
This works out great, except that when I dockerized my application and deployed it, I found that Docker seems to ignore the limit definitions. I have tried the following (all on the host machine, not in the container itself):
MAX=64000
sudo bash -c "echo '* soft nofile $MAX' >> /etc/security/limits.conf"
sudo bash -c "echo '* hard nofile $MAX' >> /etc/security/limits.conf"
sudo bash -c "echo 'ulimit -n $MAX' >> /etc/profile"
ulimit -n $MAX
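Before blaming Docker, it is worth confirming that the ulimit change actually took effect in the current shell. A minimal check (nothing Docker-specific, works on any Linux host):

```shell
# Soft and hard open-file limits for the current shell
ulimit -Sn
ulimit -Hn

# The kernel's view of a process's limits (here: this shell itself);
# any PID can be substituted for "self"
grep 'Max open files' /proc/self/limits
```

Note that `ulimit -n` affects only the current shell and its children; it does not propagate to daemons such as dockerd that were started elsewhere.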
After doing some research I found that people were able to solve a similar issue by doing this:
sudo bash -c "echo 'limit nofile 262144 262144' >> /etc/init/docker.conf"
and rebooting / restarting the docker service.
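`/etc/init/docker.conf` is an Upstart configuration file. On a host where the Docker daemon runs under systemd instead, the rough equivalent would be a drop-in unit override; this is a sketch assuming a standard systemd-managed Docker install, and the path and value are illustrative:

```ini
# /etc/systemd/system/docker.service.d/limits.conf
[Service]
LimitNOFILE=262144
```

After adding the drop-in, run `sudo systemctl daemon-reload && sudo systemctl restart docker` so the daemon picks up the new limit.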
However, all of the above fail: I get the "too many open files" error when my app runs inside the container (running the same app without Docker works fine).
I have tried to run ulimit -a
inside the container to check whether the ulimit setup worked, but doing so throws an error, because ulimit is a shell builtin rather than an executable on the PATH.
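Since ulimit is a shell builtin rather than a binary, the trick is to ask a shell inside the container to evaluate it. A sketch, where `mycontainer` is a placeholder for your container's name or ID:

```shell
# Run the builtin through a shell inside the running container
docker exec mycontainer sh -c 'ulimit -n'

# Alternatively, ask the kernel directly: find the container's main PID,
# then read its limits from /proc on the host
pid=$(docker inspect --format '{{.State.Pid}}' mycontainer)
grep 'Max open files' /proc/"$pid"/limits
```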
Has anyone run into this and/or can suggest a way to get Docker to recognize the limits?
I was able to mitigate this issue with the following configuration:
I used Ubuntu 14.04 Linux for both the Docker machine and the host machine.
On the host machine you need to:
- update /etc/security/limits.conf to include:
* - nofile 64000
- add to your /etc/sysctl.conf:
fs.file-max = 64000
- reload the sysctl settings: sudo sysctl -p
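Raising the host limits this way works, but newer Docker releases also let you set the limit explicitly per container, which sidesteps limits.conf inheritance entirely. A sketch, where `myimage` is a placeholder image name and flag availability depends on your Docker version:

```shell
# Set soft and hard nofile limits for this container only
docker run --ulimit nofile=64000:64000 myimage
```

There is also a daemon-level counterpart (`--default-ulimit nofile=64000:64000` passed to the Docker daemon) that applies the limit to every container by default.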