Problem Description
Basically, the title says it all: Is there any limit in the number of containers running at the same time on a single Docker host?
There are a number of system limits you can run into (and work around) but there's a significant amount of grey area depending on
- How you are configuring your Docker containers.
- What you are running in your containers.
- What kernel, distribution and Docker version you are on.

The figures below are from the boot2docker 1.11.1 VM image, which is based on Tiny Core Linux 7. The kernel is 4.4.8.
Docker
Docker creates or uses a number of resources to run a container, on top of what you run inside the container:

- Attaches a virtual ethernet adaptor to the `docker0` bridge (1023 max per bridge)
- Mounts an AUFS and `shm` file system (1048576 mounts max per fs type)
- Creates an AUFS layer on top of the image (127 layers max)
- Forks 1 extra `docker-containerd-shim` management process (~3MB per container on average; capped by `sysctl kernel.pid_max`)
- Keeps Docker API/daemon internal data to manage the container (~400k per container)
- Creates kernel `cgroup`s and namespaces
- Opens file descriptors (~15 + 1 per running container at startup; capped by `ulimit -n` and `sysctl fs.file-max`)
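Several of the caps above are ordinary kernel tunables. A read-only sketch for checking where a given Linux host stands (the values will differ per machine):

```shell
#!/bin/sh
# Print the kernel limits mentioned above that bound how many
# container shim processes and file descriptors can exist.
echo "kernel.pid_max : $(cat /proc/sys/kernel/pid_max)"
echo "fs.file-max    : $(cat /proc/sys/fs/file-max)"
echo "ulimit -n      : $(ulimit -n)"
```

If a limit is the bottleneck, it can usually be raised, e.g. with `sysctl -w kernel.pid_max=<value>` (root required, and the right value depends on your workload).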
Docker options
- Port mapping with `-p` runs an extra process per port number on the host (~4.5MB per port on average pre 1.12, ~300k per port in 1.12 and later; also capped by `sysctl kernel.pid_max`)
- `--net=none` and `--net=host` remove the networking overhead.
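As a concrete illustration of the two options above (the image, port numbers and container commands are placeholders; the script skips cleanly when Docker is not installed):

```shell
#!/bin/sh
# Sketch only: contrast -p port mapping with --net=host.
if ! command -v docker >/dev/null 2>&1; then
    echo "docker not installed; skipping"
    exit 0
fi
# -p spawns one docker-proxy process per published port on the host:
docker run -d -p 8080:80 busybox nc -l -p 80 -e echo host
# --net=host skips the veth pair, docker0 bridge and the proxy process:
docker run -d --net=host busybox nc -l -p 80 -e echo host
```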
Container services
The overall limits will normally be decided by what you run inside the containers rather than Docker's overhead (unless you are doing something esoteric, like testing how many containers you can run :)

If you are running apps in a virtual machine (Node, Ruby, Python, Java), memory usage is likely to become your main issue.

IO across 1000 processes would cause a lot of IO contention.

1000 processes trying to run at the same time would cause a lot of context switching (see the VM apps above for garbage collection).

If you create network connections from 1000 containers, the host's network layer will get a workout.

It's not much different from tuning a Linux host to run 1000 processes, just with some additional Docker overheads to include.
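A rough way to watch the context-switch pressure described above on any Linux host (read-only; the rate is entirely workload-dependent):

```shell
#!/bin/sh
# The "ctxt" line in /proc/stat is a running total of context switches
# since boot; sampling it twice gives an approximate per-second rate.
c1=$(awk '/^ctxt/ {print $2}' /proc/stat)
sleep 1
c2=$(awk '/^ctxt/ {print $2}' /proc/stat)
echo "context switches/sec: $((c2 - c1))"
```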
Example
1023 Docker busybox containers running `nc -l -p 80 -e echo host` used about 1GB of kernel memory and 3.5GB of system memory.

1023 plain `nc -l -p 80 -e echo host` processes running on the host used about 75MB of kernel memory and 125MB of system memory.

Starting 1023 containers serially took ~8 minutes.

Killing 1023 containers serially took ~6 minutes.
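The timings above can be reproduced at a smaller scale with a loop like this sketch; `N` and the `bench` container names are placeholders, the image and command mirror the example, and the script skips cleanly when Docker is unavailable:

```shell
#!/bin/sh
# Sketch of the serial start/kill benchmark above, at a small scale.
N=10   # the original test used 1023
if ! command -v docker >/dev/null 2>&1; then
    echo "docker not installed; skipping"
    exit 0
fi
start=$(date +%s)
i=1
while [ "$i" -le "$N" ]; do
    docker run -d --name "bench$i" busybox nc -l -p 80 -e echo host >/dev/null
    i=$((i + 1))
done
echo "started $N containers in $(( $(date +%s) - start ))s"
start=$(date +%s)
# Force-remove all the benchmark containers in one call:
docker rm -f $(docker ps -aq --filter 'name=bench') >/dev/null
echo "killed $N containers in $(( $(date +%s) - start ))s"
```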