Running a Django worker and Daphne in a Docker container

This article describes how to run a Django Channels worker and Daphne in a Docker container; hopefully the question and answer below are a useful reference for anyone facing the same problem.

Problem description

I have a Django application that runs in a Docker container. Recently I figured out that I'm going to need to add a websockets interface to my application. I'm using Channels with Daphne behind nginx, and Redis as a cache. The problem is that I have to run the Django workers and Daphne in one container. The script that runs on container startup:

#!/usr/bin/env bash

python wait_for_postgres.py
python manage.py makemigrations
python manage.py migrate
python manage.py collectstatic --no-input

python manage.py runworker --only-channels=http.* --only-channels=websocket.* -v2
daphne team_up.asgi:channel_layer --port 8000 -b 0.0.0.0

But it hangs on running the worker. I tried nohup, but it doesn't seem to work. If I run daphne directly in the container with docker exec, everything works just fine.

Solution

This is an old question, but I figured I would answer it anyway, because I recently faced the same issue and thought I could shed some light on it.

How Django Channels works

Django Channels is another layer on top of Django and it has two process types:

  • One that accepts HTTP/Websockets
  • One that runs Django views, Websocket handlers, background tasks, etc

Basically, when a request comes in, it first hits the interface server (Daphne), which accepts the HTTP/Websocket connection and puts it on the Redis queue. The worker (consumer) then sees it, takes it off the queue and runs the view logic (e.g. Django views, WS handlers, etc).
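
For context, here is a minimal sketch of the consumer side. The --only-channels flags and the team_up.asgi:channel_layer entry point in the question indicate Channels 1.x, so this uses the 1.x route/reply_channel API; the module layout and function names are illustrative, not taken from the project:

# routing.py (illustrative)
from channels.routing import route

def ws_connect(message):
    # Channels 1.x: accept the websocket handshake that Daphne queued
    message.reply_channel.send({"accept": True})

def ws_receive(message):
    # echo the incoming frame back over the reply channel
    message.reply_channel.send({"text": message.content["text"]})

channel_routing = [
    route("websocket.connect", ws_connect),
    route("websocket.receive", ws_receive),
]

The worker started with runworker takes these messages off the Redis queue and calls the matching handler; Daphne itself never executes this code.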

Why it didn't work for you

Because you only run the worker (consumer), and it blocks the execution of the interface server (producer). That means no connections will be accepted, and the worker is just staring at an empty Redis queue.
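
To make the failure mode concrete: the startup script runs its commands sequentially, and runworker is a long-running foreground process, so control never reaches the daphne line:

# the script executes top to bottom; runworker never exits on its own
python manage.py runworker --only-channels=http.* --only-channels=websocket.* -v2   # blocks here indefinitely
daphne team_up.asgi:channel_layer --port 8000 -b 0.0.0.0   # never executed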

How I made it work

I run Daphne, Redis and the workers as separate containers for easy scaling. DB migrations, static file collection, etc. are executed only in the Daphne container. Only one instance of that container runs at a time, to ensure there are no parallel DB migrations.

Workers, on the other hand, can be scaled up and down to deal with the incoming traffic.

How you could make it work

Split your setup into at least two containers. I wouldn't recommend running everything in one container (using Supervisor, for example). Why? Because when the time comes to scale the setup, there's no easy way to do it. You could scale your container to two instances, but that just creates another Supervisor with Daphne, Redis and Django in it... If you split the worker from Daphne, you can easily scale the worker container to deal with growing incoming requests; see the container sketch after the two scripts below.

One container could run:

#!/usr/bin/env bash

python wait_for_postgres.py
python manage.py migrate
python manage.py collectstatic --no-input

daphne team_up.asgi:channel_layer --port 8000 -b 0.0.0.0

while the other one:

#!/usr/bin/env bash

python wait_for_postgres.py
python manage.py runworker --only-channels=http.* --only-channels=websocket.* -v2
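
With this split, the two scripts above map naturally onto separate containers. A rough sketch using plain docker run commands (the image name, network name and script paths are hypothetical; a docker-compose file would express the same thing):

#!/usr/bin/env bash
# hypothetical names - adjust to your project
docker network create team_up_net

# Redis backs the channel layer shared by Daphne and the workers
docker run -d --name redis --network team_up_net redis:alpine

# exactly one Daphne container; it also runs migrate/collectstatic
docker run -d --name daphne --network team_up_net -p 8000:8000 team_up_image ./start_daphne.sh

# any number of worker containers; add more to absorb more traffic
docker run -d --name worker1 --network team_up_net team_up_image ./start_worker.sh
docker run -d --name worker2 --network team_up_net team_up_image ./start_worker.sh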

The 'makemigrations' command

There is no need to run that command in the script you provided; if anything, it could block the whole startup while awaiting input for some question (e.g. "Did you rename column X to Y?").

Instead, you can execute it in a running container like this:

docker exec -it <container_name> python manage.py makemigrations

