Problem description
I'm running a multi-tenant website, where I would like to reduce the overhead of creating a PostgreSQL connection per request. Django's CONN_MAX_AGE allows this, at the expense of creating a lot of open idle connections to PostgreSQL (8 workers * 20 threads = 160 connections). With 10MB per connection, this consumes a lot of memory.
The main purpose is reducing connection-time overhead. Hence my questions:
- Which setup should I use for such a solution? (PgBouncer?)
- Can I use 'transaction' pool mode with Django?
- Would I be better off using something like: https://github.com/kennethreitz/django-postgrespool instead of Django's pooling?
Django 1.6 settings:
DATABASES['default'] = {
    'ENGINE': 'django.db.backends.postgresql_psycopg2',
    ....
    'PORT': '6432',
    'OPTIONS': {'autocommit': True,},
    'CONN_MAX_AGE': 300,
}
ATOMIC_REQUESTS = False # default
Postgres:
max_connections = 100
PgBouncer:
pool_mode = session # Can this be transaction?
max_client_conn = 400 # Should this match postgres max_connections?
default_pool_size = 20
reserve_pool_size = 5
Solution

Here's a setup I've used.
pgbouncer runs on the same machine as gunicorn, celery, etc.
pgbouncer.ini:
[databases]
<dbname> = host=<dbhost> port=<dbport> dbname=<dbname>
[pgbouncer]
; your app will need filesystem permissions to this unix socket
unix_socket_dir = /var/run/postgresql
; you'll need to configure this file with username/password pairs you plan on
; connecting with.
auth_file = /etc/pgbouncer/userlist.txt
; "session" resulted in atrocious performance for us. I think
; "statement" prevents transactions from working.
pool_mode = transaction
; you'll probably want to change default_pool_size. take the max number of
; connections for your postgresql server, and divide that by the number of
; pgbouncer instances that will be connecting to it, then subtract a few
; connections so you can still connect to PG as an admin if something goes wrong.
; you may then need to adjust min_pool_size and reserve_pool_size accordingly.
; (a worked sizing example follows this file)
default_pool_size = 50
min_pool_size = 10
reserve_pool_size = 10
reserve_pool_timeout = 2
; I was using gunicorn + eventlet, which is why this is so high. It
; needs to be high enough to accommodate all the persistent connections we're
; going to allow from Django & other apps.
max_client_conn = 1000
...
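The sizing comments above boil down to simple arithmetic. Here is a minimal sketch of that calculation; the numbers (instance count, greenlets per worker, headroom) are hypothetical and not from the original setup:

```python
# Hypothetical numbers illustrating the sizing rules in the comments above.
pg_max_connections = 100   # postgresql.conf max_connections
pgbouncer_instances = 2    # app servers, each running its own pgbouncer
admin_reserve = 4          # connections kept free for manual admin access

# default_pool_size: Postgres' connection budget, split across pgbouncer
# instances, minus a few so an admin can still connect directly.
default_pool_size = (pg_max_connections - admin_reserve) // pgbouncer_instances
print(default_pool_size)   # 48 server connections per pgbouncer

# max_client_conn only limits *client* connections to pgbouncer, so it can be
# much larger than the server pool: count every gunicorn/eventlet greenlet and
# celery worker that may hold a persistent connection, plus some headroom.
greenlets = 8 * 100        # 8 gunicorn workers * 100 greenlets each
celery_workers = 32
print(greenlets + celery_workers + 100)  # 932 -> round up, e.g. max_client_conn = 1000
```

The point is that default_pool_size is bounded by what Postgres can handle, while max_client_conn is bounded only by how many clients may hold a connection open at once.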
/etc/pgbouncer/userlist.txt:
"<dbuser>" "<dbpassword>"
Django settings.py:
...
DATABASES = {
    'default': {
        'ENGINE': 'django.contrib.gis.db.backends.postgresql_psycopg2',
        'NAME': '<dbname>',
        'USER': '<dbuser>',
        'PASSWORD': '<dbpassword>',
        'HOST': '/var/run/postgresql',
        'PORT': '',
        'CONN_MAX_AGE': None,  # Set to None for persistent connections
    }
}
...
If I remember correctly, you can basically have any number of "persistent" connections to pgbouncer, since pgbouncer releases server connections back to the pool when Django is done with them (as long as you're using transaction or statement for pool_mode). When Django tries to reuse its persistent connection, pgbouncer takes care of waiting for a usable connection to Postgres.
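To make that concrete, here is a minimal sketch (not from the original answer) using psycopg2 directly against pgbouncer; the socket directory, port 6432, and credentials are placeholders based on the configs above. Each `with conn:` block is one transaction, and in transaction pool_mode the underlying server connection is only held for the duration of that block:

```python
# A long-lived client connection to pgbouncer, analogous to Django with
# CONN_MAX_AGE = None. Socket dir, port, and credentials are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="/var/run/postgresql",  # pgbouncer's unix_socket_dir
    port=6432,                   # assumed pgbouncer listen_port
    dbname="<dbname>",
    user="<dbuser>",
    password="<dbpassword>",
)

with conn:                       # one transaction
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
        print(cur.fetchone())
# The transaction has committed, so pgbouncer can hand the server connection
# to another client even though this client connection stays open.

with conn:                       # a later transaction reuses the same client
    with conn.cursor() as cur:   # connection; pgbouncer waits for a free
        cur.execute("SELECT 2")  # server connection if the pool is busy
        print(cur.fetchone())

conn.close()
```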