
postgresql - Database Connections Spiking on Heroku Dyno Startup with django-db-geventpool – MAX_CONNS Not Enforced - Stack Overflow


I'm using Django, Gunicorn with Gevent, and django-db-geventpool on Heroku (Performance L dynos, WEB_CONCURRENCY=17). My database connections spike significantly on dyno startup, exceeding the expected number of connections.
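For context, a settings sketch matching this setup might look like the following. The ENGINE path and the MAX_CONNS/REUSE_CONNS option keys follow django-db-geventpool's documented usage; the database name and credentials are placeholders, not the asker's actual config:

```python
# settings.py -- illustrative sketch, not the actual production config.
# ENGINE and the OPTIONS keys follow django-db-geventpool's README;
# credentials and names here are placeholders.
DATABASES = {
    'default': {
        'ENGINE': 'django_db_geventpool.backends.postgresql_psycopg2',
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        'CONN_MAX_AGE': 0,       # the pool manages connection lifetimes, not Django
        'OPTIONS': {
            'MAX_CONNS': 4,      # pool cap per worker process
            'REUSE_CONNS': 2,    # connections kept open for reuse
        },
    }
}
```

Note that django-db-geventpool's README also requires `ATOMIC_REQUESTS = False` and `CONN_MAX_AGE = 0`, since persistent Django connections would fight the pool.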

Expected Behavior

Given my setup:

  • MAX_CONNS=4 (per worker)
  • REUSE_CONNS=2
  • WEB_CONCURRENCY=17 (workers per dyno)
  • 6 dynos in production

I would expect each dyno to hold at most 68 connections (17 workers * 4 MAX_CONNS). However, on startup, I see single dynos temporarily holding 150+ idle connections, which contributes to hitting Heroku’s 500 connection limit.
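The expected ceiling above can be spelled out with the question's own numbers:

```python
# Expected connection math for the setup described above
# (all numbers come from the question, none are measured).
MAX_CONNS = 4         # pool cap per gunicorn worker process
WEB_CONCURRENCY = 17  # worker processes per dyno
DYNOS = 6             # production dynos

per_dyno = MAX_CONNS * WEB_CONCURRENCY  # expected ceiling per dyno
fleet_wide = per_dyno * DYNOS           # expected ceiling across the fleet

print(per_dyno)    # 68
print(fleet_wide)  # 408
```

A fleet-wide ceiling of 408 leaves only ~92 connections of headroom under Heroku's 500 limit, so a startup spike of 150+ on a single dyno is enough to exhaust it.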

Key Questions

  • What does django-db-geventpool do if more than MAX_CONNS connections are requested?
    • Are requests queued and forced to wait for a connection to free up?
    • Or does django-db-geventpool ignore MAX_CONNS and allow connections to exceed the limit?
  • Why might I be seeing connection spikes during dyno startup?
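django-db-geventpool's actual behavior should be checked against its source, but the usual answer to the first question is that a bounded pool makes excess requests block until a connection is returned. A toy semaphore-based pool (plain `threading` here for illustration; the real library uses gevent primitives) shows the queueing semantics:

```python
import queue
import threading

class BoundedPool:
    """Toy pool: at most max_conns objects ever exist; extra acquires wait.
    Illustrative only -- not django-db-geventpool's implementation."""

    def __init__(self, max_conns, factory):
        self._sem = threading.BoundedSemaphore(max_conns)
        self._idle = queue.SimpleQueue()
        self._factory = factory

    def acquire(self):
        self._sem.acquire()           # blocks once max_conns are checked out
        try:
            return self._idle.get_nowait()
        except queue.Empty:
            return self._factory()    # lazily create up to max_conns objects

    def release(self, conn):
        self._idle.put(conn)
        self._sem.release()           # wakes one waiting acquire, if any
```

Under this model the cap holds per process: a 150+ spike from one dyno would mean more processes than expected, not a broken cap.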

Steps Taken So Far

  • Verified that the spike happens only during startup, not under normal traffic.
  • Checked pg_stat_activity and saw many idle connections from the same dyno.
  • Ensured I’m not leaking connections from Celery, cron jobs, or background tasks.
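When repeating the `pg_stat_activity` check, grouping by client attributes can show which dyno (and how many distinct backends per dyno) holds the idle connections. A query along these lines, run through `heroku pg:psql`, may help; column names follow the standard PostgreSQL `pg_stat_activity` view, though availability varies by version:

```sql
-- Count connections per client and state to spot the spiking dyno.
SELECT application_name, client_addr, state, count(*)
FROM pg_stat_activity
WHERE datname = current_database()
GROUP BY application_name, client_addr, state
ORDER BY count(*) DESC;
```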

Has anyone encountered this issue with django-db-geventpool on Heroku? Any insights on whether it respects MAX_CONNS or if connections can exceed the limit under high concurrency?

asked Mar 13 at 17:15 by Johnny Metz

2 Answers


Yes, it should absolutely respect MAX_CONNS as a per-worker limit. What I believe is happening is that you briefly have double the number of workers. I faced something similar with AWS Elastic Beanstalk: the culprit was the "All at once" deployment policy, which briefly runs 2x instances and thus doubles the number of connections, hitting the DB connection limit. Changing it to batched rolling deployments was all that was needed. Unfortunately, Heroku doesn't seem to have a similar option. What you can do is temporarily scale your app down with `heroku ps:scale web=5`, work on the dyno, and scale it back up with `heroku ps:scale web=17` once you're done.

I assume your dyno is running multiple processes (perhaps 8 or more per dyno), so:

MAX_CONNS × WEB_CONCURRENCY × number of processes = overall connections

which in this case is 4 × 17 × 8 = 544 or more.

From the django-db-geventpool GitHub:

Is your webserver using more than one process? In that case, each process is independent and the DB connections cannot be shared; so 2 processes with 15 connections each can open 30 connections.

source

Due to the way Python and gevent work, the pool is shared between threads (or greenlets) inside the same Python process; a connection can't be shared with another process. Remember that with gevent, one process can use hundreds or thousands of connections at the same time. This pool allows you to set a limit per process and also prevents Django DB queries from blocking the process, which would make gevent useless. If you want a global pool, pgbouncer or similar should be used.

So, MAX_CONNS should be set to (Postgres's allowed connections / number of processes) or lower.

source
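Plugging the question's own numbers into that formula (Heroku's limit of 500 connections, 17 worker processes per dyno across 6 dynos) shows the chosen MAX_CONNS=4 is already at the recommended ceiling:

```python
# Worked example of the "allowed connections / number of processes" rule,
# using only numbers stated in the question.
allowed = 500          # Heroku's plan-wide connection limit
processes = 17 * 6     # worker processes per dyno * dynos = 102

max_conns = allowed // processes
print(max_conns)  # 4
```

This means there is essentially no headroom for extra processes during restarts, which is consistent with the startup spikes hitting the limit.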
