
postgresql - Close PostgreSQL database connections when using multiple instances of a Dash app with nginx running on Docker


I am trying to deploy a Dash application with nginx, running on Docker. I use Docker Compose to spin up 8 separate instances of the same dashboard, each of which can receive different inputs. Each dashboard fetches data from a PostgreSQL database depending on the user's inputs on the dashboard.

However, when opening multiple instances, the DB connections stay in the "idle" state (with ROLLBACK or COMMIT as their last query) and never close, even after the dashboard tab is closed. Moreover, because the dashboard auto-refreshes, additional connections keep accumulating.
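
For reference, this is roughly how the lingering connections can be checked, using the standard pg_stat_activity view (a minimal helper sketch; the current_database() filter is my assumption about which rows are relevant):

from sqlalchemy import text

def show_connections(engine):
    # List server-side connections for this database: pid, state, last query
    with engine.connect() as conn:
        rows = conn.execute(text(
            "SELECT pid, state, query FROM pg_stat_activity "
            "WHERE datname = current_database()"
        ))
        for pid, state, query in rows:
            print(pid, state, query)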

What I would like is to close all connections once a dashboard tab is closed. I have tried websockets, without success, and the same goes for SQLAlchemy's scoped_session. If I stop the code manually (Ctrl+C in the terminal), the engine is disposed and the connections are closed, but I cannot manage to make that happen automatically.
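
The only workaround I can think of is disabling pooling entirely, which I would rather avoid. For reference, that would look roughly like the sketch below (assuming SQLAlchemy's NullPool, under which closing a connection really closes it instead of returning it to the pool; the DSN is a placeholder):

from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# Poolless engine: every checkout opens a fresh connection and close()
# truly closes it, so nothing lingers idle -- at the cost of losing reuse
engine = create_engine(
    "postgresql://user:password@host:5432/dbname",  # placeholder DSN
    poolclass=NullPool,
)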

Below are the relevant code snippets; please tell me if you need anything else :)

data/fetch_db_data.py


import os
from dotenv import load_dotenv
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker, scoped_session
import datetime
import pytz
import pandas as pd
import atexit
import signal
import sys

load_dotenv()

uk_dashboard_username = os.getenv("uk_dashboard_username")
uk_dashboard_password = os.getenv("uk_dashboard_password")
uk_dashboard_port = os.getenv("uk_dashboard_port")
uk_dashboard_ip_address = os.getenv("uk_dashboard_ip_address")
uk_dashboard_db_name = os.getenv("uk_dashboard_db_name")

connection_string = (
    f"postgresql://{uk_dashboard_username}:{uk_dashboard_password}@{uk_dashboard__ip_address}:{uk_dashboard_port}/{uk_dashboard_db_name}"
)

engine = create_engine(
    connection_string,
    pool_size=5,            # Max number of pooled connections
    max_overflow=5,         # Extra connections allowed when the pool is full
    pool_recycle=30,        # Recycle connections older than 30 seconds (value is in seconds)
    pool_pre_ping=True,     # Check that a connection is alive before using it
    pool_timeout=60         # Seconds to wait for a free connection before raising
)
SessionLocal = scoped_session(sessionmaker(bind=engine, autoflush=False, autocommit=False, expire_on_commit=True))

utc = pytz.utc
amsterdam_tz = pytz.timezone("Europe/Amsterdam")

def shutdown():
    print("Disposing of database engine...")
    SessionLocal.remove()
    engine.dispose()

    # Log which Gunicorn worker is shutting down (os is imported at the top)
    print(f"Shutting down worker {os.getpid()}...")

atexit.register(shutdown)
signal.signal(signal.SIGTERM, lambda signum, frame: shutdown() or sys.exit(0))
signal.signal(signal.SIGINT, lambda signum, frame: shutdown() or sys.exit(0))

# Context manager for safe session handling
from contextlib import contextmanager

@contextmanager
def get_session():
    session = SessionLocal()
    try:
        yield session
        session.commit()  # Ensure transactions are committed
    except Exception as e:
        session.rollback()  # Prevents lingering ROLLBACK connections
        raise e
    finally:
        session.close()  # Closes session properly

def fetch_data(date: datetime.date | None = None):
    # Default to today at call time; a default argument expression would be
    # evaluated only once, at import time
    if date is None:
        date = datetime.datetime.today().date()

    # Create localized datetime objects for D 00:00 and D+1 23:59:59 in Amsterdam time
    start_amsterdam = amsterdam_tz.localize(
        datetime.datetime.combine(date, datetime.datetime.min.time())
    )
    end_amsterdam = amsterdam_tz.localize(
        datetime.datetime.combine(date + datetime.timedelta(days=1), datetime.datetime.max.time())
    )

    # Convert to UTC
    start_utc = start_amsterdam.astimezone(utc)
    end_utc = end_amsterdam.astimezone(utc)

    query=text("""
        SELECT data
        FROM table
        WHERE delivery_start_utc BETWEEN :start_utc AND :end_utc
        ;
    """)

    # Fetch data with proper session management
    with SessionLocal() as session:
        df = pd.read_sql_query(
            query,
            session.bind,  # The engine this session is bound to
            params={"start_utc": start_utc, "end_utc": end_utc},
        )

    return df
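
For context, the dashboards call fetch_data from Dash callbacks roughly like this (a simplified sketch; the component ids here are illustrative, not the real ones from callbacks/update_data_callbacks.py):

import datetime
from dash import Input, Output, callback
from data.fetch_db_data import fetch_data

@callback(
    Output("data-store", "data"),
    Input("date-picker", "date"),
)
def update_data(date_str):
    # Dash date-picker components deliver the selected date as an ISO string
    date = datetime.date.fromisoformat(date_str)
    df = fetch_data(date)
    return df.to_dict("records")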

run_app.py

import dash
from layouts.main_layout import global_layout
from data.fetch_db_data import shutdown

app = dash.Dash(__name__)

app.layout = global_layout

server = app.server

from callbacks.update_data_callbacks import *
from callbacks.update_charts_callbacks import *
from callbacks.style_callbacks import *
from callbacks.timer_callbacks import *
from callbacks.input_callbacks import *

if __name__ == "__main__":
    try:
        app.run_server(host="0.0.0.0", port=8050, debug=False)
    finally:
        shutdown()  # Cleanup on full server exit
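
For completeness, I know Gunicorn also exposes server hooks; a worker-exit hook that disposes the engine would presumably look like the sketch below (a gunicorn.conf.py per the Gunicorn docs; I have not verified that this closes the idle connections):

# gunicorn.conf.py
def worker_exit(server, worker):
    # Runs in the worker process just after it exits; dispose that
    # worker's connection pool
    from data.fetch_db_data import shutdown
    shutdown()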

docker-compose.yml

version: '3.8'

services:
  dash:
    build: .
    deploy:
      replicas: 8  # Runs 8 separate Dash instances
    restart: always
    volumes:
      - .:/app
    env_file:
      - .env
    environment:
      - ENV=production
    command: ["gunicorn", "-w", "4", "--threads", "4", "-k", "gevent", "-b", "0.0.0.0:8096", "uk_dash_app:server"]
    expose:
      - "8096"

  nginx_uk_dashboard:
    image: nginx:latest
    container_name: nginx_uk_dashboard
    ports:
      - "6969:6969"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - dash

Dockerfile

# Use an official Python runtime as a parent image
FROM python:3.12-slim

# Install system dependencies needed for PostgreSQL and compiling Python packages
RUN apt-get update && apt-get install -y libpq-dev gcc && rm -rf /var/lib/apt/lists/*

# Set the working directory in the container
WORKDIR /app

# Copy the application files into the container
COPY . /app

# Install dependencies
RUN pip install --no-cache-dir --upgrade pip && \
    pip install -r requirements.txt

# Expose the port the app runs on
EXPOSE 8096

nginx.conf

upstream dash_app_cluster {
    least_conn;  # Send requests to the least busy instance

    server dash:8096 max_fails=3 fail_timeout=5s;
}

server {
    listen 6969;

    client_max_body_size 10M;

    location / {
        proxy_pass http://dash_app_cluster;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Thanks for your help, Berthouille
