
PostgreSQL Connection Pooling with psycopg2

By Frederick Tubiermont


Every time get_db() runs, it opens a new TCP connection to PostgreSQL. That costs 20-100ms. At any meaningful traffic level, you want to reuse connections instead of creating new ones.

The Problem

import psycopg2

from config import Config

# This opens a brand-new connection on every request
def get_db():
    conn = psycopg2.connect(
        host=Config.DB_HOST,
        database=Config.DB_NAME,
        user=Config.DB_USER,
        password=Config.DB_PASSWORD
    )
    return conn

With 100 concurrent users each making a request per second, that's 100 connection handshakes per second. PostgreSQL's default max_connections is 100, but managed providers often set it lower — Railway's free tier is around 25.

The Solution: ThreadedConnectionPool

psycopg2 ships with a built-in pool. No extra dependencies.

# utils/db.py
import psycopg2
import psycopg2.pool
from psycopg2.extras import RealDictCursor
from contextlib import contextmanager
from config import Config

# Create the pool once at module level
_pool = None

def init_pool():
    """Initialize the connection pool. Call once at app startup."""
    global _pool
    _pool = psycopg2.pool.ThreadedConnectionPool(
        minconn=2,   # Keep at least 2 connections open
        maxconn=10,  # Never open more than 10
        host=Config.DB_HOST,
        port=Config.DB_PORT,
        database=Config.DB_NAME,
        user=Config.DB_USER,
        password=Config.DB_PASSWORD
    )

@contextmanager
def get_db():
    """Get a connection from the pool, return it when done."""
    conn = _pool.getconn()
    try:
        conn.cursor_factory = RealDictCursor
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise
    finally:
        _pool.putconn(conn)  # Return to pool, not close

Initialize at App Startup

# app.py
from flask import Flask
from utils.db import init_pool
from config import Config

def create_app():
    app = Flask(__name__)
    app.config.from_object(Config)

    # Initialize pool once when app starts
    init_pool()

    return app

Usage is Identical

Your routes don't change at all — the context manager interface stays the same:

@app.route("/users")
def users():
    with get_db() as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT * FROM users ORDER BY created_at DESC")
            users = cur.fetchall()
    return render_template("users.html", users=users)

Note on commits: The pooled get_db() auto-commits on clean exit. For write routes (INSERT/UPDATE/DELETE), you do not need to call conn.commit() explicitly — the context manager handles it. Calling it twice is harmless but redundant.

Choosing minconn and maxconn

Setting Rule of thumb
minconn Number of worker processes (gunicorn workers)
maxconn PostgreSQL max_connections ÷ number of app instances
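As a worked example (the numbers here are illustrative, not prescriptive): with max_connections = 100, 4 app instances, and a few connections held back for tooling, the per-instance budget works out like this:

```python
# Illustrative sizing arithmetic; plug in your own numbers.
max_connections = 100   # from SHOW max_connections; in psql
app_instances = 4
reserved = 5            # headroom for psql, migrations, monitoring

per_instance_budget = (max_connections - reserved) // app_instances
print(per_instance_budget)  # 23 -> a safe maxconn for each instance's pool
```

Rounding down matters: if every instance hits its maxconn at once, the total must stay under max_connections or new connections start failing.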

For a Railway deployment with 1 instance (check your actual max_connections with SHOW max_connections; in psql):

_pool = psycopg2.pool.ThreadedConnectionPool(
    minconn=2,
    maxconn=20,  # Leaves room for other tools (psql, migrations)
    host=Config.DB_HOST,
    port=Config.DB_PORT,
    database=Config.DB_NAME,
    user=Config.DB_USER,
    password=Config.DB_PASSWORD
)

Handling Pool Exhaustion

If all connections are in use, getconn() raises PoolError. Handle it gracefully:

from psycopg2 import pool as pg_pool

@contextmanager
def get_db():
    try:
        conn = _pool.getconn()
    except pg_pool.PoolError:
        # Pool exhausted: catch this in a Flask errorhandler and return 503
        raise RuntimeError("Database pool exhausted. Try again shortly.")
    try:
        conn.cursor_factory = RealDictCursor
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise
    finally:
        _pool.putconn(conn)
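Since exhaustion is often transient — a burst of traffic that clears in milliseconds — another option is to retry briefly before giving up. A sketch of that idea (getconn_with_retry, the PoolError stand-in, and FlakyPool are all illustrative, not part of psycopg2):

```python
import time

class PoolError(Exception):
    """Stand-in for psycopg2.pool.PoolError."""

class FlakyPool:
    """Stub pool: exhausted for the first few getconn() calls."""
    def __init__(self, failures):
        self.failures = failures
    def getconn(self):
        if self.failures > 0:
            self.failures -= 1
            raise PoolError("connection pool exhausted")
        return "conn"

def getconn_with_retry(pool, attempts=3, delay=0.05):
    # Retry a few times with a short pause; re-raise if still exhausted.
    for i in range(attempts):
        try:
            return pool.getconn()
        except PoolError:
            if i == attempts - 1:
                raise
            time.sleep(delay)

pool = FlakyPool(failures=2)
print(getconn_with_retry(pool))  # 'conn' after two retries
```

Keep attempts and delay small: if the pool is exhausted for whole seconds, retrying just queues more work behind the problem, and failing fast with a 503 is kinder to clients.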

Closing the Pool on Shutdown

Define close_pool in utils/db.py where _pool lives, then register it in app.py:

# utils/db.py  (add alongside init_pool)
def close_pool():
    global _pool
    if _pool:
        _pool.closeall()
        _pool = None

# app.py
import atexit
from flask import Flask
from utils.db import init_pool, close_pool

def create_app():
    app = Flask(__name__)
    init_pool()
    atexit.register(close_pool)
    return app

Before vs After

Metric                 Per-request connection    Connection pool
Connection time        20-100ms per request      ~0ms (reused)
Max concurrent users   ~25 (provider limit)      500+
Memory usage           High (each conn ~5MB)     Stable

When to Use SimpleConnectionPool vs ThreadedConnectionPool

  • SimpleConnectionPool: Single-threaded apps (scripts, CLI tools)
  • ThreadedConnectionPool: Flask with threaded server (default) — use this

If you're using asyncio (not gevent/eventlet), look at AsyncConnectionPool from the psycopg package (psycopg3). For gevent/eventlet, ThreadedConnectionPool works correctly after monkey-patching — they are different async models.
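To see why the threaded variant matters, here's a toy pool built on queue.Queue — a stand-in for what ThreadedConnectionPool achieves with locks, not its actual implementation. Eight threads check connections in and out concurrently, and no connection is ever handed to two threads at once:

```python
import queue
import threading

class ToyPool:
    """Toy thread-safe pool; queue.Queue does the locking for us."""
    def __init__(self, size):
        self._q = queue.Queue()
        for i in range(size):
            self._q.put(f"conn-{i}")
    def getconn(self, timeout=1.0):
        return self._q.get(timeout=timeout)  # blocks while pool is empty
    def putconn(self, conn):
        self._q.put(conn)

pool = ToyPool(size=3)
in_use = set()
lock = threading.Lock()
errors = []

def worker():
    for _ in range(100):
        conn = pool.getconn()
        with lock:
            if conn in in_use:       # would mean the same connection
                errors.append(conn)  # was handed out twice at once
            in_use.add(conn)
        with lock:
            in_use.discard(conn)
        pool.putconn(conn)

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(errors)  # [] -> no connection was double-checked-out
```

SimpleConnectionPool skips this synchronization, which is exactly why handing it to a threaded Flask server corrupts state.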
