
    PostgreSQL Advisory Locks, explained (with real-world patterns)

    Flavio Del Grosso

    Sept 20, 2025


    If you’ve ever thought “I just need a tiny, fast lock that isn’t tied to a table or row,” Postgres has you covered. Advisory locks are little rendezvous points you can use from SQL to coordinate work across sessions and services—without touching your schema. They’re “advisory” because Postgres won’t enforce them for you; your app agrees to play by the rules.

    Below is a clear, hands-on guide: what they are, when to use them, how to use them safely, and a few patterns you can drop straight into production.


    What advisory locks are (and aren’t)

    • Application-defined locks. You invent the meaning of the lock key (e.g., “rebuild cache for store 42”). The database only stores and arbitrates the key; it doesn’t know what it represents.
    • Two lifetimes. Take a lock for the session (it survives commits/rollbacks, until you unlock or disconnect) or for the transaction (auto-releases on commit/rollback). You can also choose shared (many holders) or exclusive (one holder).
    • Key shapes. Keys can be a single 64-bit integer (bigint) or a pair of 32-bit integers; these two key spaces do not overlap. Pick whichever is more convenient for namespacing.
    • Visible in pg_locks. You can inspect who holds what via the pg_locks view. For bigint keys, Postgres splits the number across columns; the doc even gives a formula to reassemble it.
    • Scoped to a database. The same advisory lock key can be held simultaneously in different databases within the same cluster; they don’t conflict across DB boundaries.
    • Shared pool, finite capacity. Advisory locks live in shared memory along with regular locks, bounded by max_connections and max_locks_per_transaction (so… don’t mint millions).
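
    To make the two lifetimes in the list above concrete, here's a quick demo you can run from psql (the key 42 is just an arbitrary example):

    sql

    -- Session-level: survives COMMIT; release explicitly (or by disconnecting)
    BEGIN;
    SELECT pg_advisory_lock(42);
    COMMIT;
    -- ...still held here...
    SELECT pg_advisory_unlock(42);        -- true: released
    
    -- Transaction-level: gone as soon as the transaction ends
    BEGIN;
    SELECT pg_advisory_xact_lock(42);
    COMMIT;                               -- lock released automatically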

    The core API you’ll actually use

    Blocking (waits until lock available):

    sql

    -- session-level
    SELECT pg_advisory_lock( k_bigint );                 -- exclusive
    SELECT pg_advisory_lock_shared( k_bigint );          -- shared
    SELECT pg_advisory_lock( k1_int, k2_int );           -- exclusive
    SELECT pg_advisory_lock_shared( k1_int, k2_int );    -- shared
    
    -- transaction-level (auto-release at commit/rollback)
    SELECT pg_advisory_xact_lock( k_bigint );
    SELECT pg_advisory_xact_lock_shared( k1_int, k2_int );

    Non-blocking (try and return immediately):

    sql

    SELECT pg_try_advisory_lock( k1_int, k2_int );        -- boolean
    SELECT pg_try_advisory_xact_lock( k_bigint );         -- boolean

    Releasing:

    sql

    SELECT pg_advisory_unlock( k1_int, k2_int );          -- true if actually released
    SELECT pg_advisory_unlock_all();                      -- drop all session-level locks

    A session can acquire the same lock more than once; it must unlock the same number of times (helpful for nested helpers).
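
    For example, within one session:

    sql

    SELECT pg_advisory_lock(7);      -- acquire
    SELECT pg_advisory_lock(7);      -- acquire again (re-entrant)
    SELECT pg_advisory_unlock(7);    -- true, but still held (one acquisition left)
    SELECT pg_advisory_unlock(7);    -- true, now actually released
    SELECT pg_advisory_unlock(7);    -- false (plus a warning): nothing left to release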


    How to pick good keys (without collisions)

    Use the two-int form for natural namespacing:

    • key1 = a small constant per “lock family” (e.g., 10 for “cache rebuilds”).
    • key2 = your resource id (e.g., store_id).

    That gives you readable, deterministic keys and avoids hashing puzzles:

    sql

    -- Try to become exclusive owner of `store_id` within the "cache rebuilds" namespace.
    SELECT pg_try_advisory_lock(10, :store_id) AS got_it;

    If got_it is true, proceed. If false, somebody else is already doing it—bail or retry.


    Five patterns that shine with advisory locks

    1) Make scheduled jobs idempotent across workers

    You’ve got multiple cron instances or a distributed job runner? De-duplicate work:

    sql

    WITH attempt AS (
      SELECT pg_try_advisory_xact_lock(1, :job_id) AS got
    )
    SELECT CASE WHEN got THEN perform_my_job(:job_id) END
    FROM attempt;

    Transaction-level locks are great here: they auto-release on commit/rollback so you won’t forget.

    2) Leader election (the lightweight way)

    One process holds “the crown” and others stand down until it’s free. Use session-level and don’t tie leadership to any one transaction:

    sql

    -- on process start
    SELECT pg_try_advisory_lock(2, 1) AS i_am_leader; -- namespace=2, crown=1

    If you get true, you’re leader until you disconnect or call pg_advisory_unlock(2,1).

    With two contenders, whoever grabs the lock first becomes leader; the other gets false and stands down until the lock frees up.
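
    A minimal two-session transcript (A and B stand in for two worker processes):

    sql

    -- Session A (starts first)
    SELECT pg_try_advisory_lock(2, 1);    -- true: A is leader
    
    -- Session B (starts later)
    SELECT pg_try_advisory_lock(2, 1);    -- false: B stands down and retries later
    
    -- Session A disconnects or calls pg_advisory_unlock(2, 1); on B's next retry:
    SELECT pg_try_advisory_lock(2, 1);    -- true: B is the new leader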

    3) Serialize migrations or one-off maintenance

    Wrap the critical section with a cluster-friendly lock so parallel deploys don’t trip:

    sql

    DO $$
    BEGIN
      IF NOT pg_try_advisory_lock(3, 0) THEN
        RAISE NOTICE 'another migration is running, exiting';
        RETURN;
      END IF;
    
      -- … your DDL/data fix here …
    
      PERFORM pg_advisory_unlock(3, 0);
    END $$;
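
    If you'd rather wait for a concurrent migration than bail out, and you want the lock to clean itself up even when the DDL throws, the transaction-level flavor works too; a minimal sketch:

    sql

    BEGIN;
    -- Waits for any concurrent migration, then holds the lock until
    -- COMMIT/ROLLBACK, so there is no manual unlock step even if the DDL fails.
    SELECT pg_advisory_xact_lock(3, 0);
    
    -- … your DDL/data fix here …
    
    COMMIT;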

    4) Protect an external, non-SQL resource

    Only one worker can touch a shared file, queue, or API quota window at a time:

    sql

    -- Exclusive lock per external handle
    SELECT pg_advisory_xact_lock(4, :external_handle_id);
    -- talk to the API safely…
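
    Because the lock is transaction-scoped, do the external work inside the transaction that holds it; a minimal sketch:

    sql

    BEGIN;
    SELECT pg_advisory_xact_lock(4, :external_handle_id);
    -- call the API / touch the shared file here, while the transaction is still open
    COMMIT;   -- lock released; the next worker may proceed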

    5) Build a read-mostly barrier with shared/exclusive

    Let many readers proceed together (shared lock), but block them when a single writer needs exclusivity:

    • Readers: pg_advisory_lock_shared(5, :resource_id)
    • Writer: pg_advisory_lock(5, :resource_id)
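
    A minimal sketch of that handshake, shown with the transaction-level variants so everything cleans up on commit:

    sql

    -- Reader sessions: many can hold the shared lock at the same time
    BEGIN;
    SELECT pg_advisory_xact_lock_shared(5, :resource_id);
    -- … read the resource …
    COMMIT;
    
    -- Writer session: waits until the shared holders are done, then runs alone
    BEGIN;
    SELECT pg_advisory_xact_lock(5, :resource_id);
    -- … rewrite the resource …
    COMMIT;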

    A careful word on safety

    • Cooperation is required. Advisory locks don’t guard rows or tables; they guard ideas. Any code path that mutates the same conceptual resource must use the same lock or it’s not protected.
    • Avoid “accidental” mass locking. In queries with ORDER BY … LIMIT …, the planner might evaluate expressions before applying the LIMIT. If that expression is pg_advisory_lock(id), you might lock more than you expected. Use a subquery or CTE to bound rows first:

    sql

    -- bad
    SELECT pg_advisory_lock(id) FROM items ORDER BY created_at DESC LIMIT 100;
    
    -- good
    SELECT pg_advisory_lock(id)
    FROM (SELECT id FROM items ORDER BY created_at DESC LIMIT 100) s;
    • Don’t explode the lock table. Each lock consumes shared memory. If you generate vast numbers of distinct keys (e.g., per-row locks for millions of rows), you can exhaust the pool. Cap concurrent keys, and reuse namespaces.
    • Remember scope. Advisory locks don’t cross databases. If your app spans multiple DBs in the same cluster, you’ll need a coordination DB or an out-of-band mechanism.

    Monitoring & debugging

    Peek at who’s holding what:

    sql

    SELECT pid, granted, locktype, mode, classid, objid, objsubid
    FROM pg_locks
    WHERE locktype = 'advisory';
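
    To see not just the pid but what that backend is doing, join against pg_stat_activity:

    sql

    SELECT l.pid, l.granted, l.classid, l.objid, a.application_name, a.state, a.query
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.pid = l.pid
    WHERE l.locktype = 'advisory';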

    If you use the bigint form and want the original key back:

    sql

    -- Recompose the bigint key from pg_locks columns (as documented)
    SELECT (classid::bigint << 32) | objid::bigint AS key64
    FROM pg_locks
    WHERE locktype = 'advisory' AND objsubid = 1;

    (That mapping between classid/objid and your original bigint is documented.)
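
    For the two-int form there's no arithmetic: the first key is shown in classid, the second in objid, and objsubid is 2:

    sql

    SELECT pid, granted, classid AS key1, objid AS key2
    FROM pg_locks
    WHERE locktype = 'advisory' AND objsubid = 2;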


    A tiny, production-ready wrapper

    SQL helper (transaction-level, with retry budget):

    sql

    -- Try to run a function while holding an exclusive advisory lock.
    -- Returns TRUE if it ran, FALSE if it couldn't acquire the lock in time.
    CREATE OR REPLACE FUNCTION run_with_advisory_lock(ns int, rid int, ms_timeout int,
                                                      work regprocedure)
    RETURNS boolean LANGUAGE plpgsql AS $$
    DECLARE
      deadline timestamptz := clock_timestamp() + ms_timeout * interval '1 millisecond';
    BEGIN
      WHILE clock_timestamp() < deadline LOOP
        IF pg_try_advisory_xact_lock(ns, rid) THEN
          EXECUTE 'SELECT ' || work::text;  -- run the (zero-argument) function you passed in
          RETURN TRUE;
        END IF;
        PERFORM pg_sleep(0.05);
      END LOOP;
      RETURN FALSE;
    END$$;

    Usage:

    sql

    SELECT run_with_advisory_lock(10, 42, 2000, 'refresh_store_cache()');

    When not to use them

    • If you truly need to protect rows or tables, use row/table locks or constraints. Advisory locks won’t stop a rogue transaction from writing.
    • If coordination must span multiple databases or non-Postgres systems and you can’t centralize through one DB, consider a distributed lock manager (e.g., Redis Redlock with caution) or a queue that provides the exclusivity you need.

    Cheat sheet of function flavors (mental model)

    • Lifetime: pg_advisory_* locks are session-level (held until you unlock or disconnect); pg_advisory_xact_* locks are transaction-level (auto-released at commit/rollback).
    • Waiting: the plain variants block until the lock is free; the pg_try_* variants return a boolean immediately.
    • Mode: the default is exclusive (one holder); the *_shared variants allow many concurrent holders.
    • Keys: every flavor accepts either one bigint or two ints (separate key spaces).
    • Releasing: pg_advisory_unlock(…) drops one session-level hold, pg_advisory_unlock_all() drops them all; transaction-level locks have no unlock functions.

    TL;DR

    Advisory locks are fast, flexible, and simple: pick a key, agree on the contract, and let Postgres serialize or coordinate the work. Use transaction-level for “do a thing once” code paths; session-level for long-lived roles like “leader.” Namespace with the two-int variants to keep keys tidy, and keep an eye on pg_locks if something feels stuck. They’re one of those features that, once you start using them, make you wonder how you ever lived without them.

