
How banks handle multiple concurrent transactions with 0 errors

Filed under: Solid System Design on 2026-01-02 14:00:14

I was learning about deadlocks and came across the claim that when multiple threads lock the same row, it creates a deadlock.

So I asked ChatGPT: if 500 managers are working on 100 bank accounts simultaneously, will it cause a deadlock, and how do we avoid it?

ChatGPT replied:

Short answer first (then deep explanation):

No — this will NOT cause deadlock by itself.
But it WILL cause contention, waiting, and throughput limits if designed incorrectly.

Now let’s break it down properly, the way banks and payment systems reason.

1️⃣ Deadlock vs Contention (CRITICAL DIFFERENCE)

Deadlock ❌

  • Two or more transactions
  • Each holds a lock
  • Each waits for another lock
  • Circular wait

Contention ✅

  • Many transactions want the same lock
  • One gets it
  • Others wait
  • No circular wait

👉 500 managers on 100 accounts = contention, not deadlock

2️⃣ Case 1 — Single-Account Operations (Deposit / Withdraw)

What each transaction locks

ONLY account_id = X

Result

  • At most 1 lock per transaction
  • Others wait in DB queue
  • Deadlock impossible

Why?

Deadlock needs a circular wait, which requires each transaction to hold one lock while waiting for another. With only one lock per transaction, no cycle can form.

✔ Safe
✔ Correct
✔ Slow under load (expected)
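The single-lock case can be sketched in plain Java, with an in-memory per-account lock standing in for the database's row lock. The class and method names here are illustrative, not a real banking API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Each account has its own lock, and a deposit/withdraw takes exactly
// ONE lock — so a circular wait (and therefore deadlock) cannot form.
class SingleAccountOps {
    private final Map<Long, long[]> balances = new ConcurrentHashMap<>();
    private final Map<Long, ReentrantLock> locks = new ConcurrentHashMap<>();

    private ReentrantLock lockFor(long accountId) {
        return locks.computeIfAbsent(accountId, id -> new ReentrantLock());
    }

    public void deposit(long accountId, long amount) {
        ReentrantLock lock = lockFor(accountId);
        lock.lock();                 // the only lock this operation ever holds
        try {
            balances.computeIfAbsent(accountId, id -> new long[1])[0] += amount;
        } finally {
            lock.unlock();
        }
    }

    public long balance(long accountId) {
        ReentrantLock lock = lockFor(accountId);
        lock.lock();
        try {
            long[] b = balances.get(accountId);
            return b == null ? 0 : b[0];
        } finally {
            lock.unlock();
        }
    }
}
```

Concurrent deposits to the same account simply queue on that one lock — contention, never deadlock.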

3️⃣ Case 2 — Transfers Between Accounts (A → B)

Locks taken

account A
account B

Can this deadlock?

YES — if lock order is inconsistent

Example deadlock scenario

T1: lock A → waiting for B
T2: lock B → waiting for A

👉 Deadlock

4️⃣ Why Banks Still Handle Millions of Transfers

The Golden Fix

Always lock accounts in a global order

lockAccount(Math.min(fromId, toId));  // always lock the smaller account id first
lockAccount(Math.max(fromId, toId));  // then the larger one

✔ Removes circular wait
✔ Deadlock impossible
✔ Works at any scale
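The golden fix can be sketched in plain Java, again with in-memory locks standing in for row locks (all names are illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// A transfer always locks the LOWER account id first, so two opposing
// transfers (A→B and B→A) contend on the same first lock instead of
// holding each other's — the circular wait becomes impossible.
class OrderedTransfer {
    private final ConcurrentHashMap<Long, Long> balances = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<Long, ReentrantLock> locks = new ConcurrentHashMap<>();

    private ReentrantLock lockFor(long id) {
        return locks.computeIfAbsent(id, k -> new ReentrantLock());
    }

    public void setBalance(long id, long amount) { balances.put(id, amount); }
    public long balance(long id) { return balances.getOrDefault(id, 0L); }

    public void transfer(long fromId, long toId, long amount) {
        long first = Math.min(fromId, toId);   // global order: smaller id first
        long second = Math.max(fromId, toId);
        ReentrantLock l1 = lockFor(first);
        ReentrantLock l2 = lockFor(second);
        l1.lock();
        l2.lock();
        try {
            balances.merge(fromId, -amount, Long::sum);
            balances.merge(toId, amount, Long::sum);
        } finally {
            l2.unlock();                       // release in reverse order
            l1.unlock();
        }
    }
}
```

Note that the lock order depends only on the account ids, never on the transfer direction — that is the whole trick.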

5️⃣ What Happens with 500 Managers & Proper Design

Internally

  • DB creates row-level lock queue
  • Requests are processed serially per account
  • Parallelism across different accounts

Example

Account 1 → 20 concurrent txns → serialized
Account 2 → 15 concurrent txns → serialized
...
Account 100 → 3 concurrent txns → serialized

Total throughput = sum of independent accounts.

6️⃣ Why Databases Are Good at This

Databases:

  • Are written in highly optimized C/C++
  • Use optimized lock managers
  • Detect deadlocks automatically
  • Kill one transaction safely

👉 Never try to reimplement this in Java

7️⃣ When Deadlocks Still Happen (Real Life)

Even with good ordering, deadlocks can occur if:

  • You lock extra tables inside transaction
  • You read rows in different order
  • You use SELECT before SELECT FOR UPDATE
  • Indexes differ
  • Foreign key cascades fire

That’s why retry is mandatory.

8️⃣ Production Pattern (Bank-Grade)

import org.springframework.dao.CannotAcquireLockException;
import org.springframework.dao.DeadlockLoserDataAccessException;
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Retryable;
import org.springframework.transaction.annotation.Transactional;

@Transactional                       // keep the transaction short: lock, update, commit
@Retryable(
  value = {
    DeadlockLoserDataAccessException.class,  // the DB chose us as deadlock victim
    CannotAcquireLockException.class         // lock could not be acquired in time
  },
  maxAttempts = 3,
  backoff = @Backoff(delay = 50)     // 50 ms between attempts
)
public void transfer(...) {
   ...
}

✔ Retry-safe
✔ Idempotent
✔ Short transactions
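Outside Spring, the same pattern is just a loop. A minimal sketch, with a hypothetical DeadlockException standing in for whatever exception your driver actually throws when the database kills a deadlock victim:

```java
import java.util.function.Supplier;

// Retry-on-deadlock as a plain loop: run the transaction; if the database
// aborted it as a deadlock victim, back off briefly and try again.
class Retry {
    static class DeadlockException extends RuntimeException {}

    static <T> T withRetry(Supplier<T> txn, int maxAttempts, long backoffMillis) {
        for (int attempt = 1; ; attempt++) {
            try {
                return txn.get();
            } catch (DeadlockException e) {
                if (attempt >= maxAttempts) throw e;   // give up after N tries
                try {
                    Thread.sleep(backoffMillis * attempt);  // simple linear backoff
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw e;
                }
            }
        }
    }
}
```

This only stays safe if the transaction body is idempotent — a retried transfer must not apply twice.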

9️⃣ Scaling Beyond This (Advanced)

Banks don’t stop here:

Techniques

  • Shard accounts
  • Partition by accountId
  • Event sourcing
  • Command queue per account
  • Optimistic locking for low-contention accounts

But locking still exists underneath.
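As a sketch of the last technique: optimistic locking can be as simple as a compare-and-set on a versioned snapshot. The class below is illustrative, not a real banking API — in a database this same idea is usually an UPDATE guarded by a version column:

```java
import java.util.concurrent.atomic.AtomicReference;

// Optimistic locking for a low-contention account: read the current
// (version, balance) snapshot, compute the new state, and install it only
// if nobody else wrote in between — otherwise retry on the fresh state.
class OptimisticAccount {
    private static final class State {
        final long version;
        final long balance;
        State(long version, long balance) {
            this.version = version;
            this.balance = balance;
        }
    }

    private final AtomicReference<State> state =
        new AtomicReference<>(new State(0, 0));

    public void deposit(long amount) {
        while (true) {
            State cur = state.get();
            State next = new State(cur.version + 1, cur.balance + amount);
            if (state.compareAndSet(cur, next)) return;  // no conflict: done
            // conflict: another writer got there first — loop and retry
        }
    }

    public long balance() { return state.get().balance; }
}
```

No lock is ever held, which is why this wins when conflicts are rare — and loses (spins) when contention is high.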

👉 Final Mental Model (Remember Forever)

Deadlock = bad lock ordering
Contention = too many people in line

Banks accept contention.
They eliminate deadlocks.

✅ Final Answer (One Line)

500 managers on 100 accounts will NOT cause deadlock if each transaction locks rows in a consistent order. It will only cause waiting, which is expected and safe.


About Author: Neha Sharma