Common Types of Locks

Classification of Locks#

Reentrant/Non-reentrant Locks#

Reference link: What is a Reentrant Lock

In a broad sense, a reentrant lock is one that the same thread can acquire repeatedly, including recursively: a thread that already holds the lock in an outer method can acquire it again in an inner method without deadlocking (provided it is the same lock object, or the same class for static synchronized methods).
In Java, both ReentrantLock and synchronized are reentrant locks, with the following differences:

  • synchronized is implemented by the JVM, while ReentrantLock is implemented at the JDK level (on top of AQS).
  • ReentrantLock lets you choose between a fair and a non-fair lock, while synchronized can only be non-fair.
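
A minimal sketch of what reentrancy means in practice: the inner method re-acquires the lock the outer method already holds, and each lock() is matched by an unlock() (the class and method names are illustrative).

import java.util.concurrent.locks.ReentrantLock;

public class ReentrantDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public void outer() {
        lock.lock();          // first acquisition, hold count = 1
        try {
            inner();          // re-entering the same lock does not deadlock
        } finally {
            lock.unlock();    // hold count back to 0, lock fully released
        }
    }

    private void inner() {
        lock.lock();          // second acquisition by the same thread, hold count = 2
        try {
            // critical section
        } finally {
            lock.unlock();    // hold count = 1, lock still held by this thread
        }
    }
}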

Implementation of Reentrant Lock in ReentrantLock#

Implementation principle: AQS maintains a private volatile int state that counts the number of times the lock has been acquired. When the owning thread acquires the lock again, the counter is simply incremented instead of the thread being blocked, which avoids self-deadlock and saves the cost of repeated acquire/release cycles.

final boolean nonfairTryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        if (compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    // Reentrant case: the current thread already owns the lock, so just increase the count
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0) // overflow
            throw new Error("Maximum lock count exceeded");
        // setState is a final method defined in AbstractQueuedSynchronizer
        setState(nextc);
        return true;
    }
    return false;
}
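
On the release side the count is decremented symmetrically, and the lock is only truly freed once the count drops back to zero. A simplified sketch of the matching tryRelease, based on the JDK sources:

protected final boolean tryRelease(int releases) {
    int c = getState() - releases;
    // Only the owning thread is allowed to release the lock
    if (Thread.currentThread() != getExclusiveOwnerThread())
        throw new IllegalMonitorStateException();
    boolean free = false;
    if (c == 0) {
        // Hold count back to zero: the lock is fully released
        free = true;
        setExclusiveOwnerThread(null);
    }
    setState(c);
    return free;
}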

Fair Locks/Non-fair Locks#

Fair locks guarantee that in a multi-threaded environment threads acquire the lock in the order they requested it, with the earliest waiting thread served first (enforced by a FIFO queue), while non-fair locks make no such guarantee. ReentrantLock and ReentrantReadWriteLock are non-fair by default: a non-fair lock lets a newly arriving thread grab the lock immediately and skip the suspend/resume overhead, which reduces thread suspensions overall. Looking at the code:

// Non-fair lock
final boolean nonfairTryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
      // Key difference is here
        if (compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0) // overflow
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}

// Fair lock
protected final boolean tryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        // hasQueuedPredecessors is the key difference:
        // before trying the CAS, the thread checks that the wait queue is empty or that it is at the head of the queue; only then may it proceed to acquire the lock.
        if (!hasQueuedPredecessors() &&
            compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0)
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}
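
Which of the two paths above is used is decided at construction time. A small usage sketch (the class name is illustrative):

import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        ReentrantLock nonFair = new ReentrantLock();     // default: non-fair, takes the nonfairTryAcquire path
        ReentrantLock fair    = new ReentrantLock(true); // fair, takes the tryAcquire path with hasQueuedPredecessors()
        System.out.println(nonFair.isFair() + " " + fair.isFair()); // prints: false true
    }
}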

Read-Write Locks#

Reference link: In-depth Understanding of Read-Write Lock ReentrantReadWriteLock

How Read-Write Locks Implement Separate Tracking of Read and Write States#

The high 16 bits of the synchronization state variable state are used to represent the number of times the read lock has been acquired, while the low 16 bits represent the number of times the write lock has been acquired.

/** Returns the number of shared holds represented in count  */
// Read lock is a shared lock
static int sharedCount(int c)    { return c >>> SHARED_SHIFT; }
/** Returns the number of exclusive holds represented in count  */
// Write lock is an exclusive lock
static int exclusiveCount(int c) { return c & EXCLUSIVE_MASK; }
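
For reference, these helpers rely on a handful of constants defined alongside them in ReentrantReadWriteLock.Sync:

static final int SHARED_SHIFT   = 16;
static final int SHARED_UNIT    = (1 << SHARED_SHIFT);     // adding this to state raises the read count by 1
static final int MAX_COUNT      = (1 << SHARED_SHIFT) - 1; // 65535, the maximum hold count for either lock
static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1; // mask for the low 16 bits, i.e. the write count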

How Write Locks Are Acquired and Released#

Acquiring a Write Lock#

A write lock is an exclusive lock, meaning it cannot be acquired by multiple threads at the same time. The synchronization semantics of acquiring a write lock are implemented by overriding the tryAcquire method in AQS:

The main logic is: if the read lock has already been acquired by any thread, or the write lock is held by another thread, acquiring the write lock fails; otherwise it succeeds, and a reentrant acquisition by the owning thread simply increases the write count.
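
A simplified sketch of that logic, modeled on the JDK's ReentrantReadWriteLock.Sync.tryAcquire (writerShouldBlock is where the fair and non-fair policies differ):

protected final boolean tryAcquire(int acquires) {
    Thread current = Thread.currentThread();
    int c = getState();
    int w = exclusiveCount(c);
    if (c != 0) {
        // Non-zero state with w == 0 means the read lock is held;
        // otherwise the write lock may be held by another thread -> fail
        if (w == 0 || current != getExclusiveOwnerThread())
            return false;
        if (w + exclusiveCount(acquires) > MAX_COUNT)
            throw new Error("Maximum lock count exceeded");
        // Reentrant acquisition by the owning thread: just raise the write count
        setState(c + acquires);
        return true;
    }
    // State is zero: try to take the write lock with a CAS
    if (writerShouldBlock() ||
        !compareAndSetState(c, c + acquires))
        return false;
    setExclusiveOwnerThread(current);
    return true;
}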

Releasing a Write Lock#

Releasing a write lock is done by overriding the tryRelease method in AQS.

protected final boolean tryRelease(int releases) {
    if (!isHeldExclusively())
        throw new IllegalMonitorStateException();
    // 1. Subtract the released count from the synchronization state
    int nextc = getState() - releases;
    // 2. If the write count is now 0, the write lock is fully released
    boolean free = exclusiveCount(nextc) == 0;
    if (free)
        setExclusiveOwnerThread(null);
    // 3. Update the synchronization state (done whether or not the lock was fully released)
    setState(nextc);
    return free;
}

How Read Locks Are Acquired and Released#

Acquiring a Read Lock#

When the write lock is held by another thread, acquiring the read lock fails; otherwise it succeeds by updating the synchronization state with CAS. The state is incremented by SHARED_UNIT because the read count lives in the high 16 bits of the synchronization state. If the CAS fails, or a thread that already holds the read lock acquires it again, the work is handed off to the fullTryAcquireShared method.
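
A trimmed sketch of that path, based on the JDK's tryAcquireShared (the per-thread hold-count bookkeeping is omitted here):

protected final int tryAcquireShared(int unused) {
    Thread current = Thread.currentThread();
    int c = getState();
    // Fail if the write lock is held by some other thread
    if (exclusiveCount(c) != 0 &&
        getExclusiveOwnerThread() != current)
        return -1;
    int r = sharedCount(c);
    // Fast path: not required to block, no overflow, and the CAS adds SHARED_UNIT to the high 16 bits
    if (!readerShouldBlock() &&
        r < MAX_COUNT &&
        compareAndSetState(c, c + SHARED_UNIT)) {
        // ... per-thread read-hold bookkeeping (firstReader / HoldCounter) omitted
        return 1;
    }
    // CAS failed, the reader should block, or the count overflowed: fall back to the full retry loop
    return fullTryAcquireShared(current);
}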

Releasing a Read Lock#

The implementation of releasing a read lock is mainly done through the tryReleaseShared method, as shown below:

protected final boolean tryReleaseShared(int unused) {
    Thread current = Thread.currentThread();
    // The first part maintains the per-thread read-hold counts used by methods such as getReadHoldCount()
    if (firstReader == current) {
        // assert firstReaderHoldCount > 0;
        if (firstReaderHoldCount == 1)
            firstReader = null;
        else
            firstReaderHoldCount--;
    } else {
        HoldCounter rh = cachedHoldCounter;
        if (rh == null || rh.tid != getThreadId(current))
            rh = readHolds.get();
        int count = rh.count;
        if (count <= 1) {
            readHolds.remove();
            if (count <= 0)
                throw unmatchedUnlockException();
        }
        --rh.count;
    }
    for (;;) {
        int c = getState();
		// To release the read lock, simply subtract the read state from the synchronization state
        int nextc = c - SHARED_UNIT;
        if (compareAndSetState(c, nextc))
            // Releasing the read lock has no effect on readers,
            // but it may allow waiting writers to proceed if
            // both read and write locks are now free.
            return nextc == 0;
    }
}

Optimistic Locks/Pessimistic Locks#

Reference link: Optimistic Locks and Pessimistic Locks Essential for Interviews

Optimistic Locks#

Optimistic locking always assumes the best case: every time data is read, it is assumed that nobody else will modify it, so no lock is taken. Only when updating does it check whether another thread changed the data in the meantime, typically via a version-number mechanism or the CAS algorithm.
Optimistic locks are suited to read-heavy workloads, where avoiding locks improves throughput.
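
A minimal sketch of an optimistic, lock-free update built on CAS, using AtomicInteger (the class and method names are illustrative):

import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Read the current value, compute the new one, then CAS it in;
    // if another thread updated the value in between, retry.
    public int increment() {
        for (;;) {
            int current = value.get();
            int next = current + 1;
            if (value.compareAndSet(current, next))
                return next;
        }
    }
}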

Disadvantages of Optimistic Locks (CAS Algorithm Flaws)#

  1. The ABA problem (addressed by the AtomicStampedReference class, available since JDK 1.5; see the sketch after this list).
  2. Long spin times incur high overhead: a spinning CAS keeps retrying until it succeeds, and if it keeps failing for a long time it wastes significant CPU.
  3. Atomicity is only guaranteed for a single shared variable: CAS works on one shared variable at a time, so an operation spanning multiple shared variables cannot be made atomic with CAS alone. Since JDK 1.5, AtomicReference makes it possible to wrap several variables in one object and swap the whole object with a single CAS.
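
A small sketch of how AtomicStampedReference sidesteps the ABA problem by pairing each value with a stamp (the demo values are arbitrary):

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);

        int[] stampHolder = new int[1];
        Integer expected = ref.get(stampHolder);   // read the value and its stamp together
        int expectedStamp = stampHolder[0];

        // Had another thread changed 100 -> 50 -> 100 in the meantime, each change would
        // have bumped the stamp and this stale CAS would fail; here nothing intervened, so it succeeds.
        boolean ok = ref.compareAndSet(expected, 101, expectedStamp, expectedStamp + 1);
        System.out.println("CAS succeeded: " + ok);
    }
}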

Pessimistic Locks#

Pessimistic locking assumes the worst case: every time data is accessed, it is assumed that someone else will modify it, so a lock is taken before each access. Any other thread that wants the data then blocks until it can acquire the lock (the shared resource is used by only one thread at a time; the others wait, and the resource passes to the next thread once the holder is done).

  1. Traditional relational databases use many such locking mechanisms, such as row locks, table locks, read locks, and write locks, all of which lock the data before operating on it.
  2. In Java, exclusive locks such as synchronized and ReentrantLock are implementations of the pessimistic-lock idea, as the sketch below shows.
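
A minimal sketch of a counter guarded pessimistically, here with synchronized (ReentrantLock works the same way with explicit lock()/unlock(); the class name is illustrative):

public class PessimisticCounter {
    private int value = 0;

    // Every access takes the lock first; other threads block until it is released.
    public synchronized int increment() {
        return ++value;
    }

    public synchronized int get() {
        return value;
    }
}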