The Implementation Principle of Synchronized#
synchronized ensures that at any given time only one thread can execute inside the critical section, and it also guarantees the memory visibility of shared variables.
Java Object Header and Monitor#
The Java object header and monitor are the foundation for implementing synchronized.
Object Header#
The object header in the HotSpot virtual machine mainly contains two parts of data: the Mark Word and the Klass Pointer. The Klass Pointer points to the object's class metadata; the virtual machine uses it to determine which class this object is an instance of. The Mark Word stores the object's runtime data, such as the hash code (hashCode), GC generational age, lock state flags, the thread holding the lock, the biased thread ID, the bias timestamp, and so on. It is the key to implementing lightweight locks and biased locks.
Monitor#
Think of the monitor as the JVM-level counterpart of a lock in an operating system: every Java object is associated with a monitor, and a thread owns that monitor while it executes the object's synchronized code.
Scope of Action#
Every object in Java can act as a lock, which is the basis for how synchronized is implemented:

- When synchronized modifies an instance method, the lock is the current instance object.
- When synchronized modifies a static method, the lock is the Class object of the current class.
- When synchronized modifies a code block, the lock is the object inside the parentheses.
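The three forms above can be sketched as follows (the class and method names are illustrative, not from the original text):

```java
// Illustrative sketch of the three synchronized forms and the lock each one uses.
class Counter {
    private static int staticCount = 0;
    private int count = 0;

    // Instance method: the lock is the current instance (this)
    public synchronized void incr() {
        count++;
    }

    // Static method: the lock is the Class object, Counter.class
    public static synchronized void incrStatic() {
        staticCount++;
    }

    // Code block: the lock is the object inside the parentheses
    public void incrBlock() {
        synchronized (this) {
            count++;
        }
    }

    public int get() { return count; }

    // Demo: two threads racing on the same instance still produce an exact count,
    // because incr() and the synchronized (this) block share the same lock.
    static int demo() {
        Counter c = new Counter();
        Runnable viaMethod = () -> { for (int i = 0; i < 1000; i++) c.incr(); };
        Runnable viaBlock  = () -> { for (int i = 0; i < 1000; i++) c.incrBlock(); };
        Thread t1 = new Thread(viaMethod), t2 = new Thread(viaBlock);
        t1.start(); t2.start();
        try { t1.join(); t2.join(); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return c.get();
    }
}
```

Because both paths lock `this`, the two threads never interleave inside the critical section, and the demo deterministically counts every increment.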
Synchronized code blocks are implemented with the monitorenter and monitorexit bytecode instructions, while synchronized methods rely on the ACC_SYNCHRONIZED method access flag.
Analysis of Lock Competition in Multithreading#
Lock Object#
Situation 1:
Two threads call two different synchronized methods on the same object.
Result: Mutual exclusion occurs.
Explanation: Because the lock is the object itself, while one thread is executing a synchronized method on that object, any other synchronized method on the same object must wait until the first thread finishes and releases the lock.
Situation 2:
Two threads call the same synchronized method on two different objects.
Result: No mutual exclusion occurs.
Explanation: Because they are two different objects and the lock belongs to the object, not the method, the calls can execute concurrently without mutual exclusion. To visualize it: each thread works on its own object, so there are two rooms and two keys.
Class Lock#
Situation 1:
Two threads use the class directly to call two different static synchronized methods.
Result: Mutual exclusion occurs.
Explanation: Locking on the class (.class) means there is only one Class object and therefore only one lock; it can be pictured as a single space with N rooms sharing one key, so the rooms (static synchronized methods) must exclude each other.
Note: This is the same as using a singleton object to call its non-static synchronized methods: since there is only that one object, access to its synchronized methods must be mutually exclusive.
Situation 2:
Two threads use the same static object of a class to call its synchronized methods (both static, or both non-static).
Result: Mutual exclusion occurs.
Explanation: Because the same single object is doing the calling, this is analogous to Situation 1.
Situation 3:
Two threads use one object to call a static synchronized method and a non-static synchronized method, respectively.
Result: No mutual exclusion occurs.
Explanation: Although a single object is involved, the two methods lock different things: the static synchronized method locks the class's Class object, while the non-static one locks the instance. Since the two locks are different objects, the methods do not exclude each other and can execute concurrently.
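This can be checked directly with Thread.holdsLock, which reports whether the calling thread currently owns a given object's monitor (the class and method names below are illustrative):

```java
// Sketch: a static synchronized method holds the Class object's monitor,
// while an instance synchronized method holds the instance's monitor.
class LockKindDemo {
    // Returns {holds this, holds LockKindDemo.class} from inside an instance method
    synchronized boolean[] instanceLocks() {
        return new boolean[] {
            Thread.holdsLock(this),
            Thread.holdsLock(LockKindDemo.class)
        };
    }

    // Returns {holds the instance, holds LockKindDemo.class} from inside a static method
    static synchronized boolean[] staticLocks(LockKindDemo d) {
        return new boolean[] {
            Thread.holdsLock(d),
            Thread.holdsLock(LockKindDemo.class)
        };
    }
}
```

Inside the instance method only the instance's monitor is held; inside the static method only the Class object's monitor is held, which is why the two never exclude each other.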
Lock Optimization#
JDK 1.6 introduced a lot of optimizations for lock implementation, such as spin locks, adaptive spin locks, lock elimination, lock coarsening, biased locks, lightweight locks, etc., to reduce the overhead of lock operations. Locks mainly exist in four states: no lock state, biased lock state, lightweight lock state, and heavyweight lock state, which will gradually upgrade with the intensity of competition. Note that locks can upgrade but cannot downgrade; this strategy is to improve the efficiency of acquiring and releasing locks.
Lock Elimination#
To ensure data integrity, we sometimes need to synchronize certain operations. In some cases, however, the JVM can prove that no contention on shared data is possible, and it will then eliminate such locks, using escape analysis as the supporting evidence.
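A classic candidate looks like this (a sketch; the JIT decides at runtime, so the elimination itself cannot be observed from Java code): StringBuffer.append() is synchronized, but when the buffer never escapes the method, escape analysis lets the JIT drop those locks.

```java
// Sketch: `sb` is confined to this method and never escapes, so the JIT can
// prove no other thread can ever contend for it and eliminate the append() locks.
class ConcatDemo {
    static String concat(String a, String b, String c) {
        StringBuffer sb = new StringBuffer(); // thread-confined object
        sb.append(a);
        sb.append(b);
        sb.append(c);
        return sb.toString();
    }
}
```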
Lock Coarsening#
A series of consecutive lock and unlock operations causes unnecessary performance loss, so lock coarsening merges multiple adjacent lock/unlock operations on the same object into a single lock with a wider scope.
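For example (a sketch): each append() below acquires and releases the StringBuffer's lock, so the loop performs n back-to-back lock/unlock pairs; the JIT may coarsen them into a single acquisition held across the whole loop.

```java
// Sketch: n consecutive lock/unlock pairs on the same object are a
// candidate for coarsening into one lock held across the loop.
class RepeatDemo {
    static String repeat(String s, int n) {
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < n; i++) {
            sb.append(s); // each call locks and unlocks sb
        }
        return sb.toString();
    }
}
```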
Spin Lock#
Blocking and waking a thread requires the CPU to switch between user mode and kernel mode. Frequent blocking and waking is a heavy burden for the CPU and puts great pressure on a system's concurrent performance (compare zero-copy techniques, which likewise gain performance by avoiding unnecessary mode switches).
What is a spin lock? With a spin lock, a thread that fails to acquire the lock is not suspended immediately; it waits for a short while, repeatedly checking whether the lock holder releases the lock soon. How does it wait? By executing a meaningless busy loop (spinning).
Spin waiting cannot replace blocking. Setting aside the processor-count requirement (spinning only pays off on multi-core machines, which is essentially all of them now), spinning avoids the overhead of thread switching but occupies processor time. If the lock holder releases the lock quickly, spinning is very efficient; otherwise the spinning thread burns processor cycles without doing any useful work, a classic case of occupying the seat without getting anything done, which wastes performance.
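A user-level spin lock can be sketched with a CAS loop (the JVM's own spinning happens inside the lock implementation, so this is only an illustration; Thread.onSpinWait requires JDK 9+):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal spin-lock sketch: failed acquirers busy-wait instead of blocking.
class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        // Spin: retry the CAS from false -> true until it succeeds
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // CPU hint that we are in a busy-wait loop (JDK 9+)
        }
    }

    void unlock() {
        locked.set(false);
    }

    // Demo: two threads increment a shared counter under the spin lock
    static int demo() {
        SpinLock lock = new SpinLock();
        int[] count = {0}; // array so the lambda can mutate it
        Runnable r = () -> {
            for (int i = 0; i < 1000; i++) {
                lock.lock();
                try { count[0]++; } finally { lock.unlock(); }
            }
        };
        Thread t1 = new Thread(r), t2 = new Thread(r);
        t1.start(); t2.start();
        try { t1.join(); t2.join(); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return count[0];
    }
}
```

Note that a loser of the CAS race keeps the CPU busy until the winner calls unlock(), which is exactly the trade-off described above.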
Adaptive Spin Lock#
JDK 1.6 introduced a smarter spin lock, known as the adaptive spin lock. Adaptive means that the number of spins is no longer fixed; it is determined by the previous spin time on the same lock and the state of the lock owner. If a thread spins successfully, the next spin count will increase. Conversely, if there are rarely successful spins for a certain lock, the spin count will decrease or even skip the spinning process in the future to avoid wasting processor resources.
Lightweight Lock#
The performance gain of lightweight locks rests on the premise that "for the vast majority of locks, there is no contention during their entire lifetime." If this premise is broken, the CAS operations are added on top of the mutual-exclusion overhead, so under multi-thread contention lightweight locks are actually slower than heavyweight locks.
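The uncontended fast path can be modeled as a single CAS (a toy analogy only; the real Mark Word lives in the object header and is swung to point at a lock record on the owner's stack):

```java
import java.util.concurrent.atomic.AtomicReference;

// Toy model of lightweight-lock acquisition: one CAS on a stand-in "mark word".
class ToyLightweightLock {
    private final AtomicReference<Thread> markWord = new AtomicReference<>(null);

    // Fast path: succeeds with a single CAS when there is no contention
    boolean tryAcquire() {
        return markWord.compareAndSet(null, Thread.currentThread());
    }

    // Release only if the calling thread is the owner
    void release() {
        markWord.compareAndSet(Thread.currentThread(), null);
    }
}
```

When the CAS fails under contention, the real JVM inflates the lock into a heavyweight monitor, which is why a contended lightweight lock costs more than going straight to a heavyweight lock.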
Biased Lock#
Locking and unlocking a lightweight lock still requires CAS atomic instructions. A biased lock goes further: it records the biased thread in the Mark Word, so when the same thread re-enters, it only needs to check that the Mark Word is still biased toward it and can skip the CAS and execute the synchronized block directly.
Heavyweight Lock#
Heavyweight locks are implemented through the monitor inside the object, and the monitor in turn relies on the underlying operating system's Mutex Lock. Blocking or waking a thread forces a switch from user mode to kernel mode, which is very costly.