Friday 9 January 2015

Synchronization is a lot more than Mutual Exclusion

In multiprocessor systems, processors generally have one or more layers of memory cache. A cache speeds up data access by keeping data closer to the processor, and it reduces traffic on the shared memory bus because many memory operations can be satisfied from the cache alone.
Caching, however, affects the concurrent execution of a program: different processors may temporarily see different values for the same memory location.
At the processor level, a memory model defines necessary and sufficient conditions for knowing that writes to memory by other processors are visible to the current processor, and writes by the current processor are visible to other processors.
Types of memory model:
  1. Strong memory model: for a given memory location, all processors always see the same value.
  2. Weaker memory model: special instructions, called memory barriers, are required to flush or invalidate the local processor cache in order to see writes made by other processors, or to make writes by this processor visible to others.
These memory barriers are usually performed when lock and unlock actions are taken; they are invisible to programmers in a high level language.

Recent trends in processor design have encouraged weaker memory models, because the relaxations they make for cache consistency allow for greater scalability across multiple processors and larger amounts of memory.

The issue of when a write becomes visible to another thread is compounded by the compiler's reordering of code.
A simple example of this can be seen in the following code:
class Reordering {
  int x = 0, y = 0;
  public void writer() {
    x = 1;
    y = 2;
  }

  public void reader() {
    int r1 = y;
    int r2 = x;
  }
}

Let's say that this code is executed in two threads concurrently, and the read of y sees the value 2. Because this write came after the write to x, the programmer might assume that the read of x must see the value 1. However, the writes may have been reordered. If this takes place, then the write to y could happen, the reads of both variables could follow, and then the write to x could take place. The result would be that r1 has the value 2, but r2 has the value 0.
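One common way to rule out that outcome is to make y volatile. This is a minimal sketch (the class name is illustrative): the volatile write to y is ordered after the write to x, and the volatile read of y is ordered before the read of x, so a reader that sees y == 2 is guaranteed to see x == 1.

```java
// Sketch: declaring y volatile prevents the problematic reordering.
class ReorderingFixed {
    int x = 0;
    volatile int y = 0;

    public void writer() {
        x = 1;   // ordinary write, ordered before the volatile write below
        y = 2;   // volatile write: publishes everything written so far
    }

    public int[] reader() {
        int r1 = y;              // volatile read: acquires the writer's memory effects
        int r2 = x;              // if r1 == 2, r2 is guaranteed to be 1
        return new int[]{r1, r2};
    }
}
```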

The Java Memory Model describes what behaviors are legal in multithreaded code, and how threads may interact through memory. It describes the relationship between variables in a program and the low-level details of storing and retrieving them to and from memory or registers in a real computer system. It does this in a way that can be implemented correctly using a wide variety of hardware and a wide variety of compiler optimizations.

There are a number of cases in which accesses to program variables (object instance fields, class static fields, and array elements) may appear to execute in a different order than was specified by the program. The compiler is free to take liberties with the ordering of instructions in the name of optimization. Processors may execute instructions out of order under certain circumstances. Data may be moved between registers, processor caches, and main memory in a different order than specified by the program.
For example, if a thread writes to field a and then to field b, and the value of b does not depend on the value of a, then the compiler is free to reorder these operations, and the cache is free to flush b to main memory before a. There are a number of potential sources of reordering, such as the compiler, the JIT, and the cache.

Incorrectly synchronized code can mean different things to different people. When we talk about incorrectly synchronized code in the context of the Java Memory Model, we mean any code where
1. there is a write of a variable by one thread,
2. there is a read of the same variable by another thread, and
3. the write and read are not ordered by synchronization.
When these rules are violated, we say we have a data race on that variable. A program with a data race is an incorrectly synchronized program.

What does synchronization do?

Synchronization has several aspects.
1. Mutual exclusion: only one thread can hold a monitor at once, so synchronizing on a monitor means that once one thread enters a synchronized block protected by a monitor, no other thread can enter a block protected by that monitor until the first thread exits the synchronized block.
2. Visibility: synchronization ensures that memory writes made by a thread before or during a synchronized block are made visible in a predictable manner to other threads that synchronize on the same monitor.
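A hedged sketch of both aspects, using a shared counter (the names here are illustrative). Mutual exclusion makes the increment atomic, and the monitor's release/acquire makes each thread's writes visible to the next thread that takes the lock:

```java
class SafeCounter {
    private int count = 0;

    public synchronized void increment() { count++; } // one thread at a time
    public synchronized int get() { return count; }   // sees all prior increments

    // Two threads each increment 10_000 times; with synchronization the
    // result is always exactly 20_000. Without it, updates could be lost
    // or never become visible to the reading thread.
    static int demo() {
        SafeCounter c = new SafeCounter();
        Runnable task = () -> { for (int i = 0; i < 10_000; i++) c.increment(); };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        try {
            t1.join(); t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return c.get();
    }
}
```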

Exit synchronized block --> Release monitor --> Flush cache to main memory.
(Writes made by this thread are now visible to all other threads.)

Enter synchronized block --> Acquire monitor --> Invalidate local processor cache, so values are reloaded from main memory.
(Now this thread can see changes made by other threads.)

The Java Memory Model (revised in JSR 133) defines a partial ordering on memory operations, known as happens-before.
When one action happens before another, the first is guaranteed to be ordered before and visible to the second. The rules of this ordering are as follows:
  • Each action in a thread happens before every action in that thread that comes later in the program's order.
  • An unlock on a monitor happens before every subsequent lock on that same monitor.
  • A write to a volatile field happens before every subsequent read of that same volatile.
  • A call to start() on a thread happens before any actions in the started thread.
  • All actions in a thread happen before any other thread successfully returns from a join() on that thread.
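The start() and join() rules can be sketched on their own, with no locks or volatiles, because those two calls themselves create the ordering (the class and field names here are illustrative):

```java
class StartJoinVisibility {
    static int data = 0;

    static int demo() {
        data = 42;                     // happens-before t.start()
        Thread t = new Thread(() -> {
            // guaranteed to see data == 42, by the start() rule
            data = data + 1;
        });
        t.start();
        try {
            t.join();                  // all actions in t happen-before join() returns
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return data;                   // guaranteed to see 43, by the join() rule
    }
}
```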

synchronized (new Object()) {}
will not work: the compiler is free to remove this block entirely, because it knows that no other thread can ever synchronize on that monitor, so the block has no useful memory effects.
Writing to a volatile field has the same memory effect as a monitor release, and reading from a volatile field has the same memory effect as a monitor acquire. 
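This release/acquire behavior of volatile enables the classic publication idiom, sketched here with illustrative names. The volatile write to the flag "releases" every write made before it, and the volatile read "acquires" them:

```java
class VolatileExample {
    int x = 0;
    volatile boolean v = false;

    public void writer() {
        x = 42;    // ordinary write
        v = true;  // volatile write: acts like a monitor release
    }

    public Integer reader() {
        if (v) {           // volatile read: acts like a monitor acquire
            return x;      // guaranteed to see 42
        }
        return null;       // flag not yet set
    }
}
```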


