How is lock able to synchronize multiple threads?


All replies

  • Basically, when thread A enters a monitor on (locks) object X, the CLR assigns object X a sync block that records, among other things, the fact that thread A is currently holding a lock on that object.

    Now, when thread B attempts to enter that same monitor on object X, the CLR sees that thread A is currently holding that lock and puts thread B to sleep until thread A releases the lock (returns from the lock(X) {} block), effectively guaranteeing that those two threads won't enter the monitor at the same time. A minimal code sketch follows this reply.


    Teo Selenius

    Sunday, May 26, 2013 4:33 PM
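
    A minimal C# sketch of the scenario described above (class and variable names are illustrative, not taken from the thread): each thread blocks inside the lock statement until the other releases the monitor on X, so the increments never interleave.

    using System;
    using System.Threading;

    class LockDemo
    {
        // The shared lock object; the CLR attaches a sync block to it when contention occurs.
        private static readonly object X = new object();
        private static int _counter;

        static void Main()
        {
            // Two threads race on the same monitor; only one can be inside at a time.
            Thread a = new Thread(Work);
            Thread b = new Thread(Work);
            a.Start();
            b.Start();
            a.Join();
            b.Join();
            Console.WriteLine(_counter); // Always 200000: the increments never interleave.
        }

        static void Work()
        {
            for (int i = 0; i < 100000; i++)
            {
                lock (X) // Monitor.Enter(X) ... Monitor.Exit(X) under the hood.
                {
                    _counter++; // The read-modify-write is safe only inside the lock.
                }
            }
        }
    }
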
  • Thanks for the reply. Don't you think the race to capture the sync block creates exactly the same situation that we use synchronization for? I mean, we synchronize things when there is a race condition between threads, and holding the sync block requires the same kind of race!
    Monday, May 27, 2013 4:20 AM
  • Hi Arnet,

    >>holding the sync block requires the same kind of race!

    The sync flag/object is unique: once one thread acquires it, the others cannot acquire it anymore, and it is visible to every contending thread. The race for the sync block is not the same as the race for the resource. When a thread tries to take the sync object, the check and update happen on a single register-sized value in one atomic step, so the value it observes is always the newest one and reliably indicates whether the lock is held. The resource object, by contrast, lives in ordinary memory; that memory may not have been updated yet even after the lock word changed, so one more step is needed to bring the memory up to date. A sketch of this idea follows this reply.

    Best regards,


    Mike Feng
    MSDN Community Support | Feedback to us
    Develop and promote your apps in Windows Store
    Please remember to mark the replies as answers if they help and unmark them if they provide no help.

    • Marked as answer by arnet11 Monday, May 27, 2013 4:15 PM
    Monday, May 27, 2013 5:42 AM
    Moderator
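
    To illustrate why racing for the lock flag itself is harmless, here is a rough sketch of the idea using Interlocked.CompareExchange. This is a simplified spin flag for illustration only, not how the CLR's sync block is actually implemented: the flag is a single word claimed by one atomic hardware instruction, so exactly one thread wins and every other thread always observes the up-to-date value.

    using System.Threading;

    class SpinFlag
    {
        private int _taken; // 0 = free, 1 = held: a single word the hardware can update atomically.

        public void Enter()
        {
            // Atomically: if _taken is 0, set it to 1 and return the old value.
            // Only one thread can observe the 0 -> 1 transition; everyone else spins.
            while (Interlocked.CompareExchange(ref _taken, 1, 0) != 0)
            {
                Thread.Yield(); // Give the current holder a chance to run.
            }
        }

        public void Exit()
        {
            // An interlocked write makes the release immediately visible to the spinners.
            Interlocked.Exchange(ref _taken, 0);
        }
    }
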
  • Thanks Mike,

    Can you give me a link or material from which I can understand what part of the object goes into a register and how long it stays there?

    If it is related to a value in a register, can I say that I can achieve the same behavior with volatile?

    Monday, May 27, 2013 4:19 PM
  • Thanks for the reply. Don't you think the race to capture the sync block creates exactly the same situation that we use synchronization for? I mean, we synchronize things when there is a race condition between threads, and holding the sync block requires the same kind of race!

    The operating system and the underlying physical hardware provide mechanisms for resolving those situations. They handle such scenarios as multiple physical processors, each with its own cache, non-uniform memory access, and so on. .NET takes advantage of those mechanisms.

    Rest assured that the hardware problems of multi-threaded, multi-process (and even distributed multi-machine) synchronization have been solved and the operating system provides useful synchronization mechanisms.

    Different synchronization techniques are optimized for how they are expected to perform under particular locking scenarios. If you're doing low-level work and optimizing, you'll want to be aware of the cost of each approach.

    For example, a critical section is less expensive than a mutex because it knows the lock cannot be held by another process. It also optimizes for the re-entrancy use case, and it is optimistic: it expects to succeed, so you pay a stiffer performance penalty only when there is actual contention for the lock. A rough .NET sketch of this distinction follows this reply.

    Monday, May 27, 2013 4:36 PM
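
    In .NET terms, the distinction above roughly maps to lock (a process-local monitor with a cheap user-mode fast path) versus System.Threading.Mutex (a kernel object that can also synchronize across processes). A small illustrative sketch; the names are made up:

    using System.Threading;

    class LockVersusMutex
    {
        private static readonly object _gate = new object();
        // A named mutex is visible to other processes on the machine.
        private static readonly Mutex _crossProcess = new Mutex(false, "Global\\MyAppMutex");

        static void InProcessWork()
        {
            lock (_gate) // Cheap when uncontended: no kernel transition on the fast path.
            {
                // ... touch state shared between threads of this process ...
            }
        }

        static void CrossProcessWork()
        {
            _crossProcess.WaitOne(); // May involve a kernel transition.
            try
            {
                // ... touch state shared with other processes (e.g. a file) ...
            }
            finally
            {
                _crossProcess.ReleaseMutex();
            }
        }
    }
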
  • Hi Arnet,

    >>If it is related to a value in a register, can I say that I can achieve the same behavior with volatile?

    http://stackoverflow.com/questions/2484980/why-is-volatile-not-considered-useful-in-multithreaded-c-or-c-programming 

    The problem with volatile in a multithreaded context is that it doesn't provide all the guarantees we need. It does have a few properties we need, but not all of them, so we can't rely on volatile alone.

    However, the primitives we'd have to use for the remaining properties also provide the ones that volatile does, so it is effectively unnecessary.

    For thread-safe accesses to shared data, we need a guarantee that

    • the read/write actually happens (that the compiler won't just store the value in a register instead and defer updating main memory until much later)
    • that no reordering takes place. Assume that we use a volatile variable as a flag to indicate whether or not some data is ready to be read. In our code, we simply set the flag after preparing the data, so all looks fine. But what if the instructions are reordered so the flag is set first?

    volatile does guarantee the first point. It also guarantees that no reordering occurs between different volatile reads/writes: all volatile memory accesses will occur in the order in which they're specified. That is all we need for what volatile is intended for, manipulating I/O registers or memory-mapped hardware, but it doesn't help us in multithreaded code where the volatile object is often only used to synchronize access to non-volatile data. Those accesses can still be reordered relative to the volatile ones.

    The way to prevent reordering is to use a memory barrier, which indicates to both the compiler and the CPU that no memory access may be reordered across that point. Placing such barriers around our volatile variable access ensures that even non-volatile accesses won't be reordered across the volatile one, allowing us to write thread-safe code.

    However, memory barriers also ensure that all pending reads/writes are executed when the barrier is reached, so it effectively gives us everything we need by itself, making volatile unnecessary. We can just remove the volatile qualifier entirely.

    Best regards,


    Mike Feng
    MSDN Community Support | Feedback to us
    Develop and promote your apps in Windows Store
    Please remember to mark the replies as answers if they help and unmark them if they provide no help.

    Tuesday, May 28, 2013 3:05 AM
    Moderator
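
    One way to express the flag-plus-data pattern from the quoted answer safely in C# is with Volatile.Write/Volatile.Read (or Thread.MemoryBarrier), which supply the barriers that keep the flag write from being reordered before the data write. A hedged sketch with illustrative names:

    using System;
    using System.Threading;

    class FlagAndData
    {
        private int _data;
        private bool _ready; // The flag that signals "the data has been published".

        public void Producer()
        {
            _data = 42;                       // 1. Prepare the data.
            Volatile.Write(ref _ready, true); // 2. Publish: the release barrier keeps step 1 before step 2.
        }

        public void Consumer()
        {
            // Acquire barrier: if we observe the flag, the data write is visible too.
            if (Volatile.Read(ref _ready))
            {
                Console.WriteLine(_data); // Guaranteed to see 42, not a stale value.
            }
        }
    }
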