Thread.MemoryBarrier

  • Question

  • Hi,

    I have a query about the following code snippet, written by another developer on our project. I am not sure what its purpose is.

    if (!_locked)
    {
        lock (_queueLock)
        {
            if (!_locked)
            {
                _queue = new Queue<EntityA>();
                Thread.MemoryBarrier();
                _locked = true;
            }
        }
    }

    My query is: if we have already taken the lock, why do we check "if (!_locked)" again, and why use Thread.MemoryBarrier()?
    Looking forward to an early response.

    Thanks
    Vijay


    Thanks Vijay Koul

    Tuesday, April 24, 2012 6:51 AM

Answers

  • Yeah, it is required.

    Assume you have two threads, ThreadA and ThreadB, and _locked is false. First ThreadA obtains the lock and creates the queue. But before it sets _locked to true, ThreadB might have passed the first if condition (since _locked is still false). Then ThreadA sets _locked = true and releases the lock. Then ThreadB obtains the lock, but it now sees that _locked is true, so it doesn't create a new Queue object and simply returns.

    So, if you did not have the second if condition, there would be a potential chance of multiple threads creating multiple Queue objects.

    I hope you understand it.
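    The scenario above can be sketched as a runnable example (a minimal illustration; the class, the _buildCount counter, and the element type are hypothetical additions for demonstration):

    ```csharp
    using System.Collections.Generic;
    using System.Threading;

    class QueueHolder
    {
        private readonly object _queueLock = new object();
        private Queue<string> _queue;   // built once, by whichever thread wins
        private volatile bool _locked;  // volatile so the outer read is not stale
        private int _buildCount;        // counts constructions, to show it stays at 1

        public void EnsureQueue()
        {
            if (!_locked)                   // cheap check, avoids locking on every call
            {
                lock (_queueLock)
                {
                    if (!_locked)           // re-check: another thread may have won
                    {
                        _queue = new Queue<string>();
                        Interlocked.Increment(ref _buildCount);
                        _locked = true;     // published only after the queue is built
                    }
                }
            }
        }

        public int BuildCount => _buildCount;
    }
    ```

    Running EnsureQueue from many threads at once, BuildCount stays at 1; without the inner check, two threads that both passed the outer check would each build a queue.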


    Please mark this post as answer if it solved your problem. Happy Programming!


    Tuesday, April 24, 2012 7:19 AM
  • Calling Thread.MemoryBarrier() ensures the CLR does not reorder the instructions while optimizing. For example, if you do not use Thread.MemoryBarrier() in your code, the CLR might think the second check of _locked is redundant (similar to what you thought initially) and rewrite the code like below:

    if (!_locked)
    {
        lock (_queueLock)
        {
            _queue = new Queue<EntityA>();
            Thread.MemoryBarrier();
            _locked = true;
        }
    }

    So, this is what you don't want. You have to tell the CLR not to reorder the instructions during optimization, which is why you add Thread.MemoryBarrier: it ensures the CLR executes your instructions as written.
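    In later .NET versions (4.5 and up), the same "publish only after construction" ordering can be expressed with Volatile.Write instead of a full fence (a minimal sketch; the class and field names mirror the snippet above but are illustrative):

    ```csharp
    using System.Collections.Generic;
    using System.Threading;

    class Publisher
    {
        private Queue<int> _queue;
        private bool _locked;

        public void Publish()
        {
            _queue = new Queue<int>();          // construct and assign first
            Volatile.Write(ref _locked, true);  // release-store: the assignment
                                                // above cannot move after it
        }

        public bool TryGet(out Queue<int> queue)
        {
            if (Volatile.Read(ref _locked))     // acquire-load: pairs with the write
            {
                queue = _queue;                 // fully constructed if _locked was seen
                return true;
            }
            queue = null;
            return false;
        }
    }
    ```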

    Please mark this post as answer if it solved your problem. Happy Programming!

    Tuesday, April 24, 2012 8:14 AM
  • Hi Vijay,

    Apart from reordering instructions, the CLR also performs caching optimizations, because of which other threads accessing this variable may not see the latest value immediately. By specifying a memory barrier, the processor does not reorder reads and writes across that point.

    The recommendation is to use volatile instead of Thread.MemoryBarrier, though even that is not advised unless you have a reason to (volatile is evil).

    I hope this adds a little more to your understanding.


    If this post answers your question, please click "Mark As Answer". If this post is helpful please click "Mark as Helpful".

    Tuesday, April 24, 2012 12:37 PM
  • "What is the reason for using Thread.MemoryBarrier()?"

    That's a bit complicated to explain, and to the best of my knowledge it is not needed in current .NET versions. What could happen in theory is that, due to compiler or CPU memory-store reordering, other threads could see a queue that's not fully initialized. Something like this:

    1. the queue constructor is called; it writes some values to fields of the Queue class
    2. the constructor returns, the new queue instance is assigned to _queue, and the lock is exited
    3. but the CPU has not yet written out the memory where the queue's fields are stored, the ones initialized by the constructor
    4. now if another thread tries to use the queue, bad luck: the queue object is not completely initialized
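    In .NET 4 and later, this whole pattern can be replaced by Lazy&lt;T&gt;, which handles the construction-then-publication ordering internally (a sketch; the holder class and element type are illustrative):

    ```csharp
    using System;
    using System.Collections.Generic;

    static class Holder
    {
        // The default LazyThreadSafetyMode is ExecutionAndPublication:
        // the factory runs at most once, and the result is safely published
        // to all threads, so no manual barrier or double check is needed.
        private static readonly Lazy<Queue<string>> _queue =
            new Lazy<Queue<string>>(() => new Queue<string>());

        public static Queue<string> Queue => _queue.Value;
    }
    ```

    Every caller of Holder.Queue gets the same fully initialized instance, which is exactly what the hand-rolled double-checked locking in the question is trying to achieve.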

    For more details, see this MSDN article: http://msdn.microsoft.com/en-us/magazine/cc163715.aspx, especially the section about lazy initialization.


    Tuesday, April 24, 2012 8:19 AM
    Moderator

All replies

  • Thanks Adavesh for your answer. But I still have a query:

    What is the reason for using Thread.MemoryBarrier()?

    Thanks

    Vijay


    Thanks Vijay Koul

    Tuesday, April 24, 2012 7:28 AM
  • Correct me if I am wrong :)

    I remember that in the WinAPI, locks like EnterCriticalSection guarantee that the code inside the lock already has memory-barrier protection: no reordering, and no caching in registers.

    If that works as I described, shouldn't "lock" in C# also follow the same rule?

    I really have some questions about this double-checked locking design.

    Thanks :)

    Friday, May 11, 2012 9:33 PM
  • "If that works as I described, shouldn't "lock" in C# also follow the same rule?"

    Indeed, pretty much the same rules apply to C#'s lock. Because of that, there is no way "if (!_locked)" could move outside of the lock (or be removed), and no way the queue instance could be left incompletely initialized when the lock is exited.
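    For reference, C#'s lock statement expands to Monitor.Enter/Exit in a try/finally, and those calls act as acquire and release barriers, which is why reads and writes cannot migrate out of the locked region. A sketch of the expansion (the surrounding class is illustrative):

    ```csharp
    using System.Threading;

    class LockExpansion
    {
        private readonly object _queueLock = new object();
        private int _value;

        // Roughly what `lock (_queueLock) { _value++; }` compiles to (C# 4+):
        public void Increment()
        {
            bool lockTaken = false;
            try
            {
                Monitor.Enter(_queueLock, ref lockTaken); // acquire barrier on entry
                _value++; // protected body: accesses cannot move out of this region
            }
            finally
            {
                if (lockTaken)
                    Monitor.Exit(_queueLock);             // release barrier on exit
            }
        }

        public int Value => _value;
    }
    ```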

    Saturday, May 12, 2012 7:04 AM
    Moderator
  • Mike, thanks for your reply.

    If "lock" provides memory-barrier protection, then the outer if (!_locked) check and the internal MemoryBarrier statement could technically be removed safely. The reason for leaving the outer check there is probably performance: a simple value check is faster than a lock statement, and the code in this post only needs to guarantee that the inner queue is initialized once, rather than guard an actually contended resource. So the sample improves performance.


    Saturday, May 12, 2012 4:02 PM
  • Removing the outer if can always be done; it's only there as a perf optimization and doesn't serve any correctness purpose.

    As I said in a previous post, the barrier is unlikely to be needed, but not because of the lock.

    Even though the lock prevents memory operations from moving outside the locked region, it does not by itself order the two writes inside it: if "_locked" were set before "_queue", another thread might end up seeing a null or uninitialized queue because "_locked" was already set. So strictly speaking the barrier is correctly placed there, but because the current .NET implementation doesn't reorder writes, the barrier is useless.

    Sunday, May 13, 2012 7:12 AM
    Moderator