Atomic operation of reference assignment?

    Question

  • Hello everyone,

    I have not found any supporting documentation about whether assignment to a reference variable is atomic.

    For example, foo1 and foo2 are both reference variables of type Foo. Is foo1 = foo2 atomic? Is there any supporting documentation?
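    To make the question concrete, here is a minimal sketch of the scenario being asked about (the Writer/Reader split and the Value field are illustrative additions, not part of the original question):

    ```csharp
    using System;

    class Foo { public int Value; }

    class Program
    {
        static Foo foo1 = new Foo { Value = 1 };
        static Foo foo2 = new Foo { Value = 2 };

        // Thread A: reassigns the reference.
        static void Writer()
        {
            foo1 = foo2;  // copies the reference (a pointer-sized value), not the object
        }

        // Thread B: reads the reference concurrently.
        static void Reader()
        {
            Foo local = foo1;               // sees either the old or the new reference,
            Console.WriteLine(local.Value); // never a torn, half-written pointer
        }
    }
    ```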

     


    thanks in advance,
    George

    Tuesday, May 27, 2008 1:43 PM


All replies

  • Not sure what you are asking here, but I'm answering anyway. :)

     

    Assigning foo2 to foo1 just makes a copy of the reference, so that both foo1 and foo2 end up referencing the same object; it does not copy the object itself.
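    In other words (a small self-contained illustration; the Value field is made up for the demo):

    ```csharp
    using System;

    class Foo { public int Value; }

    class Demo
    {
        static void Main()
        {
            Foo foo2 = new Foo { Value = 1 };
            Foo foo1 = foo2;               // copies the reference only

            foo2.Value = 42;               // mutate through one variable...
            Console.WriteLine(foo1.Value); // prints 42: both see the same object
            Console.WriteLine(object.ReferenceEquals(foo1, foo2)); // prints True
        }
    }
    ```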

     

    • Marked as answer by jack 321 Friday, May 30, 2008 5:39 AM
    Tuesday, May 27, 2008 4:29 PM
  • Yes. Assignments of primitive types and reference types are atomic. Read the ECMA C# language specification.
     
    Tuesday, May 27, 2008 4:56 PM
  •  Navaneeth wrote:
    Yes. Assignments of primitive types and reference types are atomic. Read the ECMA C# language specification.
     
    Actually, that's not entirely true: only assignments to reference types, bool, char, byte, sbyte, short, ushort, uint, int, and float are atomic. Assignments to double, for example, are not atomic.

     

    See section 5.5 of the C# specification.
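    A sketch of why the distinction matters (whether a torn read is actually observable depends on the platform; on a 32-bit CLR, a 64-bit double store may be split into two 32-bit writes):

    ```csharp
    using System;
    using System.Threading;

    class TearDemo
    {
        static double d;  // 64-bit: assignment is NOT guaranteed atomic

        static void Main()
        {
            // Two writers race on d with distinct bit patterns.
            new Thread(delegate() { while (true) d = 0.0; }) { IsBackground = true }.Start();
            new Thread(delegate() { while (true) d = -1.0; }) { IsBackground = true }.Start();

            for (int i = 0; i < 100000000; i++)
            {
                double v = d;  // on a 32-bit CLR this read itself can tear
                if (v != 0.0 && v != -1.0)
                {
                    Console.WriteLine("Torn read: " + v);  // a mix of the two bit patterns
                    return;
                }
            }
            Console.WriteLine("No tear observed (likely a 64-bit process).");
        }
    }
    ```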

    • Marked as answer by jack 321 Friday, May 30, 2008 5:39 AM
    Tuesday, May 27, 2008 5:36 PM
  • It's also important to clarify that the fact that the assignment is atomic does not imply that the write is immediately observed by other threads. If the reference is not volatile, then it's possible for another thread to read a stale value from the reference some time after your thread has updated it. However, the update itself is guaranteed to be atomic (you won't see part of the underlying pointer getting updated).
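    The classic illustration of the stale-read problem (a sketch; whether the loop actually hangs without volatile depends on the JIT and platform):

    ```csharp
    using System.Threading;

    class StopFlag
    {
        // Without 'volatile' the JIT is free to hoist the read of _stop out of
        // the loop, so Run() might never observe Stop()'s write.
        private volatile bool _stop;

        public void Run()
        {
            while (!_stop)
            {
                // do work
            }
        }

        public void Stop()
        {
            _stop = true;  // volatile write: eventually visible to Run()'s thread
        }
    }
    ```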

     

    Tuesday, May 27, 2008 7:12 PM
  •  Sasha Goldshtein wrote:
    It's also important to clarify that the fact the assignment is atomic does not imply that the write is immediately observed by other threads.  If the reference is not volatile, then it's possible for another thread to read a stale value from the reference some time after your thread has updated it.  However, the update itself is guaranteed to be atomic (you won't see a part of the underlying pointer getting updated).

     

    Since this thread has gone down this route, it's also important to note that the volatile keyword does not universally make a variable thread-safe. Volatile only means that accesses to the member aren't reordered; it doesn't mean that a value assigned to a volatile member will be immediately visible to all processors (i.e. CPU write caching). If you want an assignment to a member to be immediately visible to all processors and threads, use an Interlocked member (like Interlocked.Exchange). Interlocked members are guaranteed to perform a volatile write as well as a memory barrier--it's the memory barrier that is required to flush each processor's write cache to physical memory and reset the caches.

     

    This, of course, applies to processors with write caching (i.e. processors other than Intel x86).

     

    If all your member accesses are wrapped by a lock (Monitor.Enter/Exit), you need neither volatile nor a call to an Interlocked method, and whether or not setting a member is atomic is irrelevant, because only one thread at a time can be writing to it or reading from it.
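    Sketches of both options described above (Holder and its members are hypothetical names for the demo):

    ```csharp
    using System.Threading;

    class Holder
    {
        private object _current;
        private readonly object _gate = new object();

        // Option 1: publish with Interlocked.Exchange, which performs a full
        // memory barrier in addition to the atomic swap.
        public void Publish(object next)
        {
            Interlocked.Exchange(ref _current, next);
        }

        // Option 2: guard every read and write with the same lock; then
        // atomicity and visibility are both handled by Monitor.Enter/Exit.
        public object Get()
        {
            lock (_gate)
            {
                return _current;
            }
        }

        public void Set(object next)
        {
            lock (_gate)
            {
                _current = next;
            }
        }
    }
    ```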

    Tuesday, May 27, 2008 7:26 PM
  • Peter, I'd like to stand corrected, but to the best of my understanding, a volatile field declaration in C# guarantees that all accesses to the field are made by Thread.VolatileRead and Thread.VolatileWrite (e.g. the note in http://msdn.microsoft.com/en-us/library/bah54t54.aspx).  In turn, Thread.VolatileRead and Thread.VolatileWrite guarantee that the read or write is performed from memory and not cached, even if it requires flushing the CPU cache.  It does not provide a memory barrier, however (which is insignificant if I'm only interested in the result of a single store, because I don't care about reordering).

    A memory barrier, on the other hand (to the best of my knowledge, again) is merely a means of preventing reordering, i.e. it ensures that the CPU can't reorder instructions on the current thread such that memory accesses prior to the call execute after memory accesses that follow the call.
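    For reference, the two calls under discussion look like this in use (Published is a hypothetical name; the object overloads of VolatileRead/VolatileWrite shown here do exist on System.Threading.Thread):

    ```csharp
    using System.Threading;

    class Published
    {
        private object _obj;

        public void Set(object value)
        {
            // Release-style write: no earlier store can be moved past it.
            Thread.VolatileWrite(ref _obj, value);
        }

        public object Get()
        {
            // Acquire-style read: no later load can be moved before it.
            return Thread.VolatileRead(ref _obj);
        }
    }
    ```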
    Tuesday, May 27, 2008 8:06 PM
  •  Sasha Goldshtein wrote:
    Peter, I'd like to stand corrected, but to the best of my understanding, a volatile field declaration in C# guarantees that all accesses to the field are made by Thread.VolatileRead and Thread.VolatileWrite (e.g. the note in http://msdn.microsoft.com/en-us/library/bah54t54.aspx).  In turn, Thread.VolatileRead and Thread.VolatileWrite guarantee that the read or write is performed from memory and not cached, even if it requires flushing the CPU cache.  It does not provide a memory barrier, however (which is insignificant if I'm only interested in the result of a single store, because I don't care about reordering).

    I'll have to check that; it contradicts a lot of other information. For example, if you declare a volatile field in C# and view the IA64 assembly created when that code is JITted, you'll see that the only instruction emitted for access to the field is the ld.acq instruction--which does not flush write buffers. The only way I've read to flush the write cache on IA64 is via a memory fence, which InterlockedExchange does (Thread.MemoryBarrier simply calls InterlockedExchange).

    Tuesday, May 27, 2008 8:35 PM
  •  Peter Ritchie wrote:

    Actually, that's not entirely true, only assignments to reference types, bool, char, byte, sbyte, short, ushort, uint, int and float are atomic.  Assignments to Double, for example, are not atomic.


    Ahh - I missed that. Thanks for correcting.
    Thursday, May 29, 2008 4:02 AM
  • Sasha Goldshtein said:

    Peter, I'd like to stand corrected, but to the best of my understanding, a volatile field declaration in C# guarantees that all accesses to the field are made by Thread.VolatileRead and Thread.VolatileWrite (e.g. the note in http://msdn.microsoft.com/en-us/library/bah54t54.aspx).  In turn, Thread.VolatileRead and Thread.VolatileWrite guarantee that the read or write is performed from memory and not cached, even if it requires flushing the CPU cache.  It does not provide a memory barrier, however (which is insignificant if I'm only interested in the result of a single store, because I don't care about reordering).

    A memory barrier, on the other hand (to the best of my knowledge, again) is merely a means of preventing reordering, i.e. it ensures that the CPU can't reorder instructions on the current thread such that memory accesses prior to the call execute after memory accesses that follow the call.


    Joe Duffy has a blog post that goes into this, where he says:
    "(As an aside, many people wonder about the difference between loads and stores of variables marked as volatile and calls to Thread.VolatileRead and Thread.VolatileWrite.  The difference is that the former APIs are implemented stronger than the jitted code: they achieve acquire/release semantics by emitting full fences on the right side." whereas load and store operations on volatile variables use one-way fences that only affect instruction ordering and do not affect flushing write caches.

    An IA64 page also mentions that "Note that it is not guaranteed that OP1 and OP2 complete before the LD.ACQ; thus if one of those operations is a store, the LD.ACQ can receive bypassed data. A memory fence would be necessary to prevent this."  Here "memory fence" refers to the IA64 mf instruction, which is the equivalent of a "full fence" in Joe's terminology.
    http://www.peterRitchie.com/blog
    Thursday, June 12, 2008 11:42 PM
  • Sasha Goldshtein said:

    Peter, I'd like to stand corrected, but to the best of my understanding, a volatile field declaration in C# guarantees that all accesses to the field are made by Thread.VolatileRead and Thread.VolatileWrite (e.g. the note in http://msdn.microsoft.com/en-us/library/bah54t54.aspx).  In turn, Thread.VolatileRead and Thread.VolatileWrite guarantee that the read or write is performed from memory and not cached, even if it requires flushing the CPU cache.  It does not provide a memory barrier, however (which is insignificant if I'm only interested in the result of a single store, because I don't care about reordering).

    A memory barrier, on the other hand (to the best of my knowledge, again) is merely a means of preventing reordering, i.e. it ensures that the CPU can't reorder instructions on the current thread such that memory accesses prior to the call execute after memory accesses that follow the call.


    Joe Duffy has a blog post that goes into this, where he says:
    "(As an aside, many people wonder about the difference between loads and stores of variables marked as volatile and calls to Thread.VolatileRead and Thread.VolatileWrite.  The difference is that the former APIs are implemented stronger than the jitted code: they achieve acquire/release semantics by emitting full fences on the right side." whereas load and store operations on volatile variables use one-way fences that only affect instruction ordering and do not affect flushing write caches.

    An IA64 page also mentions that "Note that it is not guaranteed that OP1 and OP2 complete before the LD.ACQ; thus if one of those operations is a store, the LD.ACQ can receive bypassed data. A memory fence would be necessary to prevent this."  Here "memory fence" refers to the IA64 mf instruction, which is the equivalent of a "full fence" in Joe's terminology.
    http://www.peterRitchie.com/blog
    That all makes sense, but if you look at this web page http://msdn.microsoft.com/en-us/library/ex9kh13f.aspx there is (still) text on a yellow background stating: "In C#, using the volatile modifier on a field guarantees that all access to that field uses VolatileRead or VolatileWrite."

    Somebody is (still) wrong and I guess it is the note text.

    Miha Markic [MVP C#] http://blog.rthand.com
    Wednesday, January 13, 2010 2:08 PM