Improve this singleton wrapper

    Question

  • I want a singleton wrapper so that I can avoid the .instance.method call syntax.

    I want calls like

    SingletonWrapper.fn();

    instead of

    Singleton.Instance().fn();

     

    The code below is my first attempt, but I think there must be a better way of doing it.

    Thanks

     

    public class SingletonWrapper
    {
        public static string fn()
        {
            return Singleton.Instance().Foo();
        }

        private class Singleton
        {
            private static Singleton instance;

            protected Singleton() { }

            public static Singleton Instance()
            {
                // Use 'lazy initialization'
                if (instance == null)
                {
                    instance = new Singleton();
                }
                return instance;
            }

            public string Foo()
            {
                return "This is it";
            }
        }
    }

    Thursday, July 12, 2007 8:01 PM


All replies

  • Try the link below for details.

     

    http://www.yoda.arachsys.com/csharp/singleton.html

    Thursday, July 12, 2007 8:13 PM
  • Hi,

    in the code above you are kind of duplicating the singleton... take a look at the following:

    Code Snippet

        public class Something
        {
            private static Something _Instance = new Something();

            public static String SomeMethod()
            {
                return _Instance.SomeOtherMethod();
            }

            String SomeOtherMethod()
            {
                return "gimmegimme";
            }

            protected Something() { }
        }
     

     

    some remarks:

    - this is an implementation of a singleton, even if it doesn't have a public static property called Instance, as there can be but one instance at any time

    - the static method SomeMethod has a different name than the instance method, as C# doesn't allow you to access static methods via instances

    - this allows you to just call Something.SomeMethod()

    - you could create an additional method CreateInstance() that would need to be protected or private and be called before passing a call to the instance field

     

    Code Snippet

        protected static void CreateInstance()
        {
            if (_Instance == null)
            {
                _Instance = new Something();
            }
        }

    However, this method is not really thread safe, as two threads can be accessing the CreateInstance method at the same time. What you really need to do is:

    Code Snippet

        private static Object _lock = new Object();

        protected static void CreateInstance()
        {
            if (_Instance == null)
            {
                lock (_lock)
                {
                    if (_Instance == null) // double-check inside the lock
                    {
                        _Instance = new Something();
                    }
                }
            }
        }

     

    The code above comes from Jeffrey Richter's CLR via C# 2.0 (recommended reading).

     Hope this helps you out
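    Putting the pieces together for the original question, a minimal sketch of the wrapper might look like this (the names are illustrative, and the thread-safety caveats discussed in the replies below apply to the double-checked lock here as well):

    Code Snippet

        public class SingletonWrapper
        {
            private static readonly object _lock = new object();
            private static SingletonWrapper _instance;

            private SingletonWrapper() { }

            // Callers use SingletonWrapper.fn() directly, never .Instance
            public static string fn()
            {
                return Instance().Foo();
            }

            private static SingletonWrapper Instance()
            {
                if (_instance == null)
                {
                    lock (_lock)
                    {
                        if (_instance == null) // double-check inside the lock
                        {
                            _instance = new SingletonWrapper();
                        }
                    }
                }
                return _instance;
            }

            private string Foo()
            {
                return "This is it";
            }
        }

    Usage is then simply: string s = SingletonWrapper.fn();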

     

     

     

     

    Thursday, July 12, 2007 8:22 PM
  •  frederikm wrote:

    However, this method is not really thread safe, as two threads can be accessing the CreateInstance method at the same time. What you really need to do is: [the lock-based CreateInstance() snippet above]

    The double-check lock pattern is not thread safe on all platforms either, unless _instance is declared volatile (see http://msdn.microsoft.com/msdnmag/issues/05/10/MemoryModels/). If you're interested in a low-lock singleton, Jon Skeet's page at http://www.yoda.arachsys.com/csharp/singleton.html is the one to look at.
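    For reference, the version that page ends up recommending sidesteps the question entirely by relying on the CLR's static-initialization guarantees instead of a lock; a minimal sketch along those lines (the explicit static constructor is there to keep the type from being marked beforefieldinit):

    Code Snippet

        public sealed class Singleton
        {
            private static readonly Singleton instance = new Singleton();

            // Explicit static constructor tells the C# compiler
            // not to mark the type as beforefieldinit
            static Singleton() { }

            private Singleton() { }

            public static Singleton Instance
            {
                get { return instance; }
            }
        }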

    Thursday, July 12, 2007 10:56 PM
  • The double-checked locking pattern is thread safe in .NET 2.0 (it would not be thread safe in the ECMA memory model). Here's the excerpt from http://msdn.microsoft.com/msdnmag/issues/05/10/MemoryModels/default.aspx:

     

    "Like all techniques that remove read locks, the code in Figure 7 relies on strong write ordering. For example, this code would be incorrect in the ECMA memory model unless myValue was made volatile because the writes that initialize the LazyInitClass instance might be delayed until after the write to myValue, allowing the client of GetValue to read the uninitialized state. In the .NET Framework 2.0 model, the code works without volatile declarations."

     

    Friday, July 13, 2007 7:01 AM
  •  Thomas Danecker wrote:

    The double-checked locking pattern is thread safe in .NET 2.0 (it would not be thread safe in the ECMA memory model). [...]

     

    Joe Duffy's interpretation of that suggests you still need volatile for IA64: "The 2.0 memory model does not use ld.acq's unless you are accessing volatile data (marked w/ the volatile modifier keyword or accessed via the Thread.VolatileRead API)." [1]; although he seems to contradict himself, or there are heuristics to detect double-checked lock patterns. None of this is in any specification.

    But others have read Joe's blog and Vance's article and have come to the same conclusion:
    http://geekswithblogs.net/akraus1/articles/90803.aspx
    http://www.yoda.arachsys.com/csharp/singleton.html
    http://blogs.msdn.com/cbrumme/archive/2003/05/17/51445.aspx

    Honestly, I don't consider Vance's article a description of what a standards-compliant framework must do. Mostly because it's just a magazine article, but also because it's contradicted by the only spec we have and by others at Microsoft.

     

    [1] http://www.bluebytesoftware.com/blog/PermaLink,guid,543d89ad-8d57-4a51-b7c9-a821e3992bf6.aspx

     

    Friday, July 13, 2007 3:02 PM
  • I read somewhere (maybe it was the CLR specification; I have to look it up) that it also works on an IA64 architecture because the memory model is very restrictive in this case (which leads to performance penalties, but the CLR favors working code on all platforms over performance). I'll look it up and post details, but it will take some time because I'm currently quite busy.
    Saturday, July 14, 2007 10:14 AM
  • Here's my answer (sooner than expected):

    Here's a summary of the applied rules (all from http://msdn.microsoft.com/msdnmag/issues/05/10/MemoryModels/default.aspx):

     

    The fundamental rules:

    1. The behavior of the thread when run in isolation is not changed. Typically, this means that a read or a write from a given thread to a given location cannot pass a write from the same thread to the same location.
    2. Reads cannot move before entering a lock. (This implies an invalidation of the cache at the beginning of the lock. Otherwise reads would move back in time to the fetching of the cache line, which may be before entering the lock.)
    3. Writes cannot move after exiting a lock. (This implies a flushing of the cache at exiting the lock. Otherwise writes would move forward in time to the flushing of the cache line, which may be after exiting the lock.)

    The ECMA memory model:

    1. Reads and writes cannot move before a volatile read. (Implies invalidating the cache.)
    2. Reads and writes cannot move after a volatile write. (Implies flushing the cache.)

    These definitions also limit the code reordering done by the compilers (language-to-managed and managed-to-native compilers).

     

    So I do not agree with Joe Duffy, who assumes that the read at "return instance;" may be reordered prior to "!initialized". That's impossible, because reads and writes can't move before entering the lock, so the read of instance (after the lock) can't move before the read of initialized (before the lock).

     

    Assume we have the following code:

    Code Snippet

        class Singleton
        {
            static object syncObject = new object();
            static bool initialized; // implicitly initialized to false
            static Singleton instance; // implicitly initialized to null

            public static Singleton Instance
            {
                get
                {
                    if (!initialized) // reading from the cache (maybe an old value)
                    {
                        lock (syncObject) // invalidating the cache
                        {
                            if (!initialized) // reading the value fetched after the lock
                            {
                                instance = new Singleton(); // writing to the cache
                                initialized = true; // writing to the cache
                            }
                        } // flushing the cache (future reads will read the new value
                          // after invalidating the cache, which is done at entering the lock)
                    }

                    return instance;
                }
            }

            private Singleton()
            {
            }
        }

     

    The CLR specifies that the fields (syncObject, initialized and instance) are initialized prior to their first use (the first read). This implies that the cache is flushed and invalidated after the CLR initializes them (ensured by the CLR through the lock(typeof(Singleton))), so we'll have no problems with the initialization.

     

    I also want to show the differences and effects of the IA64 memory model:

    The x86 memory model (and thus also the downward-compatible x64 memory model) specifies that "every write has release semantics", meaning that it will be noticed by every read after invalidating the cache.

    This semantic isn't specified by the ECMA memory model, and the IA64 architecture also doesn't specify it, but (as stated by Jeffrey Richter in his book "CLR via C#") Microsoft's implementation of the CLR specifies again that "every write has release semantics" (ensured by the JIT compiler for the IA64 architecture).

    Saturday, July 14, 2007 12:55 PM
  • I have to retract my statement: Joe Duffy is correct.

    Assume the value of instance (= null) was loaded into the cache, but the loaded cache line doesn't include initialized. Now another thread on another CPU with another cache initializes the singleton, writing initialized = true and instance = new Singleton() to memory (flushing its cache). Now the first thread enters the getter. The value of initialized doesn't exist in the cache, so it's loaded (initialized = true), but this load doesn't load instance (because instance is on another cache line, which was already loaded). The value of instance (= null) still exists in the cache because of the previous fetch. In this very rare case, null will be returned.

    This behaviour may also occur on non-IA64 architectures. The only prerequisite for the bug is a system with more than one cache.

    A volatile read of initialized wouldn't help in this case because it wouldn't load instance into the cache. We have to do a volatile read of instance to ensure that the double-checked locking pattern is thread safe.
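    Concretely, a minimal sketch of the variant being argued for here (the instance field declared volatile, so the read outside the lock is a volatile read; the field names follow the snippet above):

    Code Snippet

        class Singleton
        {
            static object syncObject = new object();
            static volatile Singleton instance; // volatile: the unlocked read must not see a stale cache line

            public static Singleton Instance
            {
                get
                {
                    if (instance == null) // volatile read outside the lock
                    {
                        lock (syncObject)
                        {
                            if (instance == null) // re-check inside the lock
                            {
                                instance = new Singleton();
                            }
                        }
                    }
                    return instance;
                }
            }

            private Singleton() { }
        }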

     

    I'll post a more comprehensible answer later when I'm not that busy.

    Saturday, July 14, 2007 1:52 PM
  •  Thomas Danecker wrote:

    The fundamental rules: [...] The ECMA memory model: [...] Microsoft's implementation of the CLR specifies again that "every write has release semantics" (ensured by the JIT compiler for the IA64 architecture).

    Vance's rules 1 and 2 and the two ECMA points that you quote are only always true with respect to processor write-caching (which is what "acquire semantics" and "release semantics" deal with, in my opinion) and not with respect to compiler optimization. For example, given the following two methods, the JIT will generate an identical instruction stream for both on x86 (I qualify "x86" because I don't have access to an IA64 to verify):

    Code Snippet

        // Ensure the methods aren't inlined so we can be sure a debugger can show us
        // the disassembly of the JIT-generated instructions
        internal class SomeClass
        {
            volatile int volNum;

            [System.Runtime.CompilerServices.MethodImpl(System.Runtime.CompilerServices.MethodImplOptions.NoInlining)]
            public int Member2()
            {
                int value = 5;
                value = 10;
                volNum = 6;
                return value;
            }

            [System.Runtime.CompilerServices.MethodImpl(System.Runtime.CompilerServices.MethodImplOptions.NoInlining)]
            public int Member2a()
            {
                volNum = 6;
                return 10;
            }
        }

     

    ...clearly the write of 10 to value has moved after a volatile write. That doesn't violate those rules if they only apply to flushing the processor's write-cache, because there was never a processor instruction to write the value 10 to value before the volatile write, and VolatileWrite likely calls MemoryBarrier or uses appropriate processor-specific instructions to flush the write-cache.

     

    Saturday, July 14, 2007 1:54 PM
  •  Thomas Danecker wrote:

    I have to retract my statement: Joe Duffy is correct. [...] We have to do a volatile read of instance to ensure that the double-checked locking pattern is thread safe.

    Actually, even Vance shows (in http://msdn.microsoft.com/msdnmag/issues/05/10/MemoryModels/default.aspx) that your example (using a bool to test whether an instance is initialized) is not thread safe; see the last three paragraphs of Technique 4: Lazy Initialization.
    Saturday, July 14, 2007 2:06 PM
  •  Peter Ritchie wrote:

    Vance's rules 1 and 2 and the two ECMA points that you quote are only always true with respect to processor write-caching ... and not with respect to compiler optimization. [...] ...clearly the write of 10 to value has moved after a volatile write.

    It does not apply only to processor caches. Try making value a static field (non-volatile) and you'll get different code. Local variables and arguments are not subject to the multi-threaded memory model, so the rules have no bearing on them. Only globally visible memory (static variables, fields of classes, etc.) is subject to these rules.

    Also, in your example the volatile declaration of volNum has no effect, because in the .NET 2.0 memory model every write is a volatile write; and if you make value a static field, your code would be compiled to native code as is, meaning there would be no optimizations at all (including of the directly following writes to value: int value = 5; value = 10;).
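    To make that concrete, here is a sketch of the static-field variant being described (whether the JIT really preserves both writes is exactly the point under debate, so read the comments as the claim being made, not a guarantee):

    Code Snippet

        internal class SomeClass
        {
            static int value;   // globally visible, so subject to the memory-model rules
            volatile int volNum;

            [System.Runtime.CompilerServices.MethodImpl(System.Runtime.CompilerServices.MethodImplOptions.NoInlining)]
            public int Member2()
            {
                value = 5;      // the claim: this write is no longer eliminated
                value = 10;     // once value is a globally visible static field
                volNum = 6;
                return value;
            }
        }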

     Peter Ritchie wrote:

    Actually, even Vance shows (in http://msdn.microsoft.com/msdnmag/issues/05/10/MemoryModels/default.aspx) that your example (using a bool to test whether an instance is initialized) is not thread safe; see the last three paragraphs of Technique 4: Lazy Initialization.

    Yes, you're right. I didn't correctly understand what Vance meant.

    To make it thread safe, there must be a volatile read of initialized (as described by Joe Duffy).

    Saturday, July 14, 2007 5:08 PM
  •  Thomas Danecker wrote:

    It does not apply only to processor caches. Try making value a static field (non-volatile) and you'll get different code. Local variables and arguments are not subject to the multi-threaded memory model, so the rules have no bearing on them. Only globally visible memory (static variables, fields of classes, etc.) is subject to these rules.

    That's not what the rules in either reference say; they specifically mention writes to memory (not just heap or stack: all memory). Besides, why should stack variables be excluded from JIT optimization restrictions? They can be used by multiple threads at the same time. Take this example:

    Code Snippet

        public int Method()
        {
            double value1 = 3.1415;
            int value2 = 42;
            IAsyncResult result = BeginBackgroundOperation(ref value1, ref value2);

            // Sit in a loop waiting for up to 250ms at a time,
            // doing something with the double value...
            do
            {
                value2 = 5;
                // doubles aren't atomic; we need to use
                // VolatileRead to read the "latest written" value,
                // because BeginBackgroundOperation uses
                // Thread.VolatileWrite(ref double)
                double temp = Thread.VolatileRead(ref value1);
                Thread.Sleep(value2);
                // ...
            } while (!result.AsyncWaitHandle.WaitOne(250, false));
            return 1;
        }

     

    If stack variables (locals) were exempt from such rules, the JIT could optimize that as follows:

    Code Snippet

        public int Method()
        {
            double value1 = 3.1415;
            int value2 = 42;
            IAsyncResult result = BeginBackgroundOperation(ref value1, ref value2);

            do
            {
                double temp = Thread.VolatileRead(ref value1);
                Thread.Sleep(5);
            } while (!result.AsyncWaitHandle.WaitOne(250, false));
            return 1;
        }

    ...not good.

      

     Thomas Danecker wrote:

    Also, in your example the volatile declaration of volNum has no effect, because in the .NET 2.0 memory model every write is a volatile write; and if you make value a static field, your code would be compiled to native code as is, meaning there would be no optimizations at all (including of the directly following writes to value: int value = 5; value = 10;).

    I'm not following you: if every write were volatile, no optimizations could occur, based on Vance's rules and ECMA's rules, if they affected compiler optimizations in addition to flushing processor write-caches where they exist. The volatile declaration may have no effect on x86, but Joe Duffy has said "volatile" does have an effect on IA64.

    And why should I have to leak implementation details of a method into the interface of my class? Yes, a static field would change the code; but I could change the code in any number of ways to change what the JIT generates.

    The point is, it shows the "fundamental rules" can only be observed as being followed *if* they only apply to writes that are cached by the processor.

    Saturday, July 14, 2007 6:01 PM
  • There is no way to access local variables or arguments from another thread.

    I've provided my own sample code because your sample is not complete and it seems you haven't tried it out.

     

    Code Snippet

        delegate void Method(ref int i);

        static void Main()
        {
            int local = 3;
            Method m = Test;

            IAsyncResult result = m.BeginInvoke(ref local, null, null);
            Thread.Sleep(1000);
            Console.WriteLine(local);
            local = 5;
            m.EndInvoke(ref local, result);
            Console.WriteLine(local);
            Console.ReadKey();
        }

        static void Test(ref int argument)
        {
            argument = 4;
            Thread.Sleep(2000);
            Console.WriteLine(argument);
        }

     

    With your assumptions the output should be the following:

    4
    5
    5

    But the output is actually this:

    3
    4
    4

     

    You should know that with asynchronous operations, out and ref params are only passed back at EndXxx.

    • Proposed as answer by AJama Saturday, July 12, 2008 5:28 PM
    Saturday, July 14, 2007 6:27 PM
  •  Thomas Danecker wrote:

    There is no way to access local variables or arguments from another thread.

    I was thinking of the P/Invoke case; but yes, that would be similar: the address would be marshaled and the other thread wouldn't be directly accessing that thread's stack. I think you might be able to do it with an unsafe method in C#; but even if you could, it's not a good idea (neither was my original example, if it *could* access another thread's stack). It's moot anyway; there's still code that shows the applicable rules in both the CIL spec and Vance's article being violated if they're viewed in the context of both processor write-cache flushing AND the scope of JIT optimizations. If you remove the scoping of JIT optimizations, you no longer have a violation. I'd rather view the framework as not violating either memory model and put "volatile" in there.

     

    Declaring the instance variable volatile in Vance's lazy init (double-check lock pattern) doesn't make it any less thread-safe...

    Sunday, July 15, 2007 1:22 AM
  • I agree with you. I read so much about optimizations and multithreading in the last days and weeks (the CLI specification, Jeffrey Richter's "CLR via C#", dozens of blogs and many resources on MSDN) that I don't remember exactly where I read the things I'm talking about. It's all just in my brain. I should start writing down notes with quotations of references! I never used the keyword "volatile" before, and there is no need for it in the common case, because I use shared state sparingly, and whenever I have to use shared state I synchronize with a lock!

    Now we have come to a really common case: the double-checked locking pattern, which was intended to create a singleton in a thread-safe manner, and I was kind of surprised that this quite famous pattern (used nearly everywhere without volatile access) misses its intention on a multi-core, multi-cache system (regardless of the architecture used). I became really interested in volatile access and thread safety on multi-cache systems. Now I'm questioning how many other common patterns aren't thread safe even though they're broadly used in everyday software development. Many professionals tell us there is no way to write multi-threaded software without introducing race conditions, deadlocks and other bugs, and I'm starting to believe them.

    Are there existing ways to write complex and performant software without multi-threading? How would you implement a system that relies only on message passing? There are approaches in Microsoft's Singularity project, but it deals only with inter-process communication, not intra-process communication. Wouldn't it be performant enough to disallow multiple threads from accessing shared memory and to use an approach like Singularity's exchange heap and contract-based channels? Current software architectures and programming models aren't designed to work reliably on highly optimized multi-cache systems, partly because no such systems existed until a few years ago; but processor architectures are becoming more and more parallelized (think of Intel's 80-core prototype), and software can't profit from this massive parallelization simply because it's not designed for it! We have to go completely different ways. Software has to be designed around small tasks with well-defined dependencies on other tasks, each with its own set of memory. In the future we will not have the ability to access shared memory (imagine 80 cores trying to access the motherboard memory at once: no chance!), but current software just ignores this fact!

    We have to find solutions that don't depend on shared memory! We need a message-passing approach!

     

    Just my two cents worth,

    Thomas Danecker

    Sunday, July 15, 2007 10:11 AM
  •  Thomas Danecker wrote:

    I agree with you. [...] whenever I have to use shared state I synchronize with a lock!

    Indeed. There's a lot written about it, with many, many people unable to agree on some aspects. Much of what has been written is contradictory as well, and many people haven't read it all and don't know what contradicts what (and/or they've interpreted it incorrectly). Using a boolean in the double-checked lock pattern, for example, means it's no longer the double-check lock pattern...

 Thomas Danecker wrote:

    Now we have come to a really common case: the double-checked locking pattern [...] and I'm starting to believe them.

    .NET goes a long way towards making it possible to write thread-safe code, because it's baked into the framework and (at least for C#) into the language specification. It does that by addressing the processor write-cache where it can be most beneficial. VB doesn't have a "volatile" keyword (it didn't support the creation of threads until 2002/3), so it's harder in VB, but possible with the framework. But yes, for many years programmers took for granted that the processor was an abstraction and assumed things like the double-check lock pattern would always work. People have realized it isn't thread-safe (http://www.devarticles.com/c/a/Cplusplus/C-plus-in-Theory-Why-the-Double-Check-Lock-Pattern-Isnt-100-ThreadSafe/) and steps are being taken to accommodate the different processor memory models (http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2138.html and http://www.ibm.com/developerworks/java/library/j-jtp02244.html), but agreeing on standard changes and implementing framework changes is slow, especially when changing memory models.

 Thomas Danecker wrote:

    Are there existing ways to write complex and performant software without multi-threading? [...] We need a message-passing approach!

     

    Currently designers and programmers must consider what is and isn't shared (or what can and can't be shared) when designing and programming multi-threaded software, mostly for thread safety but also for correctness. At this stage of consumer computer deployment the need for complex multi-threading is rare; but that will change as processor speed improvements become horizontal rather than vertical.

    When .NET was conceived (or when the successor to COM+ was conceived), it was viewed that only a handful of people could write correct COM code; hence COM was abstracted away in .NET. I think the same sort of thing may eventually happen with multi-threaded coding, where object-oriented programming may evolve into thread-oriented programming. Some basic syntax of existing popular languages may be usable, but for the most part I think it will be the next full-blown paradigm shift. Maybe functional programming will be the panacea, where state is immutable (if only one thread can know about the state until after its creation, shared state is only ever "read").

    As it stands, we certainly need something to simplify the complexities of parallelizing applicable code without making it horrendously complex and prone to error. OpenMP tries to do that in C++ (alas, not in C++/CLI), but other languages' and frameworks' similar initiatives just haven't seemed to catch on. Taking a loop and parallelizing it for x processors in C# is just too complex; a sketch of what that looks like by hand follows below. Changing thought patterns from "loops" to "collections" will probably go a long way in that respect, allowing a collection's enumerator to parallelize, because only it can truly know whether it can be.
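    To illustrate that complexity, here is a hand-rolled sketch of parallelizing a trivial loop across worker threads in C# 2.0 (the helper name and the chunking strategy are illustrative, not any library's API):

    Code Snippet

        using System;
        using System.Threading;

        static class ParallelLoop
        {
            // Split the index range [0, length) into one contiguous chunk per worker thread
            public static void ParallelFor(int length, int workers, Action<int> body)
            {
                Thread[] threads = new Thread[workers];
                int chunk = (length + workers - 1) / workers;
                for (int w = 0; w < workers; w++)
                {
                    int start = w * chunk;
                    int end = Math.Min(start + chunk, length);
                    threads[w] = new Thread(delegate()
                    {
                        for (int i = start; i < end; i++)
                            body(i);
                    });
                    threads[w].Start();
                }
                // Wait for every worker before returning
                foreach (Thread t in threads)
                    t.Join();
            }
        }

    Even this leaves out exception handling, load balancing and result aggregation, which is exactly the kind of ceremony that makes ad-hoc parallelization error-prone.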

    Are you involved in the community outside of the MSDN Forums (newsgroups, forums, a blog)? Feel free to send me an email; it's in my profile.

    Sunday, July 15, 2007 2:27 PM