Asynchronous callbacks

  • Question

  • Is it just me, or does the .NET Framework seem more geared towards asynchronous callbacks than towards letting developers create and manage their own threads? I realize that concurrent programming is sticky and complicated, but is trying to let developers sidestep it simply delaying the inevitable, or jumping the gun? At some point I expect the number of cores in a CPU will become so great that even the most mundane tasks would benefit from multithreaded programming. When that happens, the number of threads running their own specialized tasks will become so great that creating a level of abstraction to simulate parallel programming could make sense. Are we really so close to that point already?
    Saturday, April 26, 2008 12:30 AM

All replies

  • There is a Microsoft framework that provides an abstraction for parallel programming: the Parallel Extensions for .NET.  I strongly recommend you check out the latest CTP: http://msdn2.microsoft.com/en-us/concurrency/default.aspx

    Re your general comment, the fact that .NET is strongly geared towards managing your asynchronous work is a good thing, because most developers willing to manage their own threads will do significantly worse than the framework can.  (E.g. using the thread pool vs. creating hundreds of threads, thread-pool notification vs. manually spawning a thread to poll and wait for something, etc.)
    Saturday, April 26, 2008 6:49 AM
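The thread-pool point above can be sketched in C#: queue many small work items on the shared pool instead of creating an OS thread apiece. This is a minimal illustration, not code from the thread; the item count and names are made up.

```csharp
using System;
using System.Threading;

class PoolDemo
{
    static void Main()
    {
        const int items = 100;          // illustrative workload size
        int completed = 0;
        using (var done = new ManualResetEvent(false))
        {
            for (int i = 0; i < items; i++)
            {
                // Each item borrows a pool thread briefly; no thread is
                // created per item.
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    // ... the actual unit of work would go here ...
                    if (Interlocked.Increment(ref completed) == items)
                        done.Set();     // last item signals completion
                });
            }
            done.WaitOne();             // block until all items have run
        }
        Console.WriteLine(completed);   // 100
    }
}
```

The pool reuses a small number of threads sized to the machine, which is the "significantly worse" gap the reply alludes to: a hand-rolled version would pay thread-creation and context-switch costs per item.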
  • First, let me say I'm just trying to get a feel for this kind of thing. It wasn't until recently that I began taking the time not only to understand how to use a language and its libraries properly, but also to understand how that language and its libraries function, and why. So if it turns out I'm working off some misconceptions, I apologize.

    ".NET is strongly geared towards managing your asynchronous work is a good thing, because most developers willing to manage their own threads will do significantly worse than what the framework can do."

    While I can see how this would justify designing classes that can manage asynchronous work, gearing them to do so at the expense of developers who are capable of managing their own threads seems only to breed the kind of overreliance you mention in your article "Design for Performance Up-Front".

    "
    As for abstractions and hotspots where these abstractions must be removed, I couldn't disagree more.  It doesn't occur to us often because we rely on frameworks so much without ever looking under the engine hood, but lots of the infrastructure code you'd find in a framework can be categorized as a hotspot where you should be giving up abstraction and decoupling for the sake of performance."
    Saturday, April 26, 2008 8:56 AM
  • Note the way you have taken two of my statements that didn't have any absolute claims in them and made them sound absolute.  I clearly said "most developers".  By the way, the Asynchronous Programming Model (APM) doesn't take control from you if you want to manage your own threads.  Part of the model is that if you have a BeginWork/EndWork method pair, you also have a synchronous Work method, and that's the way the APM is implemented in 99.9% of the cases.

    By the way, the quote from my blog post doesn't imply that we should be working around the framework, just that oftentimes we should understand how the framework works, and that most frameworks contain good examples of places where an abstraction must be removed to achieve performance.  Again, this doesn't imply that if you have a framework to do something, then before all other things you should find a way to work around that framework.

    I'm not sure this discussion benefits other readers, so if you'd like to continue it over email, you could use the contact form on my blog.
    Saturday, April 26, 2008 9:58 AM
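The Begin/End pairing described above can be sketched with FileStream, which exposes both a synchronous Read and a BeginRead/EndRead pair, so the APM leaves the synchronous path fully available. The file name below is illustrative.

```csharp
using System;
using System.IO;
using System.Text;

class ApmDemo
{
    static void Main()
    {
        File.WriteAllText("apm-demo.txt", "hello");
        var buffer = new byte[5];

        // Synchronous path: Read blocks the calling thread.
        using (var fs = new FileStream("apm-demo.txt", FileMode.Open))
            fs.Read(buffer, 0, buffer.Length);
        Console.WriteLine(Encoding.ASCII.GetString(buffer)); // hello

        // APM path: BeginRead returns an IAsyncResult; EndRead completes
        // the operation (here we just wait on it immediately).
        using (var fs = new FileStream("apm-demo.txt", FileMode.Open))
        {
            IAsyncResult ar = fs.BeginRead(buffer, 0, buffer.Length, null, null);
            int read = fs.EndRead(ar);   // blocks until the read finishes
            Console.WriteLine(read);     // 5
        }
    }
}
```

In real use, the AsyncCallback parameter (passed as null here) would carry the completion work instead of calling EndRead inline.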
  • Alright, I think we're talking past each other a bit. Thanks for your input though.
    Saturday, April 26, 2008 10:18 AM
  • You are talking about two fundamentally different aspects of threading.  In the .NET Framework, IAsyncResult and delegates are an abstraction around the operating system's support for overlapped I/O (check OVERLAPPED in the MSDN index).  Overlapped I/O can be quite difficult to get right if you program against the native API.  .NET makes it easy (well, easier) by using thread pool threads to signal the completion of an I/O request.

    Cooking your own is certainly possible by P/Invoking ReadFile() yourself.  But it is very unlikely you can do a better job than the framework is already doing.  You certainly can't make it any faster; it simply takes a set amount of time for the operating system to receive a packet from the network card or retrieve a sector from the disk.  Likewise, the number of cores in your CPU is not going to have any measurable effect.  The value of threading here is that you can have one thread that does nothing but wait for the I/O completion, without blocking execution in your main program.

    Creating your own threads (or calling QueueUserWorkItem) is an entirely different story.  If you can partition the work into multiple threads that can run concurrently without needing to take locks against each other, you can get a measurable benefit from added cores, reducing execution time to 1 / (number of cores) of the original if you do it exactly right.  "Exactly right" is the hard part.  Very hard.
    Saturday, April 26, 2008 12:10 PM
    Moderator
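The partitioning idea in that last paragraph can be sketched as follows: split an array into disjoint slices, sum each slice on its own thread, and only synchronize at the final merge. All names and sizes here are illustrative.

```csharp
using System;
using System.Threading;

class PartitionDemo
{
    static void Main()
    {
        int[] data = new int[1000];
        for (int i = 0; i < data.Length; i++) data[i] = 1;

        int workers = Environment.ProcessorCount;
        long[] partial = new long[workers];     // one disjoint slot per thread
        var threads = new Thread[workers];
        int chunk = data.Length / workers;

        for (int w = 0; w < workers; w++)
        {
            int id = w;                          // capture per-iteration copy
            int start = id * chunk;
            int end = (id == workers - 1) ? data.Length : start + chunk;
            threads[id] = new Thread(() =>
            {
                long sum = 0;                    // thread-local: no locks needed
                for (int i = start; i < end; i++) sum += data[i];
                partial[id] = sum;               // writes never overlap
            });
            threads[id].Start();
        }

        long total = 0;
        for (int w = 0; w < workers; w++)
        {
            threads[w].Join();                   // the only synchronization point
            total += partial[w];
        }
        Console.WriteLine(total);                // 1000
    }
}
```

Because the slices share no mutable state, the threads never contend; this is the lock-free partitioning that makes the near-1/(number of cores) speedup possible, and finding such a partition for real workloads is the "very hard" part.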