Axum Implementation

    Question

  • Hi,

    After reading the Language Spec and Programmer's Guide, I remember a note stating that a user had 500,000 waiting agents. This sounds like the micro-thread idea behind Stackless Python - is the implementation along the same lines?

    I noticed in one of my tests that having an agent block to wait for a receive completely removes any (noticeable) processing in that agent, whereas, say, setting up a polling mechanism in the agent causes the agent to continue execution (obviously). Is there an under-the-hood scheduler/manager that lets agents share one thread, a la Stackless?

    I'm waiting for the day when some bright spark implements an ASP.NET server in this, akin to Yaws (the Erlang web server).

    One concept I haven't yet had time to get to grips with is the Interaction Point objects (OrderedInteractionPoint). Where do they come into play? I made a simple app where two agents constantly message each other with an incremented number and print it (a co-routine), purely by using send and receive statements.

    Sorry for the vague questions.

    Adam
    May 15, 2009 20:15

Answers

  • Hi Adam,

    The compiler performs a transformation similar to that of C# iterators (yield return). Briefly, when encountering an asynchronous call (such as receive), we return from the method, hoisting the locals into a method frame class; the rest of the method is transformed into a continuation that runs upon completion of the asynchronous method. If all the methods in the call chain are asynchronous, we return all the way to the Thread Pool, releasing the thread. We have a post that describes this in more detail here:

    http://blogs.msdn.com/maestroteam/archive/2009/05/05/asynchrony.aspx
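
    As an illustration only (the class and method names below are invented, and Axum's actual code generation differs in detail), the hoisting idea can be sketched in Python: the locals move into a frame object, and the code after the asynchronous call becomes a continuation that the completion invokes.

```python
# Hypothetical sketch of the transformation: a method of the form
#
#     int count(Source src) {
#         int total = 0;
#         var msg = receive(src);   // asynchronous point
#         total += msg;
#         return total;
#     }
#
# is split at the receive. The local "total" is hoisted into a frame
# object, and everything after the receive becomes a continuation.

class CountFrame:
    """Holds the hoisted locals of the original method."""
    def __init__(self, on_done):
        self.total = 0            # hoisted local
        self.on_done = on_done    # the caller's own continuation

    def start(self, source):
        # Code before the asynchronous call runs normally; then we
        # register the continuation and return, releasing the thread.
        source.register(self.continuation)

    def continuation(self, msg):
        # Code after the receive, resumed when the message arrives.
        self.total += msg
        self.on_done(self.total)

class Source:
    """A trivial completion source standing in for a message arrival."""
    def __init__(self):
        self.callbacks = []
    def register(self, cb):
        self.callbacks.append(cb)
    def post(self, msg):
        for cb in self.callbacks:
            cb(msg)

results = []
src = Source()
CountFrame(results.append).start(src)  # returns at once; nothing blocks
src.post(42)                           # completion runs the continuation
# results == [42]
```

    The key point is that `start` returns as soon as it has registered the continuation, so no thread is held while the message is outstanding.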

    I was also planning an even more detailed post that gets down to the nitty-gritty details of the implementation. Let me know if there is interest!

    OrderedInteractionPoint<T> is an unbounded buffer of T. You can use it, for example, for exchanging messages between multiple agents in the same domain. See our Dining Philosophers example.
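
    In spirit (this is a Python stand-in, not the Axum API), an ordered interaction point behaves like an unbounded FIFO buffer that agents in the same domain can send to and receive from:

```python
import queue
import threading

# A minimal stand-in for an unbounded, ordered buffer shared by agents
# in one domain: queue.Queue() with no maxsize never blocks on put and
# preserves send order.
point = queue.Queue()

def producer():
    for i in range(5):
        point.put(i)          # "send" never blocks: the buffer is unbounded
    point.put(None)           # sentinel: no more messages

received = []

def consumer():
    while True:
        msg = point.get()     # "receive" blocks until a message arrives
        if msg is None:
            break
        received.append(msg)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
# received == [0, 1, 2, 3, 4] -- order is preserved
```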


    Artur Laksberg - MSFT
    May 15, 2009 21:02
    Moderator
  • Adam,

    The feature that you're talking about, and which both Erlang and Stackless Python use, is a well-known and old technique called 'linked stacks.' Under this system, you do not allocate a contiguous block of memory to use for a program stack; instead, each method frame is allocated separately on the heap and linked to its caller's frame.
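
    A hypothetical sketch of the linked-stack idea (illustrative only, not Axum's runtime representation): each frame is a small heap object pointing at its caller, rather than a slot in one contiguous stack.

```python
# Each call allocates its frame on the heap and links it to the
# caller's frame; the "stack" is just this chain of objects.

class Frame:
    def __init__(self, name, caller=None):
        self.name = name
        self.caller = caller   # link to the caller's frame
        self.locals = {}       # hoisted locals would live here

def backtrace(frame):
    """Walk the chain of linked frames, innermost first."""
    chain = []
    while frame is not None:
        chain.append(frame.name)
        frame = frame.caller
    return chain

main = Frame("main")
worker = Frame("worker", caller=main)
recv = Frame("receive", caller=worker)

# A blocked agent keeps only this small chain of heap objects alive,
# not a whole reserved OS thread stack.
assert backtrace(recv) == ["receive", "worker", "main"]
```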

    Axum uses linked stacks optionally -- they also have a cost associated with them, so they are only beneficial when you are going to block on message receives a lot. Generally, you have to declare a method 'asynchronous' for it to use a linked frame. The exception is the agent constructor, which is meant to be the root frame (first invocation) of all Axum user code. Our intent is that all agent constructors will eventually use linked frames, but since we haven't done any work on the VS debug engine, linked frames currently make your code very hard to debug. Therefore, it is under compiler control: with the command-line compiler, use /async to make agent constructors asynchronous; in the IDE, set the 'Asynchronous Agent Constructor' property on each Axum project.

    The Axum runtime does use very few threads when you use linked frames -- in my example, I was running 500,000 blocked agents on 6 threads (the same number I got for just starting the application)! We're using the I/O thread pool for our scheduling, which is a very efficient implementation for the threads that we do need.
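
    A rough analogue of why blocked agents are cheap, using Python's asyncio rather than the Axum runtime: thousands of "agents" suspended on a wait each cost a small heap object, and all of them share a single thread.

```python
import asyncio

N = 10_000

async def agent(event, counter):
    # "Blocked on a receive": the coroutine is suspended, holding only
    # its frame object -- no OS thread is consumed while waiting.
    await event.wait()
    counter.append(1)

async def main():
    event = asyncio.Event()
    counter = []
    tasks = [asyncio.create_task(agent(event, counter)) for _ in range(N)]
    await asyncio.sleep(0)    # let every agent reach its wait point
    event.set()               # deliver the "message" to all agents
    await asyncio.gather(*tasks)
    return len(counter)

resumed = asyncio.run(main())
assert resumed == N           # all 10,000 agents resumed on one event loop
```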

    OrderedInteractionPoints are intended for communication between agents in the same domain: they are less expensive and less capable than channels; unlike channels, they don't enforce value semantics (typically there is no need).

    Niklas

    May 15, 2009 21:30
    Moderator

All replies

  • Oops, it seems Artur and I were both working on a response at the same time. Good thing our responses were consistent... :-)

    Niklas
    May 15, 2009 21:31
    Moderator
  • Hi,

    Thank you both for your replies - that clears it up considerably. I will have a look at that sample project.

    I don't know about anyone else, but I would definitely be interested in seeing how Axum is built, or at least in more information on how it behaves at a lower level.

    Adam
    May 16, 2009 7:10
  • Thanks for your instruction! This is what I'm looking for; I understand this part.
    January 28, 2011 23:46