Scalability for Hosting Multiple Instances in a Single Workflow Runtime

  • Question

  • Hi,

    I would like to know the maximum number of workflow instances that a single workflow runtime can handle at the same time. Does it depend only on the hardware of the server hosting the workflow runtime, or is there a maximum number of workflow instances that can be hosted in a single runtime?

     

     

    Wednesday, February 1, 2006 1:34 PM

Answers

  • Hi Akram,

         The maximum number of workflows that can run simultaneously in a single workflow runtime is the same as the number of CLR threads available to the process. Typically this is 25 for single-proc machines and 100 for dual-proc or server machines. You can control the number of instances that can run concurrently by passing the number to the DefaultWorkflowSchedulerService constructor. The suggested number for single-proc machines is 20.
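    For reference, here is a minimal hosting sketch of what that looks like in WF 3.0 (assuming a project referencing System.Workflow.Runtime; the cap of 20 is just the suggested single-proc value):

    ```csharp
    using System;
    using System.Workflow.Runtime;
    using System.Workflow.Runtime.Hosting;

    class Host
    {
        static void Main()
        {
            WorkflowRuntime runtime = new WorkflowRuntime();

            // Cap the number of workflow instances executing at once;
            // additional runnable instances wait until a slot frees up.
            runtime.AddService(new DefaultWorkflowSchedulerService(20));

            runtime.StartRuntime();
            // ... create and start workflow instances here ...
            runtime.StopRuntime();
        }
    }
    ```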

     

    Thanks,

    Srikanth.

    Wednesday, February 1, 2006 7:35 PM

All replies

  • OK, but what if I am developing a full-fledged enterprise server built on Windows Workflow Foundation? How is it possible to support over 500 running workflows at the same time, or maybe more? Should I use more than one runtime on different servers?

     

    Thursday, February 2, 2006 6:47 AM
  • Yes, different runtimes can be used for the scenario you describe. The maximum number of threads in a thread pool can also be changed from its default value of 25. The following links might be helpful.

    http://www.c-sharpcorner.com/Code/2003/June/ThreadPoolLimit25.asp

    http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dndotnet/html/progthrepool.asp
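    As those links explain, the CLR thread-pool ceiling can also be adjusted in code; a small sketch (the value 200 is just an example, and note that WF's DefaultWorkflowSchedulerService manages its own cap independently of this setting):

    ```csharp
    using System.Threading;

    class PoolConfig
    {
        static void Main()
        {
            int worker, io;
            // Inspect the current limits before changing them.
            ThreadPool.GetMaxThreads(out worker, out io);

            // Raise the worker-thread ceiling for the process.
            ThreadPool.SetMaxThreads(200, io);
        }
    }
    ```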

     

    Thanks,

    Srikanth.

     

    Thursday, February 2, 2006 5:20 PM
  • This is a major problem, isn't it? I would love to use workflow in my app, but a limit of 25 or even 100 is not acceptable. Using a separate thread for each workflow isn't going to work for scalability, is it?

    Marcus

    Sunday, February 5, 2006 8:36 AM
  • Well, it won't be a problem if you write your own persistence service. A workflow can be unloaded from memory when it is delayed or not running; I guess the most important part is to build a smart service that loads and unloads workflows when they are not being used. I am not sure if there is something in WWF that automatically triggers the load and unload functionality in the persistence service.

    Sunday, February 5, 2006 1:08 PM
  • Hi Akram,

        The persistence service already has the functionality to load and unload workflows that are idle. The runtime notifies you that an instance is idle (the WorkflowIdled event), and the persistence service can then unload the workflow. SqlWorkflowPersistenceService already loads and unloads instances this way when its "unloadOnIdle" flag is set to true.
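    As a sketch of wiring that up (assuming the standard WF 3.0 SQL persistence schema is installed; the connection string and time spans below are illustrative values, not recommendations):

    ```csharp
    using System;
    using System.Workflow.Runtime;
    using System.Workflow.Runtime.Hosting;

    class PersistentHost
    {
        static void Main()
        {
            WorkflowRuntime runtime = new WorkflowRuntime();

            // unloadOnIdle = true: idle instances are persisted to SQL and
            // evicted from memory, freeing their thread for other instances.
            string connectionString =
                "Initial Catalog=WorkflowStore;Data Source=.;Integrated Security=SSPI;";
            runtime.AddService(new SqlWorkflowPersistenceService(
                connectionString,
                true,                         // unloadOnIdle
                TimeSpan.FromMinutes(2),      // instance ownership duration
                TimeSpan.FromSeconds(10)));   // polling interval for expired timers

            runtime.StartRuntime();
        }
    }
    ```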

    Thanks,

    Srikanth.

    Sunday, February 5, 2006 9:43 PM
  • Hi Srikanth,

    Is there also some kind of queue, so that if the number of running workflows exceeds 25 (or whatever the maximum limit is), a workflow can be held in a queue until another workflow is unloaded from memory?

     

    Monday, February 6, 2006 3:32 PM
  • marcusp wrote:

    This is a major problem, isn't it? I would love to use workflow in my app, but a limit of 25 or even 100 is not acceptable. Using a separate thread for each workflow isn't going to work for scalability, is it?

    You'll have to do a little work with the persistence service, as the others have mentioned, but I think the design is still scalable. Take a look at IIS and ASP.NET: I think the default number of worker threads in the pool for handling web requests is 10 (I could be wrong about that), but the actual number of users those 10 threads can support is massive. They get away with this because pages can be executed and returned in just a few seconds, and requests are queued up to a certain point.

    WWF is just the framework; you may have to provide a bit of plumbing. For example, take a look at what your activities are doing. If you have activities that run for hours (and I'm not talking about wait periods, which the persistence service would deal with; I'm talking about actually chewing bits doing some operation), then perhaps you need to offload that work to another service that can be called from within the workflow, and then have that service call back into the workflow when the operation is completed. If you had a page flow within an ASP.NET web application, you wouldn't tell a page in that flow to do an operation that took hours. You would have that page kick the operation off by calling some other service, and then, when it completed, inform the user (by email, a progress page, etc.).

    If you find that your activities are actually not taking that long, but the idle periods are long, then definitely get the persistence service into your design (either the out-of-the-box version or your own). However, if the operations your activities perform really are long-running, I would look at some alternative service to perform the work, with the workflows simply calling into it.
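    One way to wire up the "call back into the workflow" part is WF 3.0's external data exchange mechanism. A sketch of the contract (the interface, method, and event names here are made up for illustration):

    ```csharp
    using System;
    using System.Workflow.Activities;

    // Hypothetical contract between a workflow and an external worker service.
    // The workflow invokes BeginLongOperation via a CallExternalMethodActivity,
    // idles (and can be persisted), and resumes when OperationCompleted is
    // raised and received by a HandleExternalEventActivity.
    [ExternalDataExchange]
    public interface IBatchService
    {
        void BeginLongOperation(Guid workflowInstanceId);

        event EventHandler<ExternalDataEventArgs> OperationCompleted;
    }
    ```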

    It will be interesting to hear the answer about whether the workflow runtime queues incoming requests when there are no available threads. If it does, is that queue persisted somewhere? Does it have a limit?

    Monday, February 6, 2006 5:08 PM
  • I do indeed have long-running processes, but I suspect that, in my case, the idle times will be long and the individual steps will be short-lived. Judging from the comments above, it looks like the persistence service should handle the load.

    I also suspect that the thread pool handling all this will just queue up tasks until worker threads become available. The delays shouldn't be too bad as long as the steps don't block. Unfortunately, if you get a rogue workflow with a faulty blocking step, it could eventually stop all the other workflows from running on that server instance as the available threads are gradually eaten up (blocked on the faulty step). On another system I had the pleasure of working on (Unix), we had an 'overseer' process that monitored the availability of the worker threads. It could kill a thread if it decided the thread was definitely misbehaving and threads were running short. This sort of drastic measure was deemed necessary because it was a fault-tolerant app that HAD to keep running.

    I guess it's all down to the plumbing!

    Marcus.

    Monday, February 6, 2006 6:57 PM
  • Hi Akram,

         The requests are queued, and the scheduler service already handles this, so nothing special needs to be done here.

    Thanks,

    Srikanth.

    Tuesday, February 7, 2006 6:47 AM
  • Hi Marcus,

         The same functionality can be achieved by adding a runtime service to the WorkflowRuntime. The service would do the work of the "overseer" process you described. And Mike was correct about moving a long-running task to a separate process that notifies the workflow after the task completes.

    Thanks,

    Srikanth.

    Tuesday, February 7, 2006 6:55 AM
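  • To make the "overseer as a runtime service" idea concrete, here is a skeleton (the timeout policy, the Sweep trigger, and thread-safety are all left out; class and field names are illustrative, not a prescribed design):

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Workflow.Runtime;
    using System.Workflow.Runtime.Hosting;

    // Illustrative watchdog: records when instances become busy and
    // terminates any that stay busy past a configurable deadline.
    public class OverseerService : WorkflowRuntimeService
    {
        private readonly TimeSpan maxBusyTime = TimeSpan.FromMinutes(30);
        private readonly Dictionary<Guid, DateTime> busySince =
            new Dictionary<Guid, DateTime>();

        protected override void Start()
        {
            base.Start();
            Runtime.WorkflowStarted += (s, e) =>
                busySince[e.WorkflowInstance.InstanceId] = DateTime.UtcNow;
            Runtime.WorkflowLoaded += (s, e) =>
                busySince[e.WorkflowInstance.InstanceId] = DateTime.UtcNow;
            Runtime.WorkflowIdled += (s, e) =>
                busySince.Remove(e.WorkflowInstance.InstanceId);
        }

        // Call periodically (e.g. from a timer) to reap misbehaving instances.
        public void Sweep()
        {
            var snapshot = new List<KeyValuePair<Guid, DateTime>>(busySince);
            foreach (KeyValuePair<Guid, DateTime> entry in snapshot)
            {
                if (DateTime.UtcNow - entry.Value > maxBusyTime)
                {
                    Runtime.GetWorkflow(entry.Key).Terminate("Exceeded busy-time limit");
                    busySince.Remove(entry.Key);
                }
            }
        }
    }
    ```

    The service would be registered with `runtime.AddService(new OverseerService())` like any other runtime service.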
  • Will WF 4.0 have the same maximum number of workflow instances within a single WF runtime as WF 3.0?

    Thursday, February 5, 2009 1:08 AM