Workflow farm infrastructure

  • Question

  • We are planning on building a Workflow farm to use with an existing SharePoint 2013 farm. We have 7 servers in total split between 2 physical locations:

    Location 1: Server 1, Server 2, Server 3, Server 4

    Location 2: Server 5, Server 6, Server 7

    The requirement is to have high availability in case one data centre is lost entirely. This would be the perfect disaster recovery scenario, but at the same time we would like to utilise all of the servers rather than keep half of them running idle.

    Since a Service Bus farm can consist of either 1, 3, or 5 servers, the following configuration should work:

    Location 1: Server 1 (WF + SB), Server 2 (WF + SB), Server 3 (WF + SB), Server 4 (SQL, Clustered)

    Location 2: Server 5 (WF + SB), Server 6 (WF + SB), Server 7 (SQL, Clustered)

    What would happen if all servers in Location 1 went down? That would leave us with 2 WF servers (and therefore 2 Service Bus servers) plus 1 SQL server running. Would the whole Workflow farm stop working, since there would only be 2 Service Bus instances instead of the supported 1, 3, or 5? What would be the best way to utilise those 7 servers across 2 physical locations to get the most out of them?

    I assume that in case of a disaster there would be no need to restore the databases, as the clustered SQL Server would take care of that?
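    The worry above comes down to simple majority arithmetic: a quorum-based farm of n nodes stays consistent only while more than n // 2 nodes are online. A minimal sketch (assuming Service Bus uses plain majority quorum; `has_quorum` is an illustrative helper, not a real API):

    ```python
    # Assumption: majority quorum - a farm of n nodes needs more than
    # n // 2 nodes online to keep operating.

    def has_quorum(total_nodes: int, online_nodes: int) -> bool:
        """Return True if the surviving nodes still form a majority."""
        return online_nodes > total_nodes // 2

    # Proposed 5-node Service Bus farm: losing Location 1 leaves 2 of 5.
    print(has_quorum(5, 2))  # False - a majority of 5 requires 3 nodes
    # Losing only Location 2 would leave 3 of 5:
    print(has_quorum(5, 3))  # True
    ```

    Under that assumption, losing Location 1 (3 of the 5 SB nodes) would drop the farm below majority, whereas losing Location 2 would not.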

    Wednesday, January 18, 2017 11:32 AM

All replies

  • Hi Paul

    A SharePoint farm does not work across servers located in different data centres. This is based on a prerequisite of minimal (< 1 ms) latency between components of the farm. Only if your latency is below this threshold can you go for a stretched farm topology.


    Wednesday, January 18, 2017 12:11 PM
  • I appreciate your comment. But the thing that interests me the most is whether this setup will keep working when 3 out of 5 SB/WFM servers go down, as that would leave the farm with 2 SB servers, which is not a supported topology. Or does the count need to be 1, 3, or 5 servers only during the initial setup and configuration?

    Wednesday, January 18, 2017 1:47 PM
  • Hi Paul

    For an SB farm, it needs to be either 1 or 3 servers only.


    Thursday, January 19, 2017 4:45 AM
  • Mohit, I have to disagree with you. After the 1.1 update for Service Bus, it now supports 5 servers.

    Still, the same question applies to a three-server setup: if one server goes down, would the farm still work with 2 servers? Does it need to be 1, 3, or 5 servers only during installation and configuration, or at all times?

    • Edited by Paul Strupeikis Thursday, January 19, 2017 1:52 PM updated link
    Thursday, January 19, 2017 1:51 PM
  • Hi Paul

    I meant the server count from a DR point of view. I have not tried it with an even number of servers, so I'm not sure whether it will work.


    Friday, January 20, 2017 4:20 AM
  • Basically you need to be aware of these fundamental points:

    1. Service Bus attains quorum internally with three servers (you can have 5 servers with Service Bus 1.1).
    2. Once quorum is attained, you can lose one server in the WFM farm and still be fine, with no issues impacting the farm. This is because quorum was established earlier and the majority of the WFM servers are still online (in this case, two servers).
    3. If you lose another server, you no longer have a majority of the WFM servers online (with three servers, two are now down). At that point you lose quorum and enter an inconsistent state that will cause issues on the WFM farm.
    4. After you lose quorum on the WFM farm (the moment you have two servers down), the only way to recover it is to have all three servers online again.

    In summary, you need to guarantee that a three-node cluster always has at least two functional nodes.
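    The four points above can be sketched as a small state model (a rough illustration under the assumptions stated in those points, including point 4's requirement that all nodes must return before quorum is recovered; `QuorumCluster` is a hypothetical name, not a Service Bus API):

    ```python
    # Sketch of quorum behaviour for a 3-node WFM/Service Bus farm.
    class QuorumCluster:
        def __init__(self, size: int):
            self.size = size
            self.online = size
            self.lost_quorum = False

        def node_down(self) -> None:
            self.online -= 1
            if self.online <= self.size // 2:
                self.lost_quorum = True  # majority gone (point 3)

        def node_up(self) -> None:
            self.online = min(self.online + 1, self.size)
            # Per point 4: once quorum is lost, every node must
            # come back online before the farm recovers.
            if self.lost_quorum and self.online == self.size:
                self.lost_quorum = False

        @property
        def healthy(self) -> bool:
            return not self.lost_quorum and self.online > self.size // 2

    cluster = QuorumCluster(3)
    cluster.node_down()         # first server lost
    print(cluster.healthy)      # True - 2 of 3 is still a majority (point 2)
    cluster.node_down()         # second server lost
    print(cluster.healthy)      # False - quorum lost (point 3)
    cluster.node_up()           # one server returns; 2 of 3 online
    print(cluster.healthy)      # False - still inconsistent (point 4)
    cluster.node_up()           # all three servers back online
    print(cluster.healthy)      # True - quorum recovered
    ```

    Note the asymmetry the model captures: two of three servers is enough to *keep* quorum, but not enough to *regain* it once lost.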

    You can check the article below to get a better understanding of "Quorum Configurations" in a failover cluster.

    Please mark as helpful if this was helpful. Thanks & Regards, Jose

    Monday, January 23, 2017 10:14 AM