Service Broker from behind the NAT

    Question

  • Let's assume the situation: we have an Initiator and a Target. The Target is behind the ISP's NAT and can't be reached from outside. So, when the Initiator sends a message to the Target, the Target will not be able to establish a backward connection and will not send an acknowledgement. The Initiator will retry and retry...
    Is it possible for the Target to send acknowledgements over the same connection?
    15 September 2005 12:04

Answers

  • If the target is behind a NAT (I assume you mean source NAT) and not reachable from the initiator, the initiator will not be able to send a message to the target at all.

    A more common scenario is when the initiator is behind a NAT, such that it can establish outbound TCP connections but not accept incoming connections. In this case, the initiator will be able to send messages to the target, but the target will not be able to reply (or even send back ACKs) to the initiator. The target arbitrates the connection by checking the remote TCP address against that of the route. If the addresses do not match, it will try to establish a separate connection to send back replies. This is done as a security measure to prevent IP spoofing and DoS attacks.
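    For context, the route the target checks against is declared with CREATE ROUTE. A minimal sketch of the target-side route back to the initiator (the service name, host, and port are hypothetical):

    ```sql
    -- Hypothetical route on the target instance. Before reusing an incoming
    -- connection for replies, the target compares the connection's remote TCP
    -- address against the ADDRESS of its route back to the initiator's service.
    CREATE ROUTE InitiatorRoute
    WITH SERVICE_NAME = '//Example/InitiatorService',
         ADDRESS      = 'TCP://initiator.example.com:4022';
    ```

    When the initiator sits behind a NAT, the connection's source address is the NAT device's address, not the route's address, so the comparison fails and the target attempts a separate outbound connection instead.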

    In order to solve the problem of firewalls and NAT, what would be required is a proxy broker that can be deployed as a gateway between private and public networks. Since version 1 of Service Broker was designed for B2B scenarios, we have not solved this problem yet.

    15 September 2005 19:02
  • Hello Dmitry,

    The Target must somehow be able to reach the Initiator. Even if we used the same connection to send back acknowledgements (which we actually do, when possible), this is a fundamental requirement given the nature of the problems Service Broker was designed to solve.

    The issue is not the acknowledgements, but the service replies. Most of the time, when sending a message to the Target, the Initiator expects some response back. In the trivial scenario, the request is handed to the target service, which does some processing on it and sends back a response, which is immediately sent back to the Initiator. This is the typical RPC scenario and its XML alter ego, the typical WS service. In this scenario, the Initiator connection is there to send back the response. However, this scenario has severe limitations:
    - The Initiator and the Target are tightly coupled. If the Target service is not up and running, the Initiator cannot send the request to it.
    - The Target can only process as many requests as the hardware on which it runs permits. That is, if the Target has reached the maximum number of requests it can process, it must reject new requests until one of the existing requests is completed.
    - The Target service cannot be taken down for maintenance and administration without taking down the business service it provides (Initiators cannot send new requests during these maintenance operations).

    Queuing is the answer that solves these problems:
    - The Initiator and Target are loosely coupled. Requests sent by the Initiator are queued on the Target queue, and the Target service can process them when it starts running.
    - The Target can logically process more requests than is physically possible. When the maximum processing capacity is reached, new requests are queued and the service will get to them later. The Initiator sees no difference.
    - The Target service can be taken down; the incoming requests will continue to be queued up, and the Target will get to them when it is restarted.
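    The send-and-continue pattern described above looks roughly like this in T-SQL (the service, contract, and message type names are hypothetical):

    ```sql
    -- The Initiator enqueues a request and continues with other work;
    -- no reply is awaited on this connection.
    DECLARE @handle UNIQUEIDENTIFIER;

    BEGIN DIALOG CONVERSATION @handle
        FROM SERVICE [//Example/InitiatorService]
        TO SERVICE '//Example/TargetService'
        ON CONTRACT [//Example/RequestContract];

    -- The message lands on the Target's queue even if the Target service
    -- is down or at capacity; it is processed whenever the Target gets to it.
    SEND ON CONVERSATION @handle
        MESSAGE TYPE [//Example/Request] ('<request>...</request>');
    ```

    The eventual reply arrives asynchronously on the Initiator's queue, independently of the connection that carried the request.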

    You see that the fundamental issue here is that queuing breaks the tight coupling between the Initiator and the Target: it separates the Response from the Request. The Initiator must send the request and then continue with its business as usual, and the response will eventually be sent by the Target. Async vs. sync.
    Now I hope it is clear why the Target MUST be able to send the response back to the Initiator even if the connection no longer exists.
    - There might be a long delay between the request and the response. It is unreasonable to impose a requirement that forces the Initiator to keep the connection open until the Target has time to process its request. It is just as infeasible to impose on the Target the requirement to keep connections open with all Initiators until their requests are processed.
    - One must assume errors on a network; the connection might break, and this shouldn't be a reason for the Target to be unable to send back replies.
    - The Initiator or the Target can be stopped and restarted; that's not a reason to lose requests and/or replies.

    I hope it is now clear that the requirement to be able to contact the Initiator back is a fundamental need in order to build loosely coupled distributed systems. Relying on the original connection to post back an HTTP response just doesn't do it...

    Some of you might notice by now that this explanation leaves out the majority of Service Brokers hosted in SQL Server instances on the corporate intranet exchanging messages with an instance outside this intranet. How can they be addressed, if the machine is not reachable from outside? Easy: there must be one SQL Server instance that acts as a gateway between the intranet and the outside. It is physically deployed on the border, it listens on an internet IP address, and it is able to forward messages to any machine in the intranet. That's why the CREATE/ALTER ENDPOINT statements accept the MESSAGE_FORWARDING = ENABLED clause!
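    As a sketch, the endpoint on such a border instance might be created like this (the endpoint name, port, and forward size are illustrative):

    ```sql
    -- On the gateway instance at the network border: accept Service Broker
    -- connections from outside and forward messages to instances inside
    -- the intranet instead of delivering them locally.
    CREATE ENDPOINT BrokerGateway
        STATE = STARTED
        AS TCP (LISTENER_PORT = 4022)
        FOR SERVICE_BROKER (
            AUTHENTICATION = WINDOWS,
            MESSAGE_FORWARDING = ENABLED,
            MESSAGE_FORWARD_SIZE = 10  -- MB of forwarded messages to buffer
        );
    ```

    Routes on the gateway then determine which intranet instance each forwarded message is delivered to.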

    HTH,
    ~ Remus

    15 September 2005 19:09
    Moderator

All Replies

  • I'm sorry for the misprint, of course, I mean source NAT.

    Many thanks to you and Remus for the detailed answer!
    16 September 2005 8:28
  • Hello,

    A more common scenario is when the initiator is behind a NAT, such that it can establish outbound TCP connections but not accept incoming connections. In this case, the initiator will be able to send messages to the target but the target will not be able to reply (or even send back ACKs) back to the initiator. The target arbitrates the connection by checking remote TCP address with that of the route. If the address does not match, it will try to establish a separate connection to send back replies. This is done as a security measure to prevent IP spoofing and DoS attacks.

    We have the same scenario and are using full security. Why can't the authenticated dialog connection from an initiator that uses full security be used to send back replies from the target?

    In order to solve the problem of firewalls and NAT, what would be required is a proxy broker that can be deployed as a gateway between private and public networks. Since version 1 of the Service Broker was designed for B2B scenarios we have not solved this problem yet.

    This is a bad solution, since 90% of the NAT routers in our production network are based on hardware or Linux systems. Only enabling SB port forwarding on the routers partially solves the problem...
    • Suggested as Answer by Mr.lin 27 April 2009 17:15
    24 April 2009 12:57
  • Siargiej,

    I believe Remus' reply should answer your first question.
    24 April 2009 15:40
    Moderator
  • Rushi,

    This is exactly what our environment is. How do you create a proxy broker?

    Many thanks in advance

    VK

    09 March 2012 1:51