Routing for HTTP triggered Azure Functions

  • Question

  • I have a problem with request routing to HTTP triggered Azure Functions after scaling out.

    Scenario:

    1. An Azure function in application A (queue triggered) calls an Azure function in application B (HTTP triggered, Consumption plan).

    2. With the default configuration, functions in application A execute concurrently on up to 16 threads. The application logic in these functions generates 10 concurrent HTTP calls to application B, so effectively a single instance of A generates 160 concurrent HTTP calls to B.

    3. Application B scales out.

    Problem: according to Application Insights, only one instance of Application B receives all the requests.
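    For reference, the way I checked the distribution in Application Insights was a query along these lines (assuming the default request telemetry, where `cloud_RoleInstance` identifies the host instance):

    ```kusto
    requests
    | summarize requestCount = count() by cloud_RoleInstance
    | order by requestCount desc
    ```

    With the problem present, one `cloud_RoleInstance` accounts for all the requests.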

    Over the last few days I've tried lots of even remotely relevant options: using HttpClient and RestSharp both as one shared instance and as an instance per call, configuring ServicePoint ConnectionLeaseTimeout and DnsRefreshTimeout, setting HttpClientHandler.UseProxy to false (one of the suggestions I found), setting the Arr-Disable-Session-Affinity request header to True, etc.

    No matter what I do, I end up with 1 instance of A and 8-12 instances of B, of which only 1 instance of B receives all the HTTP requests. If application A happens to scale out to, say, 3 instances, then up to 3 instances of B receive requests while the rest sit idle, though they promptly scale up if the load persists. If ServicePoint.ConnectionLeaseTimeout is set to, say, 10 seconds, then roughly every 10 seconds all requests go to a different instance — still one active instance at a time.

    Let's not challenge the approach of one function calling another via HTTP. It is a POC where application A represents an external system, and application B represents my application.

    I'm at the end of my rope here. Any suggestions?


    Regards, Dmitry


    • Edited by Demchuk Sunday, October 22, 2017 6:13 PM
    Friday, October 20, 2017 8:54 PM

Answers

  • After consulting with more experienced colleagues, the most likely cause was determined to be an error in resource provisioning. They have seen load balancers fail to pick up new instances of a Web App in the past. The solution is either to re-create the faulty resource (the LB or the Azure Function app) or to reset the resource properties, as I did:

    $Resource = Get-AzureRmResource -ResourceGroupName XXX -ResourceType Microsoft.Web/sites/config -ResourceName "YYY" -ApiVersion 2016-08-01
    $Resource.Properties.loadBalancing = 'LeastResponseTime'; # It is not necessary to change this value. This was my experiment, but when I reverted it back, all function host instances kept serving incoming requests.
    $Resource | Set-AzureRmResource -ApiVersion 2015-08-01 -Force

    In case you are interested in the HTTP client configuration, here is my code. Later I'll play with it a bit more and find out which settings are really necessary:

    private HttpClient CreateClient()
    {
        var client = new HttpClient(new HttpClientHandler() { UseProxy = false });
        client.BaseAddress = new Uri(uri);
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

        // Connection "keep alive". Apparently it affects the connection to the LB;
        // with this, further connections to function apps were distributed evenly.
        // Connections to a Web App (not a function) may behave differently.
        client.DefaultRequestHeaders.ConnectionClose = false;

        // Disable sticky sessions. Needed if cookie storage is configured for
        // HttpClient and the server is stateless.
        client.DefaultRequestHeaders.Add("Arr-Disable-Session-Affinity", "True");

        if (!String.IsNullOrEmpty(authorization))
        {
            // Authorization header for HTTP triggered Azure Functions.
            client.DefaultRequestHeaders.Add("x-functions-key", authorization);
        }
        return client;
    }

    ServicePoint global and local configuration. 

                ServicePointManager.DnsRefreshTimeout = 60 * 1000; // Tolerate DNS changes mid-flight.
                ServicePointManager.ReusePort = true; // Not sure if it is necessary.
                var sp = ServicePointManager.FindServicePoint(new Uri(uri));
                sp.ConnectionLeaseTimeout = 60 * 1000; // Close unused connections.
                sp.UseNagleAlgorithm = false; // More real-time performance.
                sp.MaxIdleTime = 5000; // Close idle connections.
    

    My Azure Function instances are deployed on the Consumption plan, so I only have 250 outbound ports available, part of which are used by the dashboard, queue bindings, etc., and I really need to conserve port resources.
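    To stay within that port budget, CreateClient above is meant to be called once and the result shared across invocations. A minimal sketch of how that can be wired up (the Lazy wrapper and method names here are illustrative, not part of the code above):

    ```csharp
    // Sketch: create the HttpClient once per function host and reuse it, so
    // outbound sockets are pooled instead of exhausting the port budget.
    private static readonly Lazy<HttpClient> SharedClient =
        new Lazy<HttpClient>(CreateClient);

    public static Task<HttpResponseMessage> CallB(string relativePath) =>
        SharedClient.Value.GetAsync(relativePath);
    ```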

    Hope this can help somebody.

    Appreciate any comments.


    Regards, Dmitry

    • Marked as answer by Demchuk Friday, October 27, 2017 12:27 AM
    Friday, October 27, 2017 12:26 AM

All replies

  • Changing loadBalancing from LeastRequests to WeightedTotalTraffic fixed the problem.

    Oddly, reverting it back to the original LeastRequests did not break it.


    Regards, Dmitry

    Sunday, October 22, 2017 10:07 PM
  • Hi Demchuk

       Could you please post your code for function apps A and B? I don't know how you implemented that in your scenario. It will be very helpful for other Azure users. Thanks.

    Best Regards,

    Michael


    MSDN Community Support

    Thursday, October 26, 2017 7:48 AM