SignalR not releasing Concurrent Connections on Server

  • Question

  • User-2074101539 posted

    Hi - just wondered if anybody could help.

    We have a very simple SignalR implementation in our web app. Many of the users still use IE8 (which may be relevant) and we are finding that the web server ends up with lots and lots of concurrent connections - as users move from page to page, the old connection isn't being dropped as we would expect. What this means is that the server gets swamped as it never releases the connections, so we have to keep rebooting it (twice a day!).

    Does anybody have a solution (or even a way of dropping the expired connections manually)?

    Cheers,

    Steve.

    Tuesday, June 14, 2016 8:02 AM

All replies

  • User61956409 posted

    Hi Steve,

    Welcome to ASP.NET forum.

    Firstly, SignalR supports Microsoft Internet Explorer version 8:

    http://www.asp.net/signalr/overview/getting-started/supported-platforms

    Secondly, each client connecting to a hub passes a unique connection id to the server, and at any time a user could have more than one connection to the SignalR application. For example, a user who is connected through multiple devices or more than one browser tab would have more than one connection id. So many connections may be open against the SignalR server at once.
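
    The point above about one user holding several connection ids can be sketched server-side. This is a hypothetical example (the hub name and the anonymous fallback are mine, not from the original post) of tracking live connection ids per user in a SignalR 2 hub:

    ```csharp
    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR;

    // Hypothetical hub: records which connection ids each user currently
    // holds, illustrating that one user (several tabs or devices) maps to
    // several connection ids at the same time.
    public class DemoHub : Hub
    {
        // user name -> set of live connection ids
        private static readonly ConcurrentDictionary<string, HashSet<string>> Connections =
            new ConcurrentDictionary<string, HashSet<string>>();

        public override Task OnConnected()
        {
            var user = Context.User != null ? Context.User.Identity.Name : "anonymous";
            var ids = Connections.GetOrAdd(user, _ => new HashSet<string>());
            lock (ids) { ids.Add(Context.ConnectionId); }
            return base.OnConnected();
        }

        public override Task OnDisconnected(bool stopCalled)
        {
            var user = Context.User != null ? Context.User.Identity.Name : "anonymous";
            HashSet<string> ids;
            if (Connections.TryGetValue(user, out ids))
            {
                lock (ids) { ids.Remove(Context.ConnectionId); }
            }
            return base.OnDisconnected(stopCalled);
        }
    }
    ```

    Logging the size of each set is also a cheap way to check whether "leaked" connections ever receive their OnDisconnected call.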

    Thirdly, if a connection is lost, the server waits for the client to reconnect, so the SignalR connection doesn't go away immediately. You could try changing the timeout settings:

    http://www.asp.net/signalr/overview/guide-to-the-api/handling-connection-lifetime-events#timeoutkeepalive
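
    For reference, the timeout settings described at that link are set on GlobalHost.Configuration at application startup. A minimal sketch for SignalR 2, assuming a Global.asax host; the values are illustrative defaults, not recommendations:

    ```csharp
    using System;
    using Microsoft.AspNet.SignalR;

    public class Global : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            // How long the server keeps a lost connection in the
            // "reconnecting" state before raising Disconnected and
            // releasing it (default 30 seconds).
            GlobalHost.Configuration.DisconnectTimeout = TimeSpan.FromSeconds(30);

            // For transports without keep-alive (i.e. long polling, which
            // IE8 falls back to): how long a poll connection stays open
            // before the client must issue a new one (default 110 seconds).
            GlobalHost.Configuration.ConnectionTimeout = TimeSpan.FromSeconds(110);

            // Keep-alive ping interval for the other transports. Set it
            // after DisconnectTimeout, and keep it at most one third of
            // that value.
            GlobalHost.Configuration.KeepAlive = TimeSpan.FromSeconds(10);
        }
    }
    ```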

    Best Regards,

    Fei Han

    • Marked as answer by Anonymous Thursday, October 7, 2021 12:00 AM
    Wednesday, June 15, 2016 6:15 AM
  • User-2074101539 posted

    Thanks for the response.

    > Thirdly, the server could wait for the client to reconnect if a connection is lost, so the SignalR connection doesn't go away immediately. You could try to change Timeout settings.

    This might be the most significant element of your post. We do find that, in some situations, when a page with SignalR is navigated away from, the connection that page held is never dropped; however, we also see many connections being dropped within a few seconds. The new page, if it uses SignalR, always seems to get a new connection id (I've certainly not observed any connection re-use, as far as I can tell).

    We are seeing the number of concurrent connections grow at a rate of around 10 per minute, which means that the server soon fills up and starts responding with 503 errors, unable to serve pages.

    We haven't found a way of forcibly closing the connections, so we are having to reboot the server twice a day to avoid it dying completely.

    If I had to put money on it, I would say that either there's a bug in the IE8 implementation of the long polling, or our environment, which has a form of gateway proxy, is causing the problem. We have been able to re-create a similar issue without the gateway, but it wasn't anywhere near as bad as in our live environment.

    Cheers,

    Steve.

    Thursday, June 16, 2016 2:32 PM
  • User1138507886 posted

    Did anyone find a solution for TCP connections running out?

    Running .NET Core 2.1 and SignalR 1.1.0 in Azure Web Apps.

    We had 20 servers with 7 GB memory each, so I thought we were safe.

    We were also seeing a lot of 404s (around 20k over 6 hours) on requests like:
    .../bingoHub?id=CunxrOqa3AIAM93-q_WTJQ

    During high load we noticed 503 errors. After some digging I found this description:

    High TCP Socket handle count - High TCP Socket handle count was detected on the instance RD0003FFDBC11A. During this period, the process dotnet.exe of site MessageService-App with ProcessId 5644 had a maximum open handle count of 2071.

    HTTP Server Errors from FrontEnd only detected

    Description: Front End in Azure App Service is a layer-seven load balancer, acting as a proxy, distributing incoming HTTP requests between different applications and their respective Workers. Web Workers are the backbone of the App Service scale unit and they run your application code. The table below shows all the errors that were logged on the FrontEnd only.

    | HttpStatus | HttpSubStatus | Win32Status | Errors | Description |
    |-----------:|--------------:|------------:|-------:|-------------|
    | 503        | 28            | 0           | 2084   |             |
    | 502        | 5             | 38          | 60     | WebSocket   |
    | 502        | 3             | 87          | 20     |             |
    | 502        | 5             | 64          | 12     | WebSocket   |
    | 502        | 5             | 1229        | 2      | WebSocket   |

    --- UPDATE ---

    Found this: https://docs.microsoft.com/en-us/archive/msdn-magazine/2017/february/azure-inside-the-azure-app-service-architecture#network-port-capacity-for-outbound-network-calls

    The maximum connection limits are the following:
    
    1,920 connections per B1/S1/P1 instance
    3,968 connections per B2/S2/P2 instance
    8,064 connections per B3/S3/P3 instance
    64K max upper limit per App Service Environment

    So my 20 P2 instances could in theory handle 20 × 3,968 = 79,360 connections, but the 64K upper limit caps that.
    I have more users than that running my service, so I guess I need to rethink this.

    Monday, December 23, 2019 9:55 PM