NetTcpBinding forcibly closed SocketException

  • Question

  • I have a Windows service successfully hosting several WCF endpoints under the NetTcpBinding.  Two endpoints in particular are our highest-trafficked endpoints (40k req/day each, fairly evenly distributed).  For these two endpoints only, on some days we're seeing as many as 2% of requests throw the following exception:

    System.ServiceModel.CommunicationException: The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:02:00'. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
       at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
       at System.ServiceModel.Channels.SocketConnection.ReadCore(Byte[] buffer, Int32 offset, Int32 size, TimeSpan timeout, Boolean closing)

    I then set the following binding/throttling parameters on the ServiceHost side:
    NetTcpBinding.MaxConnections = 100
    NetTcpBinding.ListenBacklog = 100
    ServiceThrottlingBehavior.MaxConcurrentCalls = 100
    ServiceThrottlingBehavior.MaxConcurrentSessions = 100

    and the following binding parameters on the ChannelFactory side:
    NetTcpBinding.MaxConnections = 100
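    For reference, the settings above can be applied in code roughly like this (a sketch only; `MyService`, `baseUri`, and `serviceUri` are placeholders, not names from this thread):

    ```csharp
    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;

    // --- ServiceHost side: binding limits plus the throttling behavior ---
    NetTcpBinding serverBinding = new NetTcpBinding(SecurityMode.None);
    serverBinding.MaxConnections = 100;
    serverBinding.ListenBacklog = 100;

    ServiceHost host = new ServiceHost(typeof(MyService), baseUri); // placeholders
    ServiceThrottlingBehavior throttle =
        host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
    if (throttle == null)
    {
        // No throttling behavior configured yet; add one explicitly.
        throttle = new ServiceThrottlingBehavior();
        host.Description.Behaviors.Add(throttle);
    }
    throttle.MaxConcurrentCalls = 100;
    throttle.MaxConcurrentSessions = 100;

    // --- ChannelFactory side: only the binding-level connection cap ---
    NetTcpBinding clientBinding = new NetTcpBinding(SecurityMode.None);
    clientBinding.MaxConnections = 100;
    ```
    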

    This helped dramatically, but I still occasionally see this same exception (sometimes for as many as 10 requests per day). 

    Some other salient data:
    1) The endpoint code traps all exceptions from the point of entry, so I'm pretty sure the cause is within WCF.
    2) I have determined that, at the point the exception is thrown, there have never been more than 30 requests in process, and more often around 20 "active requests".
    3) I have also determined that, from the client side, the communication channel has almost always been open for only 5-10 seconds when the exception is thrown, so it is probably not timeout related.  (In some rare cases, less than once every couple of days, I will see a similar exception where the client reports that the communication channel has in fact been open for longer than the allotted 2-minute timeout, indicating that the endpoint was probably actually running for longer than 2 minutes for whatever reason; that case is understandable.)
    4) Our messages (both sent and received) are fairly large, so I have also set NetTcpBinding.MaxReceivedMessageSize and NetTcpBinding.ReaderQuotas.MaxStringContentLength to a very high number on both sides.

    Obviously this problem is somewhat difficult to replicate.  Any best practices relating to the binding/throttling parameters are welcome, as is any advice on getting more exception detail (I've tried setting up some diagnostic messageLogging under system.serviceModel), or any other help.

    Thanks.
    Wednesday, October 22, 2008 4:07 PM

Answers

  • I feel like GeoffO here: I have tried all the timeouts, throttling, quotas, making sure clients Close/Abort, netTcp and netPipe, everything a person can find by googling; I have a trace of the problem, and yet I always get:

    on server (message translated from Spanish):
    System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
       at System.Net.Sockets.Socket.BeginReceive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags, AsyncCallback callback, Object state)
       at System.ServiceModel.Channels.SocketConnection.BeginRead(Int32 offset, Int32 size, TimeSpan timeout, WaitCallback callback, Object state)

    on client (message translated from Spanish):
    System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
       at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
       at System.ServiceModel.Channels.SocketConnection.ReadCore(Byte[] buffer, Int32 offset, Int32 size, TimeSpan timeout, Boolean closing)

    .....   now working with basicHttp.....
    • Marked as answer by GeoffO Monday, September 28, 2009 5:40 PM
    Friday, August 14, 2009 12:36 PM

All replies

  • This is a pesky problem. It might be related to "MaxUserPort", which would explain why it is intermittent. http://technet.microsoft.com/en-us/library/cc758002.aspx
    Tuesday, December 30, 2008 7:13 AM
  • Because you are sending large messages, it could have something to do with maxItemsInObjectGraph.

    In the client config, try the following:

    <behaviors>
      <endpointBehaviors>
        <behavior name="MaxBehavior">
          <dataContractSerializer maxItemsInObjectGraph="2147483647"/>
        </behavior>
      </endpointBehaviors>
    </behaviors>

    <endpoint address=".."
              binding=".." bindingConfiguration=".."
              contract=".." name=".."
              behaviorConfiguration="MaxBehavior">
    </endpoint>

    Don't forget to add the behavior configuration to the endpoint.
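    If the proxy is built in code rather than config (as it is later in this thread), the equivalent setting lives on the contract's operation behaviors. A sketch, assuming an existing `ChannelFactory<IApp>` named `channFac` (a hypothetical variable name):

    ```csharp
    using System.ServiceModel.Description;

    // Raise maxItemsInObjectGraph on every operation of the contract
    // before the first channel is created from this factory.
    foreach (OperationDescription op in channFac.Endpoint.Contract.Operations)
    {
        DataContractSerializerOperationBehavior dcs =
            op.Behaviors.Find<DataContractSerializerOperationBehavior>();
        if (dcs != null)
            dcs.MaxItemsInObjectGraph = int.MaxValue;
    }
    ```
    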

    Tuesday, December 30, 2008 8:13 AM
  • I tried this, but it still throws a socket exception if the socket is overloaded.
    Wednesday, March 25, 2009 12:30 PM
  • I haven't had a chance to try out either the maxItemsInObjectGraph or the MaxUserPort suggestions, but I plan to very soon and will provide my own feedback on the effects (it takes some time for me to get permission to make production server changes ;)).

    But I also have uncovered some additional potentially pertinent information.  My WCF invocation code goes something like this:
    System.Diagnostics.Stopwatch watch = new System.Diagnostics.Stopwatch();
    watch.Start();

    NetTcpBinding binding = new NetTcpBinding(SecurityMode.None);
    /* set binding.SendTimeout, binding.ReceiveTimeout, binding.MaxConnections,
       binding.MaxReceivedMessageSize, and binding.ReaderQuotas.MaxStringContentLength
       with values from registry */

    // dispatch request to WCF component using svchost object
    // stored within app-specific AppDomain
    MyServiceHost svchost = (MyServiceHost)targetAppDomain.GetData("theSvcHost");
    ChannelFactory<IApp> channFac = new ChannelFactory<IApp>(binding, serviceUri);
    IApp appChannel = channFac.CreateChannel();
    int returnCode = appChannel.Invoke(inputxml, out outputxml);
    ((IChannel)appChannel).Close();

    watch.Stop();
    Trace.TraceInformation(String.Format("Invocation Trace - Exit {0}", watch.ElapsedMilliseconds));

    And my IApp.Invoke implementation looks like this:

        public int Invoke(XmlDocument inputXml, out XmlDocument outputXml) {
            Trace.TraceInformation("App Trace - Point of Entry");
            System.Diagnostics.Stopwatch appwatch = new System.Diagnostics.Stopwatch();
            appwatch.Start();
            int returnCode = 0;
            outputXml = null;
            try {
                /* do work */
            } catch (Exception) { /* catch exceptions */ }
            appwatch.Stop();
            Trace.TraceInformation(String.Format("App Trace - Exit {0}", appwatch.ElapsedMilliseconds));
            return returnCode;
        }

    So in my digging, I've found that my invocation code goes through periods where my "Invocation Trace - Exit" traces say the WCF transaction is taking much longer than my "App Trace - Exit" traces say.  For example, on the invocation side I will get 20000 ElapsedMilliseconds but inside the app I will get 100.  These occurrences, more often than not, do not throw the CommunicationException I mentioned originally; however, every time the CommunicationException occurs, one of two things will be true:
    1) both the invocation side and the app will report around 120000 ElapsedMilliseconds (which is our timeout value),
     or
    2) the app will report around 100 ElapsedMilliseconds, and the invocation side will report much more, anywhere from 10000 to 60000 ElapsedMilliseconds.

    As you can see from my code, I'm doing some things inside the invocation-side StopWatch that ideally I would do before starting it, namely reading items from the registry and creating my channel factory.  I don't see those calls throwing CommunicationExceptions, but I mention it in the interest of full disclosure in case this timing discrepancy turns out to be completely unrelated.
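    One note on the invocation code above (not something the original post claims to do): the usual guidance for session channels is to Abort rather than Close after a failure, because Close on a faulted channel throws a second exception and can leave the connection to be torn down by the transport. A sketch of that pattern:

    ```csharp
    IApp appChannel = channFac.CreateChannel();
    try
    {
        int returnCode = appChannel.Invoke(inputxml, out outputxml);
        ((ICommunicationObject)appChannel).Close();
    }
    catch (CommunicationException)
    {
        ((ICommunicationObject)appChannel).Abort(); // channel is unusable; don't Close
        throw;
    }
    catch (TimeoutException)
    {
        ((ICommunicationObject)appChannel).Abort();
        throw;
    }
    ```
    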
    Thursday, March 26, 2009 2:44 PM
  • I hate to bump such an old thread, but I have finally confirmed, after trying both the MaxUserPort and the maxItemsInObjectGraph suggestions, that the problem still exists and has not improved at all with these changes.  In fact, neither the frequency nor the characteristics of the problem have really changed much at all in our production environment since my initial post.

    I have also isolated the timing discrepancy I described in my previous post to the WCF communication specifically.  I'm still not sure whether the CommunicationExceptions are related, but it adds to my perception of inconsistent performance within WCF.  I fear that my only remaining option is to try to reproduce this problem in an environment where I can have full WCF service tracing captured, but I'm not sure how easily that will actually happen, despite my desperation to solve this anomaly.

    Thanks for the suggestions.

    In case anyone else is still trying to help figure this out:
    I have actually, over the course of the past 6 months, now seen (at least) 6 isolated instances of this problem spread across 3 of our collection of 5 dev/test servers.  So I may be able to set up the ServiceModel tracing and just leave it running; I just need to figure out how to start a new file periodically so I don't end up with one huge file.  I can't be sure when the problem will occur in our dev or test environment, or whether turning on tracing will affect the condition (I doubt it, but it's possible).

    At this point, it doesn't really seem to be caused by load per se, though more requests do increase incidence.  And something else I'm surprised I hadn't explicitly mentioned before: for a given WCF request/response, both the requester and the responder are always on the same machine.  I wouldn't be surprised to find out it was some unnecessary, occasionally slow network communication even though everything's on one machine.  If I ever find out what's causing this, you can be sure I will update this thread.

    Also, I know I've been seeing more and more people really utilizing WCF, so if anyone reading this has used WCF in a similar architecture (both ServiceHost and ChannelFactory on one machine), I would love to hear whether you've experienced similar problems or measured any specific performance.
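    For anyone wanting to set up the ServiceModel tracing mentioned here, a typical configuration looks like the following (the path and switch level are examples; rolling over to a new file periodically is not built in and requires a custom listener, such as the CircularTraceListener sample shipped with the WCF samples):

    ```xml
    <system.diagnostics>
      <sources>
        <source name="System.ServiceModel" switchValue="Warning, ActivityTracing">
          <listeners>
            <add name="svcTrace"
                 type="System.Diagnostics.XmlWriterTraceListener"
                 initializeData="c:\logs\service.svclog" />
          </listeners>
        </source>
      </sources>
      <trace autoflush="true" />
    </system.diagnostics>
    ```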
    Thursday, July 2, 2009 6:38 PM
  • The other configuration quotas, which default to conservative values to protect against denial-of-service attacks, should be looked at as well.
    Max received message size and max array length will cause the same communication exception. 
    Below I am showing Int32.MaxValue set on a binding. Adjust to your expected payload.

    <netTcpBinding>
      <binding name="largeTCPBinding" maxReceivedMessageSize="2147483647" receiveTimeout="07:00:00">
        <readerQuotas maxArrayLength="2147483647"/>
      </binding>
    </netTcpBinding>

    Justin
    Thursday, July 2, 2009 7:14 PM
  • Thanks for the reply Justin.  I have tried enlarging my maxReceivedMessageSize, maxArrayLength, maxStringContentLength, and my maxItemsInObjectGraph and no change was noticed.

    In any case, I have finally been able to capture an instance of the problem in a service trace.  Of course now I have an upset client, but at least the problem is somewhat reproducible :).

    The exception just says that a timeout was reached, but this request threw the exception long before my allotted two-minute timeout, just as my earlier posts indicated.  As I mentioned, my timeout is 2 minutes, and this exception occurs after only 9 or 10 seconds.  And for this instance of the problem we were actually executing around 20 (nearly) simultaneous requests to the target WCF appdomain.

    I think the problem is probably related to the TCP connection pool used by WCF (see http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/770ba6c2-cc19-4336-bc09-53d5750105d3/), but I'm having trouble finding a good document detailing how the different throttling parameters and TCP connection pool settings interact with each other.  I'd rather not just find one magic property that fixes this problem, but instead understand how best to control request processing overall.

    Anyone?
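    For what it's worth, the TCP connection pool settings referenced above are not exposed on NetTcpBinding itself; they live on the transport binding element and require a custom binding. A sketch (the specific values are illustrative, not recommendations):

    ```csharp
    using System;
    using System.ServiceModel;
    using System.ServiceModel.Channels;

    // Wrap a NetTcpBinding in a CustomBinding to reach the transport element.
    CustomBinding custom = new CustomBinding(new NetTcpBinding(SecurityMode.None));
    TcpTransportBindingElement tcp = custom.Elements.Find<TcpTransportBindingElement>();

    // The pool knobs live on ConnectionPoolSettings.
    tcp.ConnectionPoolSettings.MaxOutboundConnectionsPerEndpoint = 100;
    tcp.ConnectionPoolSettings.IdleTimeout = TimeSpan.FromMinutes(2);
    tcp.ConnectionPoolSettings.LeaseTimeout = TimeSpan.FromMinutes(5);
    ```
    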

    WCF Client side:
    <ApplicationData>
    <TraceData>
    <DataItem>
    <TraceRecord xmlns="http://schemas.microsoft.com/2004/10/E2ETraceEvent/TraceRecord" Severity="Error">
    <TraceIdentifier>http://msdn.microsoft.com/en-US/library/System.ServiceModel.Diagnostics.ThrowingException.aspx</TraceIdentifier>
    <Description>Throwing an exception.</Description>
    <AppDomain>MYSERVER.EXE</AppDomain>
    <Exception>
    <ExceptionType>System.ServiceModel.CommunicationException, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</ExceptionType>
    <Message>The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:02:00'.</Message>
    <StackTrace>
    at System.ServiceModel.Channels.SocketConnection.ReadCore(Byte[] buffer, Int32 offset, Int32 size, TimeSpan timeout, Boolean closing)
    at System.ServiceModel.Channels.SocketConnection.Read(Byte[] buffer, Int32 offset, Int32 size, TimeSpan timeout)
    at System.ServiceModel.Channels.DelegatingConnection.Read(Byte[] buffer, Int32 offset, Int32 size, TimeSpan timeout)
    at System.ServiceModel.Channels.ClientFramingDuplexSessionChannel.SendPreamble(IConnection connection, ArraySegment`1 preamble, TimeoutHelper&amp; timeoutHelper)
    at System.ServiceModel.Channels.ClientFramingDuplexSessionChannel.DuplexConnectionPoolHelper.AcceptPooledConnection(IConnection connection, TimeoutHelper&amp; timeoutHelper)
    at System.ServiceModel.Channels.ConnectionPoolHelper.EstablishConnection(TimeSpan timeout)
    at System.ServiceModel.Channels.ClientFramingDuplexSessionChannel.OnOpen(TimeSpan timeout)
    at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
    at System.ServiceModel.Channels.ServiceChannel.OnOpen(TimeSpan timeout)
    at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
    at System.ServiceModel.Channels.ServiceChannel.CallOpenOnce.System.ServiceModel.Channels.ServiceChannel.ICallOnce.Call(ServiceChannel channel, TimeSpan timeout)
    at System.ServiceModel.Channels.ServiceChannel.CallOnceManager.CallOnce(TimeSpan timeout, CallOnceManager cascade)
    at System.ServiceModel.Channels.ServiceChannel.EnsureOpened(TimeSpan timeout)
    at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
    at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs)
    at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
    at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)
    at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData&amp; msgData, Int32 type)
    at MYAPP.IApp.Invoke(XmlDocument&amp; xmlForm, XmlDocument xmlForm2)
    at MYAPP.AppBase.WcfInvoke(String app, String serviceUri, XmlDocument&amp; requestXml, XmlDocument requestXml2, List`1&amp; elapsedTimes)
    at MyRequestHandler.Dispatch(List`1 inputs, Int32&amp; returnCode)
    at MyRequestHandler.ReadComplete(IAsyncResult result)
    at System.Net.LazyAsyncResult.Complete(IntPtr userToken)
    at System.Net.ContextAwareResult.CompleteCallback(Object state)
    at System.Threading.ExecutionContext.runTryCode(Object userData)
    at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
    at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
    at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
    at System.Net.ContextAwareResult.Complete(IntPtr userToken)
    at System.Net.LazyAsyncResult.ProtectedInvokeCallback(Object result, IntPtr userToken)
    at System.Net.Sockets.BaseOverlappedAsyncResult.CompletionPortCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped)
    at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP)
    </StackTrace>
    <ExceptionString>System.ServiceModel.CommunicationException: The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:02:00'. ---&gt; System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
       at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
       at System.ServiceModel.Channels.SocketConnection.ReadCore(Byte[] buffer, Int32 offset, Int32 size, TimeSpan timeout, Boolean closing)
       --- End of inner exception stack trace ---</ExceptionString>
    <InnerException>
    <ExceptionType>System.Net.Sockets.SocketException, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</ExceptionType>
    <Message>An existing connection was forcibly closed by the remote host</Message>
    <StackTrace>
    at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
    at System.ServiceModel.Channels.SocketConnection.ReadCore(Byte[] buffer, Int32 offset, Int32 size, TimeSpan timeout, Boolean closing)
    </StackTrace>
    <ExceptionString>System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
       at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
       at System.ServiceModel.Channels.SocketConnection.ReadCore(Byte[] buffer, Int32 offset, Int32 size, TimeSpan timeout, Boolean closing)</ExceptionString>
    <NativeErrorCode>2746</NativeErrorCode>
    </InnerException>
    </Exception>
    </TraceRecord>
    </DataItem>
    </TraceData>
    </ApplicationData>

    WCF ServiceHost side:
    <ApplicationData>
    <TraceData>
    <DataItem>
    <TraceRecord xmlns="http://schemas.microsoft.com/2004/10/E2ETraceEvent/TraceRecord" Severity="Error">
    <TraceIdentifier>http://msdn.microsoft.com/en-US/library/System.ServiceModel.Diagnostics.ThrowingException.aspx</TraceIdentifier>
    <Description>Throwing an exception.</Description>
    <AppDomain>MYAPP</AppDomain>
    <Exception>
    <ExceptionType>System.ServiceModel.CommunicationException, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</ExceptionType>
    <Message>The socket was aborted because an asynchronous receive from the socket did not complete within the allotted timeout of 00:00:05. The time allotted to this operation may have been a portion of a longer timeout.</Message>
    <StackTrace>
    at System.ServiceModel.Channels.SocketConnection.ThrowIfClosed()
    at System.ServiceModel.Channels.SocketConnection.SetImmediate(Boolean immediate)
    at System.ServiceModel.Channels.SocketConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout)
    at System.ServiceModel.Channels.BufferedConnection.WriteNow(Byte[] buffer, Int32 offset, Int32 size, TimeSpan timeout, BufferManager bufferManager)
    at System.ServiceModel.Channels.BufferedConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout)
    at System.ServiceModel.Channels.DelegatingConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout)
    at System.ServiceModel.Channels.TracingConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout)
    at System.ServiceModel.Channels.ServerSessionPreambleConnectionReader.ServerFramingDuplexSessionChannel.OnOpen(TimeSpan timeout)
    at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
    at System.ServiceModel.Channels.CommunicationObject.Open()
    at System.ServiceModel.Dispatcher.ChannelHandler.OpenAndEnsurePump()
    at System.ServiceModel.Dispatcher.ChannelHandler.OpenAndEnsurePump(Object state)
    at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.WorkItem.Invoke2()
    at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.WorkItem.OnSecurityContextCallback(Object o)
    at System.Security.SecurityContext.Run(SecurityContext securityContext, ContextCallback callback, Object state)
    at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.WorkItem.Invoke()
    at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.ProcessCallbacks()
    at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.CompletionCallback(Object state)
    at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.ScheduledOverlapped.IOCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped)
    at System.ServiceModel.Diagnostics.Utility.IOCompletionThunk.UnhandledExceptionFrame(UInt32 error, UInt32 bytesRead, NativeOverlapped* nativeOverlapped)
    at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP)
    </StackTrace>
    <ExceptionString>System.ServiceModel.CommunicationException: The socket was aborted because an asynchronous receive from the socket did not complete within the allotted timeout of 00:00:05. The time allotted to this operation may have been a portion of a longer timeout. ---&gt; System.ObjectDisposedException: The socket connection has been disposed.
    Object name: 'System.ServiceModel.Channels.SocketConnection'.
       --- End of inner exception stack trace ---</ExceptionString>
    <InnerException>
    <ExceptionType>System.ObjectDisposedException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</ExceptionType>
    <Message>The socket connection has been disposed.
    Object name: 'System.ServiceModel.Channels.SocketConnection'.</Message>
    <StackTrace>
    at System.ServiceModel.Channels.SocketConnection.ThrowIfClosed()
    at System.ServiceModel.Channels.SocketConnection.SetImmediate(Boolean immediate)
    at System.ServiceModel.Channels.SocketConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout)
    at System.ServiceModel.Channels.BufferedConnection.WriteNow(Byte[] buffer, Int32 offset, Int32 size, TimeSpan timeout, BufferManager bufferManager)
    at System.ServiceModel.Channels.BufferedConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout)
    at System.ServiceModel.Channels.DelegatingConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout)
    at System.ServiceModel.Channels.TracingConnection.Write(Byte[] buffer, Int32 offset, Int32 size, Boolean immediate, TimeSpan timeout)
    at System.ServiceModel.Channels.ServerSessionPreambleConnectionReader.ServerFramingDuplexSessionChannel.OnOpen(TimeSpan timeout)
    at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
    at System.ServiceModel.Channels.CommunicationObject.Open()
    at System.ServiceModel.Dispatcher.ChannelHandler.OpenAndEnsurePump()
    at System.ServiceModel.Dispatcher.ChannelHandler.OpenAndEnsurePump(Object state)
    at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.WorkItem.Invoke2()
    at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.WorkItem.OnSecurityContextCallback(Object o)
    at System.Security.SecurityContext.Run(SecurityContext securityContext, ContextCallback callback, Object state)
    at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.WorkItem.Invoke()
    at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.ProcessCallbacks()
    at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.CompletionCallback(Object state)
    at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.ScheduledOverlapped.IOCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped)
    at System.ServiceModel.Diagnostics.Utility.IOCompletionThunk.UnhandledExceptionFrame(UInt32 error, UInt32 bytesRead, NativeOverlapped* nativeOverlapped)
    at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP)
    </StackTrace>
    <ExceptionString>System.ObjectDisposedException: The socket connection has been disposed.
    Object name: 'System.ServiceModel.Channels.SocketConnection'.</ExceptionString>
    </InnerException>
    </Exception>
    </TraceRecord>
    </DataItem>
    </TraceData>
    </ApplicationData>
    Wednesday, September 16, 2009 7:31 PM
  • I hope we get some feedback on this, but it seems we have stumbled upon a fix/workaround for this issue.

    When I first read your post, Enrique, I thought you were just experiencing problems similar to mine and that you simply intended to start trying basicHttp as a binding.  But then I did a bit of work to make it easy to change the binding and any of the multitude of configuration settings in our production environment without requiring code changes (why isn't the Binding object serializable????!!!??), and I discovered that with either the basicHttp binding or a CustomBinding built on the httpTransport element, I was no longer able to reproduce the error.  I still wasn't certain this would completely solve all my problems, but after running for almost three days in our production environment without any incidence of the problem, I am very optimistic.

    So, in short, the http transport seems to be more stable than either the tcp or named pipe transports.  I cannot imagine a reason; I would have expected http to behave very similarly to tcp.  I've seen other people having issues with "forcibly closed" connections, but their situations did not really seem to apply to my circumstances.  Does anyone have any ideas on why I can't use tcp or named pipes?

    Thanks Enrique!
    Monday, September 28, 2009 5:56 PM
  • Hi GeoffO, my setup is currently still working with basicHttp, but I have found this client proxy replacement:
    http://wcfproxygenerator.codeplex.com

    I'm using it in another project with TCP bindings and it's still premature to say whether it makes any difference, but I recommend you watch these videos about the WCF client proxy; they are very helpful regarding the issues you face on session-based bindings like tcp or pipes:
    http://wcfguidanceforwpf.codeplex.com

    The videos are the foundation of the proxy replacement.

    Thanks, and share if you find anything new :)
    Thursday, October 8, 2009 7:21 PM
  • Geoff,

    The error you are hitting is the five-second channel initialization timeout.  From your error message:

    The socket was aborted because an asynchronous receive from the socket did not complete within the allotted timeout of 00:00:05. The time allotted to this operation may have been a portion of a longer timeout.

    Unfortunately Microsoft does not provide a way to increase this timeout in the NetTcpBinding configuration.  You have to define a custom binding, or use some other predefined binding (as you wound up doing).  More details are available in this post:

    http://blogs.msdn.com/b/andreal/archive/2009/12/04/wcf-nettcpbinding-what-to-do-if-the-socket-did-not-complete-within-the-allotted-timeout-of.aspx

    I had the same problem and after defining a custom TCP binding with a longer channel initialization timeout, I have had no problems.
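    The custom-binding approach Dave describes can be sketched as follows (the 30-second value is an example, not a recommendation from his post):

    ```csharp
    using System;
    using System.ServiceModel;
    using System.ServiceModel.Channels;

    // Raise the 5-second channel initialization timeout, which
    // NetTcpBinding does not expose directly.
    CustomBinding custom = new CustomBinding(new NetTcpBinding(SecurityMode.None));
    TcpTransportBindingElement transport =
        custom.Elements.Find<TcpTransportBindingElement>();
    transport.ChannelInitializationTimeout = TimeSpan.FromSeconds(30);

    // Use 'custom' in place of the NetTcpBinding on both client and server.
    ```
    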

    Cheers,

    Dave

    • Proposed as answer by dpecora Thursday, August 12, 2010 11:53 PM
    Thursday, August 12, 2010 11:49 PM