Can someone explain this cache exception when resetting a cache object's timeout on access?


  • Hello,

    I'm occasionally seeing strange AppFabric cache access issues. Usually, the exception claims that a cache server was unavailable, and usually it happens when a robot (GoogleBot, for example) is accessing the app. I'm not too concerned about those ... they seem fairly rare ... maybe only once or twice per day. This new one does look strange: I'm not sure I've seen it before. Here's the code, which simply pulls a value that was stored in the cache under a GUID key.

    Public Shared Function Value(ByVal context As HttpContext, ByVal guid As String) As Integer
      Try
        Dim defaultCache As DataCache = AppHTTPModule.cacheFactory.GetDefaultCache()
        defaultCache.ResetObjectTimeout(guid, TimeSpan.FromMinutes(20))
        Value = CType(defaultCache(guid), Hashtable)("Value")
      Catch ex As DataCacheException
        'Log exception here
        Value = 0
      End Try
    End Function

    It threw this exception on the line that resets the timeout:

    Microsoft.ApplicationServer.Caching.DataCacheException: ErrorCode<ERRCA0016>:SubStatus<ES0001>:The connection was terminated, possibly due to server or network problems or serialized Object size is greater than MaxBufferSize on server. Result of the request is unknown. ---> System.TimeoutException: The socket was aborted because an asynchronous receive from the socket did not complete within the allotted timeout of 00:00:40. The time allotted to this operation may have been a portion of a longer timeout. ---> System.Net.Sockets.SocketException: The I/O operation has been aborted because of either a thread exit or an application request
       --- End of inner exception stack trace ---
       at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)
       at System.ServiceModel.Channels.FramingDuplexSessionChannel.EndReceive(IAsyncResult result)
       at Microsoft.ApplicationServer.Caching.WcfClientChannel.CompleteProcessing(IAsyncResult result)
       --- End of inner exception stack trace ---
       at Microsoft.ApplicationServer.Caching.DataCache.ThrowException(ResponseBody respBody)
       at Microsoft.ApplicationServer.Caching.DataCache.ExecuteAPI(RequestBody reqMsg, IMonitoringListener listener)
       at Microsoft.ApplicationServer.Caching.DataCache.InternalResetObjectTimeout(String key, TimeSpan newTimeout, String region, IMonitoringListener listener)
       at Microsoft.ApplicationServer.Caching.DataCache.<>c__DisplayClass68.<ResetObjectTimeout>b__67()
       at Microsoft.ApplicationServer.Caching.DataCache.ResetObjectTimeout(String key, TimeSpan newTimeout)
       at Console.UserData.SiteKey(HttpContext context) in C:\SELLREX\SellRexAzureProject\Console\App_Code\BE_Misc.vb:line 31

    Line 31: defaultCache.ResetObjectTimeout(guid, TimeSpan.FromMinutes(20))

    I can't discern from that language what I've done wrong (if anything). Does anyone know what it means ... is it just a rare cache access error?

    I'm strongly favoring a table-backing scheme for cache data. If the cache fails, either for this type of exception or the occasional "cache server unavailable" exception, I can just go to table storage and get the data from there as a back-up.


    Luke Latham, CEO & Chairman
    SellRex Corporation

    Thursday, March 15, 2012 8:28 AM


All replies

  • Increase the receive-operation and connection timeouts on the DataCacheFactoryConfiguration for your DataCacheFactory.
    You are probably trying to pull a large object from the cache, and the request cannot serialize and return the object before the request timeout is reached.

    You need the RequestTimeout property on the DataCacheFactoryConfiguration, which you pass into the DataCacheFactory when you create the cache factory. MSDN information:

    You can find the needed information here. See chapter 9:

    This will solve the issue only if the reason for the timeouts is that the data cannot be retrieved within the default timeout.
    A smart thing to do with HTTP services is to always use a retry policy. If you get a timeout exception, just retry the operation. If it fails again, you might have a real issue. But as you know, the network can sometimes cause trouble with routing, routers, packet loss and so forth.
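    For reference, setting that timeout might look roughly like this. This is only a sketch against the AppFabric client API; the one-minute value is an arbitrary example, not a recommendation:

    ' Sketch: raise the request timeout before creating the factory.
    ' The one-minute value is only an example.
    Dim config As New DataCacheFactoryConfiguration()
    config.RequestTimeout = TimeSpan.FromMinutes(1)
    Dim cacheFactory As New DataCacheFactory(config)
    Dim cache As DataCache = cacheFactory.GetDefaultCache()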
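    A retry policy along those lines could be sketched like this. RetryGet, the attempt count, and the backoff delay are all illustrative choices, not part of any AppFabric API:

    ' Sketch of a simple retry policy for transient cache errors.
    ' RetryGet and its constants are hypothetical, not an AppFabric API.
    Public Shared Function RetryGet(ByVal cache As DataCache, ByVal key As String) As Object
        Const maxAttempts As Integer = 3
        For attempt As Integer = 1 To maxAttempts
            Try
                Return cache.Get(key)
            Catch ex As DataCacheException
                If attempt = maxAttempts Then Throw
                Threading.Thread.Sleep(500 * attempt) ' simple linear backoff
            End Try
        Next
        Return Nothing
    End Function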

    Be nice to nerds ... Chances are you'll end up working for one!

    • Edited by Robbin Cremers Thursday, March 15, 2012 10:29 AM
    • Marked as answer by SellRex Thursday, March 15, 2012 5:17 PM
    Thursday, March 15, 2012 10:07 AM
  • Robbin,

    Thank you! Yes, that sounds like it's going to work for me. I'm also aware of the transient fault handling capabilities ... I tested them back when the Customer Advisory Team released them. I think the transient fault handling later moved from the CAT into one of the Azure frameworks, and I had temporarily removed it from the app when it left that old CAT framework. I'll bring it back into play ASAP. These two steps should clear up my issue. The serialized Hashtable I'm pulling from the cache only has about 6 or 8 entries in it ... the wording of the exception makes more sense to me now based on your tip.

    Thanks again for your help --


    Thursday, March 15, 2012 5:17 PM
  • One other thing occurs to me looking at the exception: It claims that "receive from the socket did not complete within the allotted timeout of 00:00:40" ... That's 40 seconds, right? That seems like quite a long time. I'm going to go ahead and increase the timeout to a full minute with:

    Dim config As New DataCacheFactoryConfiguration()
    config.TransportProperties.ReceiveTimeout = New TimeSpan(0, 1, 0)
    cacheFactory = New DataCacheFactory(config)

    However, I agree with you that I should bring the transient fault handling system back into play. I also think that my idea for table-backing my cache data is worth some testing. If the cache were to totally fail, I could still serve the request by just hitting table storage for the data ... relatively slow, yes ... but it should provide a nice backup system.
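    That fallback could be sketched roughly as follows. TableStorageLookup is a hypothetical stand-in for whatever table-storage query would actually be used, not a real API:

    ' Sketch: serve from cache, fall back to table storage on cache failure.
    ' TableStorageLookup is a hypothetical placeholder.
    Public Shared Function ValueWithFallback(ByVal guid As String) As Integer
        Try
            Dim cache As DataCache = AppHTTPModule.cacheFactory.GetDefaultCache()
            Return CInt(CType(cache(guid), Hashtable)("Value"))
        Catch ex As DataCacheException
            ' Cache unavailable: slower, but the request can still be served.
            Return TableStorageLookup(guid)
        End Try
    End Function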


    Thursday, March 15, 2012 5:38 PM