Size of message to be sent is larger than the maximum message size 8388608 specified in configuration.

  • Question

  • I'm getting the error message below when calling DataCache.Put:

    "Size of message to be sent is 13199417 bytes which is larger than the maximum message size 8388608 specified in configuration."

    I've read that it might have something to do with WCF and a binding setting, but I could be completely wrong about that.  If anyone can tell me where I can fix this, it would be greatly appreciated.  Thanks.


    • Edited by jborden13 Wednesday, November 7, 2012 6:00 PM
    Wednesday, November 7, 2012 6:00 PM

All replies

  • Hi

    Can you give more details about your question?

    What are you trying to do?

    This is the Azure forum; if your question is a WCF question, please post it in the WCF forum.

    Thanks.

    • Marked as answer by Dino He Wednesday, November 14, 2012 8:31 AM
    • Unmarked as answer by jborden13 Tuesday, December 4, 2012 9:24 PM
    Monday, November 12, 2012 1:32 AM
  • Hi,

    I'm trying to add a large data set to Azure cache, but when I do, I get that error message from the DataCache.Put method.  Thanks for the help.

    Thursday, November 15, 2012 5:37 PM
  • bump
    Thursday, November 22, 2012 7:14 PM
  • I also have the same issue. It happens when I try to upload a file to import data in my application (the upload is followed by a lengthy application process).

    This worked fine previously on Azure, but I recently had to upgrade to the new SDK (1.8) and client storage API (2.0) to solve another issue (which was also WCF-ish and cache-related: ErrorCode<ERRCA0017>:SubStatus<ES0006>:There is a temporary failure. Please retry later.)

    This is now completely broken, even though our code hasn't changed. Any idea where this could come from? Or how could I change the configuration, given that it looks like these are the co-located cache default settings, which I don't think I have access to?

    (Excuse my English.)
    Tuesday, November 27, 2012 9:26 AM
  • Can we unmark this thread as answered?

    I have the same issue, and it has nothing to do with WCF. The problem is that the new co-located Azure cache has a maximum object size of 8388608 bytes, and there is no configuration setting for this value (or at least I am not able to find one: http://msdn.microsoft.com/en-us/library/windowsazure/hh914132.aspx seems like pretty complete documentation).

    The easiest way to reproduce the issue is to add a byte array larger than 8388608 bytes to the cache, e.g.:

            // NUnit helper: round-trips a byte array of the given size through the cache.
            // On the co-located cache, the first assertion fails once sizeInBytes
            // exceeds 8388608 bytes, because Put throws a DataCacheException.
            private void PutAndGetLargeFile(int sizeInBytes)
            {
                var key = Guid.NewGuid().ToString();
                var value = new byte[sizeInBytes];
                Assert.DoesNotThrow(() => _cache.Put(key, value));
                byte[] item = null;
                Assert.DoesNotThrow(() => item = (byte[])_cache.Get(key));
                Assert.IsNotNull(item);
                Assert.AreEqual(sizeInBytes, item.Length);
            }
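
    For illustration, two hypothetical NUnit wrappers around the helper above (the size constants are assumptions; only the second test fails on the co-located cache):

            [Test]
            public void Put_ObjectUnderLimit_Succeeds()
            {
                // 8388608 bytes is the limit reported in the error; stay just under it.
                PutAndGetLargeFile(8388608 - 1024);
            }

            [Test]
            public void Put_ObjectOverLimit_Fails()
            {
                // Just over the limit: _cache.Put throws the ERRCA0039 DataCacheException,
                // so the first DoesNotThrow assertion in the helper fails.
                PutAndGetLargeFile(8388608 + 1024);
            }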

    Tuesday, December 4, 2012 12:46 PM
  • Unmarked.  Didn't realize the mod had marked it as an answer.  This is still outstanding for me as well.  MSFT, what's the deal?
    Tuesday, December 4, 2012 9:26 PM
  • bump
    Tuesday, December 11, 2012 4:23 AM
  • I am also experiencing this issue.
    Tuesday, December 11, 2012 5:06 PM
  • You can try enabling compression using DataCacheFactoryConfiguration.

    Programmatically, this can be done by setting IsCompressionEnabled = true. Link: http://msdn.microsoft.com/en-us/library/windowsazure/microsoft.applicationserver.caching.datacachefactoryconfiguration.iscompressionenabled
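
    A minimal sketch of the programmatic approach (assuming the Microsoft.ApplicationServer.Caching client from the Azure SDK; the factory wiring here is illustrative):

        using Microsoft.ApplicationServer.Caching;

        // Enable client-side compression before creating the cache client.
        var config = new DataCacheFactoryConfiguration();
        config.IsCompressionEnabled = true;
        var factory = new DataCacheFactory(config);
        DataCache cache = factory.GetDefaultCache();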

    Using app.config/web.config, use a setting similar to the one below:

    <dataCacheClients>
      <dataCacheClient name="default" isCompressionEnabled="true">
        <autoDiscover isEnabled="true" identifier="[cache role identifier]" />
        <!--<localCache isEnabled="true" sync="TimeoutBased" objectCount="1000" ttlValue="300" />-->
      </dataCacheClient>
    </dataCacheClients>
      

    However, note that the setting above only helps when the serialized object is smaller than 8 MB after compression.

    Can you tell us more about the scenarios in which you need larger objects to be put in the Azure cache? What type of objects are you storing? What are your required latencies for such objects?

    • Proposed as answer by unconnected4 Monday, March 18, 2013 10:39 AM
    Wednesday, December 12, 2012 6:56 AM
  • Hi Vinushree,

    Although I am not facing this issue at the moment, I wish to reply to your questions:

    Can you tell us more about the scenarios in which you need larger objects to be put in the Azure cache? What type of objects are you storing?

    Imagine a scenario where someone wishes to load a very big list of complex objects (e.g. Products, UserProfiles, etc.) and that list happens to be larger than 8, 10, 15 or even 30 MB (after being serialized, compressed, etc.). I agree that at this point it might make more sense to keep this in a Storage Blob rather than in the co-located cache (cluster), but I still feel I should be able to use the co-located cache service for this, even if it means paying a price for that (like increased latency).

    What are your required latencies for such objects?

    As low as possible! ;)

    I can understand why there needs to be a limit on the amount of data the cache service will handle, so that it keeps latencies low and replication fast throughout the cluster. On the other hand, I can understand that people might just want the option of raising this 8 MB limit at their own risk.



    Best Regards,
    Carlos Sardo

    Wednesday, December 12, 2012 12:51 PM
  • "Can you tell us more about your scenarios in which you need higher objects to be put in azure cache? what type of objects you are storing? What are your required latencies for such objects?"

    I have a large list of custom objects that the user accesses on a frequent basis.  Not really sure what else I can add that will be of value...

    Thanks.


    • Edited by jborden13 Thursday, December 13, 2012 8:44 PM typo
    Thursday, December 13, 2012 6:07 PM
  • Hi Vinushree,

    Is there any info on how enabling compression affects performance and the number of concurrent read/write threads?

    Monday, March 18, 2013 10:40 AM
  • Bump.

    I have the same issue; enabling compression doesn't fix it for me, as the payload can still be over the limit. Is there any way we can just set the max object size manually?

    Cheers


    Gareth Hewitt

    Monday, April 1, 2013 1:20 PM
  • We are getting the same issue, except in our case we are using the Azure Role Cache as our ASP.NET Session State provider, since our reporting solution relies on session state.  For large reports, we get the message "Size of message to be sent is 14021902 bytes which is larger than the maximum message size 8388608 specified in configuration."  This seems to be a VelocityPacketTooBigException:

    Call stack:

    Microsoft.ApplicationServer.Caching.VelocityWireProtocol.GetWritePacketBuffer()

    Microsoft.ApplicationServer.Caching.SocketClientChannel.Send()

    DataCacheException: ErrorCode<ERRCA0039>:SubStatus<ES0001>:Size of message to be sent is 14021902 bytes which is larger than the maximum message size 8388608 specified in configuration
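
    For context, a typical registration of the cache-backed session state provider looks roughly like this (a sketch based on the standard SDK template; the provider name and cacheName are assumptions):

        <sessionState mode="Custom" customProvider="AFCacheSessionStateProvider">
          <providers>
            <add name="AFCacheSessionStateProvider"
                 type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache"
                 cacheName="default" dataCacheClientName="default" />
          </providers>
        </sessionState>

    Every session write goes through the same cache client channel, so large session payloads hit the same 8388608-byte message cap.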


    Deva Wijewickrema

    Tuesday, April 23, 2013 8:02 PM
  • It would be great if someone could give more feedback about this.

    Best Regards,
    Carlos Sardo

    Tuesday, April 23, 2013 8:21 PM
  • I got this reply from the Azure support team:

    Hello Deva,

    How are you?  This is Rudy from the Microsoft Azure support team, and I will be working with you on this issue while my colleague Imtiaz is on vacation.  I have reviewed the forum post you mentioned below.  I believe there are two configuration properties you can try to work around the issue.  There is an isCompressionEnabled property under dataCacheClient that you can set to true; this will reduce the size of your cached objects (http://msdn.microsoft.com/en-us/library/windowsazure/gg185662.aspx):

    <dataCacheClient name="default" isCompressionEnabled="true">

    Also, there is a transportProperties setting called maxBufferSize that you can use to increase the max message size (http://msdn.microsoft.com/en-us/library/windowsazure/microsoft.applicationserver.caching.datacachetransportproperties_properties.aspx):

    <transportProperties connectionBufferSize="131072" maxBufferPoolSize="268435456"
                         maxBufferSize="8388608" maxOutputDelay="2"
                         channelInitializationTimeout="60000" receiveTimeout="600000"/>

    Let me know if you have any questions on this.

    Thanks!

    Microsoft Azure Support


    Deva Wijewickrema

    Wednesday, April 24, 2013 1:02 PM
  •     <dataCacheClient name="default" isCompressionEnabled="true">
          <autoDiscover isEnabled="true" identifier="foo" />
          <!--<localCache isEnabled="true" sync="TimeoutBased" objectCount="100000" ttlValue="300" />-->
          <transportProperties maxBufferSize="67108864"/>
        </dataCacheClient>

    Is how my web.config changed.
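
    For completeness, a sketch of the programmatic equivalent using DataCacheTransportProperties (the value mirrors the web.config above; treat the factory wiring as an assumption):

        using Microsoft.ApplicationServer.Caching;

        // Raise the client's max message size to 64 MB programmatically,
        // keeping the other transport defaults intact.
        var config = new DataCacheFactoryConfiguration();
        config.IsCompressionEnabled = true;
        DataCacheTransportProperties transport = config.TransportProperties;
        transport.MaxBufferSize = 67108864; // 64 MB, up from the 8388608-byte default
        config.TransportProperties = transport;
        DataCache cache = new DataCacheFactory(config).GetDefaultCache();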

    Deva Wijewickrema

    Wednesday, April 24, 2013 1:11 PM