Does the 8 MB post-serialization object size limit apply to Windows Azure Caching (Preview) too?

    Question

  • I am aware that Windows Azure AppFabric Cache (Shared) supports objects only up to 8 MB in size, measured after serialization. I am wondering whether the same size limit also applies to Windows Azure Caching (Preview). If not, how can I put large objects (> 8 MB) in memory? I tried increasing maxBufferSize, but it is not helping. My other settings are as follows: securityProperties mode="None" protectionLevel="None".
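
    For reference, here is roughly that client setup in code form (a sketch assuming the Microsoft.ApplicationServer.Caching client API; the buffer values are illustrative, not the exact ones I tried):

        // Sketch: raising the transport buffer limits programmatically, the
        // code equivalent of the maxBufferSize attribute in the dataCacheClient
        // configuration section.
        DataCacheFactoryConfiguration config = new DataCacheFactoryConfiguration();
        config.TransportProperties = new DataCacheTransportProperties
        {
            MaxBufferSize = 50 * 1024 * 1024,      // raised well above the default
            MaxBufferPoolSize = 50 * 1024 * 1024
        };
        config.SecurityProperties =
            new DataCacheSecurity(DataCacheSecurityMode.None, DataCacheProtectionLevel.None);

        DataCacheFactory cacheFactory = new DataCacheFactory(config);
        DataCache cache = cacheFactory.GetDefaultCache();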

    Friday, July 13, 2012 10:58


All Replies

  • Hi Sameer,

    I don't think there is a per-item size limit in Windows Azure Caching (Preview), as long as you don't go over the overall/total memory limit. As you can read here:

    http://msdn.microsoft.com/en-us/library/windowsazure/hh914161.aspx

    No quotas or throttling

    Your application is the only consumer of the cache. There are no predefined quotas or throttling. Physical capacity (memory and other physical resources) is the only limiting factor.

    One easy test you can do, just to be sure, is to put a big byte array (e.g. 50 MB) in the cache and retrieve it again:

    Something like this:

        byte[] bytes1 = new byte[1024 * 1024 * 50]; // 50 MB byte array

        DataCacheFactory cacheFactory = new DataCacheFactory();
        DataCache cache = cacheFactory.GetDefaultCache();
        cache.Add("byteArray", bytes1);

        // Read the array back to confirm the round-trip worked.
        byte[] bytes2 = cache.Get("byteArray") as byte[];
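
    If the limit does apply, the Add call will throw rather than fail silently, so a variant of the same test wrapped in a try/catch makes the outcome visible (a sketch; the exact exception type surfaced may vary):

        try
        {
            cache.Add("byteArray", bytes1);
            Console.WriteLine("Stored {0} MB successfully.", bytes1.Length / (1024 * 1024));
        }
        catch (DataCacheException ex)
        {
            // An item over the post-serialization size limit typically ends up here.
            Console.WriteLine("Cache rejected the item: " + ex.Message);
        }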

    Hope this helps!


    Cheers, Carlos Sardo

    Friday, July 13, 2012 12:47
  • Hi Carlos,

    I tried the code snippet you gave and verified that it is NOT working for objects beyond 8 MB, both in the local emulator and on an Azure instance. I know that to get around this issue I can slice my object into multiple pieces smaller than 8 MB when storing it in the cache, and then reassemble it from those pieces when the app requests it back. However, the processing required for that is an unnecessary performance overhead, ultimately defeating the purpose of "caching". In my view it's a big downside of AppFabric Cache, and I don't think it is of any use to me. I will have to figure out some elegant caching solution on my own.
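
    For anyone reproducing this, the post-serialized size of an item can be approximated before caching it (a sketch using System.IO and BinaryFormatter; the cache client's own serializer may produce somewhat different sizes, so treat the result as an estimate):

        // Approximate how large an object will be once serialized.
        private static long GetSerializedSize(object item)
        {
            IFormatter formatter = new BinaryFormatter();
            using (MemoryStream ms = new MemoryStream())
            {
                formatter.Serialize(ms, item);
                return ms.Length;
            }
        }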

    Sameer

    Wednesday, August 1, 2012 10:59
  • Hi Sameer,

    You are using the AppFabric Cache service... which has known limitations mentioned here. One of them is:

    Note:
    Items are serialized before being added to the cache, and the serialized items are typically larger than the memory they occupy at runtime. The maximum size of a post-serialized object is 8 MB.

    My previous reply was about the new Windows Azure Cache (Preview), introduced in SDK 1.7, back in June!

    Note:
    These resource quotas do not apply to Windows Azure Caching (Preview) on Windows Azure roles. For more information, see the Windows Azure Caching (Preview) FAQ.

    You should definitely consider upgrading your solution to SDK 1.7 and making use of this new feature.

    Windows Azure Caching (Preview) supports the ability to host Caching services on Windows Azure roles. In this model, the cache is part of your cloud service. One role within the cloud service is selected to host Caching (Preview). The running instances of that role join memory resources to form a cache cluster. This private cache cluster is available only to the roles within the same deployment. There are two main deployment topologies for Caching (Preview): co-located and dedicated. Co-located roles also host other non-caching application code and services. Dedicated roles are only used for Caching. The following topics discuss these Caching topologies in more detail.
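
    For completeness, connecting to a role-hosted cache from a client role in the same deployment looks roughly like this (a sketch; "CacheWorkerRole" is a hypothetical name for the role hosting Caching (Preview), and the same thing is normally done via the autoDiscover element in the dataCacheClient configuration section):

        // Sketch: point the cache client at the role hosting the cache cluster.
        DataCacheFactoryConfiguration config = new DataCacheFactoryConfiguration();
        config.AutoDiscoverProperty = new DataCacheAutoDiscoverProperty(true, "CacheWorkerRole");

        DataCacheFactory factory = new DataCacheFactory(config);
        DataCache cache = factory.GetDefaultCache();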

    I am sorry for the misunderstanding here. Hope this helps!


    Cheers, Carlos Sardo

    • Proposed as answer by Carlos Sardo, Wednesday, August 1, 2012 11:51
    Wednesday, August 1, 2012 11:51
  • Apologies for using misleading words like "AppFabric Cache" in my previous reply; I was intending to talk in the context of "Windows Azure Caching (Preview)" only. So, just to reconfirm, I am facing the 8 MB size limitation issue with Windows Azure Cache (Preview), introduced in SDK 1.7.

    To reach a final conclusion on this, may I know whether you are able to store objects larger than 8 MB successfully using SDK 1.7?

    Wednesday, August 1, 2012 13:43
  • It would be great if someone could confirm this. It would be even better if appropriate Microsoft documentation confirming this 8 MB post-serialization object size limit for Azure Cache (Preview) could be pointed out; and if it isn't documented, will Microsoft please call this fact out in the associated documentation?
    Friday, August 17, 2012 7:49
  • I am experiencing the same problem, Sameer.

    Did you ever figure out a work-around?

    Wednesday, August 29, 2012 16:16
  • Hi Tom,

    Yes, there's a work-around, but TBH it's a sheer pain: it's overhead for the developer, for the underlying hardware, and for the end user in terms of poor user experience.

    Anyway, the work-around is to slice/split the object to be cached into smaller pieces of around 7 MB each and store those in the cache. In turn, you also have to maintain an equal number of keys, one per piece. When fetching the object back, you use the collection of keys to fetch the associated pieces and then reassemble the whole thing into a single object. Because of the explicit serialization/deserialization and the overall processing involved, the entire process becomes quite slow, but at this moment this is the only workaround I am aware of. In the first place, I would suggest you check whether you can avoid this altogether, i.e. try to store smaller objects in the cache if possible. Anyway, below is some code for what I just explained. HTH

        private static CacheEntry SplitDataTable(string DataTableName, DataTable table)
        {
            // Serialize the DataTable into a single byte array.
            IFormatter formatter = new BinaryFormatter();
            MemoryStream ms = new MemoryStream();
            formatter.Serialize(ms, table);
            byte[] b = ms.ToArray();

            Collection<byte[]> cacheKeys = new Collection<byte[]>();
            Collection<String> cacheKeyNames = new Collection<string>();
            int buffersize = 1024 * 1024 * 7; // 7 MB chunks, safely under the 8 MB limit
            int length = b.Length;
            byte[] cacheEntry = null;
            int count = 0;
            int cursorLocation = 0;
            ms.Position = cursorLocation;

            // Cut full-size chunks until less than one buffer's worth remains.
            while ((length - cursorLocation) > buffersize)
            {
                cacheEntry = new byte[buffersize];
                ms.Read(cacheEntry, 0, buffersize);
                cursorLocation = (int)ms.Position;
                cacheKeyNames.Add(DataTableName + "_" + count);
                cacheKeys.Add(cacheEntry);
                count++;
            }

            // The final, smaller chunk.
            cacheEntry = new byte[length - cursorLocation];
            ms.Read(cacheEntry, 0, (length - cursorLocation));
            cacheKeys.Add(cacheEntry);
            cacheKeyNames.Add(DataTableName + "_" + count);

            CacheDetails cd = new CacheDetails { CacheKeyValues = cacheKeys };
            CacheMetadata cm = new CacheMetadata { KeyCount = cacheKeys.Count, CacheKeyNames = cacheKeyNames };
            return new CacheEntry { MetaData = cm, SubKeys = cd };
        }

        private static DataTable JoinAndDeserialize(Collection<byte[]> cacheKeys)
        {
            IFormatter formatter = new BinaryFormatter();

            // Concatenate the chunks back into one stream, in key order.
            MemoryStream ms = new MemoryStream();
            int count = 0;
            while (count < cacheKeys.Count)
            {
                ms.Write(cacheKeys[count], 0, cacheKeys[count].Length);
                count++;
            }

            // Rewind and deserialize the reassembled bytes into a DataTable.
            ms.Position = 0;
            DataTable dt = (DataTable)formatter.Deserialize(ms);
            return dt;
        }
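
    To complete the picture, here is roughly how the pieces could be written to and read back from the cache (a sketch; CacheEntry, CacheMetadata and CacheDetails are the helper types used above, and CacheMetadata must itself be serializable for the second Put to work):

        // Store each chunk under its own key, plus the metadata under the table name.
        private static void PutDataTable(DataCache cache, string tableName, DataTable table)
        {
            CacheEntry entry = SplitDataTable(tableName, table);
            for (int i = 0; i < entry.MetaData.KeyCount; i++)
            {
                cache.Put(entry.MetaData.CacheKeyNames[i], entry.SubKeys.CacheKeyValues[i]);
            }
            cache.Put(tableName, entry.MetaData); // records how many chunks exist
        }

        // Fetch the chunks listed in the metadata and reassemble the DataTable.
        private static DataTable GetDataTable(DataCache cache, string tableName)
        {
            CacheMetadata cm = (CacheMetadata)cache.Get(tableName);
            Collection<byte[]> chunks = new Collection<byte[]>();
            foreach (string key in cm.CacheKeyNames)
            {
                chunks.Add((byte[])cache.Get(key));
            }
            return JoinAndDeserialize(chunks);
        }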
    

    If this answers your question, please Mark it as Answer. If this post is helpful, please vote as helpful.

    Thursday, August 30, 2012 6:52
  • I can confirm that the 8 MB limit applies to the Azure Cache preview as well as the Azure Shared Cache. Currently there is no better way than to split the objects, as Sameer has said in his post above.

    Sameer, you can raise a feature request by describing your scenario and use case for large objects in the cache. Please write to me at pragya dot agarwal delete this at microsoft dot com. We can consider it for the next release.

    • Proposed as answer by Carlos Sardo, Tuesday, September 4, 2012 18:50
    • Marked as answer by Sameer Awate, Wednesday, September 5, 2012 5:34
    Monday, September 3, 2012 7:05
  • Many thanks for the confirmation, Pragya. Regarding raising the feature request, I shall get in touch with you over email. Thanks, again.

    If this answers your question, please Mark it as Answer. If this post is helpful, please vote as helpful.

    Wednesday, September 5, 2012 5:34