Table Storage Service vs Windows Azure Caching

  • Question

  • My question is which should we use:

    Table Storage Service (Azure's NoSQL implementation)

    Windows Azure Caching (unstructured in-memory cache)


    We want to implement caching in Azure for two main reasons:

    1. speed up repetitive data access

    2. reduce stress on the database


    Here are the characteristics of the data we are planning to cache:

    1. Relatively small (1 - 100 kb)

    2. Specific to each customer

    3. Not private, but we don't really want random people navigating through our entire cache

    4. XML or JSON

    5. Consumed by C# (i.e. not linked to directly in the HTML)

    6. Most weeks the data will not change, although some days the data could change several times


    For our purposes, Table Storage appears better than Blob Storage (we did just implement Blob Storage for images, CSS, and JavaScript), and Windows Azure Caching appears better than Windows Azure Shared Caching (perhaps almost always better, since Shared Caching is mostly a legacy feature at this point).


    From what I can pick up by reading numerous articles online, it appears that Windows Azure Caching will be slightly faster but Table Storage will be slightly easier to debug.  Other than that, Table Storage and Windows Azure Caching appear to be very similar.  The programming API of each appears straightforward.  Compared to what we pay for cloud sites, the cost of either seems negligible.  Any advice?
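
    For illustration, a rough sketch of what reads and writes against each option could look like from C#, assuming the 2013-era Microsoft.WindowsAzure.Storage table client and the Microsoft.ApplicationServer.Caching DataCache client; the table name, cache name, key scheme, and entity shape are hypothetical, not taken from any real code.

        using System;
        using Microsoft.WindowsAzure.Storage;
        using Microsoft.WindowsAzure.Storage.Table;
        using Microsoft.ApplicationServer.Caching;

        // Hypothetical entity: one partition per customer, one row per cached item.
        public class CustomerDataEntity : TableEntity
        {
            public CustomerDataEntity() { }
            public CustomerDataEntity(string customerId, string dataKey)
            {
                PartitionKey = customerId;
                RowKey = dataKey;
            }
            public string Json { get; set; }
        }

        public class CustomerDataStore
        {
            private readonly CloudTable _table;
            private readonly DataCache _cache = new DataCache("default");

            public CustomerDataStore(string storageConnectionString)
            {
                var account = CloudStorageAccount.Parse(storageConnectionString);
                _table = account.CreateCloudTableClient().GetTableReference("CustomerData");
                _table.CreateIfNotExists();
            }

            // Table Storage: durable, survives role restarts, one HTTP call per read.
            public string GetFromTable(string customerId, string dataKey)
            {
                var retrieve = TableOperation.Retrieve<CustomerDataEntity>(customerId, dataKey);
                var entity = (CustomerDataEntity)_table.Execute(retrieve).Result;
                return entity == null ? null : entity.Json;
            }

            public void PutInTable(string customerId, string dataKey, string json)
            {
                var entity = new CustomerDataEntity(customerId, dataKey) { Json = json };
                _table.Execute(TableOperation.InsertOrReplace(entity));
            }

            // Azure Caching: in-memory on the role instances, faster, but volatile.
            public string GetFromCache(string customerId, string dataKey)
            {
                return (string)_cache.Get(customerId + ":" + dataKey);
            }

            public void PutInCache(string customerId, string dataKey, string json)
            {
                _cache.Put(customerId + ":" + dataKey, json, TimeSpan.FromHours(24));
            }
        }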


    Tuesday, March 12, 2013 1:07 AM

Answers

  • I moved this question to StackOverflow and got some good insight that is leading me to think that NoSQL is more of an option to take load off the DB (or replace a SQL database for some tasks) than to speed up the app à la caching.

    So the answer to my quandary is to implement Windows Azure Caching now and, if we need additional optimization in the future, to look at using NoSQL to reduce the load on SQL Server and/or using Memcached to optimize our Azure Caching implementation.
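
    A minimal sketch of the cache-aside approach described above, assuming the Microsoft.ApplicationServer.Caching DataCache client, might look like this; the cache name, key scheme, and database helper methods are hypothetical placeholders.

        using System;
        using Microsoft.ApplicationServer.Caching;

        public class CachedCustomerRepository
        {
            private readonly DataCache _cache = new DataCache("default");

            // Cache-aside: try the cache first, fall back to the database on a miss,
            // then populate the cache so the next read skips the database entirely.
            public string GetCustomerData(string customerId, string dataKey)
            {
                string cacheKey = customerId + ":" + dataKey;

                var cached = (string)_cache.Get(cacheKey);
                if (cached != null)
                    return cached;                                       // hit: no DB round trip

                string json = LoadFromSqlDatabase(customerId, dataKey);  // hypothetical DB call
                _cache.Put(cacheKey, json, TimeSpan.FromHours(24));      // time out stale copies eventually
                return json;
            }

            // Writes go to the database first, then refresh the cache entry.
            public void SaveCustomerData(string customerId, string dataKey, string json)
            {
                SaveToSqlDatabase(customerId, dataKey, json);            // hypothetical DB call
                _cache.Put(customerId + ":" + dataKey, json, TimeSpan.FromHours(24));
            }

            // Placeholders standing in for the real SQL data access layer.
            private string LoadFromSqlDatabase(string customerId, string dataKey) { return "{}"; }
            private void SaveToSqlDatabase(string customerId, string dataKey, string json) { }
        }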

    • Marked as answer by Brian Bober Tuesday, March 12, 2013 5:03 PM
    Tuesday, March 12, 2013 5:03 PM

All replies

  • Based on some additional discussions within our team we have some concerns about the Windows Azure Cache that probably don't affect the Table Storage service:

    1. If the VM is moved to a different server (by Microsoft, for load balancing or other reasons), is the in-memory cache moved intact?
    2. We are guessing that whenever we publish changes to the cloud it wipes out the existing in-memory cache.
    3. While the users rarely make changes to the cached data, when they do make changes it is likely that they will make multiple updates within seconds, and we are not sure how this will work with the cache spread across multiple nodes running web roles, especially with increased traffic (see the concurrency sketch after this list).
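
    One way concern #3 could be handled is with the DataCache API's optimistic concurrency; a minimal sketch follows, assuming the Microsoft.ApplicationServer.Caching client with a hypothetical cache name and key scheme. Version-checked writes surface conflicting updates from other web role instances so the caller can re-read and retry.

        using Microsoft.ApplicationServer.Caching;

        public class VersionedCacheWriter
        {
            private readonly DataCache _cache = new DataCache("default");

            // Read the current item with its version, then Put with that version.
            // If another web role instance updated the key in between, the Put fails
            // with a version mismatch and the caller can re-read and retry.
            public bool TryUpdate(string key, string newJson)
            {
                DataCacheItem current = _cache.GetCacheItem(key);
                if (current == null)
                {
                    _cache.Add(key, newJson);      // Add throws if another instance beat us to it
                    return true;
                }

                try
                {
                    _cache.Put(key, newJson, current.Version);
                    return true;
                }
                catch (DataCacheException ex)
                {
                    if (ex.ErrorCode == DataCacheErrorCode.CacheItemVersionMismatch)
                        return false;              // lost the race; caller decides whether to retry
                    throw;
                }
            }
        }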

    • Edited by Brian Bober Tuesday, March 12, 2013 2:30 AM Question Update
    Tuesday, March 12, 2013 2:29 AM
  • Hi,

    I don't think storage and Azure Caching are similar services that you need to choose between. For saving files or data, the storage service is of course the right choice. For your situation, just turn on the CDN for your Blob storage, which provides the functionality you require, such as speeding up repetitive data retrieval.

    http://www.windowsazure.com/en-us/develop/net/common-tasks/cdn/

    Thanks,


    QinDian Tang
    MSDN Community Support | Feedback to us
    Develop and promote your apps in Windows Store
    Please remember to mark the replies as answers if they help and unmark them if they provide no help.

    Tuesday, March 12, 2013 6:16 AM
  • Hi QinDian, 

    I have turned on Blob storage (which I mention in my question).  It doesn't seem that Microsoft intended Blob storage as the solution to my problem; hence their implementations of the Table Storage Service and Windows Azure Caching.
    Tuesday, March 12, 2013 3:09 PM
  • Based on some additional discussions within our team we have some concerns about the Windows Azure Cache that probably don't affect the Table Storage service:

    1. If the VM is moved to a different server (by Microsoft, for load balancing or other reasons), is the in-memory cache moved intact?
    2. We are guessing that whenever we publish changes to the cloud it wipes out the existing in-memory cache.
    3. While the users rarely make changes to the cached data, when they do make changes it is likely that they will make multiple updates within seconds, and we are not sure how this will work with the cache spread across multiple nodes running web roles, especially with increased traffic.

    Hi Brian,

    I wanted to reply with some additional information about these points even though the thread is marked as answered.

    For #1, if the shutdown of the role is due to a planned shutdown, such as patches or a rolling upgrade, then the items in the cache will be redistributed to the other nodes in the cache cluster, as long as there is capacity in the remaining nodes of the cache to hold the data from the node that is being shut down. This data move is done on a best-effort basis and depends on the CPU and network load at the time. If you have used the capacity planning guide (http://aka.ms/CacheCapacityPlanning) for setting up your cache cluster, and the load characteristics are not too different from the ones used in the capacity planning tool, then there is a high chance of all of the data being moved.

    For #2, it depends on the type of change made. If you are just deploying a new cscfg, then this won't have any impact on the cache. If there are any changes to a property of a named cache itself, then the cache is deleted and re-created and the items are lost. There is some additional information in this topic that covers details on the different configuration changes you can make to a running cache application: http://msdn.microsoft.com/en-us/library/windowsazure/jj835080.aspx

    For #3, I am not exactly sure of the scenario, but you can look at the capacity planning tool (http://aka.ms/CacheCapacityPlanning) and model different scenarios and cache loads to see what configuration would be needed to support that level of cache. If you provide more details I may be able to give more information to help answer this one.

    Thanks,

    Steve Danielson [Microsoft]
    This posting is provided "AS IS" with no warranties, and confers no rights.
    Use of included script samples are subject to the terms specified at http://www.microsoft.com/info/cpyright.htm

    Wednesday, March 20, 2013 3:28 PM