Azure Table Storage Throughput

  • Question

  • Microsoft, I am not trying to beat up on you, but I just don't understand.

    A single Azure Table Storage partition is limited to 500 entities / second, and a Storage Account is limited to 5,000 entities / second.

    I ran a test on a P4 with a SATA disk and SQL Express: selecting 2,000,000 rows of a 3-column table, with write to file, ran in 30 seconds - that is 66,666 / second, which is 133 times faster than 500 / second.   A .NET read of that text file using StreamReader.ReadLine completes in 1 second, which is a full 4,000 times faster than 500 / second.   Azure Table Storage query by PartitionKey is 10,000 times slower than I would expect.  What am I missing?
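    For reference, the file read was essentially the loop below - a minimal sketch; the file name is illustrative, not my actual path.

        using System;
        using System.Diagnostics;
        using System.IO;

        class ReadBaseline
        {
            static void Main()
            {
                // Time a plain sequential read of the exported rows.
                // "rows.txt" stands in for the real export (2,000,000 lines).
                var sw = Stopwatch.StartNew();
                long count = 0;
                using (var reader = new StreamReader("rows.txt"))
                {
                    while (reader.ReadLine() != null)
                        count++;
                }
                sw.Stop();
                Console.WriteLine("{0} lines in {1} ms", count, sw.ElapsedMilliseconds);
            }
        }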


    joe developer

    Saturday, July 7, 2012 2:24 AM

All replies

  • Hi Joe Developer:

    EDIT: Correction - I was mistaken - "the 500 (per partition) and 5000 (per account) rates are storage operations per second" applies to queues, but not tables.

    Of course, you have figured out that you can get back more than one entity per operation. Tables can return up to 1000 entities at a time. (Not sure what your text file comment means.)
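    For example, here is a minimal sketch of paging through a partition - assuming the Microsoft.WindowsAzure.Storage table client; the connection string, table name, and partition key are all illustrative:

        using Microsoft.WindowsAzure.Storage;
        using Microsoft.WindowsAzure.Storage.Table;

        class PagingSketch
        {
            static void Main()
            {
                // Illustrative connection string (local storage emulator).
                var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
                var table = account.CreateCloudTableClient().GetTableReference("orders");

                var query = new TableQuery<DynamicTableEntity>().Where(
                    TableQuery.GenerateFilterCondition(
                        "PartitionKey", QueryComparisons.Equal, "2012-07"));

                TableContinuationToken token = null;
                do
                {
                    // The service returns at most 1,000 entities per response;
                    // the continuation token drives the next page.
                    var segment = table.ExecuteQuerySegmented(query, token);
                    token = segment.ContinuationToken;
                    foreach (DynamicTableEntity entity in segment.Results)
                    {
                        // process entity
                    }
                } while (token != null);
            }
        }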

    Partitioning data is an important aspect of storing data in Table Storage. A layout that makes queries awkward will result in queries that are slow. The goal is to make sure any query uses the partition and row keys for best perf, or at least partition key where possible.
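    For instance, the fastest query shape supplies both keys - a sketch, again assuming the Microsoft.WindowsAzure.Storage client; the key values are made up:

        using Microsoft.WindowsAzure.Storage;
        using Microsoft.WindowsAzure.Storage.Table;

        class PointQuerySketch
        {
            static void Main()
            {
                var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
                var table = account.CreateCloudTableClient().GetTableReference("orders");

                // PartitionKey + RowKey together pin down a single entity,
                // so the service can go straight to it.
                var retrieve = TableOperation.Retrieve<DynamicTableEntity>(
                    "customer-42", "order-0001");
                var entity = (DynamicTableEntity)table.Execute(retrieve).Result;
                // entity is null if no match was found.
            }
        }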

    EDIT: Removed comment on scalability.

    HTH,

    -Bill



    Saturday, July 7, 2012 3:07 AM
  • Thanks.  

    I don't know where I asserted anything about scalability.

    The text file was merely a data point on straight file IO.   Since it only opens the file once, I get that it may not be a fair comparison.   But it is not totally unfair, as SQL grabs pages from disk 2K at a time.

    As to 500 "operations" / second: that is inconsistent with the documentation I have read.  http://blogs.msdn.com/b/windowsazurestorage/archive/2010/11/06/how-to-get-most-out-of-windows-azure-tables.aspx very specifically states 500 entities / second / partition.  It even has an example of 50 batches (of 100 entities each) / second.    Could you point me to documentation of 500 operations / second / partition?   My understanding is that, from a billing perspective, 1000 rows from a single partition in a single query is 1 transaction.
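    For what it's worth, the batch shape from that example would look something like this - a sketch; the entity type and names are mine, not from the article (assuming the Microsoft.WindowsAzure.Storage client):

        using Microsoft.WindowsAzure.Storage;
        using Microsoft.WindowsAzure.Storage.Table;

        // Illustrative entity type.
        class Row : TableEntity
        {
            public string Payload { get; set; }
        }

        class BatchSketch
        {
            static void Main()
            {
                var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
                var table = account.CreateCloudTableClient().GetTableReference("rows");
                table.CreateIfNotExists();

                // One batch: at most 100 entities, and every entity in it
                // must share the same PartitionKey.
                var batch = new TableBatchOperation();
                for (int i = 0; i < 100; i++)
                {
                    batch.Insert(new Row
                    {
                        PartitionKey = "p1",
                        RowKey = i.ToString("d10"),
                        Payload = "..."
                    });
                }
                table.ExecuteBatch(batch);
            }
        }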

    Don't get me wrong, I like Azure.   I am just doing some design on moving some large tables from SQL to Storage Tables and what it would do to users' response time.

    And I meant 1,000 times slower than expected, not 10,000.


    joe developer




    Saturday, July 7, 2012 8:48 PM
  • Hi Joe Developer,

    (I edited my original response to avoid confusion for future readers. Sorry about the error. I should not post very late at night! My bad.)

    Windows Azure Tables target 500 entities per second per partition as you state.

    You are not measuring Table throughput, just comparing it to the published targets. Correct?

    And your baselines for comparison (SQL Express and text file) are based on running these on a single computer (meaning no network). Correct?

    -Bill

    Saturday, July 7, 2012 9:26 PM
  • @codingoutload My question is clear, and you have shed no light on it.   "Azure Table Storage Query by PartitionKey is 1,000 times slower than I would expect.  What am I missing?"

    The SQL numbers don't change on a network as it is an engine.  

    The published rate of ATS of 500 entities / second / partition does not include network.  

    Yes, ATS query performance is affected by the type of query.

    A perfect query processes 500 entities / second / PartitionKey.   Yes, ATS is massively scalable in that it could have 1 million unique PartitionKeys and the performance of a single partition is not affected.   Still, 500 entities / second / partition seems slow.  I know ATS is not a relational database.  But ATS does have one key (PartitionKey), and against that key I would expect performance comparable to a relational database, and yet ATS is not even close.   Is that just the price of being massively scalable?

    The published ATS performance target is the same for select, update, and insert.   Even if you batch an insert, the limit is still the same 500 entities / second / PartitionKey.     500 inserts / second / partition I can understand.  Clearly there is some throttling going on.  Is that to protect scalability?


    joe developer

    Sunday, July 8, 2012 1:20 PM
  • Hi Joe Developer,

    Re: "Azure Table Storage Query by PartitionKey is 1,000 times slower than I would expect.  What am I missing?"

    I think this post will explain some key reasons. My reason for asking whether you are testing on a single machine is to make sure I understand what you are comparing, since those assumptions helped form your expectations.

    Re: "The SQL numbers don't change on a network as it is an engine."

    While that is a true statement, it is not fair to compare raw performance on a single machine against a service accessed over a network. Having a network in the middle does matter. Just because SQL Express can handle 66,666 rows / second locally does not mean it can offer that throughput over a network.

    Re: "The published rate of ATS of 500 entities / second / partition does not include network."

    These targets do include delivery over a network. (Of course, this can easily become too sweeping a statement - Microsoft obviously does not control the whole network if you are accessing this data from outside its datacenter, but you will see this throughput within the datacenter. If you have a decent pipe, you will see it outside. The performance targets post for storage says: "The latency is typically around 100ms when accessing small objects (less than 10s of KB) from [same datacenter].")

    Re: "Yes ATS is massively scalable in that could have 1 million unique PartitionKeys and performance of a single partition is not effected."

    True. As long as the per-account limit of 5,000 entities per second is not exceeded.
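    To put numbers on it with the figures from this thread: 2,000,000 entities confined to a single partition are bounded by 500 entities / second, so 2,000,000 / 500 = 4,000 seconds (about 67 minutes); spread the same entities across enough partitions that the 5,000 / second account target becomes the bound, and it drops to 2,000,000 / 5,000 = 400 seconds (under 7 minutes). Multiple storage accounts raise the ceiling further.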

    Re: "But ATS does have one key (PartitionKey) ..."

    ATS actually has two keys (PartitionKey and RowKey). The storage account is sometimes used as a (crude) third.
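    A minimal sketch of how the two keys show up in an entity type - the class and property names are mine, assuming the Microsoft.WindowsAzure.Storage client:

        using Microsoft.WindowsAzure.Storage.Table;

        // Illustrative entity: every ATS entity is addressed by exactly
        // two keys, PartitionKey and RowKey.
        class Order : TableEntity
        {
            public Order() { }  // parameterless ctor required for deserialization

            public Order(string customerId, string orderId)
            {
                PartitionKey = customerId; // picks the partition
                RowKey = orderId;          // unique within that partition
            }

            public double Total { get; set; }
        }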

    Re: "But ATS does have one key (PartitionKey) and against that key I would expect comparable performance to a relational database and yet ATS is not even close.   Is that just the price of being massively scalable?"

    We have not yet agreed that ATS performance is "not even close" :-) - but I do agree that sometimes it makes sense to yield on absolute performance in favor of scalability. Sometimes also for resiliency, and higher availability, and such. (Read on for more on that.)

    Re: "Clearly there is some throttling going on.  Is that to protect scalability?"

    Short answer: yes. Longer answer...

    ATS is a service optimized for some balance of cost-efficiency, scalability, reliability, and performance. Think about ATS as having "enterprise" features compared to SQL Express. Every byte of data in ATS is written in triplicate before your REST service call responds. This is for high availability and resiliency to failure: if something fails (hard disk, NIC, memory, CPU, ..), no data is lost, and the service keeps working. Further, data is geo-replicated to another datacenter for DR purposes; this happens async so as to not impact performance. It is a multi-tenant service - this helps keep costs low, though at the same time requires that the service not allow any individual tenant to dominate usage, so that's why the scalability targets are clearly stated. 

    Re: "Yes ATS query performance is effected by the type of query."

    True. ATS is scalable, but not magically scalable - a good deal of thought needs to go into designing your data layout so that partitions are used appropriately to meet whatever perf and scalability requirements you have. Yes, a single partition supports "only" 500 entities per second. That is only a theoretical problem unless entities genuinely cannot be distributed across partitions to meet requirements.
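    One common way to spread load is to derive the PartitionKey from a hash of a natural id - an illustrative scheme, not something prescribed by the storage documentation:

        using System;

        class PartitionScheme
        {
            // Illustrative: bucket entities across N partitions so each
            // bucket gets its own 500 entities / second target. A production
            // version should use a deterministic hash rather than
            // String.GetHashCode, which is not guaranteed stable across
            // processes or framework versions.
            static string PartitionFor(string entityId, int partitionCount)
            {
                int bucket = (entityId.GetHashCode() & 0x7fffffff) % partitionCount;
                return "p" + bucket.ToString("d3");
            }

            static void Main()
            {
                Console.WriteLine(PartitionFor("order-12345", 100)); // e.g. "p042"
            }
        }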

    Further, the kind of query simulated in SQL Express is most likely very different from the realistic traffic that ATS may shine at. ATS is a NoSQL attribute-value store that returns up to 255 properties with each entity, which can (combined) be up to 1 MB in size. If your data is truly relational, a NoSQL database may be a painful choice; you may want to look into sharding (specifically the Federations feature in SQL Database (née SQL Azure)).

    Since I have no information about actual performance or scalability requirements that ATS does not meet - only comparisons to the local speed of SQL Express and reading a file - I do not know what other information to offer. But maybe you are just trying to figure out why it is the way it is. Hopefully this has helped. If you really want a closer look at the machinery, this is a great read:

    http://blogs.msdn.com/b/windowsazurestorage/archive/2011/11/20/windows-azure-storage-a-highly-available-cloud-storage-service-with-strong-consistency.aspx

    I hope this helps, Joe Developer. Now.. back to my Sunday. :-)

    Good luck,

    -Bill

    Sunday, July 8, 2012 2:56 PM
  • @codingoutload  No, that does not help.  Please let someone else try to answer.


    joe developer

    Sunday, July 8, 2012 5:32 PM
  • Hi - Just trying to help here...

    Yes, as stated in the blog post linked from the other posts on this thread, the scalability target is 500 entities per second per partition.  There are multiple reasons for this (thanks, Bill, for much of what you brought up), but that's the bottom line.  Are there further questions about application design or anything that you would like to discuss?  Let me know, and I can try to help with that.

    Thanks!
    -Jeff



    Monday, July 16, 2012 5:36 PM