Why does performance on Azure storage vary significantly from run to run

    Question

  • Why does performance on Azure storage vary significantly from run to run? Nasuni published performance results for Amazon S3, Azure storage, and Rackspace. The report shows that the throughput of Azure storage ranges between 30 MB/s and 190 MB/s and appears highly dependent on the time of day the test was run, while Amazon S3 gets 270 MB/s consistently. Does anyone have any insights into this?

    Wednesday, May 23, 2012 16:19

Answers

  • Hi,

    When you mention Windows Azure Storage, I assume you mean the Windows Azure Blob Storage service, not the Table or Queue services. For any single blob, the target throughput is up to 60 MB/s. The total throughput for a whole storage account may therefore be higher or lower, depending on how many blobs are accessed concurrently and on the network bandwidth between Azure Blob Storage and the client.

    There is a tool from Microsoft Research that you can use to test Windows Azure Storage throughput: http://research.microsoft.com/en-us/downloads/5c8189b9-53aa-4d6a-a086-013d927e15a7/default.aspx. Also see the following blog post about Windows Azure Storage abstractions and scalability targets: http://blogs.msdn.com/b/windowsazurestorage/archive/2010/05/10/windows-azure-storage-abstractions-and-their-scalability-targets.aspx.
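    As a rough illustration of how these limits interact, the aggregate throughput can be modeled as the minimum of the competing caps: the per-blob target times the number of concurrent blobs, the account-level target, and the client's own network bandwidth. This is a back-of-envelope sketch, not a service guarantee; the account-level target and link speeds in the example calls are illustrative numbers, not figures from this thread.

    ```python
    PER_BLOB_TARGET_MBPS = 60.0  # per-blob target cited in this thread

    def aggregate_throughput(n_blobs, client_bandwidth_mbps, account_target_mbps):
        """Estimate total MB/s when n_blobs are read/written concurrently.

        The result is whichever cap binds first: per-blob parallelism,
        the client's network link, or the account-level target.
        """
        if n_blobs <= 0:
            return 0.0
        parallel_cap = n_blobs * PER_BLOB_TARGET_MBPS
        return min(parallel_cap, client_bandwidth_mbps, account_target_mbps)

    # One blob can never exceed its own target, even on a fast link:
    print(aggregate_throughput(1, client_bandwidth_mbps=1000, account_target_mbps=3000))  # 60.0
    # Five blobs in parallel scale up until another limit kicks in:
    print(aggregate_throughput(5, client_bandwidth_mbps=1000, account_target_mbps=3000))  # 300.0
    # A slow client link dominates regardless of parallelism:
    print(aggregate_throughput(5, client_bandwidth_mbps=100, account_target_mbps=3000))   # 100.0
    ```

    This also explains the run-to-run variance in the original question: when shared limits (network path, account load) change with time of day, the binding cap changes, and measured throughput moves with it.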


    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.

    Thanks & Regards,
    Alex
    Microsoft Online Community Support



    Friday, May 25, 2012 05:42
  • Hi - thanks for the question!

    Using multiple blocks in parallel is a great way to get maximum throughput, and that is the approach we've based the 60 MB/s scalability target on. So while you will be able to exceed that figure when uploading to multiple blobs in parallel, you shouldn't expect to go much faster for a single blob. For more information, including important details on access patterns that allow for better throughput, I highly recommend the second link Alex sent: Windows Azure Storage Abstractions and their Scalability Targets.
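    The parallel-block flow described above can be sketched as follows. The Put Block (`?comp=block`) and Put Block List (`?comp=blocklist`) operations are from the Blob REST API; the chunking helper, the 4 MB block size, and the zero-padded block-ID scheme are illustrative choices, and the network calls themselves are only outlined in comments.

    ```python
    import base64

    BLOCK_SIZE = 4 * 1024 * 1024  # 4 MB per block (an illustrative choice)

    def split_into_blocks(data, block_size=BLOCK_SIZE):
        """Split a payload into (block_id, chunk) pairs for Put Block calls.

        Block IDs must be base64-encoded and the same length for every
        block in a blob, so a zero-padded sequence number is encoded here.
        """
        blocks = []
        for i in range(0, len(data), block_size):
            block_id = base64.b64encode(f"{i // block_size:08d}".encode()).decode()
            blocks.append((block_id, data[i:i + block_size]))
        return blocks

    # Sketch of the upload flow (network calls omitted):
    #   1. for each (block_id, chunk):
    #        PUT <blob-url>?comp=block&blockid=<block_id>
    #      -- these Put Block requests can run on parallel threads
    #   2. PUT <blob-url>?comp=blocklist with the ordered list of block IDs
    #      -- Put Block List commits the blob in one step

    data = b"x" * (10 * 1024 * 1024)  # 10 MB of dummy payload
    blocks = split_into_blocks(data)
    print(len(blocks))  # 3 blocks: 4 MB + 4 MB + 2 MB
    ```

    Only step 1 parallelizes; the commit in step 2 is a single small request, which is why per-blob throughput still has a ceiling even with many concurrent block uploads.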

    Let us know if you have any further questions. I see you split the compression question into another thread, so I won't go into that here.


    -Jeff

    Wednesday, May 30, 2012 10:04

All replies

  • Hi,

    I am trying to involve someone familiar with this topic to look further into this issue. There may be some delay.

    Appreciate your patience.

     

    Please mark the replies as answers if they help, or unmark them if they do not. If you have any feedback about my replies, please contact msdnmg@microsoft.com.

    Microsoft One Code Framework

    Thursday, May 24, 2012 05:05
  • Thanks for the reply.

    Yes, by Azure storage service I meant the Azure Blob Storage service. My goal is to maximize data upload throughput from an on-premises server to Azure using any technology available. The approaches I am considering are: 1. concurrency; 2. compression; 3. a wire-level protocol change, such as UDP.

    I'm trying to understand the 60 MB/s target throughput for a single blob. If I use the block blob API to upload multiple blocks (for one blob, say a very big file), can I achieve a throughput higher than 60 MB/s?

    Do you know anything about when the Azure Blob Storage service will support compression or UDP?

    Thank you
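    Of the three approaches listed above, compression does not require any service-side support: the payload can be compressed client-side before upload. A minimal sketch (the helper name is ours; setting the blob's Content-Encoding property to "gzip" so downloaders know to inflate it is a standard HTTP convention):

    ```python
    import gzip

    def compress_for_upload(payload: bytes) -> bytes:
        """Gzip a payload before uploading it as a blob."""
        return gzip.compress(payload)

    # Highly redundant data (typical of logs or text) shrinks dramatically:
    original = b"2012-05-29 02:08:00 INFO request ok\n" * 10_000
    compressed = compress_for_upload(original)
    print(len(compressed) < len(original))  # True
    ```

    The win depends entirely on the data: already-compressed content (images, video, archives) will not shrink and only costs CPU time.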

    Tuesday, May 29, 2012 02:08