PutBlock request too big

    Question

  • I have code that looks like the following:

     

    string outputPreamble = "<gml:FeatureCollection xmlns:gml='http://www.opengis.net/gml' xmlns:xlink='http://www.w3.org/1999/xlink' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xmlns:fme='http://www.safe.com/gml/fme' xsi:schemaLocation='http://www.safe.com/gml/output.xsd'>";
          try
          {
            blobObject.PutBlock(DataConstants.MAP_CONTAINER_BLOB, DataConstants.OUTPUT_BLOB, Guid.NewGuid().ToString(), outputPreamble);
          }
          catch (Exception e)
          {
            System.Diagnostics.Trace.WriteLine("Error writing preamble: " + e.Message + "-" + e.InnerException);
          }
    

    This returns the error:


    The request body is too large and exceeds the maximum permissible limit.

    System.Net.WebException: The remote server returned an error: (413) The request body is too large and exceeds the maximum permissible limit. at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult) at Microsoft.WindowsAzure.StorageClient.EventHelper.ProcessWebResponse(WebRequest req, IAsyncResult asyncResult, EventHandler`1 handler, Object sender)

     

    Isn't 4MB the limit for a block? My PutBlock method looks like:

     public bool PutBlock(string containerName, string blobName, string blockId, string content)
        {
          try
          {
            CloudBlobContainer container = BlobClient.GetContainerReference(containerName);
            CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
    
            string blockIdBase64 = System.Convert.ToBase64String(System.Text.ASCIIEncoding.ASCII.GetBytes(blockId));
    
            if (content == "") content = " ";
            UTF8Encoding utf8Encoding = new UTF8Encoding();
            
            using (MemoryStream memoryStream = new MemoryStream(utf8Encoding.GetBytes(content)))
            {
              memoryStream.Position = 0;
              blob.PutBlock(blockIdBase64, memoryStream, null);
             }
    
            return true;
          }
          catch (StorageClientException ex)
          {
            if ((int)ex.StatusCode == 404)
            {
              return false;
            }
            throw;
          }
        }
    

     


    Dinesh Agarwal
    Wednesday, June 15, 2011 5:14 PM

Answers

  • Hi Dinesh,

     In addition to the 4 MB per-block limit, the service has the following limits (see the documentation):

    The maximum blob size currently supported by the Put Block List operation is 200 GB, and up to 50,000 blocks. A blob can have a maximum of 100,000 uncommitted blocks at any given time, and the set of uncommitted blocks cannot exceed 400 GB in total size. If these maximums are exceeded, the service returns status code 413 (RequestEntityTooLarge).

    Can you confirm you have not exceeded the above limits?

     

    Thanks,

    jai

    Sunday, June 19, 2011 5:22 AM
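    The uncommitted-block limit described above can be checked from code. A minimal sketch, assuming the v1.x Microsoft.WindowsAzure.StorageClient library used elsewhere in this thread (BlockLimitCheck and ReportUncommitted are illustrative names):

```csharp
// Sketch: count a blob's uncommitted blocks to check against the service limits.
// Assumes the v1.x Microsoft.WindowsAzure.StorageClient library; DownloadBlockList
// with BlockListingFilter.Uncommitted returns blocks that have been uploaded
// via PutBlock but not yet committed by a Put Block List call.
using System;
using System.Linq;
using Microsoft.WindowsAzure.StorageClient;

public static class BlockLimitCheck
{
    public static void ReportUncommitted(CloudBlobClient blobClient,
                                         string containerName, string blobName)
    {
        CloudBlobContainer container = blobClient.GetContainerReference(containerName);
        CloudBlockBlob blob = container.GetBlockBlobReference(blobName);

        // Fetch only the blocks that are uploaded but not yet committed.
        var uncommitted = blob.DownloadBlockList(BlockListingFilter.Uncommitted,
                                                 null).ToList();

        long totalBytes = uncommitted.Sum(b => b.Size);
        Console.WriteLine("Uncommitted blocks: {0} ({1} bytes)",
                          uncommitted.Count, totalBytes);

        // The service returns 413 once a blob accumulates more than
        // 100,000 uncommitted blocks (or over 400 GB uncommitted in total).
        if (uncommitted.Count > 100000)
            Console.WriteLine("Over the 100,000 uncommitted-block limit.");
    }
}
```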

All replies

  • Using Fiddler to see the precise request and response should be the first step in diagnosing any problem with Azure Storage.
    Wednesday, June 15, 2011 5:51 PM
  • Yes, for block blobs there is a limit of 4 MB per block, and for page blobs there is a limit of 4 MB per single write :-) That should be the main problem :-)
    Windows Azure Consultant http://cloudikka.wordpress.com/ (Don't open this link if you don't understand Czech)
    Wednesday, June 15, 2011 6:00 PM
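With a per-block limit like that, it can help to fail fast client-side rather than wait for a 413 from the service. A minimal sketch under that assumption (BlockSizeGuard, MAX_BLOCK_BYTES, and EnsureBlockFits are illustrative names, not part of the storage library):

```csharp
// Sketch: validate a block's encoded size client-side before calling PutBlock,
// so an oversized block is rejected locally instead of with a 413 from the
// service. MAX_BLOCK_BYTES reflects the 4 MB per-block limit mentioned in
// this thread.
using System;
using System.Text;

public static class BlockSizeGuard
{
    // 4 MB per-block limit for block blobs at the time of this thread.
    public const int MAX_BLOCK_BYTES = 4 * 1024 * 1024;

    public static byte[] EnsureBlockFits(string content)
    {
        // Measure the UTF-8 byte count, since that is what goes on the wire,
        // not the .NET string length.
        byte[] bytes = Encoding.UTF8.GetBytes(content);
        if (bytes.Length > MAX_BLOCK_BYTES)
            throw new ArgumentException(
                string.Format("Block is {0} bytes; the limit is {1} bytes.",
                              bytes.Length, MAX_BLOCK_BYTES));
        return bytes;
    }
}
```

The check is cheap relative to an HTTP round trip, and it distinguishes a genuinely oversized block from other causes of a 413 such as the uncommitted-block limits.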
  • Thanks for the quick response, but my content is:

    string outputPreamble = "<gml:FeatureCollection xmlns:gml='http://www.opengis.net/gml' xmlns:xlink='http://www.w3.org/1999/xlink' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xmlns:fme='http://www.safe.com/gml/fme' xsi:schemaLocation='http://www.safe.com/gml/output.xsd'>";

    This is definitely well under 4 MB, so I do not understand what causes the error.

    I am using Fiddler to see the actual message, but I doubt that size is the real problem.


    Dinesh Agarwal
    Wednesday, June 15, 2011 6:03 PM
  • -- I am using Fiddler to see the actual message, but I doubt that size is the real problem.

    So what are the request and response headers as seen by Fiddler?

    Wednesday, June 15, 2011 6:39 PM
  • Hi, I see the request and response as: http://i.imgur.com/JbIak.png

    When I receive the error it is shown as: http://i.imgur.com/1VhOk.png

    I just started using Fiddler, so if I need to change any settings to capture more information, I would appreciate it if you could let me know.
    Dinesh Agarwal
    Wednesday, June 15, 2011 7:50 PM
  • Hi Dinesh,

    The screenshot links you posted above do not show this particular error. Could you clear out all other items from Fiddler and show only the request/response for this PutBlock operation?

    Thanks

    Gaurav Mantri

    Cerebrata Software

    http://www.cerebrata.com

     

    Thursday, June 16, 2011 1:14 AM
  • Dear Jai,

     

    I guess that is the problem here. I have about 500,000 blocks being written. I am trying to think of a solution from a business point of view, but if you can suggest a technical solution it would be greatly appreciated.

     

    Regards,


    Dinesh Agarwal
    Monday, June 20, 2011 4:56 PM
  • Dear All,

    A temporary solution seems to work: I changed the business logic so that it produces far fewer (though bigger) blocks than before. It works for now, but I was curious to know whether the order will be maintained.

    What I am trying to do is this: when a block is larger than 4 MB, I split it into multiple blocks of at most 4 MB each and store them one after the other. There are multiple worker roles doing the same thing independently. I want my split blocks to be stored contiguously, which I guess is true only if I get lucky. Please correct me if I am wrong, and if I am correct, please guide me on how to ensure data integrity by maintaining the order of the blocks. Thank you.

     

    Regards,


    Dinesh Agarwal
    Friday, June 24, 2011 5:47 PM
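On the ordering question: for block blobs, the final order of blocks in the blob is determined by the order of the block IDs passed to PutBlockList, not by the order in which the blocks were uploaded. A minimal sketch under that model, assuming the v1.x Microsoft.WindowsAzure.StorageClient API used earlier in the thread (OrderedBlockWriter and SplitAndCommit are illustrative names):

```csharp
// Sketch: split content into 4 MB blocks with zero-padded sequence numbers
// in the block IDs, then commit them in order with PutBlockList. The final
// block order in the blob is whatever order the IDs appear in the committed
// block list, regardless of upload order or which worker uploaded them.
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using Microsoft.WindowsAzure.StorageClient;

public static class OrderedBlockWriter
{
    const int BlockSize = 4 * 1024 * 1024; // 4 MB per-block limit

    public static void SplitAndCommit(CloudBlockBlob blob, byte[] data)
    {
        var blockIds = new List<string>();
        for (int offset = 0, seq = 0; offset < data.Length;
             offset += BlockSize, seq++)
        {
            // All block IDs within one blob must have the same length, so
            // zero-pad the sequence number before base64-encoding it.
            string blockId = Convert.ToBase64String(
                Encoding.ASCII.GetBytes(seq.ToString("d6")));
            blockIds.Add(blockId);

            int count = Math.Min(BlockSize, data.Length - offset);
            using (var ms = new MemoryStream(data, offset, count))
            {
                blob.PutBlock(blockId, ms, null);
            }
        }

        // Committing the ordered list fixes the block order in the blob.
        blob.PutBlockList(blockIds);
    }
}
```

Note that if multiple worker roles upload blocks for the same blob, whichever process calls PutBlockList decides the final order, so the workers still need to agree up front on a block ID naming scheme that encodes the intended sequence.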