The request body is too large and exceeds the maximum permissible limit.
System.Net.WebException: The remote server returned an error: (413) The request body is too large and exceeds the maximum permissible limit.
   at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
   at Microsoft.WindowsAzure.StorageClient.EventHelper.ProcessWebResponse(WebRequest req, IAsyncResult asyncResult, EventHandler`1 handler, Object sender)
Isn't 4MB the limit for a block? My PutBlock method looks like:
Yes, for a block blob the limit is 4 MB per block, and for a page blob the limit is 4 MB for one random write :-) This should be the main problem :-)
Windows Azure Consultant http://cloudikka.wordpress.com/ (Don't open this link if you don't understand the Czech language)
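To stay under that cap, the payload has to be chunked before each PutBlock call. A minimal sketch of the chunking arithmetic (plain Python for illustration; `chunk` is not a storage-client API):

```python
MAX_BLOCK_SIZE = 4 * 1024 * 1024  # 4 MB per-block limit for block blobs

def chunk(data: bytes, block_size: int = MAX_BLOCK_SIZE):
    """Yield successive chunks of data, each no larger than block_size."""
    for offset in range(0, len(data), block_size):
        yield data[offset:offset + block_size]

# Example: a 9 MB payload becomes three blocks (4 MB, 4 MB, 1 MB),
# and each one can be sent in its own PutBlock request.
payload = b"x" * (9 * 1024 * 1024)
sizes = [len(c) for c in chunk(payload)]
```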
Hi, I see the request and response as: http://i.imgur.com/JbIak.png When I receive the error it is shown as: http://i.imgur.com/1VhOk.png I just started using Fiddler, so if you need more information on something, or if I need to change any setting, please let me know.
Dinesh Agarwal
The screenshot links you posted above do not show this particular error. Is it possible for you to clear out all other items from Fiddler and show only the request/response for this PutBlock operation?
In addition to the 4 MB per block limit, we have the following limits:
The maximum blob size currently supported by the Put Block List operation is 200 GB, and up to 50,000 blocks. A blob can have a maximum of 100,000 uncommitted blocks at any given time, and the set of uncommitted blocks cannot exceed 400
GB in total size. If these maximums are exceeded, the service returns status code 413 (RequestEntityTooLarge).
Can you confirm you have not exceeded the above limits?
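As a quick sanity check on those numbers: 50,000 blocks at 4 MB per block is roughly where the 200 GB ceiling comes from. A small sketch (illustrative helper names, not an SDK call) to check whether a given upload fits within the block-count limit:

```python
MAX_BLOCK_SIZE = 4 * 1024 * 1024  # 4 MB per block
MAX_BLOCKS = 50_000               # committed blocks per Put Block List

def blocks_needed(blob_size: int, block_size: int = MAX_BLOCK_SIZE) -> int:
    """Ceiling division: how many blocks a blob of this size requires."""
    return -(-blob_size // block_size)

def within_limits(blob_size: int, block_size: int = MAX_BLOCK_SIZE) -> bool:
    """True if the upload fits the per-block size and block-count limits."""
    return block_size <= MAX_BLOCK_SIZE and blocks_needed(blob_size, block_size) <= MAX_BLOCKS

# 500,000 blocks (as in the case above) is 10x the 50,000-block cap,
# which is exactly the condition that returns status code 413.
```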
I guess that is the problem here. I have about 500,000 blocks getting written. I am trying to think of a solution from a business point of view, but if you can suggest a technical solution it will be greatly appreciated.
A temporary solution seems to work: I changed the business logic so that it produces far fewer (though larger) blocks than before. It works for now, but I was curious to know whether the order will be maintained.
What I am trying to do is: when a block is larger than 4 MB, I split it into multiple blocks of 4 MB each and then store them one after the other. There are multiple worker roles doing the same thing independently. I want my split blocks to be stored contiguously, which I guess is true only if I get lucky. Please correct me if I am wrong, and if I am correct, please guide me on how to ensure data integrity by maintaining the order of the blocks. Thank you.
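For what it's worth, the committed order of a block blob is determined by the order of the block IDs in the Put Block List request, not by the order in which the blocks were uploaded. So if you encode a sequence number into each block ID, you can commit the split blocks contiguously regardless of which worker finishes uploading first. A minimal sketch (the `block_id` scheme and worker names are illustrative, not part of the storage SDK):

```python
import base64

def block_id(worker: str, seq: int) -> str:
    """Fixed-width block ID. Azure requires block IDs to be base64-encoded
    and all the same length within a given blob, so the sequence number is
    zero-padded to a constant width (illustrative scheme)."""
    raw = f"{worker}-{seq:08d}".encode("ascii")
    return base64.b64encode(raw).decode("ascii")

# The blob is assembled in the order the IDs appear in the Put Block List
# request, so each worker only needs to build its list from its own
# sequence numbers; upload order and timing do not matter.
block_list = [block_id("w1", seq) for seq in range(3)]
```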