Monday, July 30, 2012 8:01 AM
I wrote a small program that processes a local directory, taking each file and publishing it to a blob container.
It works just fine with the local Azure storage emulator. However, when I work directly against the cloud (Azure), I get exceptions:
Error 12002: Error in: WinHttpReceiveResponse: The operation timed out
and the application throws an unhandled exception:
Microsoft C++ exception: utilities::win32_exception
I noticed this happens with larger files (1MB+) on my connection with low upload bandwidth. When I tried on an internet connection with better upload speed, the exception was thrown on larger files (2MB and 13MB).
The code to publish the blobs is very simple (in a try/catch block):
auto blob = container.create_block_blob(utilities::conversions::utf8_to_utf16(EncodeName(Name)));
if ( !blob.put(std::move(vecData)).get())
I suspect it's due to the amount of time it takes to send the files to Azure, but I couldn't find any way to set or control the timeout period. Is there a way to do that?
Since I hit this error I tried working directly with blocks (say, 100K-size blocks), but even with the simple example described in the "Accessing Azure Storage" help file I immediately get an exception on the put_block action. The code is like this (vec has data in the real program):
vector<unsigned char> vec;
std::string id = string("BaseInformation-1"); // + ITA(i+1);
// the subsequent put_block call with this raw id (call shape approximate) is what throws:
// blob.put_block(id, vec).get();
The exception thrown is InvalidQueryParameterValue; the full XML is:
<?xml version="1.0" encoding="utf-8"?>
<Error>
  <Code>InvalidQueryParameterValue</Code>
  <Message>Value for one of the query parameters specified in the request URI is invalid.
Time:2012-07-30T07:36:42.3472165Z</Message>
  <QueryParameterName>blockid</QueryParameterName>
  <QueryParameterValue>BaseInformation-1</QueryParameterValue>
  <Reason>Not a valid base64 string.</Reason>
</Error>
Can anyone help here?
Monday, July 30, 2012 6:07 PM
It looks like you are encountering two problems here.
1. The first problem sounds like timeout issues on requests that take a long time to upload data to live Azure storage. How long does it actually take before the put method times out? Right now we use 30 seconds for the request and response timeouts. The right thing for us to do is to expose options for setting timeout values on the underlying http_client that implements our Azure storage services. Our storage library can then set better default timeout values and also let users explicitly give a timeout value. I will make sure this gets added to our list of work for a future refresh.
2. I think the second problem is due to the fact that with put_block you need to handle the base64 encoding of block ids yourself. Another thing to keep in mind is that all block ids must be the same length within a single block blob. We recently did some application building ourselves and considered this too much of a hassle, so we fixed the library to address these usability issues. In a future release we will handle the base64 encoding for the user. We also added an option for users who don't care about naming individual blocks: just specify an integer value for each block id and the library takes care of everything.
For now, to work around the need to handle the base64 encoding yourself, use the function utilities::conversions::to_base64 in asyncrt_utils.h.
Thanks for the feedback and let us know if you encounter any other issues.
- Proposed As Answer by Steven Gates (Microsoft Employee), Tuesday, July 31, 2012 5:32 PM
Tuesday, July 31, 2012 10:15 AM
Thanks for the reply.
1. Indeed, it would be a good thing to expose such properties.
2. The example in the documentation doesn't mention the need to base64-encode the id, nor the requirement that all block ids have the same length. That means many users will need to pad numerical values to a fixed length. Is that a limitation of Azure itself or of the library? With the encoding and the numerical padding in place there are indeed no more exceptions. However, I do not see the data at all: all blobs are zero length. Is a commit action of some kind required? I couldn't find such a function. I tried the blob.put().wait() call, but to no avail. What am I missing here?
Tuesday, July 31, 2012 11:48 AM
OK, found the commit action. I needed to call put_block_list().get() with a vector<put_block_info>; for each added block I added an entry to the vector with the block's base64 id.
The documentation should really be expanded and updated :)
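Put together, the whole flow this thread converged on looks roughly like the pseudocode below. It is assembled only from the API names mentioned in the thread (put_block, put_block_list, put_block_info, utilities::conversions::to_base64); exact signatures may differ in your Casablanca build, and padded_block_id is a hypothetical helper producing fixed-width numeric ids:

```
// Upload each block with a fixed-length, base64-encoded id.
std::vector<put_block_info> block_list;
for (size_t i = 0; i < chunks.size(); ++i) {
    // padded_block_id: hypothetical helper returning e.g. "000001"
    auto id = utilities::conversions::to_base64(padded_block_id(i));
    blob.put_block(id, chunks[i]).get();      // block is uploaded but UNCOMMITTED
    block_list.push_back(put_block_info(id)); // remember it for the commit
}
// Commit the list: until this succeeds the blob reads as zero length,
// and uncommitted blocks are discarded by the service after a week.
blob.put_block_list(block_list).get();
```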
Tuesday, July 31, 2012 5:32 PM
The requirements around block ids come from Azure itself, not from Casablanca. I agree that having to deal with the encoding and the padding to a fixed length is a pain. As I mentioned before, when our next release comes out you should take a look at the improvements we made to managing block ids. If you don't care about having string names for the block ids, you can just give an integer value and Casablanca will take care of the encoding and padding for you.
You are completely right about the need to commit the uploaded blocks. When blocks are uploaded they are initially uncommitted, and they will be discarded after a week if not committed. Take a look at this MSDN page explaining a bit more about block blobs.
Regarding the documentation, we will take note to add more material in our next release. Thanks for the feedback and keep it coming!