Problems with Azure June 2012 SDK storage emulator, SQL Server 2012

  • Question

  • We're experiencing a lot of problems with the June 2012 storage emulator, possibly also related to our recent transition to SQL Server 2012.

    The particular thing that seems to happen is that we start getting "Server encountered an internal error. Please try again after some time." wrapping "The remote server returned an error: (500) Internal Server Error." This occurs via the storage client while attempting to access blobs. We experience this in our own code (e.g., CloudBlob::DownloadToStream) and in all the various tools which can browse blob storage (Cerebrata, CloudBerry, etc.). Listing containers and blobs seems to work, but viewing them fails. The observed behavior is that the access attempt hangs for a long period of time (many seconds), followed by the error, as you might expect in a timeout situation.

    Turning on detailed logging, we get this:

    7/23/2012 12:58:37 PM [UnhandledException] EXCEPTION thrown: Microsoft.Cis.Services.Nephos.Common.Protocols.Rest.FatalServerCrashingException: The fatal unexpected exception 'Could not find file 'c:\users\username\appdata\local\developmentstorage\sql\blockblobroot\1\a9c2e981-0655-4f7a-b2e6-a943e9297e9f\1'.' encountered during processing of request. ---> System.IO.FileNotFoundException: Could not find file 'c:\users\username\appdata\local\developmentstorage\sql\blockblobroot\1\a9c2e981-0655-4f7a-b2e6-a943e9297e9f\1'.
       at Microsoft.Cis.Services.Nephos.Common.Protocols.Rest.BasicHttpProcessorWithAuthAndAccountContainer`1.EndPerformOperation(IAsyncResult ar)
       at Microsoft.Cis.Services.Nephos.Common.Protocols.Rest.BasicHttpProcessorWithAuthAndAccountContainer`1.<ProcessImpl>d__4.MoveNext()
       --- End of inner exception stack trace ---
       at Microsoft.Cis.Services.Nephos.Common.Protocols.Rest.BasicHttpProcessor.EndProcess(IAsyncResult result)
       at Microsoft.WindowsAzure.DevelopmentStorage.Store.BlobServiceEntry.ProcessAsyncCallback(IAsyncResult ar)
    7/23/2012 12:58:38 PM [UnhandledException] EXCEPTION thrown: Microsoft.Cis.Services.Nephos.Common.Protocols.Rest.FatalServerCrashingException: The fatal unexpected exception 'Could not find file 'c:\users\username\appdata\local\developmentstorage\sql\blockblobroot\1\a9c2e981-0655-4f7a-b2e6-a943e9297e9f\1'.' encountered during processing of request. ---> System.IO.FileNotFoundException: Could not find file 'c:\users\username\appdata\local\developmentstorage\sql\blockblobroot\1\a9c2e981-0655-4f7a-b2e6-a943e9297

    FWIW I can tell you that the file named in the exception does not exist on disk, though its parent blockblobroot directory does. The blobs being referenced were created roughly a week ago by our software running in emulation and were happily used during development. A week later, with no known configuration changes or software installations, sadness ensues.

    We've reinitialized storage multiple times, switched from SQL Express to SQL Server 2012, and granted dbo access rights to NETWORK SERVICE (I'm now pretty much convinced that dsinit doesn't configure permissions correctly out of the box).

    What's most frustrating is that we can get storage emulation to work just fine for "a while," and then it stops working "after some time." Blowing away the storage database (dsinit), rebuilding, and doing another round of permission grants just takes us back into the same cycle.

    We've been using the storage emulator and azure for over two years, and we've never seen anything like this. Naturally, the SQL 2012 upgrade is highly suspicious.

    What would be great is a definitive list of all the accounts/permissions that are necessary to run the storage emulator, as this feels like a permissions issue. Of course, maybe it's not.

    Monday, July 23, 2012 6:30 PM

All replies

  • Hi,

    I am trying to involve someone familiar with this topic to take a further look at this issue. There might be some delay.

    Appreciate your patience.


    Please mark the replies as answers if they help or unmark if not. If you have any feedback about my replies, please contact msdnmg@microsoft.com Microsoft One Code Framework

    Tuesday, July 24, 2012 5:55 AM
  • Thanks. 
    Wednesday, July 25, 2012 5:58 PM
  • Hi,

    Which parameters are you using with dsinit? When the issue happens, are you able to upload a new blob to storage and download it?

    Have you tried the /forcecreate option? Note that it will wipe any data stored in the storage emulator.
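    A minimal sketch of that dsinit invocation: the /sqlinstance and /forcecreate switches are DSInit's documented ones, while the Python wrapper, the helper name, and the "." (default local instance) value are illustrative assumptions.

```python
# Sketch: build the DSInit command line used to (re)initialize the
# storage emulator database. /sqlinstance picks the SQL instance;
# /forcecreate drops and recreates the database (all emulator data is lost).

def build_dsinit_command(sql_instance=".", force=False):
    """Return the DSInit argument list; run it with subprocess.call() on a dev box."""
    cmd = ["DSInit.exe", "/sqlinstance:" + sql_instance]
    if force:
        cmd.append("/forcecreate")  # destructive: wipes emulator data
    return cmd

if __name__ == "__main__":
    print(" ".join(build_dsinit_command(force=True)))
```

    As noted above, only reach for /forcecreate when you can afford to lose everything in emulator storage.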


    Cathy Miller

    Friday, July 27, 2012 1:13 PM
  • Good question about upload and download - I believe the answer is no, but I have not tested that.

    Use of forcecreate lets us get the emulator back to a working state (with loss of data).

    However, things have generally reverted back to a bad state after "some period of time."

    At the moment, everything is working well, and perhaps that's the end of the story. But that's what I thought the last time, prior to starting this thread.

    I'll monitor this and update if something happens. Unfortunately, I do not have any nice clean steps to reproduce.

    Sunday, July 29, 2012 7:47 PM
  • I am getting the same top-level error: "Server encountered an internal error. Please try again after some time." but I can't get any more information than that.  How did you get this detailed error message?


    Friday, August 10, 2012 5:30 PM
    To get that detailed error, you'll need to turn on logging for development storage. The outer exception may also wrap an inner exception carrying the (equally unhelpful) error 500 information.

    Logging is controlled here:

    C:\Users\<username>\AppData\Local\DevelopmentStorage

    in the file called:

    DevelompentStorage.config

    (yes, it is actually misspelled). Enable the logging setting in that file, and you'll begin getting piles of logs.
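    If sifting those piles by hand gets old, something like this can pull out the missing-file paths. A sketch only: the FatalServerCrashingException and "Could not find file" strings come from the log excerpts in this thread, while the .log extension and directory layout are assumptions.

```python
# Sketch: scan development storage emulator logs for fatal exceptions
# and collect the block-blob file paths they report as missing.
import os
import re

# Matches the path inside "Could not find file '<path>'." messages.
MISSING_FILE_RE = re.compile(r"Could not find file '([^']+)'")

def find_missing_blob_files(log_dir):
    """Return the set of file paths reported missing across all .log files."""
    missing = set()
    for root, _dirs, files in os.walk(log_dir):
        for name in files:
            if not name.endswith(".log"):
                continue
            with open(os.path.join(root, name), errors="replace") as fh:
                for line in fh:
                    if "FatalServerCrashingException" in line:
                        missing.update(MISSING_FILE_RE.findall(line))
    return missing
```

    Point it at the DevelopmentStorage folder under your local AppData and you get a deduplicated list of every blob file the emulator has lost track of.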

    My issue seems to have miraculously self-healed, btw.

    At some point along the way, when launching a cloud project (emulation) from inside Studio, it began prompting me to grant it permission to run (once for each web and worker instance in the project). I believe this behavior was new with my installation of the latest and greatest Azure SDK, though I'm not sure. It might also have coincided with a Windows update, the installation of SQL Server 2012, or a variety of other things. Because that's the sort of thing the emulator demanded of me a couple of years ago, I took no notice.

    At some point, after a reboot of my machine, this behavior ceased. Since then, all has been well with storage. This makes me think that my suspicion "permissions are screwed up" was in the right department, though what precipitated this is still mysterious.

    My best advice is to wipe out storage, reboot, recreate it, reboot. That's the best approximation of the sequence of steps that led to solving what was a very persistent problem for me. Although, that could be total voodoo, since I don't have a real root cause.

    Saturday, August 11, 2012 5:16 PM
  • Unfortunately, this problem has recurred.

    I am able to create new blobs and download them. Old blobs are inaccessible, however.

    Monday, August 13, 2012 7:01 PM
    I am having this same issue. We upgraded to the June 2012 SDK/tools, and these errors continuously occur "after some period of time." I have tried this with SQL Express, LocalDB, etc., and nothing seems to solve it.

    Please help

    Thursday, August 16, 2012 5:18 PM
  • By the way... I am using SQL Server 2008 R2.  

    Friday, August 17, 2012 3:10 PM
  • I also have this problem.

    This is my first Azure implementation using BLOB storage. It worked for a while but now it does not.

    Visual Studio 2010 Professional (VB.NET)

    SQL Server 2008 R2

    Azure June 2012 SDK

    Windows 7 32 bit

    When I call CloudBlob.DownloadToFile it hangs for a while (I assume it is using a timeout setting in my app.config) then I get:

    StorageServerException Server encountered an internal error. Please try again after some time.

    Stack trace: 

       at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.get_Result()
       at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.Execute()
       at Microsoft.WindowsAzure.StorageClient.RequestWithRetry.RequestWithRetrySyncImpl[TResult](ShouldRetry retryOracle, SynchronousTask`1 syncTask)
       at Microsoft.WindowsAzure.StorageClient.TaskImplHelper.ExecuteSyncTaskWithRetry[TResult](SynchronousTask`1 syncTask, RetryPolicy policy)
       at Microsoft.WindowsAzure.StorageClient.CloudBlob.DownloadToStream(Stream target, BlobRequestOptions options)
       at Microsoft.WindowsAzure.StorageClient.CloudBlob.DownloadToFile(String fileName, BlobRequestOptions options)
       at Microsoft.WindowsAzure.StorageClient.CloudBlob.DownloadToFile(String fileName)
       at WGL.GPSTH.Management.clsApplicationHelper.DownloadFileFromAzureBLOB(String inFilePath, CloudBlobContainer inContainer) in C:\.....\clsApplicationHelper.vb:line 232

    Inner exception: System.Net.WebException The remote server returned an error: (500) Internal Server Error.

    EDIT 2012/08/30:

    I can upload new BLOBs to the emulator storage - just can't get previously existing ones out of it any more.

    If the file does not exist in storage at all, I still get a StorageClientException with ErrorCode = StorageErrorCode.BlobNotFound, so it looks like a problem specific to blobs that were created before the emulator got confused.

    I followed Brian's instructions to get the logging going (edit DevelompentStorage.config) but so far have no logs to examine.

    • Edited by GraemeHart Thursday, August 30, 2012 3:22 PM
    Thursday, August 30, 2012 2:56 PM
  • Alright, literally as soon as I wrote this post I figured out the problem, so I'm rewriting it with the solution. I was having the exact same problem as described above, with identical log files. However, I just noticed that for some reason the name of my SQL server instance had been changed to:


    in my DevelopmentStorage.201206.config. NOTE: DevelompentStorage.config still exists but appears to no longer be used.

    I changed the above config entry to:


    And then everything started working fine again.

    • Edited by Keith Newton Monday, September 3, 2012 10:00 PM
    • Proposed as answer by Keith Newton Monday, September 3, 2012 10:03 PM
    Monday, September 3, 2012 9:48 PM
  • Bingo!  @Keith's answer did it for me.
    Boy, was this a frustrating problem... it would spontaneously appear, and no matter what I tried, I couldn't recreate the steps to trigger the exception. To boot, the underlying generic "(500) Internal Server Error" was skewing my Google search results!

    Thanks for sharing, @Keith!

    Tuesday, September 4, 2012 7:03 AM
  • That's very interesting. You're right about the config file name, it seems to have changed. I had apparently edited them both in my early attempts to troubleshoot this, so didn't notice which one was truly controlling.

    I hope that you've found a solution, but I'm a little bit skeptical.

    First, I've had the emulator "work" for a week or more after blowing everything away and starting over, and then I lose blobs. Your experience might be different from mine in that respect, but I wouldn't be confident of a fix until I'd had it in place for... well... months, probably.

    Second, in my case, I don't have sqlexpress (only mssqlserver 2012) installed or running.

    My config file is set thus:


    There's nothing obviously wrong with that - I have the single default instance of sql server running, so AFAIK that should be correct.

    I also wonder what the theory is for why this would cause the specific symptoms I am seeing, which are:

    - At time A, blob storage works correctly

    - At time B, storage operations on existing blobs begin erroring out

    Investigation reveals:

    - database entries for the blobs exist in DevelopmentStorageDb201206

    - but the entries point to files that do not exist

    - new blobs can be created without error (i.e. storage is still "working")
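    That middle cross-check (database entries vs. files on disk) is easy to script. A sketch, assuming you've already extracted the recorded blob file paths from DevelopmentStorageDb201206 by some other means (e.g. a query in Management Studio; the actual table and column names aren't shown in this thread):

```python
# Sketch: given file paths recorded for blobs in the emulator database,
# report which ones no longer exist under blockblobroot on disk.
import os

def missing_blob_files(db_paths):
    """Return the subset of recorded blob file paths absent from disk."""
    return [p for p in db_paths if not os.path.isfile(p)]
```

    Any non-empty result corresponds to blobs that will 500 on download while new uploads keep working.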


    Tuesday, September 4, 2012 4:41 PM
  • Just Posted at


    Copying here:

    Possible Solution (Work Around):
    Exit the Storage Emulator.
    Open SQL Server Management Studio 2012 as administrator.
    Attach the C:\Users\username\DevelopmentStorageDb201206.mdf file.
    If it won't let you attach, copy the mdf and log files to some other drive and attach from there.
    Find the stored procedure CommitBlockList.
    Change SET UncommittedBlockIdLength = NULL to SET UncommittedBlockIdLength = 0.
    Execute it.
    Close Management Studio.
    Copy the edited mdf and log files back to the original location.
    Start the Storage Emulator.

    How I got there:
    I found that about every seven days ONLY BLOCK blobs got deleted.
    Creating those blobs again for testing purposes was painful while in the middle of development/testing.
    Tried to find the Storage Emulator source code but couldn't.
    Turned on logging at C:\Users\username\AppData\Local\DevelopmentStorage
    by adding the following to DevelopmentStorage.201206.config:


    After a painful wait, I found the following in the logs:
    DefragmentBlobFiles BlobInfo Name 40f5e12f-65a5-4a3a-ae46-41c71c8514c0/file1.txt, ContainerName storage1, Directory c:\users\username\appdata\local\developmentstorage\ldb\blockblobroot\1\12735b4b-f9ed-481b-a091-78387facf05b, ROFile , RWFile c:\users\username\appdata\local\developmentstorage\ldb\blockblobroot\1\12735b4b-f9ed-481b-a091-78387facf05b\1, Size5

    I don't think the above defragmentation is causing any problems.
    Found another log:
    BlockBlob: Load Interval failed. IsGC: True, Exception at System.Number.ParseDouble(String value, NumberStyles options, NumberFormatInfo numfmt) at Microsoft.WindowsAzure.DevelopmentStorage.Store.BlockBlobGarbageCollector.GetTimerIntervalOrDefault(Boolean isGC)

    So for block blobs, uncommitted blocks are garbage-collected by this BlockBlobGarbageCollector. Nowhere could I find how often these uncommitted blocks are collected. I don't think this is causing the problem either.

    Another log: BlockBlob: Checking Directory C:\Users\username\AppData\Local\DevelopmentStorage\LDB\BlockBlobRoot\1\0477877c-4cb3-4ddb-a035-14a5cf52d86f in the list of valid directories
    BlockBlob: Deleting Directory C:\Users\username\AppData\Local\DevelopmentStorage\LDB\BlockBlobRoot\1\0477877c-4cb3-4ddb-a035-14a5cf52d86f

    THIS LOG ABOVE SHOWS THE PROBLEM. The emulator must be determining which blockblob directories are valid and deleting the rest.

    Checked the schema of the DevelopmentStorageDb201206 database. Found a few columns like IsCommitted and UncommittedBlockIdLength. Found that ClearUncommittedBlocks sets UncommittedBlockIdLength to NULL. Any blobs that were not being deleted had an UncommittedBlockIdLength of 0. So I checked the stored procedure CommitBlockList and changed UncommittedBlockIdLength to 0 instead of NULL. I think the emulator in the previous version must have checked both IsCommitted and UncommittedBlockIdLength to determine valid blockblob directories, while this version might be checking only for UncommittedBlockIdLength being NULL and deleting all those block blob files.
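    The hypothesis above can be modeled in a few lines. This is purely an illustration of the theory, not the emulator's actual (non-public) garbage-collection code; None stands in for SQL NULL.

```python
# Hypothesis sketch: which blob rows would each GC predicate delete?

def old_gc_deletes(is_committed, uncommitted_len):
    """Hypothesized pre-June-2012 behavior: keep anything committed."""
    return not is_committed and uncommitted_len is None

def new_gc_deletes(is_committed, uncommitted_len):
    """Hypothesized June 2012 behavior: a NULL length alone marks it invalid."""
    return uncommitted_len is None

if __name__ == "__main__":
    # A committed blob whose block list was cleared to NULL:
    print("old GC deletes it:", old_gc_deletes(True, None))  # False
    print("new GC deletes it:", new_gc_deletes(True, None))  # True
```

    Under this model, the CommitBlockList workaround (writing 0 instead of NULL) keeps committed blobs out of the new predicate's reach, which matches the observation that surviving blobs all had UncommittedBlockIdLength = 0.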

    As I said, it takes about seven days to find out whether this solution permanently fixes it. I have 4 more days to go to validate it.

    If this workaround holds up... Microsoft owes me 6 hours ;)

    Thursday, September 6, 2012 11:17 AM
  • I had a similar experience and was able to solve it by moving the storage emulator to SQL Server as opposed to SQL Express as explained here: http://msdn.microsoft.com/en-us/library/windowsazure/gg433134.aspx

    Wednesday, September 12, 2012 9:18 PM
  • I am having the same experience. Dev Storage (and production) had been working fine for a year or so. I updated to the June SDK, Windows 8, and VS2012 at the same time, so don't know which was the cause.

    @rhizohm indicates that switching to SQL Server solves the problem, but the problem has been described as being related to "old" blobs. I wonder if that solution is still working today as the blobs get older?

    Does anyone have a solution that is still working?

    My testing over the last few days seems to indicate that once a blob gets to be more than 3 days old, it is no longer readable. If I re-upload to create the exact same blob as a new blob, it is readable. My unit tests, which create/write/read blobs, all work well (because they are new blobs?)

    Before switching to SQL Server, it would be good to know if that is a durable solution?


    Tuesday, October 16, 2012 7:06 PM
  • I haven't had any issues since I switched to using SQL Server as the backing store.
    Tuesday, October 16, 2012 8:32 PM
  • I am using SQL Server 2012, and I have this issue. I've also had it with both Visual Studio 2010 and Visual Studio 2012, so the IDE doesn't appear to be the cause. And I've had it with blobs only a day or two old. Two days ago I completely uninstalled and reinstalled the Azure 1.7 SDK (apparently now 1.7.1) and uninstalled/reinstalled VS 2012, and everything was fine... but when I started up my development environment this morning, the issue resurfaced. So the time thing (a week, three days, etc.) may hold in certain environments, but it doesn't seem to be true in general.

    As a final note, I tried modifying the sproc, as per @ImageSurf.Net Site's suggestion on Sept. 6th, but I continue to be plagued by the issue.

    Suffice it to say, this is a real time sink - hopefully MSFT will find a way to slip in a patch soon, as they did with the 1.7.1 release that resolved a few of the other defects introduced in the June 2012 SDK.

    Tuesday, October 16, 2012 10:12 PM
  • I installed SQL Server 2012 (on Win8) and switched from SQL Express. It worked for about 12 hours and then stopped again.

    That gave me enough time to test a modification and get it published, but I'm not looking forward to the next time (I have about 1,000 blobs -- it takes a few hours to rebuild them organically, so if this is a long-term problem I would probably have to build a mechanism to capture and restore them on demand).

    Wednesday, October 17, 2012 10:45 PM