Storing logs in Azure instances

  • Question

  • Hi guys,

    I know that Azure instances recycle the files stored on them whenever the servers run maintenance-like tasks, and all the files kept there will be gone except the files from the uploaded package.

    My website has logging capability that writes log records into log (txt) files, stored in a certain drive and folder on each Azure instance.

    I would like to know: what are the better approaches for storing logs and keeping them (not losing them) for future reference at no additional cost?

    Tuesday, March 13, 2012 3:59 AM

Answers

  • I believe there are two approaches; however, both of them would require you to make use of blob storage, so there would be an additional cost. The first approach is to use block blobs and write your logs directly to blob storage. The second approach is to use Azure drives and write your logs to an Azure drive. Please note that with an Azure drive only one instance has write permission, so you would need to work around that.
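
    For the first approach, a minimal sketch of writing a batch of log text to a block blob could look like this (assuming the Microsoft.WindowsAzure.StorageClient library from the 1.x SDK; the setting name, container name and class name are only illustrative):

    using System;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.ServiceRuntime;
    using Microsoft.WindowsAzure.StorageClient;

    public static class BlobLogWriter
    {
        // Uploads a buffered batch of log text as its own block blob.
        public static void Flush(string logText)
        {
            var account = CloudStorageAccount.Parse(
                RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));
            var container = account.CreateCloudBlobClient().GetContainerReference("logs");
            container.CreateIfNotExist();

            // One blob per instance per flush keeps concurrent writers apart.
            var blobName = string.Format("{0}/{1:yyyyMMdd-HHmmss}.log",
                RoleEnvironment.CurrentRoleInstance.Id, DateTime.UtcNow);
            container.GetBlockBlobReference(blobName).UploadText(logText);
        }
    }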

    Hope this helps.

    • Marked as answer by Arwind - MSFT Wednesday, March 21, 2012 10:42 AM
    Tuesday, March 13, 2012 4:10 AM
  • Hi FbLover2011,

    Just like Gaurav Mantri said, there is no way to do this without an additional cost. But the 'official' way to do it is to use Azure Diagnostics. The following MSDN article describes how you can include your custom logs in the diagnostics monitor:

    http://msdn.microsoft.com/en-us/library/windowsazure/hh411528.aspx 

    In short, you point the diagnostics monitor at the directory containing the custom logs that should be picked up:

    // Local storage resource declared in the service definition ("CustomLogs" is an example name)
    LocalResource localResource = RoleEnvironment.GetLocalResource("CustomLogs");

    DirectoryConfiguration dirConfig = new DirectoryConfiguration();
    dirConfig.Container = "wad-mycustomlogs-container";
    dirConfig.DirectoryQuotaInMB = localResource.MaximumSizeInMegabytes;
    dirConfig.Path = localResource.RootPath;
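
    For completeness, the configuration above is typically hooked up in the role's OnStart, roughly like this (a sketch using the Microsoft.WindowsAzure.Diagnostics namespace; the five-minute transfer period is just an example):

    // Sketch: register the directory and start the diagnostics monitor in OnStart
    var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
    config.Directories.DataSources.Add(dirConfig);
    config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);
    DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);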

    Sandrino


    Sandrino Di Mattia | Twitter: http://twitter.com/sandrinodm | Azure Blog: http://fabriccontroller.net/blog | Blog: http://sandrinodimattia.net/blog

    Tuesday, March 13, 2012 5:24 AM

All replies

  • Wouldn't catching the RoleEnvironment.Stopping event and writing any text file on the instance's local disk to blob storage solve this issue without consuming a lot of storage transactions? You'd only have to write the text file to blob storage every time the role is recycled or shut down, instead of paying for storage transactions on a regular basis.

    I'm not sure about this though; I should try it out first. Does anyone know offhand?
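
    For what it's worth, hooking that up would look roughly like this (just a sketch, assuming the Microsoft.WindowsAzure.ServiceRuntime and Microsoft.WindowsAzure.StorageClient libraries; the setting name, container name and log file path are placeholders):

    // Sketch: register this handler in OnStart to push the local log file on shutdown
    RoleEnvironment.Stopping += (sender, e) =>
    {
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));
        var container = account.CreateCloudBlobClient().GetContainerReference("logs");
        container.CreateIfNotExist();

        container.GetBlockBlobReference(RoleEnvironment.CurrentRoleInstance.Id + ".log")
                 .UploadFile(@"C:\Resources\logs\site.log");   // placeholder log file path
    };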


    Be nice to nerds ... Chances are you'll end up working for one!

    Tuesday, March 13, 2012 5:57 AM
  • I'd see some issues with that approach:

    - Logs will get lost when an instance crashes suddenly and does not complete the shutdown procedure.

    - If an instance has been running for some time, there may be a lot of logs to move. Besides possibly exceeding the local resource capacity at runtime, it will take some time to transfer them all.

    - The Stopping event is not raised when the virtual machine of the role instance is rebooted. You could override the OnStop method in the role, but that has a hard timeout of 30 seconds.
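
    For reference, that override would look roughly like this (a sketch only; UploadRemainingLogs is a hypothetical helper, and whatever it does has to finish within that 30-second window):

    using System;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WebRole : RoleEntryPoint
    {
        public override void OnStop()
        {
            try
            {
                UploadRemainingLogs();   // hypothetical helper: copy local log files to blob storage
            }
            catch (Exception)
            {
                // never let a logging failure block shutdown
            }
            base.OnStop();
        }

        private void UploadRemainingLogs() { /* e.g. push the local log file(s) to a block blob */ }
    }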

    Tuesday, March 13, 2012 7:06 AM
  • Yeah, that's true. If the logs you are storing on the machine are important, then it's better to consume the storage transactions to make sure the logs are transferred even in case of an ungraceful shutdown.

    Thanks, perpetualKid.


    Be nice to nerds ... Chances are you'll end up working for one!

    Tuesday, March 13, 2012 7:14 AM