Wednesday, June 27, 2012 01:10
I recently noticed a bunch of new tables showing up in one of my storage accounts, tables that I did not create. Based on their names and content, I am guessing they were created by the new Azure Management Portal while I was playing with the Monitoring feature. Can anyone confirm this?
I have to say I find the proliferation of tables a bit disturbing. It appears that 6 new tables are created for each deployment for which I enable Verbose Monitoring, and if I deploy a new version of my Cloud Service and VIP swap, I get 6 more tables. I fear this will get out of hand quickly.
Can anyone from Microsoft speak to this subject?
Wednesday, June 27, 2012 06:50 (Moderator)
I am trying to involve someone familiar with this topic to look further into this issue; there may be some delay.
We appreciate your patience.
Friday, June 29, 2012 05:44
Yes, these tables are part of the monitoring feature. They are created only when you enable verbose monitoring for a deployment; this is where aggregated monitoring data for roles and role instances is stored. Two tables are created for each aggregation (one at the role level and one at the role-instance level), and currently three aggregation intervals (5 minutes, 1 hour, and 12 hours) are supported, resulting in 6 tables. The table names are of the format WAD&lt;deploymentID&gt;&lt;role or role instance&gt;&lt;aggregation interval&gt;Table. If you create a new deployment with a diagnostics connection string pointing to a storage account and enable verbose monitoring for it, 6 tables specific to that deployment are created. New tables are not created by a VIP swap alone; as long as your deployment ID remains the same, the same tables are used.
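Based on the naming pattern described above, here is a quick sketch of the six tables one deployment would produce. The level and interval abbreviations in the code are assumptions for illustration only; the post specifies just the overall WAD&lt;deploymentID&gt;&lt;role or role instance&gt;&lt;aggregation interval&gt;Table pattern.

```python
# Illustrative sketch only: enumerate the 6 verbose-monitoring tables per deployment.
# The level/interval segment spellings below are assumed, not documented in this thread.
def monitoring_table_names(deployment_id):
    levels = ["R", "RI"]                    # role-level vs. role-instance-level (assumed)
    intervals = ["PT5M", "PT1H", "PT12H"]   # 5 min, 1 hr, 12 hr aggregations
    return [f"WAD{deployment_id}{level}{interval}Table"
            for level in levels for interval in intervals]

names = monitoring_table_names("0123456789abcdef0123456789abcdef")
print(len(names))  # 6 tables for one deployment
```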
Friday, June 29, 2012 12:57
Hi, from the table names it does not look like these tables were created by enabling analytics. Windows Azure Storage Analytics metrics data is stored in its own tables: you will see four additional tables per account. There are two types of tables which store the metrics details:
- Capacity information. When turned on, the system creates the $MetricsCapacityBlob table to store capacity information for blobs, including the number of objects and the capacity consumed by blobs in the storage account. This information is recorded in the $MetricsCapacityBlob table once per day (see the PartitionKey column in Table 1). Capacity is reported separately for the amount of data stored by the user and the amount of data stored for analytics.
- Transaction metrics. This information is available for all the services (Blobs, Tables, and Queues). Each service gets its own table, so the 3 tables are:
- $MetricsTransactionsBlob: contains the transaction summary for the Blob service, both at the service level and per API.
- $MetricsTransactionsTable: contains the transaction summary for the Table service, both at the service level and per API.
- $MetricsTransactionsQueue: contains the transaction summary for the Queue service, both at the service level and per API.
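For reference, the four Storage Analytics tables listed above (one capacity table, for blobs only, plus one transaction table per service) can be enumerated mechanically:

```python
# The four Storage Analytics metrics tables described above: one capacity table
# (blobs only) plus one $MetricsTransactions<Service> table per service.
services = ["Blob", "Table", "Queue"]
analytics_tables = ["$MetricsCapacityBlob"] + [f"$MetricsTransactions{s}" for s in services]
print(analytics_tables)
```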
For additional details on analytics, please read this post.
Friday, June 29, 2012 13:52
Thanks for the explanation, Vikram. However, I need to clarify: when we deploy a new version of our Cloud Service and VIP swap, we end up with 6 new tables. If we deploy every 3 weeks, the tables will get out of hand very quickly. So I am wondering whether you are considering using a set of well-known table names shared for monitoring across all deployments, much like Diagnostics and Storage Analytics share tables/containers today?
Or, if not, then consider deleting the tables when a deployment is deleted. I don't want to have to maintain these tables myself.
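In the meantime, a cleanup along those lines could be scripted outside the portal. Below is a minimal sketch of the filtering step (pure string matching, no Azure SDK calls). It assumes the WAD&lt;deploymentID&gt;... naming pattern from this thread and a 32-character deployment ID; verify both assumptions against your actual table names before deleting anything.

```python
def stale_monitoring_tables(table_names, live_deployment_ids):
    """Return the WAD* monitoring tables whose deployment ID is no longer live.

    Assumes the WAD<deploymentID>... pattern described in this thread and a
    32-character deployment ID (an assumption -- check your own tables).
    """
    stale = []
    for name in table_names:
        if not name.startswith("WAD"):
            continue  # not a verbose-monitoring table
        deployment_id = name[3:35]  # 32 chars after the "WAD" prefix (assumed)
        if deployment_id not in live_deployment_ids:
            stale.append(name)
    return stale

# The actual cleanup would then iterate over the returned names and call the
# storage service's delete-table operation for each one.
```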
Friday, June 29, 2012 13:54
Win7Girl, I think you are confusing Storage Analytics with Cloud Service metrics. The discussion here is about the new Cloud Service metrics available in the preview Azure Portal and the tables it creates.
Monday, July 2, 2012 02:38
We are considering deleting the tables when the deployment is deleted.
To clarify: a new version is deployed by deleting the old version in the deployment slot and uploading a new version, followed by a VIP swap. This is not an in-place update. For your cloud service deployments, when do you use an in-place update versus delete/redeploy?
- Edited by vikram1, Monday, July 2, 2012 02:47
Monday, July 2, 2012 13:06
Yes, per your clarification, our deployment model is to deploy a new version of our app to Staging, test it, and then VIP swap to Production. This creates a new deployment ID each time, and thus 6 new tables as soon as we enable verbose logging.
So for us your current model has two significant downsides. First, since we deploy every 3 weeks to 9 different Cloud Services, we would end up with over 900 tables per year. That just doesn't make sense to me. Second, we have to manually re-enable verbose logging after each deployment for each of the 9 Cloud Services. I would like the verbose logging setting (and all the other settings configured in the Azure Portal) to survive this type of VIP-swap deployment.
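The 900-table figure above holds up as back-of-the-envelope arithmetic (assuming roughly 17 deployment cycles per year at one every 3 weeks):

```python
# Back-of-the-envelope check of the table-count estimate above.
deployments_per_year = 52 // 3   # one deployment every 3 weeks ~= 17 per year
cloud_services = 9
tables_per_deployment = 6
total = deployments_per_year * cloud_services * tables_per_deployment
print(total)  # 918, consistent with "over 900 tables per year"
```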
Thanks for listening, Vikram.