Can the script code produced by "Block incremental deployment if data loss might occur" be changed?

    Question

  • I am trying to convert our old way of installing and upgrading databases with handcrafted scripts to a Database Project in VS2010. Our application is installed at customer locations with different versions of both the software and the database. At the moment we simply upgrade one version at a time, so we have scripts upgrading from version 1->2 and a separate set for 2->3.

    Let me first explain how we work at the moment.
    Since there is going to be some downtime during the software upgrade and the databases are small (<5 GB), we remove all stored procedures, triggers, indexes and constraints before we start updating the table structures. Then we run handcrafted CREATE TABLE and ALTER TABLE statements to update the schema.

    Since everything is handcrafted we can migrate data very easily. As an example, we had (for some reason) one column with both longitude and latitude separated by a colon (:) in a string field, which got updated to two separate float fields. We would first add the two new columns to the table, then update/migrate the data from the Location field to the two new fields, and then drop the old Location column, roughly as sketched below.
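
    For illustration, such a handcrafted migration looks roughly like this (a sketch only; the table name dbo.Positions and the "lon:lat" order are just assumptions for this example):

        -- Add the two new float columns (assumed table dbo.Positions)
        ALTER TABLE dbo.Positions ADD Longitude FLOAT NULL;
        ALTER TABLE dbo.Positions ADD Latitude  FLOAT NULL;
        GO

        -- Split the combined "lon:lat" string into the new columns
        UPDATE dbo.Positions
        SET    Longitude = CAST(LEFT(Location, CHARINDEX(':', Location) - 1) AS FLOAT),
               Latitude  = CAST(SUBSTRING(Location, CHARINDEX(':', Location) + 1, LEN(Location)) AS FLOAT)
        WHERE  Location LIKE '%:%';
        GO

        -- Drop the old combined column
        ALTER TABLE dbo.Positions DROP COLUMN Location;
        GO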

    After all changes have been made we re-add the stored procedures, triggers and indexes. Conveniently, these all get updated to the latest versions by this method. Since we do not allow customers to add extra procedures and indexes we will not lose anything, and we can even use this step to clean up older stuff.

    Now for my question regarding Database Projects.
    If the "Block incremental deployment if data loss might occur" option is enabled, the generated deployment script contains code that checks whether records exist in the tables being altered and raises an error if they do. The solution then is to use pre- and post-deployment scripts to first copy out and delete the data, and later reinsert it into the original table, roughly like the sketch below.
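
    As I understand that approach, the pre- and post-deployment scripts end up doing something like this (sketch only, reusing the Location example and an assumed key column Id; a real script would also have to deal with constraints and identity columns):

        -- Pre-deployment (sketch): copy the whole table out and empty it,
        -- so the blocking check finds no rows in the altered table
        SELECT * INTO dbo.Positions_Stage FROM dbo.Positions;
        DELETE FROM dbo.Positions;

        -- Post-deployment (sketch): reinsert, splitting Location into the new columns
        INSERT INTO dbo.Positions (Id, Longitude, Latitude)
        SELECT Id,
               CAST(LEFT(Location, CHARINDEX(':', Location) - 1) AS FLOAT),
               CAST(SUBSTRING(Location, CHARINDEX(':', Location) + 1, LEN(Location)) AS FLOAT)
        FROM   dbo.Positions_Stage;
        DROP TABLE dbo.Positions_Stage;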

    If the "Block incremental deployment if data loss might occur" option is disabled, the deployment scripts will use the CCDR technique (sketched below).
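
    If I understand the CCDR (create/copy/drop/rename) technique correctly, the generated script does something roughly equivalent to this (heavily simplified sketch, not the actual generated code, which also recreates constraints and indexes and handles errors):

        -- CCDR sketch: build the new-shape table, copy, drop the old one, rename
        CREATE TABLE dbo.tmp_Positions (
            Id        INT   NOT NULL PRIMARY KEY,
            Longitude FLOAT NULL,
            Latitude  FLOAT NULL
        );
        -- Only columns present in both the old and the new table shape are copied,
        -- so the old Location values would simply be lost here
        INSERT INTO dbo.tmp_Positions (Id)
        SELECT Id FROM dbo.Positions;
        DROP TABLE dbo.Positions;
        EXECUTE sp_rename N'dbo.tmp_Positions', N'Positions';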

    In a scenario where I migrate data from a single Location field to two longitude/latitude fields, both options present issues. The first option requires me to copy a potentially large table, and the second option will do just that on its own.

    First of all, our handcrafted solution is easier, but it does not fit the way a Database Project is intended to work; I get that.

    But what if there were a separate option that lets me take responsibility for safeguarding against data loss, while still allowing the data to remain in the table that will be modified? In my example I could have just copied the PK and the Location field to a temp table in pre-deployment, then had Data Dude do the DROP and ADD of the columns, and in post-deployment I could have updated the table with a JOIN and migrated the Location data to the newly added fields, something like the sketch below.
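
    Something along these lines (sketch only, again with the assumed dbo.Positions table and Id key):

        -- Pre-deployment (sketch): keep only the key and the column that is about to be dropped
        SELECT Id, Location
        INTO   dbo.Positions_LocationBackup
        FROM   dbo.Positions;

        -- (Data Dude would then run its generated ALTER TABLE DROP/ADD statements here)

        -- Post-deployment (sketch): migrate the saved values into the new columns with a JOIN
        UPDATE p
        SET    p.Longitude = CAST(LEFT(b.Location, CHARINDEX(':', b.Location) - 1) AS FLOAT),
               p.Latitude  = CAST(SUBSTRING(b.Location, CHARINDEX(':', b.Location) + 1, LEN(b.Location)) AS FLOAT)
        FROM   dbo.Positions AS p
        JOIN   dbo.Positions_LocationBackup AS b
               ON b.Id = p.Id;
        DROP TABLE dbo.Positions_LocationBackup;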

    Is there any setting in VS2010 and/or the Database Project to accomplish this? I really like the fact that I can source-control these scripts very easily, and also that I can have simpler upgrade scripts instead of brute-force dropping triggers and stored procedures as we do now. But as it stands I will be forced to empty out any table that requires columns to be dropped, which seems to require a lot of handcrafted pre- and post-deployment code as well.

    Does anyone have a good idea or solution for this scenario, or does everyone simply accept that it works the way it works?

    Wednesday, February 22, 2012, 11:05 AM

Answers

  • If we could have a 'Yes I understand data-loss, always script ALTER TABLE commands' option that would tell Data Dude we take full responsibility for data-loss and have Data Dude then always use ALTER TABLE ADD and DROP statements, that would be great.

    Hello Johan,

    Given what you described above, I think it is OK at this time to turn off the "Block incremental deployment..." option. And just as Barclay mentioned in his blog, the "Block incremental deployment..." option applies to the entire database project, not to an individual database object. If you would like to make changes to one specific table, and you accept that those changes can cause data loss, it is OK to turn that option off.

    In his blog, Barclay explains the option as follows:
    So dropping a column from a table does not get blocked, but changes to modify the schema of the table that may introduce the risk of data loss to data that would exist after the deployment will get blocked. Adding a NOT NULL column to the middle of the table will get blocked by the option because this change requires data motion since the table will have to be recreated. What is important here is that you will usually want this option “ON” for your deployments, but you must comprehend this check when managing data motion. Turning it “OFF” to deploy a schema change to only one table may be risky as it is possible that another change has snuck under the wire that may cause data loss. This option is for the whole deployment, not per object.

    Thanks.


    Vicky Song [MSFT]
    MSDN Community Support | Feedback to us

    Tuesday, February 28, 2012, 2:32 AM
    Moderator

All Replies

  • Hello Johan,

    In my understanding, it is better to make good use of the "Block incremental deployment..." option while you upgrade your tables and migrate data. When you modify a table and want to deploy it for the first time, you should turn that option on so you get detailed information about which columns would cause data loss. Once you have confirmed the modifications and just want to deploy those changes to the target database, you can turn the option off. As I understand it, that option applies to the whole deployment, not to one specific database object.

    Barclay has two very good blog posts that explain how to manage data motion during deployment; see:

    http://blogs.msdn.com/b/bahill/archive/2009/03/30/managing-data-motion-during-your-deployments-part-1.aspx

    http://blogs.msdn.com/b/bahill/archive/2009/07/02/managing-data-motion-during-your-deployments-part-2.aspx

    Thanks.


    Vicky Song [MSFT]
    MSDN Community Support | Feedback to us

    Friday, February 24, 2012, 3:00 AM
    Moderator
  • I have already read the articles from bahill, which were very good at explaining data motion. My problem is probably more a personal one: I really dislike the CCDR method and the loss of control over the scripting process. Even the data motion technique for the most part replicates this behaviour, by copying the data out by hand and clearing the base table so as to let Data Dude use a simple ALTER TABLE statement. If we could have a 'Yes I understand data-loss, always script ALTER TABLE commands' option that would tell Data Dude we take full responsibility for data-loss and have Data Dude then always use ALTER TABLE ADD and DROP statements, that would be great. This would of course imply new columns would always be added to the end of the table, which is fine by me. Let's see what SQL 2012 brings in this area.

    Friday, February 24, 2012, 7:29 AM