How to roll back all data in SQL Server from BizTalk

    Question

  • Hi Guys,

    I have the following scenario:

    I receive a big file (more than 1 million records). I split it into smaller files of 500 records each and process them in parallel. To avoid deadlocks I am not using the "Use Transaction" property on the send port (WCF-SQL adapter). So far everything is working and everyone is happy. But now there is a requirement that if something goes wrong with any record, for example an exception occurs for one batch and that batch fails to reach SQL Server while the other batches belonging to the same file succeed, then we need to roll back all of the data. That means not a single record from that file may remain in the database table, a complete rollback, even for batches that were already processed successfully.

    How can I achieve this? As far as I know it is not possible once we split the file into multiple files, because each one is then processed independently.

    "I have one possible solution: instead of sending the data directly, I can store everything in a temp (staging) table, and once all batches have completed their processing I can read the data from the staging table and insert/update the respective tables with a second stored procedure or an SSIS package."
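    To make the staging-table idea concrete, here is a minimal sketch, using Python's sqlite3 purely as a stand-in for SQL Server; all table, column, and function names are illustrative assumptions, not part of the actual solution:

    ```python
    import sqlite3

    # Sketch of the staging-table idea: batches land in "staging" first,
    # and only a complete file is moved to the real table in one transaction.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE staging (file_id TEXT, record_id INTEGER, payload TEXT)")
    conn.execute("CREATE TABLE target  (record_id INTEGER, payload TEXT)")

    def load_batch(file_id, records):
        """Each debatched message writes its (up to 500) records into staging."""
        conn.executemany(
            "INSERT INTO staging VALUES (?, ?, ?)",
            [(file_id, rid, data) for rid, data in records],
        )
        conn.commit()

    def finalize_file(file_id, expected_count):
        """Once all batches for the file have arrived, move everything to the
        real table in one transaction; otherwise discard the whole file."""
        (count,) = conn.execute(
            "SELECT COUNT(*) FROM staging WHERE file_id = ?", (file_id,)
        ).fetchone()
        try:
            if count != expected_count:
                raise ValueError("file incomplete or a batch failed")
            conn.execute(
                "INSERT INTO target SELECT record_id, payload "
                "FROM staging WHERE file_id = ?", (file_id,),
            )
            conn.commit()
            return True
        except Exception:
            conn.rollback()
            return False
        finally:
            # Staging rows are cleaned up whether the move succeeded or not.
            conn.execute("DELETE FROM staging WHERE file_id = ?", (file_id,))
            conn.commit()
    ```

    In the real solution the `finalize_file` step would be the second stored procedure or SSIS package, keyed by something like the source file name plus the total record count taken from the original message.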

    Please provide your suggestions. (One more thing: because everything is already implemented, I cannot avoid debatching the incoming file.)

    Thanks in advance.

    Monday, February 6, 2017 3:50 AM

Answers

  • First, double, triple, then quadruple check this requirement.  Then make sure the person who confirms it is actually responsible for the business process that owns this data.

    Failing 1 million records due to one error is highly unusual.

    So, I would not try to use any built-in transaction mechanism for this.  Instead, tag each record with a 'transaction id' that you own, then program the database to 'release' or 'remove' all records for that id once they have all finished processing.
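    A rough sketch of what that could look like (Python with sqlite3 standing in for SQL Server; the table, flag, and function names are all made up for illustration):

    ```python
    import sqlite3

    # Every record lands in the real table tagged with a file-level
    # transaction id and a 'released' flag. Consumers only ever read
    # released rows, so unreleased data is invisible to them.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE orders (txn_id TEXT, payload TEXT, released INTEGER DEFAULT 0)"
    )

    def insert_record(txn_id, payload):
        conn.execute("INSERT INTO orders (txn_id, payload) VALUES (?, ?)",
                     (txn_id, payload))
        conn.commit()

    def release(txn_id):
        """All batches finished cleanly: make the rows visible."""
        conn.execute("UPDATE orders SET released = 1 WHERE txn_id = ?", (txn_id,))
        conn.commit()

    def remove(txn_id):
        """Some batch failed: wipe every row belonging to this file."""
        conn.execute("DELETE FROM orders WHERE txn_id = ?", (txn_id,))
        conn.commit()

    def visible_rows():
        return conn.execute("SELECT payload FROM orders WHERE released = 1").fetchall()
    ```

    The point is that the 'rollback' is just a targeted DELETE on your own id, so it works regardless of how many independent batches wrote the rows.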

    Monday, February 6, 2017 5:38 PM
    Moderator

All replies

  • Hello Giri,

    In SQL Server/Oracle we have two-phase commit. 

    In your case I would recommend trying this option. Implement it at the SQL stored procedure level. 

    https://msdn.microsoft.com/en-us/library/aa754091(v=bts.10).aspx

    • Phase 1—Prepare. In this phase, each of the programs involved in the transaction sends a message to the TP Manager, such as Microsoft Distributed Transaction Coordinator (MS DTC), informing the TP Manager that it is ready and able to perform its part of the operation. This phase is called prepare because each program is prepared either to commit or to roll back its changes. If the TP Manager receives confirmation from every program involved, it proceeds to phase 2.
    • Phase 2—Commit or Rollback. In this phase, the TP Manager instructs each of the programs to commit or roll back all of the changes that were requested as part of the transaction. A properly executed rollback returns the system to its original state.
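    The two phases can be illustrated with a toy coordinator in plain Python; the `Participant` class and its methods are invented for this sketch and are not a real MS DTC API:

    ```python
    # Toy two-phase-commit coordinator: phase 1 gathers votes,
    # phase 2 either commits everyone or rolls everyone back.
    class Participant:
        def __init__(self, name, can_prepare=True):
            self.name = name
            self.can_prepare = can_prepare
            self.state = "active"

        def prepare(self):              # Phase 1: vote yes/no
            self.state = "prepared" if self.can_prepare else "aborted"
            return self.can_prepare

        def commit(self):               # Phase 2, success path
            self.state = "committed"

        def rollback(self):             # Phase 2, failure path
            self.state = "rolled_back"

    def run_two_phase_commit(participants):
        # Phase 1: every participant must confirm it can do its part.
        if all(p.prepare() for p in participants):
            for p in participants:
                p.commit()
            return "committed"
        # Any "no" vote means the whole transaction is rolled back.
        for p in participants:
            p.rollback()
        return "rolled_back"
    ```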


    Ram

    Monday, February 6, 2017 3:44 PM
  • Hi,

    If you have an orchestration, use an atomic transaction scope. In the event of any error, the transaction will not be committed. 


    Regards PK: Please mark the reply as answer or vote it up, as deemed fit.

    Monday, February 6, 2017 4:51 PM