Wrong unique key after update method

  • Question

The problem is that the unique key changes after I update the data source.

     

I use a DataSet as the principal data store for my project.

The DataSet is bound to the GUI (a DataGrid).

     

This is how I save the DataSet changes to the database:

     

this.patientTableAdapter.Update(this.dataset.Patient.GetChanges());  // send just the changed rows to the db
this.dataset.Patient.Merge(this.dataset.Patient.GetChanges(), true); // merge the changes back into the table
this.dataset.Patient.AcceptChanges();                                // mark all rows as saved

     

The problem occurs after I add and then delete a new DataGrid row without saving it to the database.

     

    This is what happens:

1.  I add a new row (without saving it to the db). The unique key advances to 1.

2.  I delete the new row (without saving it to the db).

3.  I add another new row (without saving it to the db). The unique key advances to 2.

4.  I save the changes to the db. The db returns a unique key with value = 1.

5.  I perform the Merge method. Merge adds the saved row as a newly added row with unique key = 1, while the dataset already holds this row with unique key = 2.

At the end of this operation the dataset has two rows with two different unique keys, when I intended to have only one row.

     

The same problem occurs when I connect to an empty db that had rows in the past which were since deleted: the next unique key will be equal to lastKey + 1, but the dataset starts counting from AutoIncrementSeed = 1.

At the end of the operation, the changes passed to the db will contain a row with unique key = 1, and the db will return lastKey + 1.

     

How do I solve this problem at the root?

    Monday, December 31, 2007 10:13 AM

All replies

  • Hello,

     

The same problem occurs when I connect to an empty db that had rows in the past which were since deleted: the next unique key will be equal to lastKey + 1, but the dataset starts counting from AutoIncrementSeed = 1.

     

That is not quite right: Access and SQL Server (for example) keep track of the last key created. So even if you delete all rows, the next record added will get lastKey + 1.

The DataSet is disconnected from the database, so all auto-increment keys will be recalculated when the database is updated from the dataset.

See this link:

    http://msdn2.microsoft.com/en-us/library/ksz8ess1(VS.80).aspx

     

I remember that in the old days it was possible (for an empty table) to force the first number to equal the initial seed value.

I am not sure whether the same is true for SQL Server.

     

One way is to drop the table and create an identical new one, but that is heavy-handed because you modify the structure of the database (and maybe the master database). Moreover, you might have problems if you leave records in another table with values corresponding to the deleted rows.

Let the database engine do its work; I think it's better not to take any risk.

     

I've seen in a book that you should always set the auto-incrementing key values to -1 so that the database engine can do its work correctly.

     

I have never used the Merge() method, but Programming Microsoft ADO.NET 2.0 Core Reference devotes ten pages to this problem, and the author is not fond of either of the possible solutions.

    He prefers "to avoid the problem altogether by restructuring your data so that you know the primary key values for your new rows before you submit them to the database".
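As a hedged illustration of that idea (the PatientGuid column is hypothetical, not from the book): key the table by a value the client can generate itself, such as a GUID, so the key is known before the row is ever submitted and never changes on save.

using System;
using System.Data;

static DataRow AddPatientWithKnownKey(DataSet dataset)
{
    DataTable patients = dataset.Tables["Patient"];
    DataRow row = patients.NewRow();
    row["PatientGuid"] = Guid.NewGuid(); // stable key, no renumbering when saved
    patients.Rows.Add(row);
    return row;
}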

     

Sorry, Merge is a problem I'm unable to help you with.

     

Sorry for my English.

     

    Have a nice day

    Monday, December 31, 2007 12:01 PM
Easy fix. (Please note that this is shamelessly stolen from Programming ADO.NET 2.0 Core Reference by David Sceppa.)

     

Go into your dataset designer. Select the primary key column of the table and open its properties.

     

In the properties, set AutoIncrementSeed to -1 and AutoIncrementStep to -1.

     

When new keys are created in the database they will be positive, so all your fetches will work correctly and will not conflict with new row IDs, and you also gain a way in your UI logic of spotting rows that don't yet exist in the database. If you prefer to do it in code rather than the designer, see the sketch below.
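A minimal sketch of the equivalent in code, assuming the Patient table and ID column mentioned in this thread:

using System.Data;

static void UseNegativePlaceholderKeys(DataSet dataset)
{
    DataColumn id = dataset.Tables["Patient"].Columns["ID"];
    id.AutoIncrement = true;   // the DataSet hands out placeholder keys locally
    id.AutoIncrementSeed = -1; // the first placeholder row gets -1
    id.AutoIncrementStep = -1; // then -2, -3, ... never colliding with real db keys
}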

     

Finally, for your table adapter: you must have edited the query code manually, or used a stored proc.

     

What you need to do is set up the insert queries on your table adapter to look like this when you have auto-generated ID numbers:

    INSERT INTO [dbo].[TableName] ([Value]) VALUES (@Value);
    SELECT ID, Value FROM TableName WHERE (ID = SCOPE_IDENTITY())

     

ADO.NET does the magic of refilling your row with the new information: the SCOPE_IDENTITY() call fetches the newly created ID number, which in turn sets your updated row to its new ID. If you've used stored procs, use one that takes the row ID number as a parameter for the fetch.
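A hedged sketch of how that command can be wired up by hand (the parameter type and size are assumptions; TableName, Value, and ID come from the query above):

using System.Data;
using System.Data.SqlClient;

static SqlCommand BuildInsertCommand(SqlConnection connection)
{
    // The SELECT runs in the same batch as the INSERT, so ADO.NET can copy
    // the real identity value back into the pending DataRow.
    SqlCommand insert = new SqlCommand(
        "INSERT INTO [dbo].[TableName] ([Value]) VALUES (@Value); " +
        "SELECT ID, Value FROM TableName WHERE (ID = SCOPE_IDENTITY());",
        connection);
    insert.Parameters.Add("@Value", SqlDbType.NVarChar, 50, "Value");

    // Overwrite the pending row (placeholder ID) with the first record the
    // batch returns, i.e. the row as the database actually stored it.
    insert.UpdatedRowSource = UpdateRowSource.FirstReturnedRecord;
    return insert;
}

Assign the result to your adapter's InsertCommand before calling Update().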

     

If this table can be referenced by relations (i.e., a child table can reference it before the rows have been added), you will need to set a foreign key constraint in your dataset and set its update rule to cascade, which propagates a change of the primary key to all referencing tables. That is, when your insert query runs and fetches the new ID (the actual one made by the db), the dataset will automatically update the references to that table with the real ID.

    Tuesday, January 1, 2008 6:44 AM
A small caveat with the stored procedure method: SCOPE_IDENTITY() returns the last ID generated in the current scope of your SQL connection. It is fine for triggers to create IDs while you submit a change, but if your sproc itself creates more than one entry in a table (e.g. a database log/audit trail), then your sproc will need to capture the SCOPE_IDENTITY() value immediately after the relevant insert and return it as an output parameter, as in the sketch below.
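A hedged sketch of the client side of that arrangement (the sproc name dbo.InsertPatient and its parameters are illustrative):

using System.Data;
using System.Data.SqlClient;

static SqlCommand BuildSprocInsertCommand(SqlConnection connection)
{
    // Inside the sproc, run SET @NewID = SCOPE_IDENTITY(); right after the
    // INSERT that matters, before any logging inserts.
    SqlCommand cmd = new SqlCommand("dbo.InsertPatient", connection);
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@Name", SqlDbType.NVarChar, 50, "Name");

    SqlParameter newId = cmd.Parameters.Add("@NewID", SqlDbType.Int);
    newId.Direction = ParameterDirection.Output;
    newId.SourceColumn = "ID"; // map the output back onto the row's ID column
    cmd.UpdatedRowSource = UpdateRowSource.OutputParameters;
    return cmd;
}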

     

    Tuesday, January 1, 2008 6:51 AM
Thanks for the reply, but I want to restate my problem.

     

I create a new row in the (disconnected) dataset, then delete it (the auto-generated number advances), then create and delete another new row, so the next auto-generated number will be 3 (supposing I started from 1). Then I create a new row, fill it with data, and send just the table's changes to Update. This is how the data looks.

     

Before the update:

     

table rows:

1.  row status: deleted (auto-generated number = 1)

2.  row status: deleted (auto-generated number = 2)

3.  row status: added (auto-generated number = 3)

changes rows:

1.  row status: deleted (auto-generated number = 1)

2.  row status: deleted (auto-generated number = 2)

3.  row status: added (auto-generated number = 3)

     

db rows:

(empty; the next auto-generated number for a new row will be 1)

     

After the update and the Merge method:

     

table rows:

1.  row status: added (auto-generated number = 1) SAVED ROW

2.  row status: added (auto-generated number = 3) UNSAVED ROW

     

Those two rows should be one row, but because the update changed the auto-generated number, after the Merge method the dataset contains two rows, and this is wrong.

     

changes rows:

1.  row status: added (auto-generated number = 1)

     

db rows:

1.  data (auto-generated number for this row is 1)

     

    Tuesday, January 1, 2008 9:53 AM
This is why you set the auto-increment seed and step to negative numbers.

The first row will be -1, the next -2, then -3, and so on.

     

ID numbers from your database start at 1 and count upwards. This means that the locally generated rows and the database rows will NEVER have the same ID number.

     

This will stop you from having conflicts between your saved data and the rows created while disconnected.

     

Second issue, i.e. why you have two rows.

     

You will need to edit your table adapters following the advice in my previous post regarding the SELECT ... WHERE ID = SCOPE_IDENTITY(). What this does is overwrite your created row's ID of 3 with the real ID generated by your database, whatever it may be.

     

    INSERT INTO [dbo].[TableName] ([Value]) VALUES (@Value);
    SELECT ID, Value FROM TableName WHERE (ID = SCOPE_IDENTITY())

     

This is the actual text for your INSERT command. Do not run the insert command and then run the select somewhere else: the select must be part of the single command sent to the database when your table adapter attempts to insert the row. The adapter was designed to receive parameters back from the database.

     

I believe a fetch is occurring after your update (or as part of it) which IS retrieving the newly created row with its ID of 1. However, AcceptChanges() has already been called (automatically after tableadapter.Update()), which marks the row with auto-generated ID 3 as a real, saved row. The fetch retrieves the same row as entered into your database, and because the two have different ID numbers, the dataset shows you two rows: the one it thought had been added, and the one that actually was added and then fetched.
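For what it's worth, once the combined INSERT + SELECT is in place, the GetChanges()/Merge()/AcceptChanges() round-trip from the first post shouldn't be needed at all; a hedged sketch using the names from that post:

// Update the bound table directly; the adapter writes the real IDs back
// into the same rows in place, so there is nothing left to merge.
this.patientTableAdapter.Update(this.dataset.Patient);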

     

If you have made the SCOPE_IDENTITY() fetch part of your insert command, then perhaps it's glitching there, and there is another parameter or something else we need to fix?

    Wednesday, January 2, 2008 2:53 AM
  • Thanks for all.

     

My insert command does have the ID = SCOPE_IDENTITY() part. I think my problem isn't clear; I will try again.

     

Suppose I have a row with a wrong (auto-generated) ID number, and I send this row to the db. The row is sent as part of a changes table (just the changes) extracted from the table in the dataset. Those changes are sent to the db, and the db fixes the wrong ID number, but now I have two copies of the row: one in the table and one in the changes table. So I have to merge these tables, but after the merge (I suppose Merge matches rows by key value) the original table contains two rows, the first with the wrong ID and the other with the good ID, and now I can't decide which row is the wrong one. Of course I could loop over the keys from the db and check whether each key exists in the table, but I want a better solution.

     

Thanks.

    Wednesday, January 2, 2008 1:39 PM
OK, it sounds like you have master-detail tables?

     

    Like

    Order (orderID, Details....)

    OrderItems(OrderItemID, OrderID, Details....)

     

    Is this correct?

     

If so, the answer to your question is to set up your data relations (double-click the line between the two tables in the dataset designer) with the update rule set to "Cascade". This setting causes any change to a primary key to propagate from the parent to the child tables; see the sketch below.
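A minimal sketch of the same thing in code, using the Order/OrderItems names from above:

using System.Data;

static void WireCascade(DataSet dataset)
{
    // Adding the relation also creates a ForeignKeyConstraint on the child table.
    DataRelation relation = dataset.Relations.Add(
        "Order_OrderItems",
        dataset.Tables["Order"].Columns["OrderID"],
        dataset.Tables["OrderItems"].Columns["OrderID"]);

    // Cascade is the default update rule, but set it explicitly: when the
    // insert fetches the real OrderID, the change flows down to child rows.
    relation.ChildKeyConstraint.UpdateRule = Rule.Cascade;
}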

     

For this to work, though, you need to ensure that your update is sent master table first, followed by the child tables.

     

If you have referential integrity at the database level, your updates and deletes have to be hierarchical as well; a sketch follows.
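A hedged sketch of one common ordering (the adapter names are illustrative): child deletes first, then the parent table, then the remaining child changes.

using System.Data;
using System.Data.SqlClient;

static void SaveHierarchy(DataSet dataset, SqlDataAdapter orderAdapter, SqlDataAdapter itemsAdapter)
{
    DataTable orders = dataset.Tables["Order"];
    DataTable items = dataset.Tables["OrderItems"];

    // Remove deleted children first so no child references a vanished parent,
    itemsAdapter.Update(items.Select("", "", DataViewRowState.Deleted));
    // then save parents (inserts obtain real IDs, which cascade to children),
    orderAdapter.Update(orders);
    // then save the remaining child inserts and updates.
    itemsAdapter.Update(items);
}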

    Thursday, January 3, 2008 7:03 AM
OK, this is the solution I found.

     

I set the auto-increment number in the dataset to start from -1, as mentioned above, then I send just the changes to the db. After the db returns the updated changes table (with correct ID numbers), I delete all rows with negative auto-increment values from the dataset tables, and then I perform the merge operation. In code it looks roughly like the sketch below.
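A hedged sketch of that sequence, inside the save handler, using the Patient names from the first post and assuming an int ID column:

DataTable changes = this.dataset.Patient.GetChanges();
if (changes == null) return; // nothing to save

this.patientTableAdapter.Update(changes); // the db replaces the negative placeholder IDs

// Drop local rows that still carry a negative placeholder ID before merging.
for (int i = this.dataset.Patient.Rows.Count - 1; i >= 0; i--)
{
    DataRow row = this.dataset.Patient.Rows[i];
    if (row.RowState != DataRowState.Deleted && (int)row["ID"] < 0)
        this.dataset.Patient.Rows.RemoveAt(i);
}

this.dataset.Patient.Merge(changes, true);
this.dataset.Patient.AcceptChanges();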

     

     

If there is a better solution, please let me know.

     

     

Thanks for all the help.

     

    Thursday, January 17, 2008 11:39 AM