Deadlocking on insert

  • Question

  • Hello,

    I have a compiled query to check whether a Bank exists using its name.

    If no Bank is returned a new one is added.

    I am using messaging so there are a number of threads running simultaneously.

    Now let's say two threads check whether the same bank exists.  Neither finds the bank so both try to insert.  There is a deadlock.

    I need to hold an exclusive lock on the read (a table lock, I guess) until the transaction is complete.  Only then should the next transaction be allowed to do its read, in which case it will find the bank.

    How can I get this done?

    Regards,
    Eben
    Tuesday, March 31, 2009 8:54 AM

Answers

  • Hey Kristofer,

    I actually ended up creating a 'locker' table (key, timestamp) and I issue an update on the timestamp with the relevant table name as the key.  This gets the job done.
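    For reference, a sketch of what that locker-table trick might look like in T-SQL (table and column names here are made up for illustration; the point is that the UPDATE takes an exclusive row lock that is held until the transaction commits):

    -- Sketch of the 'locker' table approach: one row per protected table;
    -- updating its timestamp takes an exclusive row lock that serializes
    -- concurrent check-then-insert operations. Names are illustrative.
    CREATE TABLE dbo.Locker
    (
        LockKey varchar(100) NOT NULL PRIMARY KEY,
        Stamp   datetime     NOT NULL
    );

    -- Inside the same transaction as the existence check and the insert:
    UPDATE dbo.Locker SET Stamp = GETDATE() WHERE LockKey = 'Bank';
    -- A second transaction now blocks on this row instead of racing
    -- past the existence check, so only one insert can happen.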

    Regards,
    Eben
    Wednesday, April 8, 2009 8:08 AM

All replies

  • What you have is not a deadlock, it is called a race condition.  It would be a deadlock if two or more threads kept waiting for opposing resources that never come free. Fortunately, this is what transactions are for.  However, since L2S automatically initiates a transaction when you call SubmitChanges, I don't understand the particular problem you are having.  If both threads are attempting to insert, one transaction will succeed and the other will fail.  The one that fails should back up and try again by first checking whether it needs to insert (or back up even further if you want).  My preference is to throw away the DataContext and start over from that point.
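    A sketch of that retry approach, assuming hypothetical GetBank/InsertBank semantics and a BankDataContext (none of these names are from the thread; they are illustrative):

    // Sketch: let the optimistic insert fail, discard the DataContext,
    // and re-read with a fresh one before giving up.
    // BankDataContext and the Bank entity are hypothetical.
    public Bank GetOrAddBank(string bankName)
    {
        for (int attempt = 0; attempt < 2; attempt++)
        {
            using (var db = new BankDataContext())
            {
                Bank bank = db.Banks.FirstOrDefault(b => b.Name == bankName);
                if (bank != null)
                    return bank;

                bank = new Bank { Name = bankName };
                db.Banks.InsertOnSubmit(bank);
                try
                {
                    db.SubmitChanges();   // L2S starts its own transaction here
                    return bank;
                }
                catch (SqlException)
                {
                    // Another thread/process won the race; loop around and
                    // re-read with a fresh DataContext.
                }
            }
        }
        throw new InvalidOperationException("Could not get or add bank: " + bankName);
    }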

    The solution you are describing is called pessimistic concurrency (attempting to lock a table or row on read).  L2S is designed for optimistic concurrency, where no locks are taken until the moment an update occurs, and then only for a short duration.  This makes the server run faster, but you still have to handle the rare case of a concurrency failure in your application.

    If you truly believe you need pessimistic concurrency, you can achieve it by initiating your own transaction before you start fetching objects (causing read locks to occur).  However, note that when you do this you may cause more locking problems on the server than you started with, if different threads accessing the database could touch records in different orders.  Then you could get true deadlocks.
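    One way to initiate your own transaction is with System.Transactions and a serializable isolation level, which holds the read locks until commit.  This is only a sketch (BankDataContext and Bank are hypothetical names), and note that with serializable reads two concurrent callers can still deadlock on lock conversion, which is exactly the risk warned about above:

    // Sketch: wrap the read and the insert in one serializable transaction
    // so read locks are held until commit. Uses System.Transactions;
    // BankDataContext and Bank are hypothetical.
    using (var scope = new TransactionScope(
        TransactionScopeOption.Required,
        new TransactionOptions { IsolationLevel = IsolationLevel.Serializable }))
    using (var db = new BankDataContext())
    {
        // L2S enlists in the ambient transaction automatically.
        Bank bank = db.Banks.FirstOrDefault(b => b.Name == bankName);
        if (bank == null)
        {
            bank = new Bank { Name = bankName };
            db.Banks.InsertOnSubmit(bank);
            db.SubmitChanges();
        }
        scope.Complete();
    }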


    Wayward LINQ Lacky
    Tuesday, March 31, 2009 4:43 PM
    Moderator
  • You can have the check and insert within a lock() {} statement to ensure that only one of the threads performs the insert. It would be something like this:

    static object AddBankLock = new object();

    public Bank AddBank(string bankName)
    {
        lock (AddBankLock)
        {
            Bank bank = GetBank(bankName);
            if (bank == null)
            {
                bank = InsertBank(bankName);
            }
            return bank;
        }
    }

    That should prevent the race condition...
    Tuesday, March 31, 2009 6:47 PM
  • Narayan, that's only true if all access to that table is controlled via just that code, all in the same process (really, app-domain).  The reality (regardless of the current setup) is that the code causing race conditions can exist in multiple applications on multiple machines, so the only general approach that will work all the time is to use database transactions.


    Wayward LINQ Lacky
    Wednesday, April 1, 2009 2:24 AM
    Moderator
  • Hello,

    Thanks for the replies.  Yes, it is a race condition.  But it still results in a deadlock.  How do I know?  The server tells me so :)

    Narayan, I intend running the message processors on several machines, so, as Matt mentioned, your option would still be subject to the problem described.

    I actually think that read locks are obtained and *that* is why a deadlock arises when each transaction tries to obtain an exclusive lock: they are waiting for each other to release their read locks.  A unique constraint would solve my problem, but that does not exist in our structure.

    I guess I could use some dummy table to do an update to cause an exclusive lock.  As originally stated I will end up with duplicate entries if the transactions on the relevant table do not block each other from reading.

    I hope I am not totally confused here.

    Regards,
    Eben
    Wednesday, April 1, 2009 6:08 AM
  • My apologies - in that case you might only have 2 options that I can think of:

    1) Do the above in a stored procedure where you lock the table for the duration of the procedure. I believe you can do this with TABLOCK, TABLOCKX or HOLDLOCK (not sure)...
    2) Use the unique constraint where the second write will cause an exception and you perform a read (I know - this is kludgey). I understand you said you can't use the unique constraint - I am just including this solution for completeness...

    I am nowhere near my server right now so I can't look up (1) to write up a sample stored procedure. I am now curious, so if you beat me to it, please let me know if it works for you.
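    A sketch of what option (1) might look like in T-SQL (table and column names are made up for illustration; an UPDLOCK+HOLDLOCK hint on the read is usually preferable to locking the whole table with TABLOCKX):

    -- Sketch: check-then-insert inside one transaction, taking an update
    -- lock on the read so a second caller blocks instead of deadlocking.
    -- Table and column names are illustrative.
    CREATE PROCEDURE dbo.GetOrAddBank @Name nvarchar(100)
    AS
    BEGIN
        SET NOCOUNT ON;
        BEGIN TRANSACTION;

        DECLARE @Id int;
        SELECT @Id = BankId
        FROM dbo.Bank WITH (UPDLOCK, HOLDLOCK)
        WHERE Name = @Name;

        IF @Id IS NULL
        BEGIN
            INSERT INTO dbo.Bank (Name) VALUES (@Name);
            SET @Id = SCOPE_IDENTITY();
        END

        COMMIT TRANSACTION;
        SELECT @Id AS BankId;
    END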

    Hope it does...

    Narayan

    Friday, April 3, 2009 6:49 AM
  • 1) Do the above in a stored procedure where you lock the table for the duration of the procedure. I believe you can do this with TABLOCK, TABLOCKX or HOLDLOCK (not sure)...
    2) Use the unique constraint where the second write will cause an exception and you perform a read (I know - this is kludgey). I understand you said you can't use the unique constraint - I am just including this solution for completeness...


    or:

    3) Catch and handle the exception.

    If it is just a simple insert into a single table there won't (can't!) be deadlocking. There must be more than one operation involved in the same transaction, and usually more than one table (or multiple portions of the same table) involved, to get a deadlock.

    If you _really_ want to handle this at the database level, create a separate 'soft-lock table' where you indicate that you're about to create a 'x/y/z' before you start your operation.

    Using database locks, especially at the table level, should be a last resort - it introduces a heap of new issues, not least a huge concurrency issue.

    Here are some (semi-) related threads:
    http://social.msdn.microsoft.com/Forums/en-US/linqtosql/thread/79f631c1-d7de-41be-8fef-ebbd67ccef71
    http://social.msdn.microsoft.com/forums/en-US/linqprojectgeneral/thread/2d6fdb2e-e17e-4a4c-8da0-6968e60ef855/
    http://social.msdn.microsoft.com/Forums/en-US/linqprojectgeneral/thread/6ad9ed60-59e6-421a-9df1-ca07581de6d8/
    Kristofer - Huagati Systems Co., Ltd. - www.huagati.com
    Friday, April 3, 2009 7:07 AM
    Answerer