Deadlock when using SqlBulkCopy to concurrently insert rows into a single non-empty table

  • Question

  • Hi!

    Unfortunately, we encounter a deadlock situation when using the SqlBulkCopy class to insert rows into a single table using multiple machines / processes concurrently.

    In order to achieve the best database write performance we use the SqlBulkCopy class with SqlBulkCopyOptions.TableLock. According to the MSDN documentation, SqlBulkCopy will "obtain a bulk update lock for the duration of the bulk copy operation". Reading about bulk update locks, it is stated that "Bulk update (BU) locks allow multiple threads to bulk load data concurrently into the same table while preventing other processes that are not bulk loading data from accessing the table."
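    The same locking behavior can be reproduced from T-SQL, since SqlBulkCopyOptions.TableLock corresponds to the TABLOCK hint on a bulk load. A minimal sketch (the table and data file names here are hypothetical):

    ```sql
    -- Hypothetical staging table and data file; the TABLOCK hint requests
    -- the bulk update (BU) lock described above instead of ordinary row locks.
    BULK INSERT dbo.StagingFacts
    FROM 'C:\data\facts.dat'
    WITH (
        DATAFILETYPE = 'native',
        TABLOCK   -- request a BU lock for the duration of the load
    );
    ```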

    Investigating our deadlock graph, I can see that two processes are holding a BU lock on the destination table while both processes then request an exclusive (X) lock on that table. Hence the deadlock. The destination table is a heap, so there should not be any index update problems.
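    For anyone who needs to capture such a graph: one way, assuming sufficient permissions on the server, is to have the engine write deadlock graphs to the error log (available since SQL Server 2005):

    ```sql
    -- Write XML deadlock graphs to the SQL Server error log whenever a
    -- deadlock occurs (instance-wide, until the trace flag is turned off).
    DBCC TRACEON (1222, -1);
    ```

    On SQL Server 2008 and later, the built-in system_health Extended Events session also records xml_deadlock_report events, so recent deadlock graphs can be retrieved from it after the fact without any prior setup.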

    After some more investigation using the official Microsoft SqlBulkCopy sample from AdventureWorks, I realized that performing a bulk copy operation into an empty table leads to an acquisition and release of a BU lock (on that table). Hence, no concurrency issue here.

    However, performing a bulk copy operation on a non-empty table leads to a BU lock (on that table) followed by an X (exclusive) lock. Hence the pattern I see in my deadlock graph.

    Does anyone know why SqlBulkCopy obtains an X lock during bulk insert when the destination table is not empty?
    How can I concurrently bulk-insert data into a single non-empty table using multiple threads in a safe way?

    I think we originally used the default row-level locking, but we also ran into conversion deadlocks when threads escalated their row locks to page and table level concurrently.

    Any insights and help are greatly appreciated.

    • Moved by John C Gordon (Microsoft employee) Thursday, July 30, 2009 12:24 AM (From: SQL Server Data Access)
    • Moved by Phani Raju MSFT Monday, August 3, 2009 6:43 AM (From: ADO.NET Data Services)
    • Moved by Jivko Dobrev - MSFT Friday, August 7, 2009 8:25 PM This is a locking/concurrency problem which belongs to Engine (From: SQL Server Data Access)
    Wednesday, July 29, 2009 10:13 AM


All replies

  • I checked my bulk copy problem again, and there is a mistake in my original post. The table into which we concurrently bulk-insert data is not a heap (nor is it in the AdventureWorks example). The destination table actually contains a clustered index.

    I checked again the SqlBulkCopy locking pattern when removing the primary key (and clustered index) and this only results in an acquisition and release of a BU lock.

    My conclusion:
    Using SqlBulkCopy with option TableLock on a heap leads to a BU lock -> concurrent inserts are thread-safe (no deadlock issues).
    Using SqlBulkCopy with option TableLock on a non-empty table with an index leads to a BU lock followed by an X lock (I assume for index updating) -> NOT thread-safe.
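    The lock pattern described above can be observed directly while a load is in flight. A sketch of the kind of query I mean (the table name is hypothetical):

    ```sql
    -- Run in a second session while the bulk load is running.
    -- request_mode shows BU for the bulk update lock, and X for the
    -- exclusive lock taken when the destination has a clustered index.
    SELECT request_session_id, resource_type, request_mode, request_status
    FROM sys.dm_tran_locks
    WHERE resource_associated_entity_id = OBJECT_ID('dbo.StagingFacts')
      AND resource_type = 'OBJECT';
    ```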

    Can anyone confirm that using SqlBulkCopy with multiple threads writing to the same destination table is only thread-safe if the TableLock option is specified and the destination table is a heap? Otherwise, one can run into deadlock issues?

    Friday, July 31, 2009 1:14 PM
  • I solved the problem by removing the clustered indexes on the bulk insert target tables. According to the documentation here, parallel bulk inserts with TABLOCK can only be done on a heap (an index-less table).
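    For reference, converting the destination table to a heap looks roughly like this (index, constraint, and table names are hypothetical; which statement applies depends on whether the clustered index backs the primary key):

    ```sql
    -- If the clustered index is a plain index:
    DROP INDEX CIX_StagingFacts_SnapshotId ON dbo.StagingFacts;

    -- If the clustered index backs the primary key,
    -- the constraint has to be dropped instead:
    ALTER TABLE dbo.StagingFacts DROP CONSTRAINT PK_StagingFacts;
    ```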

    • Marked as answer by StefanNaegeli Tuesday, August 18, 2009 8:02 AM
    Tuesday, August 18, 2009 8:02 AM
  • I solved the problem by removing the clustered indexes on the bulk insert target tables. According to the documentation here, parallel bulk inserts with TABLOCK can only be done on a heap (an index-less table).


    That might have prevented the deadlock scenario from occurring, but it also removes your clustered index, which isn't ideal. Did you happen to capture a deadlock graph from the deadlock occurring? That would be the key to figuring out exactly why the deadlock occurred, and whether it is preventable while still maintaining the clustered index on the table. Heaps are good for loading data quickly, but they have a cost associated with them when you perform update transactions against that data, especially if the data increases in size as a result of the update, leading to forwarded records. If you have a deadlock graph, post it here and I'll offer ideas on where to go with it.
    Jonathan Kehayias
    Please click the Mark as Answer button if a post solves your problem!
    Tuesday, August 18, 2009 4:39 PM
  • Hi Jonathan,

    I don't have the deadlock graph anymore, but the deadlock situation was that two processes held a BU lock on the destination table and both wanted to upgrade their BU lock to an exclusive lock at the same time. This led to a conversion deadlock. The reason both processes need to upgrade the BU lock is to update the clustered index. If you don't have a clustered index on the table, bulk-inserting only leads to a BU lock and therefore does not conflict.

    I'm aware of the impact of removing the clustered index. The destination table is used to upload data to an Analysis Services cube.
    The clustered index was introduced to speed up Analysis Services processing, since our Analysis Services partitions are based on different snapshot ids (the column of my clustered index).
    Since our destination table is now a heap, SQL Server will do a table scan on the fact table while processing the measure group.
    This is suboptimal because processing time increases with the number of snapshots in the staging database.

    I currently see three possibilities to change our architecture (and fix the Analysis Services processing performance)

    1. Stick with a heap as the destination table and use table partitioning over the snapshot column. (Unfortunately this feature is only available in Enterprise Edition).
    2. Keep the clustered index and bulk-insert the data sequentially via T-SQL using binary files instead of SqlBulkCopy. This, however, removes the parallelism while staging the data.
    3. Somehow enforce an exclusive lock on the destination table right away when bulk-inserting the data. If all processes took an exclusive lock on the table instead of the BU lock, there would not be any conversion deadlock. Of course, there wouldn't be any parallelism on the destination table anymore.
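    Option 3 could also be approximated with an application lock, so that concurrent loaders queue up behind each other instead of deadlocking. A sketch, with a hypothetical resource name; the bulk insert itself would run inside the same transaction:

    ```sql
    BEGIN TRAN;

    -- Serialize the loaders: only one bulk insert runs at a time; the
    -- others wait (here up to 120 seconds) instead of deadlocking.
    EXEC sp_getapplock
        @Resource    = 'StagingFacts_BulkLoad',
        @LockMode    = 'Exclusive',
        @LockOwner   = 'Transaction',
        @LockTimeout = 120000;

    -- ... perform the bulk insert here ...

    COMMIT;  -- the app lock is released with the transaction
    ```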


    Wednesday, August 19, 2009 8:43 AM
  • Could the thread blocking scenario discussed here cause SqlBulkCopy's WriteToServer method to fail to throw a timeout exception as scheduled?

    In a multi-client bulk insert scenario (a web farm bulk loading queued data to a common SQL Server), I'm experiencing infinite waits at .WriteToServer() on the 2nd and subsequent timeouts only.

    Scenario (code block below):
    • #1 (aka "Unstable Client") begins a transaction & hangs.
    • #2 (aka "Stable Client") begins a transaction and attempts an insert, which times out on schedule.
    • #2 rolls back its transaction & closes the connection.
    • #2 waits any number of minutes & tries again.
    • #2 waits infinitely on .WriteToServer().
    • #1 eventually rolls back or commits its transaction, releasing its lock.
    • Now #2 throws the timeout exception.

    public void BulkCopy(string connectionString, DataTable table, int bulkTimeout)
    {
        using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connectionString))
        {
            bulkCopy.BulkCopyTimeout = bulkTimeout; //e.g. 120 sec.
            //... fill with data, map columns...
            bulkCopy.WriteToServer(table);
            // ^^^^ waits indefinitely, doesn't throw until *after*
            //      the lock is released.
        }
    }

    I can deal with timeouts all day, but not infinite waits. I've found no strategy so far -- including internal vs. external transactions, row vs. table locks, multiple active result sets on vs. off -- whereby the timeout exception is consistently thrown on schedule.

    Two questions arise: 1) Would removing the PK and indexes likely resolve this? And 2) is there a strategy not involving table changes that would resolve this?

    Thanks much,

    • Edited by Paul C Smith Friday, February 12, 2010 3:04 AM fmt
    Friday, February 12, 2010 3:01 AM
  • I currently have the same problem with infinite waits on bulk copies.

    Has anyone solved this problem in the meantime?

    Thanks a lot,


    Thursday, June 14, 2012 6:57 AM
  • Which SQL Server version are you using?

    If I remember right, our problem only happened on SQL 2005, but not on SQL 2008 and later.


    Wednesday, June 20, 2012 12:49 PM
  • SQL Server 2008 (SBS Server 2008)
    Wednesday, June 20, 2012 12:52 PM
  • Just to clarify if this problem is the same or at least related.

    Does your destination table have any indexes or keys defined? If yes, is the problem resolved when removing all keys and indexes?


    Wednesday, June 20, 2012 1:05 PM
  • Paul, I know this is 9 years old, but did you ever find a solution to the 2nd and subsequent calls infinitely hanging? We just ran into this and very curious if you ever figured it out. I'm running:

    Microsoft SQL Server 2017 (RTM-CU15) (KB4498951) - 14.0.3162.1 (X64)  


    Wednesday, December 11, 2019 6:01 PM