SqlBulkCopy raises exception when targeting a temporary table

  • Question

  • We have an API in our software that does bulk imports of data into a temporary table and subsequently issues a MERGE statement using the populated temporary table. The implementation of the API (with the important parts quoted below) works perfectly against SQL Server 2008 SP2+, but we recently discovered that it causes problems when running our software on Windows Azure (and SQL Azure).

    Reported exception messages are as follows:

    1. System.InvalidOperationException: Cannot access destination table '#Contacts'
    2. System.Data.SqlClient.SqlException: Reference to database and/or server name in 'tempdb..sp_tablecollations_100' is not supported in this version of SQL Server.

    We're fairly sure the following C# statement raises the exception:

    bulk.WriteToServer(table);

    Here's a more complete listing of the implementation that contains the above line:

    var table = new DataTable();
    table.Columns.Add(new DataColumn("Id", typeof(Guid)));
    table.Columns.Add(new DataColumn("Status", typeof(int)));

    foreach (var contact in enumerable)
    {
        var row = table.NewRow();
        row["Id"] = contact.Id;
        row["Status"] = (int) contact.EmailValidationStatus;

        table.Rows.Add(row);
    }

    using (var scope = new TransactionScope(TransactionScopeOption.Required, TransactionOptions))
    {
        using (var connection = this.CreateSqlConnection())
        {
            connection.Open();

            var command = new SqlCommand("CREATE TABLE #Contacts (Id UNIQUEIDENTIFIER, Status INT)", connection);
            command.ExecuteNonQuery();

            // Bulk copy the data in the DataTable to the temp table
            using (var bulk = new SqlBulkCopy(connection))
            {
                bulk.BulkCopyTimeout = 0;
                bulk.DestinationTableName = "#Contacts";
                bulk.WriteToServer(table);
            }

            command.CommandText = "MERGE INTO dbo.[Contact] AS Target " +
                                  "USING #Contacts AS Source " +
                                  "ON Target.Id = Source.Id " +
                                  "WHEN MATCHED THEN UPDATE SET Target.EmailValidationStatus = @status;";

            command.Parameters.Add(new SqlParameter("@status", (int) status));
            command.ExecuteNonQuery();

            // Clean up the temp table
            command.CommandText = "DROP TABLE #Contacts";
            command.ExecuteNonQuery();

            scope.Complete();
        }
    }

    To verify that the DDL statement for creating the temporary table is valid, I opened SSMS against the SQL Azure database in question and executed the following statements (which completed successfully):

    CREATE TABLE #Contacts (Id UNIQUEIDENTIFIER, Status INT)
    DROP TABLE #Contacts
    

    Unless there are differences in scoping semantics between SQL Server 2008 SP2+ and SQL Azure, the temporary table is created with local scope (i.e. scoped to the explicitly managed connection).

    Is this as simple as stating that SqlBulkCopy doesn't support temporary tables on SQL Azure?


    Anders Borum / SphereWorks

    Wednesday, March 28, 2012 1:30 PM

Answers

  • This is a known issue with SQL Azure and bulk copy. The only workaround is to upload to a non-temporary table. You can, for example, generate a table name using a random GUID to avoid collisions, and then make sure you clean the table up later. Another option is to upload the data into a temp table using the TVP feature instead of SqlBulkCopy; let me know if you want details on how to do this. They are working on fixing this, but I don't have an ETA on when the fix will be done yet.

    Matt

    Wednesday, March 28, 2012 9:58 PM

All replies

  • Hi,

    Looking at the error, I think the issue is related to the transaction scope. Try creating the table and then executing the bulk copy in two different transaction scopes.


    If you found this post useful, please "Mark as Answer" or "Vote as Helpful". Thanks! Vaibhav Gujral.

    Wednesday, March 28, 2012 1:45 PM
  • Vaibhav,

    thanks for the suggestion. I'm just curious as to why this works with SQL Server 2008 SP2+ and not SQL Azure. The use of an explicit transaction should just ensure that all statements run within the same transaction.


    Anders Borum / SphereWorks

    Wednesday, March 28, 2012 1:50 PM
  • Hi,

    As far as I know, local temporary tables are supported by SQL Azure, but global temporary tables are not.


    If you found this post useful, please "Mark as Answer" or "Vote as Helpful". Thanks! Vaibhav Gujral.

    Wednesday, March 28, 2012 2:02 PM
  • Vaibhav,

    a local temporary table is identified by a '#' prefix, and a global temporary table is identified by a '##' prefix. We're using a local temporary table. It's quite a frustrating issue; I can't find much information on SqlBulkCopy and temporary tables on the net. Suggestions and pointers of any kind are more than welcome!


    Anders Borum / SphereWorks



    Wednesday, March 28, 2012 2:09 PM
  • Hi Anders,

    I'll take a look at this today and figure out what is going on, then post back.


    Matt

    Wednesday, March 28, 2012 4:39 PM
  • Matt,

    thanks for looking into the issue; I'm anxious to figure out what's causing this. Let me know if you need additional information or details.


    Anders Borum / SphereWorks

    Wednesday, March 28, 2012 5:46 PM
  • This is a known issue with SQL Azure and bulk copy. The only workaround is to upload to a non-temporary table. You can, for example, generate a table name using a random GUID to avoid collisions, and then make sure you clean the table up later. Another option is to upload the data into a temp table using the TVP feature instead of SqlBulkCopy; let me know if you want details on how to do this. They are working on fixing this, but I don't have an ETA on when the fix will be done yet.
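
    As a rough sketch (not a drop-in implementation), the random-name workaround could look like the following against the schema from the question; the Staging_ prefix and the cleanup strategy are illustrative:

    var stagingTable = "Staging_" + Guid.NewGuid().ToString("N");

    using (var connection = this.CreateSqlConnection())
    {
        connection.Open();

        // Create a real (non-temporary) table; the GUID suffix avoids collisions.
        var command = new SqlCommand(
            "CREATE TABLE [" + stagingTable + "] (Id UNIQUEIDENTIFIER, Status INT)", connection);
        command.ExecuteNonQuery();

        try
        {
            using (var bulk = new SqlBulkCopy(connection))
            {
                bulk.DestinationTableName = "[" + stagingTable + "]";
                bulk.WriteToServer(table); // the same DataTable as in the question
            }

            // ... run the MERGE against the staging table instead of #Contacts ...
        }
        finally
        {
            // Drop the staging table even if the MERGE fails.
            command.CommandText = "DROP TABLE [" + stagingTable + "]";
            command.ExecuteNonQuery();
        }
    }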

    Matt

    Wednesday, March 28, 2012 9:58 PM
  • One technical detail to note: SQL Azure does not support calling the sp_tablecollations_100 metadata API against a temporary table. The SqlBulkCopy class needs this call internally to prepare the bulk copy.

    Matt

    Wednesday, March 28, 2012 11:23 PM
  • Matt,

    yes, the error message pointed to the same limitation. It seems we're left with using a physical table with a random name (i.e. a GUID, as you suggested). I'll be honest and say that I'm surprised at just how many features of SQL Server aren't supported in SQL Azure; beyond just forcing simpler database designs, moving our data tier to support SQL Azure has surfaced a number of scenarios like this.

    Regarding the TVP alternative to SqlBulkCopy, I'd like to know more. Please provide pointers and references to additional information (I do have experience with an XML-based approach, shredding the XML in stored procedures). Also, I'd like to know how to keep track of when the bulk copy issue with temporary table references is resolved in SQL Azure. Is there an information loop I could be added to (also for verification with our own software)?

    Thanks for looking into the issue Matt, it's highly appreciated and I thank you for the swift reply.


    Anders Borum / SphereWorks


    Thursday, March 29, 2012 8:21 AM
  • One of the design choices we made with SQL Azure was to clearly isolate databases from each other. We call this "containment". Basically, we had to remove inter-database dependencies because we want to cheaply host databases in a multi-tenant fashion on commodity hardware; this is what makes the service cost effective. It does, however, make porting to SQL Azure a bit more difficult for customers, since most applications are written without containment in mind. As part of the containment changes (blocking the usage of three-part names) we missed this issue with tempdb, but we're working to get it fixed.

    There is a Connect bug already filed on this to track the issue, you can check this bug for status:

    https://connect.microsoft.com/SQLServer/feedback/details/624093/sqlbulkcopy-writetoserver-fails-on-sql-azure-if-destination-table-is-temporary

    Another option is you can start a feature voting item here:

    http://www.mygreatwindowsazureidea.com/forums/34685-sql-azure-feature-voting

    It would be worthwhile to file the feature request and get others to vote on it to raise awareness of this scenario (bulk copy into temp tables).

    The TVP approach is to define a table-valued parameter type on the server and use it to stream the data into a temporary table. I'll work on putting together a quick sample and post back today.
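
    As a rough sketch of that idea (assuming a user-defined table type already exists on the server; the name dbo.ContactTableType is illustrative):

    // Assumed to exist on the server (illustrative name and shape):
    //   CREATE TYPE dbo.ContactTableType AS TABLE (Id UNIQUEIDENTIFIER, Status INT);
    //
    // #Contacts must already have been created on this connection, as in the original code.
    using (var command = new SqlCommand(
        "INSERT INTO #Contacts (Id, Status) SELECT Id, Status FROM @contacts", connection))
    {
        var parameter = command.Parameters.AddWithValue("@contacts", table);
        parameter.SqlDbType = SqlDbType.Structured;
        parameter.TypeName = "dbo.ContactTableType";

        command.ExecuteNonQuery();
    }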


    Matt

    Thursday, March 29, 2012 5:17 PM
  • Matt,

    thanks for the clarification - it's always nice to learn about the decisions that shape the software we use on a daily basis.

    Could you please elaborate on the fact that the Connect bug you're referring to (the same issue I raised in this thread) was submitted in late 2010 and still isn't resolved despite numerous releases of SQL Azure? The reason I'm asking is that our team was quite surprised by this bug (it's not a missing feature; it's a bug). With a shared codebase deployed both on premises and on Windows Azure, we're looking at supporting the lowest common denominator - SQL Azure - and especially with the lack of temp table support in bulk scenarios, we're looking at an awkward implementation (a physical table, with each bulk working set identified by a separate column).

    Thanks in advance


    Anders Borum / SphereWorks

    Friday, April 13, 2012 9:57 AM
  • I've brought this to the attention of the language team and they are working on fixing this but I don't have an ETA yet.  

    Note that I looked into the TVP solution, and it is not as straightforward as I originally imagined. The tricky part is that you need to define a table-valued parameter type that matches the temp table definition, which means you have to reverse engineer the type. I found no easy way to reverse engineer the TVP type from T-SQL; you have to read the metadata and construct the type yourself.

    So my original workaround of using a randomly generated table name is much easier, since you can do a SELECT TOP 0 * INTO RandomTableName FROM SourceTable to create the target table for the bulk upload.
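
    A minimal sketch of that variant (table names here are illustrative):

    // Clone the shape of an existing table into a randomly named physical table.
    var name = "Bulk_" + Guid.NewGuid().ToString("N");

    var command = new SqlCommand(
        "SELECT TOP 0 * INTO [" + name + "] FROM dbo.SourceTable", connection);
    command.ExecuteNonQuery();

    using (var bulk = new SqlBulkCopy(connection))
    {
        // Map by name, in case the staging table's column order differs from the DataTable's.
        foreach (DataColumn column in table.Columns)
            bulk.ColumnMappings.Add(column.ColumnName, column.ColumnName);

        bulk.DestinationTableName = "[" + name + "]";
        bulk.WriteToServer(table);
    }

    // ... MERGE from the staging table, then drop it:
    command.CommandText = "DROP TABLE [" + name + "]";
    command.ExecuteNonQuery();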


    Matt

    Friday, April 13, 2012 4:36 PM
  • Hi Matt,

    it's been six months since I reported the issue and you said you would bring it to the attention of the language team. I haven't checked whether a fix has been made available on SQL Azure recently, but I haven't seen anything about it in the change logs for the recent updates.

    In case you're wondering, yes, we're waiting ;)


    Anders Borum / SphereWorks

    Friday, October 12, 2012 12:46 PM
  • I've started a feature request (or, let's call it a bug fix request) at http://www.mygreatwindowsazureidea.com/forums/169379-data-management-sql-database/suggestions/3326856-sqlbulkcopy-should-support-temporary-tables-align for those interested in bringing it to Microsoft's attention.

    I'm quite disappointed by the feedback from the language service team.


    Anders Borum / SphereWorks

    Tuesday, November 6, 2012 8:18 AM
  • One year later and no resolution. Any ETA this time?
    Saturday, March 9, 2013 7:56 PM
  • Hi Alex,

    unfortunately, the answer is no (as far as I'm aware). I have been involved in a long e-mail correspondence with a number of Microsoft developers regarding this issue - basically, the workaround was replacing the use of SqlBulkCopy with TVPs (table-valued parameters).

    I did a fairly large number of performance tests comparing SqlBulkCopy and TVPs; in all but a very few edge cases, TVPs came out as the faster approach. The only issue is that TVPs don't support the sql_variant data type, so for this particular requirement we're keeping our SqlBulkCopy approach (though we have planned to replace it with TVPs soon).

    If you're interested, I'd have no problem posting my benchmark results here.

    I don't think we're going to see much interest from Microsoft in fixing the SqlBulkCopy issue anytime soon; if they valued the feature, it would have been fixed months ago - which is too bad.


    Anders Borum / SphereWorks

    Saturday, March 9, 2013 8:22 PM
  • I did a fairly large number of performance tests comparing SqlBulkCopy and TVPs; in all but a very few edge cases, TVPs came out as the faster approach. The only issue is that TVPs don't support the sql_variant data type, so for this particular requirement we're keeping our SqlBulkCopy approach (though we have planned to replace it with TVPs soon).

    If you're interested, I'd have no problem posting my benchmark results here.

    My experience is similar to yours, Anders. Performance using TVPs (with an IEnumerable<SqlDataRecord> source) gives SqlBulkCopy (with a DataReader source) a run for its money.
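
    For reference, a sketch of that streaming shape (SqlDataRecord and SqlMetaData live in the Microsoft.SqlServer.Server namespace; the Contact class and the dbo.ContactTableType type name are carried over from earlier posts and are illustrative):

    static IEnumerable<SqlDataRecord> ToRecords(IEnumerable<Contact> contacts)
    {
        var metaData = new[]
        {
            new SqlMetaData("Id", SqlDbType.UniqueIdentifier),
            new SqlMetaData("Status", SqlDbType.Int)
        };

        foreach (var contact in contacts)
        {
            // Rows are produced lazily, so nothing is buffered up front.
            var record = new SqlDataRecord(metaData);
            record.SetGuid(0, contact.Id);
            record.SetInt32(1, (int) contact.EmailValidationStatus);
            yield return record;
        }
    }

    // Usage: pass the enumerable as a structured parameter.
    var parameter = command.Parameters.AddWithValue("@contacts", ToRecords(contacts));
    parameter.SqlDbType = SqlDbType.Structured;
    parameter.TypeName = "dbo.ContactTableType"; // illustrative type name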


    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

    Sunday, March 10, 2013 1:01 AM