How can I bulk insert into the deltastore? How to use SqlBulkCopy efficiently on a table with a non-clustered columnstore index?

  • Question

  • Hello SQL Server community,

    We have an application that is loading data events in batches.

    The application inserts events into a SQL Server table with bulk insert (System.Data.SqlClient.SqlBulkCopy), in batches of 1 to 10,000 rows.

    We have added a non-clustered columnstore index to the table.

    Now each bulk insert results in a COMPRESSED rowgroup the size of the batch, and after a while things get inefficient: you end up with lots of relatively small compressed rowgroups, and using the index becomes MUCH slower.

    Essentially we are back to the days when there was no deltastore on NCCIs.

    Of course you can run a costly REORGANIZE on your NCCI to merge those tiny closed rowgroups into big ones.


    If you execute a plain INSERT statement, the index records the rows in the deltastore, and things are handled much more efficiently.

     

    Therefore my question: Is there any way to ask SQL Server to treat a bulk insert as a normal insert when updating columnstore indexes?

    Another formulation: Is there any way to disable the BULKLOAD rowgroup trim reason for a columnstore index when bulk-loading data?

    Thank you very much,

    Alexander

    This script illustrates the question more precisely:

    To run it, you need to create a file on the filesystem for BULK INSERT.

    It will create a DB and clean it up afterwards.

    SQL Server 2017 (14.0.3223.3) was used. 

    The behavior is the same on Microsoft SQL Azure (RTM) - 12.0.2000.8 (Aug 27 2019 17:56:41).

    USE [master]
    GO

    THROW 51000, 'Create a file C:\TestColumnStoreInservVSBulkInsert.txt with content: "test, 1" and comment this line', 1;  

    CREATE DATABASE TestColumnStoreInservVSBulkInsert 
    GO
    use [TestColumnStoreInservVSBulkInsert]

    CREATE TABLE [Table](
    [value] [varchar](20) NOT NULL INDEX  IX_1 CLUSTERED,
    [state] int not NULL
    )

    CREATE NONCLUSTERED COLUMNSTORE INDEX [NCI_1] ON [dbo].[Table]
    (
    [value],
    [state]
    ) WITH (DROP_EXISTING = OFF, COMPRESSION_DELAY = 0) 

    insert into [Table] values ('TestInitial', 1)

    DECLARE @IndexStateQuery VARCHAR(MAX)  
    SET @IndexStateQuery = 'SELECT i.object_id,   
        object_name(i.object_id) AS TableName,   
        i.name AS IndexName,   
        i.index_id,   
        i.type_desc,   
        CSRowGroups.*
    FROM sys.indexes AS i  
    JOIN sys.dm_db_column_store_row_group_physical_stats AS CSRowGroups  
        ON i.object_id = CSRowGroups.object_id AND i.index_id = CSRowGroups.index_id   
    ORDER BY object_name(i.object_id), i.name, row_group_id;  '

    EXEC (@IndexStateQuery)
    -- Creates a COMPRESSED rowgroup with just 1 record
    -- QUESTION: How to make this statement add data to the OPEN rowgroup instead?
    BULK INSERT [Table] FROM 'C:\TestColumnStoreInservVSBulkInsert.txt' WITH ( FORMAT='CSV', ROWS_PER_BATCH = 1);

    EXEC (@IndexStateQuery)
    -- Adds one record to existing open rowgroup 
    insert into [Table] select top 1 * from [Table]
    EXEC (@IndexStateQuery)

    -- Costly fix: merge and recompress closed rowgroups
    --ALTER INDEX NCI_1   ON [Table] REORGANIZE   
    --EXEC (@IndexStateQuery)


    --Cleanup
    use [master]
    alter database [TestColumnStoreInservVSBulkInsert] set single_user with rollback immediate
    drop database [TestColumnStoreInservVSBulkInsert]

     


    Monday, September 16, 2019 11:52 AM

All replies

  • What is the purpose of adding an NCI columnstore index for this task? Loading the data into a heap table will be much faster than loading it into a table that has indexes. In addition, loading data into a temporary table will also be much faster than loading it into permanent storage.

    Best Regards, Uri Dimant SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/

    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting: Large scale of database and data cleansing
    Remote DBA Services: Improves MS SQL Database Performance
    SQL Server Integration Services: Business Intelligence

    Monday, September 16, 2019 12:03 PM
    Answerer
  • Hello Uri,

    Thank you for a quick reply.

    We are aware that loading data into a table with rowstore indexes has certain performance implications.

    But we are completely fine with the performance we were getting before adding the columnstore index.

    The only thing we would like to clarify:  

    Is there a way to change the default handling of "Bulk Insert" vs. "Insert" by columnstore indexes?

    I will clarify the question a bit. We are looking at SQL Server 2017 as the baseline.

    Default behavior:

    1. On Insert: rows are added to the currently OPEN rowgroup (this is what I referred to as the deltastore).

    2. On Bulk insert: a new COMPRESSED rowgroup is created with the records from the batch.

    We are looking for this behavior: 

    1. On Insert: rows are added to the currently OPEN rowgroup.

    2. On Bulk insert: rows are added to the currently OPEN rowgroup. (The snippet below shows how to check which path a given load took.)
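
    For reference, a quick way to check which path a given load took (assuming SQL Server 2016 or later, where sys.dm_db_column_store_row_group_physical_stats exposes the trim reason):

    -- trim_reason_desc = 'BULKLOAD' marks rowgroups that were compressed directly by a bulk load
    SELECT row_group_id, state_desc, total_rows, trim_reason_desc
    FROM sys.dm_db_column_store_row_group_physical_stats
    WHERE object_id = OBJECT_ID('dbo.[Table]')   -- table from the repro script
    ORDER BY row_group_id;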



    Monday, September 16, 2019 12:35 PM
  • What recovery model is the database in?

    Can you show the code you actually use for the operation?


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

    Monday, September 16, 2019 8:54 PM
  • Hi Alexander Bartosh,

     

    Would you please refer to http://www.nikoport.com/2015/08/19/columnstore-indexes-part-62-parallel-data-insertion/?

     

    Best regards,

    Dedmon Dai


    MSDN Community Support
    Please remember to click "Mark as Answer" the responses that resolved your issue, and to click "Unmark as Answer" if not. This can be beneficial to other community members reading this thread. If you have any compliments or complaints to MSDN Support, feel free to contact MSDNFSF@microsoft.com

    Tuesday, September 17, 2019 9:42 AM
  • Therefore my question: Is there any way to ask SQL Server to treat a bulk insert as a normal insert when updating columnstore indexes?

    Small batch sizes with bulk insert are also inefficient with row-store because SQL Server will allocate a full 64K extent for each batch when batch sizes are small. I suggest you use traditional inserts instead of SqlBulkCopy for small batches if you want to avoid the overhead of taking a train to the corner store.
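
    A rough way to observe that allocation overhead on the row-store side, using the [Table] from the repro script (illustrative only; reserved_page_count counts whole extents, so many tiny batches push it well above used_page_count):

    SELECT index_id, row_count, used_page_count, reserved_page_count
    FROM sys.dm_db_partition_stats
    WHERE object_id = OBJECT_ID('dbo.[Table]');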


    Dan Guzman, Data Platform MVP, http://www.dbdelta.com

    Tuesday, September 17, 2019 10:46 AM
    Moderator
  • Hello guys,

    Thank you for quick reaction.

    I have compiled this script that demonstrates my question.

    The question was updated with the script.


    Tuesday, September 17, 2019 11:39 AM
  • Hello Dedmon,

    This is indeed a very nice read, but it does not answer the question.

    Thank you.

    Tuesday, September 17, 2019 11:41 AM
  • Hey Dan,

    Thank you. Did not know that. Nice.

    Our regular batch is very likely to be bigger than 64k. 

    It would be nice to update this snippet of documentation with your knowledge!

    https://docs.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlbulkcopy

    The SqlBulkCopy class lets you write managed code solutions that provide similar functionality. There are other ways to load data into a SQL Server table (INSERT statements, for example), but SqlBulkCopy offers a significant performance advantage over them.


    Tuesday, September 17, 2019 11:47 AM
  • It would be nice to update this snippet of documentation with your knowledge!

    https://docs.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlbulkcopy


    Yet another one for my TODO list ;-)

    Regarding the batch size and space considerations with row store, see https://www.dbdelta.com/bulk-load-batch-size-considerations-in-sql-server-2016/. 


    Dan Guzman, Data Platform MVP, http://www.dbdelta.com

    Tuesday, September 17, 2019 12:08 PM
    Moderator
  • Our regular batch is very likely to be bigger than 64k. 

    I have an idea that should provide the best of both worlds. Instead of SqlBulkCopy, try passing a table-valued parameter with the rows to insert and use an INSERT...SELECT query (which could be in a stored proc). I've seen this TVP method provide performance similar to SqlBulkCopy, and it will avoid the issue with compressed rowgroups you have with SqlBulkCopy.
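
    A minimal sketch of that approach against the [Table] from the repro script (the type and procedure names are just illustrative):

    -- Table type matching the target table's columns
    CREATE TYPE dbo.EventRowType AS TABLE
    (
        [value] varchar(20) NOT NULL,
        [state] int NOT NULL
    );
    GO
    CREATE PROCEDURE dbo.InsertEvents
        @rows dbo.EventRowType READONLY
    AS
    BEGIN
        -- Rows arriving through the TVP follow the normal insert path,
        -- so small batches land in the open delta rowgroup instead of a
        -- trimmed COMPRESSED rowgroup.
        INSERT INTO dbo.[Table] ([value], [state])
        SELECT [value], [state]
        FROM @rows;
    END;
    GO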


    Dan Guzman, Data Platform MVP, http://www.dbdelta.com

    Tuesday, September 17, 2019 12:47 PM
    Moderator
  • Nice idea. 

    It would be even better if one used INSERT...SELECT WITH (TABLOCK) (thanks to Dedmon for the link).

    But in this case you lose the streaming behavior and need to load all your data into memory (both the application and SQL Server).

    SqlBulkCopy, when used in combination with an IDataReader, can stream the data as it is consumed by SQL Server.

    I would like to have one method of loading data, and SqlBulkCopy has done a pretty nice job so far.

    I am sure I am not the only one using SqlBulkCopy in combination with columnstore indexes.

     🤔



    Tuesday, September 17, 2019 12:59 PM
  • Interesting thread!

    But let's go back to the problem statement: how many rows are in a typical insert?

    Just as a general rule I never think about bulk load below some level, maybe 100k rows, and with an average 8-byte key that wouldn't have any problem with a 64kb extent (except for fragmentation into the final extent, …).

    How many inserts do you have per day, how many are over 100k rows, anything else you can offer about your typical loads? A non-bulk load of 10k rows shouldn't take more than a couple of seconds (maybe faster), or maybe a little longer if you break it into several transactions for good luck.

    Josh



    • Edited by JRStern Tuesday, September 17, 2019 9:04 PM
    Tuesday, September 17, 2019 9:01 PM
  • SqlBulkCopy, when used in combination with an IDataReader, can stream the data as it is consumed by SQL Server.

    You can stream data to a TVP as well. True, there will be intermediate storage on the SQL Server side, but you don't need to buffer the data in the client.

    I have an article on my web site which is an introduction to using TVPs in .NET, and it includes an example with streaming data:
    http://www.sommarskog.se/arrays-in-sql-2008.html


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

    Tuesday, September 17, 2019 9:26 PM
  • SqlBulkCopy, when used in combination with an IDataReader, can stream the data as it is consumed by SQL Server.

    The TVP source can be a DataReader, IEnumerable<SqlDataRecord>, or DataTable. In addition to the examples on Erland's site, the TVP documentation has a C# example of streaming using a DataReader. 


    Dan Guzman, Data Platform MVP, http://www.dbdelta.com

    Wednesday, September 18, 2019 1:45 AM
    Moderator
  • Erland, Dan,

    Thank you for the TVP suggestion.

    If I do not find any switch to disable the BULKLOAD rowgroup trim reason, I was considering using my favorite ( :/ ) SqlBulkCopy to insert into a temporary memory-based table and then performing an INSERT...SELECT WITH (TABLOCK), as sketched below.
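
    Roughly what I have in mind, sketched with a plain temp table (a memory-optimized table would follow the same pattern):

    -- Staging table without a columnstore index, so the BULKLOAD trim reason
    -- never comes into play for the SqlBulkCopy step
    CREATE TABLE #EventStage
    (
        [value] varchar(20) NOT NULL,
        [state] int NOT NULL
    );
    -- 1. SqlBulkCopy streams the batch into #EventStage (same connection/session)
    -- 2. Move the rows with one set-based statement:
    INSERT INTO dbo.[Table] WITH (TABLOCK) ([value], [state])
    SELECT [value], [state]
    FROM #EventStage;
    DROP TABLE #EventStage;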

    In your opinion, which one would be better, the TVP or the temp memory table?

    Josh, 

    The most frequent batch size is about 1000-5000 rows. We have as many inserts as SQL can do :)
    We cannot accumulate them into bigger batches (that's how the application logically works).

    Thank you,

    Alexander

    Wednesday, September 18, 2019 11:21 AM
  • In your opinion, which one would be better, the TVP or the temp memory table?

    I have an opinion on which may be faster but don't listen to me. Instead, race your horses.


    Dan Guzman, Data Platform MVP, http://www.dbdelta.com


    Wednesday, September 18, 2019 12:09 PM
    Moderator
  • Alexander, how long does it take to recreate the columnstore index?

    Josh

    Tuesday, September 24, 2019 6:20 PM
  • It depends on the amount of data events ingested into that specific database. 

    IMHO, REBUILD is much less efficient than REORGANIZE, which is much less efficient than the default rowgroup creation method (close when full).

    According to my understanding, REBUILD will scan all data in the clustered index and recreate all compressed rowgroups.

    REORGANIZE should be able to read only the small, already-compressed (e.g. from bulk insert) rowgroups and combine them into bigger ones.

    And a normal insert should create only full compressed rowgroups once you have enough data. (Looks like the most efficient solution to me.)

    Thus, IMHO the rebuild is the least efficient solution here, and I am looking for the most efficient one.

    On the other hand, if your index has a huge number of small compressed rowgroups, reorganizing them all might become less efficient than rebuilding the entire index.




    Monday, September 30, 2019 10:46 AM
  • Hi there,

    I know this is a bit of an older post, but here goes...

    Have you tried setting COMPRESSION_DELAY = 60?
    If you have regular updates/deletes on the table, it might also help reduce fragmentation.
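
    On the NCI_1 index from the repro script that would look something like this (it can also be specified in the CREATE INDEX ... WITH clause, which the script currently does with 0):

    -- Keep closed delta rowgroups in the delta store for at least 60 minutes
    -- before the tuple mover compresses them
    ALTER INDEX NCI_1 ON dbo.[Table]
    SET (COMPRESSION_DELAY = 60);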

    We went through the same investigative path.
    The solution in our use case ended up being to execute the REORGANIZE command regularly. REORGANIZE is incremental and can be started when the database is not so busy, then stopped when the server gets busy.
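
    For example, using the NCI_1 index from the repro script (COMPRESS_ALL_ROW_GROUPS is optional and also forces OPEN delta rowgroups to be compressed):

    -- Merges small compressed rowgroups and removes rows marked as deleted
    ALTER INDEX NCI_1 ON dbo.[Table]
    REORGANIZE WITH (COMPRESS_ALL_ROW_GROUPS = ON);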

    If you have indexes, at some point you are going to need to do maintenance on them. I usually take a rule of thumb of >10% to 30% fragmentation for a manual reorganize (depending on table rowcount).
    In the case of columnstore indexes you need to be especially aware of the impact of updates and deletes, since both mark the current row in the columnstore as deleted, but the row is not cleaned out of the index (an UPDATE marks the row as deleted and then inserts a new row).

    The other consideration with columnstore indexes is that the SQL engine uses rowgroup elimination.
    This means that the engine stores min/max metadata for each column in each rowgroup.
    E.g. the second rowgroup's first column is INT and contains values Min = 30 and Max = 100.
    If each of your inserts contains values that are not incremental, e.g. Min = 1 and Max = 300000, SQL will need to scan each rowgroup in the index. This often results in very inefficient query plans being generated.
    By running the REORGANIZE command regularly, you make sure that the index is organized in such a way that SQL Server will scan only the minimum rowgroups required to fulfill the query.
    https://docs.microsoft.com/en-us/sql/relational-databases/indexes/columnstore-indexes-query-performance?view=sql-server-ver15#rowgroup-elimination
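
    If you want to look at the metadata that elimination is based on, sys.column_store_segments exposes per-segment ranges (min_data_id / max_data_id are internal encoded values rather than raw column values, but disjoint ranges across segments are what make elimination possible):

    SELECT column_id, segment_id, row_count, min_data_id, max_data_id
    FROM sys.column_store_segments
    ORDER BY column_id, segment_id;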

    From my experience, the performance gains from having a well-performing columnstore index far outweigh the impact of the regular REORGANIZE commands being executed. By executing it regularly, you reduce the time and CPU of each run and improve resource usage when queries use the index.
    The added benefit is that newly inserted data is compressed sooner and any rows that are marked as deleted get removed from the index.

    The REORGANIZE command is an online process and, if performed regularly, should have a reduced performance impact. (Use the MAXDOP hint to limit the processors used to further limit the performance impact.)
    You can read more about columnstore fragmentation and the rebuild/reorganize commands at the link below.

    https://docs.microsoft.com/en-us/sql/relational-databases/sql-server-index-design-guide?view=sql-server-ver15#columnstore_index

    Tuesday, January 21, 2020 9:43 PM