Getting Error "The buffer manager detected that the system was low on virtual memory" in SSIS Package

Answers

  • Hi,

    Suppose you have nearly 1 crore (10 million) records on the source side and have to transfer them to the destination in batches, say 1 lakh (100,000) records per batch. How can you implement this scenario?

    This can be done in two ways:

    1. In the SSIS OLE DB destination component, set Data access mode = Table or view - fast load. This makes additional options available, including Rows per batch and Maximum insert commit size. Set these to 10,000 to begin with and increase or decrease them according to the speed required and the amount of RAM available.

    Effect of Rows Per Batch and Maximum Insert Commit Size Settings

    Rows per batch – The default value for this setting is -1, which means all incoming rows are treated as a single batch. You can change this default behavior and break the incoming rows into multiple batches; the only allowed value is a positive integer, which specifies the maximum number of rows per batch.

    Maximum insert commit size – The default value for this setting is 2147483647 (the largest value for a 4-byte signed integer), which means all incoming rows are committed once, on successful completion. You can specify a positive value for this setting so that a commit is issued after that many records. You might wonder whether changing the default puts overhead on the data flow engine, since it has to commit several times. Yes, that is true, but at the same time it relieves the pressure on the transaction log and tempdb, which otherwise grow tremendously during high-volume data transfers.

    The above two settings are very important to understand if you want to control the growth of tempdb and the transaction log. For example, if you leave Maximum insert commit size at its default, the transaction log and tempdb will keep growing during the load, and if you are transferring a high volume of data tempdb will soon run out of space and the load will fail. So it is recommended to set these values to an optimum value based on your environment. (A rough T-SQL illustration of the effect of committing in batches follows the link below.)

    http://www.mssqltips.com/sqlservertip/1840/sql-server-integration-services-ssis-best-practices/
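
    As a rough T-SQL illustration of why committing every N rows keeps the transaction log small (this is only an analogy, not what SSIS executes internally; all table, column, and variable names are made up), a chunked load commits each batch in its own transaction:

    DECLARE @BatchSize int = 100000, @Rows int = 1;

    WHILE @Rows > 0
    BEGIN
        BEGIN TRANSACTION;

        -- Copy the next chunk of rows that are not yet in the target and
        -- commit it on its own, so the log only has to hold one chunk at a time.
        INSERT INTO dbo.TargetTable (Id, ColA)
        SELECT TOP (@BatchSize) s.Id, s.ColA
        FROM dbo.SourceTable AS s
        WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable AS t WHERE t.Id = s.Id);

        SET @Rows = @@ROWCOUNT;

        COMMIT TRANSACTION;
    END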

    2. By using a ForEach Loop container to process one batch per iteration (see the sketch after the link below):

    http://www.mssqltips.com/sqlservertip/1504/how-to-implement-batch-processing-in-sql-server-integration-services-ssis/
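
    To sketch the idea in the article above (assuming the source table has an ascending integer key; the table, column, and variable names here are hypothetical), each loop iteration would run a range-limited source query, with the boundaries advanced by SSIS variables on every pass:

    -- @StartId and @EndId would come from SSIS package variables; an OLE DB
    -- source would use ? placeholders or build the statement from a string variable.
    SELECT s.Id, s.ColA, s.ColB
    FROM dbo.SourceTable AS s
    WHERE s.Id >= @StartId
      AND s.Id <  @EndId
    ORDER BY s.Id;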

    OLE DB Destination Editor (Connection Manager Page)

    http://msdn.microsoft.com/en-us/library/ms188439.aspx


    Thanks Shiven:) If Answer is Helpful, Please Vote

    • Proposed as answer by Eileen Zhao Friday, April 27, 2012 3:21 AM
    • Marked as answer by Eileen Zhao Wednesday, May 02, 2012 7:27 AM
    Wednesday, April 25, 2012 5:02 AM

All replies

  • In almost all cases, only adding more memory helps.

    BLOBs can be very large; you need to take a look at what they actually contain.

    If you cannot add more memory, then consider simply reducing the number of rows flowing through the buffer (are you using OLE DB? If yes, see the advanced properties).


    Arthur My Blog

    Friday, April 20, 2012 3:23 PM
    Moderator
  • Arthur is correct.  For a Merge Join, the inputs have to be sorted.  I'm assuming you've used Sort components to accomplish this.  If that's the case, then SSIS is required to load ALL of the rows from EACH source into memory.  That means you need sufficient memory to do so, period.

    You may be able to ameliorate the situation by

    1. Requesting the rows in sorted order from the source, and
    2. Informing SSIS that the rows are in sorted order.

    Accomplish #1 by using an ORDER BY in your SQL statement.

    Accomplish #2 by telling SSIS that the source's output is already sorted:

    1. Right-click the Source, select Show Advanced Editor, and go to the Input and Output Properties tab.
    2. Select the Output node, find the IsSorted property, and change it to True.
    3. Expand the Output node and find the columns that participate in your ORDER BY.
    4. Set each of those columns' SortKeyPosition property to its ordinal position in the ORDER BY - for example, for "ORDER BY colA, colB", set colA to 1 and colB to 2.

    For more information, search for IsSorted and SortKeyPosition.
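
    For example, a source query like the following (table and column names are placeholders carried over from the example above) pairs with IsSorted = True, SortKeyPosition = 1 on colA and SortKeyPosition = 2 on colB:

    -- Rows arrive pre-sorted, so no Sort component (and no full in-memory sort) is needed.
    SELECT colA, colB, colC
    FROM dbo.MySourceTable
    ORDER BY colA, colB;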


    Todd McDermid's Blog


    Friday, April 20, 2012 4:17 PM
    Moderator
  • Hi Arthur & Todd,

    I have not used a Sort transformation before the Merge Join. I have used ORDER BY ... ASC on the query side of the source and set IsSorted=True and SortKeyPosition=1.

    How can I do this: "consider simply reducing the number of rows flowing through the buffer (are you using OLE DB? If yes, see the advanced properties)"?

    One source is Oracle, so I have used Data access mode = SQL command from variable, and the other source is SQL Server, where I have used Data access mode = SQL command.

    The destination table is in SQL Server, and since I am using the same table in many packages I have given SELECT * FROM My_Table WITH (NOLOCK).

    Please tell me the steps to reduce the number of rows flowing through the buffer.


    Thanks Shiven:) If Answer is Helpful, Please Vote


    Monday, April 23, 2012 5:55 AM
  • On what do you join? What are the join columns and what datatypes do they have?
    If it's a single integer column (4 byte), you can put all the key values of the destination in memory using a lookup with full cache.

    4 bytes * 50,000,000 rows = 200 Megabytes.

    Use the match and no-match output of the lookup to determine inserts and updates.
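
    In data flow terms, the no-match output would feed the inserts and the match output would feed the updates. A rough set-based T-SQL equivalent of that split (table and column names are made up) would be:

    -- New keys (the lookup's no-match output) become inserts.
    INSERT INTO dbo.DestTable (Id, ColA)
    SELECT s.Id, s.ColA
    FROM dbo.SourceRows AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.DestTable AS d WHERE d.Id = s.Id);

    -- Existing keys (the match output) become updates.
    UPDATE d
    SET    d.ColA = s.ColA
    FROM   dbo.DestTable AS d
    JOIN   dbo.SourceRows AS s ON s.Id = d.Id;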


    MCTS, MCITP - Please mark posts as answered where appropriate.

    Monday, April 23, 2012 6:21 AM
  • Hi S Kumar Dubey

    All you need to do is go to the destination editor and, on the Connection Manager page, specify a (relatively small) value for Rows per batch.


    Arthur My Blog

    Monday, April 23, 2012 6:10 PM
    Moderator
  • Hi ArthurZ,

    I can set the Rows per batch property on the destination only when Data access mode = Table or view - fast load, but as I mentioned above I have used Data access mode = SQL command, because the same table is used in another package that can run at the same time, and fast load applies a table lock.


    Thanks Shiven:) If Answer is Helpful, Please Vote

    Tuesday, April 24, 2012 4:44 AM
  • Hi again S Kumar Dubey,

    I suggest you then process the data in batches, i.e. the SQL statement would grab only a subset, process it, and then get the next one.

    Besides, to eliminate the shared lock (which should not even be an issue) you can use the SQL Server table hint NOLOCK, like

    SELECT ABD, 123 FROM MyTable WITH (NOLOCK)


    Arthur My Blog

    Tuesday, April 24, 2012 8:19 PM
    Moderator