Reading the variable "OnPipelinePostPrimeOutput" failed with error code 0xC0014052.

  • Question

  • Hi,

    I have an SSIS package where I get some really strange errors when running it as a SQL Server Agent job on the server (SQL Server 2008), although it works perfectly from inside VS 2008. The server is a 6-core machine with hyperthreading.

    The error messages that I get are, for example:

    • Reading the variable "OnPipelinePostPrimeOutput" failed with error code 0xC0014052.
    • A "pure virtual function call" error (the package stops right after I start the job)

    The package does not fail with the same error every time; it is random, and it does not process the same number of files each time before failing.

    I have no clue what these error messages mean. I found another post that gave some pointers regarding the "pure virtual function" error message, but I do not see how that could be the reason here.

    I would really appreciate some help!

    Best regards
    Lars

    Tuesday, January 24, 2012 8:07 PM

All replies

  • Lars,

    I'd say it is locking as a result of contention.

    We need to know what you have done in your package; that is the key.

    If you want, I can shoot in the dark and guess that you have many DFTs running in parallel in it, true? If so, rewrite it to use multiple packages instead.


    Arthur My Blog
    Tuesday, January 24, 2012 8:25 PM
  • Yes, I have approximately 13 parallel DFTs (Data Flow Tasks) running. Every DFT handles 365 files, so I guess there will be many things running in parallel. But shouldn't SSIS be able to handle that? After all, the package works from inside VS 2008.

    Every DFT should be self-contained, and no variables should be shared between the different DFTs. Is this parallel design not allowed?

    I do not get these errors when running from inside VS 2008. Is that because my local workstation only has 4 cores (with hyperthreading), or because the package is run in a different way on the server?

    Best regards
    Lars

    Tuesday, January 24, 2012 8:39 PM
  • Do not compare your dev environment with production; it is not a fair comparison.

    Tuning this would be time-consuming; use multiple packages instead and they will run in proper isolation.

    See

    http://www.mattmasson.com/index.php/2012/01/too-many-sources-in-a-data-flow/
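    One more knob worth checking: by default a package's MaxConcurrentExecutables is -1, which means the number of logical processors plus 2, so your hyperthreaded 6-core server (12 logical processors) schedules far more executables at once than your workstation does. A minimal C# sketch of capping it via the SSIS runtime API (the package path here is just an example):

    ```csharp
    // Minimal sketch: cap how many executables the runtime schedules at once.
    // The package path is hypothetical; MaxConcurrentExecutables is a real
    // package property (default -1 = number of logical processors + 2).
    using Microsoft.SqlServer.Dts.Runtime;

    class CapParallelism
    {
        static void Main()
        {
            Application app = new Application();
            Package pkg = app.LoadPackage(@"C:\packages\Import.dtsx", null);

            // Bring the server closer to the dev machine's level of parallelism.
            pkg.MaxConcurrentExecutables = 4;

            app.SaveToXml(@"C:\packages\Import.dtsx", pkg, null);
        }
    }
    ```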


    Arthur My Blog
    Tuesday, January 24, 2012 9:02 PM
  • Arthur, thanks for your help!

    I have a question regarding the link that you supplied. Each of the 13 parallel DFTs that I have contains only a single Source -> Destination combination, which is the design the customer in the article switched to. Could that still cause these race conditions?

    Best regards
    Lars


    Tuesday, January 24, 2012 9:17 PM
  • Hi Lars,

    a separate data source and destination per DFT needs to be used. That should help.

    But you can also do this trick: make multiple packages, each with a single DFT, and schedule them in Agent as different job steps - they will run in proper isolation and will not block each other at all.
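    To make the trick concrete: each Agent step would just invoke dtexec against one package, so the packages never share a runtime. A rough C# sketch of the same pattern, launching each single-DFT package in its own dtexec process (the package names are made up):

    ```csharp
    // Rough sketch: run each single-DFT package in its own dtexec process,
    // giving it the same isolation as a separate Agent job step.
    // The package paths are examples, not real file names.
    using System.Diagnostics;

    class RunPackagesIsolated
    {
        static void Main()
        {
            string[] packages =
            {
                @"C:\packages\ImportFileTypeA.dtsx",
                @"C:\packages\ImportFileTypeB.dtsx",
                // ... one package per file type
            };

            foreach (string pkg in packages)
            {
                // dtexec /F loads and runs a package from the file system.
                using (Process p = Process.Start("dtexec", "/F \"" + pkg + "\""))
                {
                    p.WaitForExit(); // one at a time, fully isolated
                }
            }
        }
    }
    ```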


    Arthur My Blog
    Tuesday, January 24, 2012 10:05 PM
  • Hi Arthur,

    A separate data source and destination per DFT is exactly what I am using, and that is what fails. I am already using the same setup as the one the customer in the article changed to.

    This is my setup: I have a Script Task that checks whether there are any files to import (a sketch of it follows the list below); if there are, the package proceeds to a single large sequence container. If no files are found, the package stops.

    In that sequence container I have 13 parallel sequence containers, each with the following setup:

    1. Runs a delete on an import table.
    2. A ForEach Loop that picks up an XML file from a directory and hands it to the DFT.
    3. In the DFT, the XML Source reads the XML file, validates it against an XSD, performs some data conversions (mainly Unicode -> non-Unicode) and some sorting (Sort transformation), and then sends the data to an OLE DB Destination.
    4. If the file import was successful, the file is moved to a "processed" directory; otherwise it goes to a "not processed" directory.
    5. Then it moves back to step 2.
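    The body of that file-check Script Task looks essentially like this sketch (SSIS 2008, C#; User::ImportPath and User::FilesFound are stand-ins for the real variable names):

    ```csharp
    // Sketch of the file-check Script Task body (SSIS 2008, C#).
    // User::ImportPath and User::FilesFound are example names and must be
    // listed in the task's ReadOnlyVariables / ReadWriteVariables.
    using System.IO;

    public partial class ScriptMain :
        Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase
    {
        public void Main()
        {
            string importPath = (string)Dts.Variables["User::ImportPath"].Value;

            // Any XML files waiting? Flag it for the precedence constraint.
            Dts.Variables["User::FilesFound"].Value =
                Directory.GetFiles(importPath, "*.xml").Length > 0;

            // ScriptResults comes from the designer-generated template.
            Dts.TaskResult = (int)ScriptResults.Success;
        }
    }
    ```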

    All these DFTs process different file types and send their data to different import tables.

    The only things shared by these 13 parallel sequence containers are four read-only variables that contain the UNC paths to the import directories and the locations of the schema files (but that shouldn't be a problem, right?).
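    As far as I understand, the runtime takes a lock even when it only reads a variable, so if contention on those four shared variables is the issue, the explicit form of such a read from a Script Task would look roughly like this (User::ImportPath again stands in for the real name):

    ```csharp
    // Sketch: explicit, short-lived read lock on a shared variable inside a
    // Script Task, instead of holding the lock for the task's whole duration
    // via the ReadOnlyVariables list. User::ImportPath is an example name.
    Variables vars = null;
    try
    {
        Dts.VariableDispenser.LockOneForRead("User::ImportPath", ref vars);
        string importPath = (string)vars["User::ImportPath"].Value;
        // ... use importPath ...
    }
    finally
    {
        if (vars != null)
        {
            vars.Unlock(); // release the lock as soon as possible
        }
    }
    ```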

    Can you see some obvious error with this setup that could cause these race condition errors?

    Best regards
    Lars

    Tuesday, January 24, 2012 10:35 PM
  • If the parallel DFTs share the same OLE DB connection manager, can that cause race conditions? I have 13 separate OLE DB Destinations, but they all share the same single OLE DB connection manager. Is that a problem?

    Thanks in advance!

    Best regards
    Lars

    Wednesday, January 25, 2012 8:00 AM