Shared Memory Provider: Timeout error [258]

  • Question

  • Hi All,

    Hopefully there is somebody that can help me...

When running the ETL I get the error: <SSIS Task>: Shared Memory Provider: Timeout error [258], followed by the message "Communication link failure".

What is odd about this error is that it occurs on an Execute SQL Task (a random one each run), and the timeout always hits after 2 minutes.

When executing the packages separately, everything works fine. The SQL tasks that are failing are quite heavy, but reasonable: they take anywhere from just over 2 minutes up to 10-15 minutes. The statements are stored procedures that build an index on 3 million records, run update statements, ...
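To give an idea, the failing statements are of this shape (the object names here are made up, just to illustrate the pattern):

```sql
-- Illustrative only: the kind of heavy statement that hits the timeout.
-- Build an index on a ~3 million row staging table, then switch the
-- staging table into a partition of the target table.
CREATE NONCLUSTERED INDEX IX_StagingFact_DateKey
    ON dbo.StagingFact (DateKey)
    WITH (SORT_IN_TEMPDB = ON);

ALTER TABLE dbo.StagingFact
    SWITCH TO dbo.Fact PARTITION 42;
```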

I had a look at all my (SSIS/ETL) timeouts and they all have the default value 0, and the "remote query timeout" of the server is set to 10 minutes. As far as I know, these are the only ones that exist?
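For what it's worth, this is how I checked the server-side setting (standard sp_configure; the value is in seconds):

```sql
-- show the instance-wide remote query timeout; run_value 600 = 10 minutes
EXEC sp_configure 'remote query timeout';
```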

There are 2 instances on the server, each with 24 GB allocated; the server has 64 GB in total. Also, when the ETL runs (and fails with this error), no other ETL is running on either instance. I'm working with the OLE DB SQL Server Native Client 11.0 provider: SQLNCLI11.1.

It is frustrating because I don't have a clear error message. Maybe there are other places to look? I had a look at the application log and the SQL Server log, but they did not make me any wiser...

Any help is appreciated,

    Bram

    Tuesday, April 1, 2014 8:38 AM

Answers

All replies

  • Hi ,

Can you change to SQLOLEDB.1 rather than SQLNCLI.1 and see if you have the same issue?



    Regards, PS

    Tuesday, April 1, 2014 9:21 AM
  • PS, I tried it with SQLOLEDB.1 but I still receive the error message.

Something I forgot to mention is that I'm using SQL Server 2012.

    Thx

    Wednesday, April 2, 2014 6:26 AM
This is part of the SQL Server error log from the same time as the error; I'm not sure if it's related, but you never know:

    Date,Source,Severity,Message

04/01/2014 20:00:21,spid14s,Unknown,last target outstanding: 358, avgWriteLatency 20
04/01/2014 20:00:21,spid14s,Unknown,average throughput:   5.08 MB/sec, I/O saturation: 5995, context switches 14539
04/01/2014 20:00:21,spid14s,Unknown,FlushCache: cleaned up 72812 bufs with 3099 writes in 112026 ms (avoided 476 new dirty bufs) for db 9:0
04/01/2014 19:53:56,spid14s,Unknown,last target outstanding: 708, avgWriteLatency 33
04/01/2014 19:53:56,spid14s,Unknown,average throughput:  35.88 MB/sec, I/O saturation: 25622, context switches 43694
04/01/2014 19:53:56,spid14s,Unknown,FlushCache: cleaned up 640748 bufs with 25437 writes in 139511 ms (avoided 59488 new dirty bufs) for db 9:0
04/01/2014 19:44:13,spid14s,Unknown,last target outstanding: 682, avgWriteLatency 75
04/01/2014 19:44:13,spid14s,Unknown,average throughput:  55.22 MB/sec, I/O saturation: 24846, context switches 43655
04/01/2014 19:44:13,spid14s,Unknown,FlushCache: cleaned up 646031 bufs with 25310 writes in 91397 ms (avoided 118 new dirty bufs) for db 9:0
04/01/2014 18:34:03,spid14s,Unknown,last target outstanding: 194, avgWriteLatency 16
04/01/2014 18:34:03,spid14s,Unknown,average throughput:   9.68 MB/sec, I/O saturation: 4396, context switches 8644
04/01/2014 18:34:03,spid14s,Unknown,FlushCache: cleaned up 78398 bufs with 3367 writes in 63280 ms (avoided 77538 new dirty bufs) for db 10:0
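For reference, the "db 9" and "db 10" at the end of the FlushCache lines are database IDs; I mapped them to names with:

```sql
-- map the database IDs from the FlushCache messages to database names
SELECT database_id, name
FROM sys.databases
WHERE database_id IN (9, 10);
```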

    Wednesday, April 2, 2014 6:40 AM
  • Hi ,

• Check the link below for possible causes. It states that your error occurs when a connection attempt is made while the server is not ready to process a new local connection, possibly due to overload:

    http://blogs.msdn.com/b/sql_protocols/archive/2005/09/28/474698.aspx

Can you check the server utilization at the time of executing the package?
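For example, while the package is running you could look at the active requests and their waits (a rough sketch using the standard DMVs):

```sql
-- active requests with their current wait type and elapsed time (ms)
SELECT r.session_id,
       r.status,
       r.command,
       r.wait_type,
       r.wait_time,
       r.total_elapsed_time
FROM sys.dm_exec_requests AS r
WHERE r.session_id > 50;  -- skip most system sessions
```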


    Regards, PS

    Wednesday, April 2, 2014 7:03 AM
How could I reduce this overload?

    At the time of the ETL run, it is the only ETL running. The tasks are executed in sequence at this moment. We also added 2 extra CPUs and doubled the memory from 12 to 24 GB, but the errors keep coming.

    The statements are the same as in older versions, where they run in parallel without any problem. The difference is that the statements now have to work with more data: for example, applying an index on 3 million records (before switching in the partition), ...

    I can imagine the server is running at its maximum, but that shouldn't be a problem, should it?

    Wednesday, April 2, 2014 7:30 AM
  • Hi Bram,

Does your package run in 32-bit runtime mode? The 32-bit DTExec process can consume up to 2 GB of virtual memory. If possible, run the package in 64-bit runtime mode, or break the existing package into several child packages and use an Execute Package Task to call the child packages from the parent package. When using child packages, set the ExecuteOutOfProcess option of each Execute Package Task to True so that each process can consume its own 2 GB of virtual memory.

    If the issue persists, enable logging for the package, and post detailed error message for further analysis.


    Regards,


    Mike Yin
    TechNet Community Support

    Tuesday, April 8, 2014 8:51 AM
    Moderator
• An old issue, but I just hit it as well. All my timeouts are turned off, but I was hitting the 2-minute mark like you were.

My issue was in an SSIS package that called other packages, and MARS was turned on for the connection. Once I got rid of that switch, everything was able to continue on like normal.
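For anyone else hitting this: the switch in question is the MARS keyword on the connection string. With the Native Client OLE DB provider it looks roughly like this (server and database names are placeholders, not my exact string):

```
With MARS on (this was hanging for me):
Provider=SQLNCLI11.1;Data Source=MyServer;Initial Catalog=MyDb;Integrated Security=SSPI;MarsConn=Yes;

With MARS removed, the packages ran normally:
Provider=SQLNCLI11.1;Data Source=MyServer;Initial Catalog=MyDb;Integrated Security=SSPI;
```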


    thus spake the master programmer: After three days without programming, life becomes meaningless.

    • Proposed as answer by Argrithmag Monday, November 16, 2015 7:25 PM
    Monday, November 16, 2015 7:24 PM