Established connection failed error during copy activity

  • Question

  • Hello everyone!

    I have created dozens of pipelines that all do similar things in my v2 Azure Data Factory. These pipelines connect out to an on-prem server, extract the data, and write it to files in Azure Blob storage. This is all done in a ForEach activity on each pipeline.

    Each pipeline was created using the ADF UI, targets a different on-prem SQL Server, and writes its data to different files within the same container and folder.

    This hasn't been a problem until recently. Starting on June 27th, and every night since, one or two of the pipelines have failed on a single copy within the ForEach loop. Each time it's the same error, but every night it's almost always a different pipeline and a different item in the ForEach loop.

    I'm stumped as to the cause, but the error is:

    "A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond"

    The IP address is always the same.

    Does anyone have any ideas or suggestions as to what may be going on?

    Thank you.

    Wednesday, July 3, 2019 12:44 PM

All replies

  • Hey Matt, 

    1. Could you please check whether any maintenance activity is going on in the on-prem SQL Server?

    2. As you might be using a Self-Hosted Integration Runtime (SHIR), is there any update happening on that front?

    3. There may be other jobs running at the same time on the SQL Server, reducing resource availability; that could cause failures too.

    4. If those processes/activities run in parallel, the SHIR might be running out of memory or bandwidth.



    Thursday, July 4, 2019 12:14 PM
  • The error does not help much. If I were you, I would look into the IR logs. It appears the IR is having issues, and you should find more information in the logs. Capturing performance metrics on the IR server will also help.

    Also, as a short-term solution, please enable the retry option on the Copy activity; I think it should help.
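    For reference, the retry option lives in the activity's `policy` block in the pipeline JSON. A minimal sketch (the activity name and values here are illustrative, not from your pipeline):

    ```json
    {
        "name": "CopyOnPremToBlob",
        "type": "Copy",
        "policy": {
            "timeout": "7.00:00:00",
            "retry": 3,
            "retryIntervalInSeconds": 60
        }
    }
    ```

    With `retry: 3`, a transient connection failure gets three more attempts, 60 seconds apart, before the activity is marked failed.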

    Please do let us know how it goes .


    Thanks Himanshu

    Friday, July 5, 2019 4:19 PM
  • Good idea, I will review the logs.

    Monday, July 8, 2019 12:29 PM
  • No errors, but this warning is spammed around the time of the process. Odd that it would only cause one copy activity to fail in my loop:

    'Type=Microsoft.WindowsAzure.Storage.StorageException,Message=The remote server returned an error: (404) Not Found.,Source=Microsoft.WindowsAzure.Storage,StackTrace=   at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.ExecuteSync[T](RESTCommand`1 cmd, IRetryPolicy policy, OperationContext operationContext)
       at Microsoft.DataTransfer.Runtime.AzureBlobProfiler.GetProfileForBlob(IList`1 fieldList),''Type=System.Net.WebException,Message=The remote server returned an error: (404) Not Found.,Source=System,StackTrace=   at System.Net.HttpWebRequest.GetResponse()
       at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.ExecuteSync[T](RESTCommand`1 cmd, IRetryPolicy policy, OperationContext operationContext),'
    Job ID: 5d67f4c4-0bfd-642b-4c37-1963bf79e92e
    Log ID: Warning

    It looks like the issue is with connecting out to my Azure Blob Storage account, but I am not sure how to resolve it, or why it just started occurring.

    Monday, July 8, 2019 1:37 PM
  • It's a 404 error, which means "resource not found", and you mentioned that the failures happen for different resources/servers. In case the retry and retry interval didn't help, I would put a Wait activity inside the ForEach loop and see if that helps. In any case, I think this needs thorough investigation; if you have a support plan, please go ahead and reach out to the support team, otherwise please send an email to azcommunity@microsoft.com with the following details:
    1. Your subscription ID
    2. Link to this thread
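    If it helps, a Wait activity placed as the first step inside the ForEach would look something like this in the pipeline JSON (the name and the 30-second interval are just examples):

    ```json
    {
        "name": "WaitBeforeCopy",
        "type": "Wait",
        "typeProperties": {
            "waitTimeInSeconds": 30
        }
    }
    ```

    Chaining the Copy activity after this Wait spaces out the parallel iterations, which can ease pressure on the SHIR and the on-prem server.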

    Thanks Himanshu

    Tuesday, July 9, 2019 2:08 AM