ADFv2 error troubleshooting


  • I have a pipeline that is pulling tables from an AWS PostgreSQL database and I am experiencing failures on a daily basis.  The error is always the same (example Run ID e4dd0441-abee-4b8a-90a0-fc8f1ad271d3).  I don't have access to the AWS environment but have been told the error is not being logged there despite the Azure ADF error stating the failure happened on the Source side.  I am logging ADF to Log Analytics but haven't found any additional error details there.  What could be happening and what does the error mean?

    {"ErrorCode":2200,"Message":"Failure happened on 'Source' side. ErrorCode=UserErrorFailedFileOperation,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Upload file failed at path "MY DATA LAKE FILE PATH REMOVED".,Source=Microsoft.DataTransfer.Common,''Type=Microsoft.Data.Mashup.MashupValueException,Message=PostgreSQL: Exception while reading from stream,Source=Microsoft.MashupEngine,'","Details":null,"IsUserError":false}

    Tuesday, June 26, 2018 3:52 PM

All replies

  • Hi,

    By saying that "experiencing failures on a daily basis", do you mean that the issue occurs everyday with a certain pattern? For example, after several successful copies? Could you please help us better understand this? This seems like a throttling issue, do you know whether there is any limit on your AWS PostgreSQL?

    BTW, does this issue start to occur recently?


    Thursday, June 28, 2018 11:25 AM
  • Thanks for the reply!

    The issue is occurring daily but only for certain tables, primarily one specific table.  I have roughly 50 tables that get extracted from the AWS PostgreSQL DB; most of them run in parallel and succeed.  I moved the problem tables into a separate sequential Copy activity so they don't run in parallel with any other tables, yet I still get the same error.  If I keep retrying the failed table(s), sometimes the load eventually succeeds and sometimes it doesn't.
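    As a workaround I'm considering the Copy activity's built-in retry policy so transient failures re-run automatically. A sketch of the relevant activity JSON (the activity name and the values are placeholders I'd experiment with, not a confirmed fix):

    ```json
    {
        "name": "CopyProblemTable",
        "type": "Copy",
        "policy": {
            "retry": 3,
            "retryIntervalInSeconds": 60
        }
    }
    ```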

    I agree it seems like a throttling issue, and that is what I've told the rest of the team on this project.  Unfortunately, I don't have access to the AWS PostgreSQL DB beyond read access for data extraction.  The DB is hosted by a vendor that won't give us access to the administration settings.  They have said there are no errors being reported in their AWS instance, that there are no settings that would throttle the connection, and that no timeouts are set.  I'm not completely sure any of that is accurate, but it's what they are telling me.
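    I've asked the vendor to at least confirm the server-side timeout settings. For anyone who does have access to their PostgreSQL instance, the standard timeouts can be checked with plain PostgreSQL commands (nothing vendor-specific assumed):

    ```sql
    -- 0 means no limit is set.
    SHOW statement_timeout;                     -- aborts statements running longer than this
    SHOW idle_in_transaction_session_timeout;   -- kills sessions idle inside a transaction
    ```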

    The failures started occurring consistently on 6/23.

    Edit: I looked closer on the VM that my Self-Hosted IR runs on (needed to connect Azure to the AWS PostgreSQL DB) and found this error in the Applications and Services/Connectors log: Copy failed with error: 'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Upload file failed at path **MYFOLDERPATH**/\**MYTABLE**.csv

    Would this error be logged if the connection was lost?  It's odd that the path contains both a forward slash and a backslash, but the path is set dynamically and works for all the other tables, so I don't think it's really the problem.  More likely the problem is just how the error is being logged.

    • Edited by FrankMn Thursday, June 28, 2018 5:04 PM
    Thursday, June 28, 2018 3:48 PM
  • FrankMn - Did you ever get this issue resolved? I am having a very similar issue with the same error messages, including the backslash, except I'm using SQL Server and Azure Blob storage!

    Thursday, March 7, 2019 8:37 PM
  • It's been a while since I worked on this, but I believe there was a configuration setting I had to change involving the IR.  I'll look around to see if I can dig up what I eventually did.
    Wednesday, March 13, 2019 8:16 PM