Azure Data Factory Alerting

  • Question

  • Hi,

    I have created an Azure Data Factory pipeline to copy data from Azure Cosmos DB to Azure Blob Storage. The copy activity pipeline is working as expected, currently set to run weekly every Monday. I am also interested in setting up alerting for two scenarios: a failure in the pipeline execution, and the copy activity not running at all. For the first scenario, I believe there is already a metric defined for this particular case, "PipelineFailedRuns". However, I am uncertain whether you support a metric for the second scenario, where I would like to be alerted if the activity does not run at all (data absence). Can you please confirm whether this is supported?

    Monday, June 10, 2019 7:48 PM

Answers

  • I personally have never seen a trigger itself fail once it was set up.
    I have seen some triggers fail to move from the inactive to the active state, which might be the result of an invalid trigger template, and I have seen a trigger fail to validate. However, I haven't seen an actual trigger failure.

    Here are some more ideas:

    If knowing when the pipeline does not run is a critical need, you could host the trigger mechanism on your end: a cron job that sends an API request to kick off the pipeline, then follows up with an API request to verify the pipeline has started. Then you are fully in control of notification. This requires some minor setup and is no longer a managed service, but it frees you up to do whatever you feel is best.
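    The cron-job idea above could be sketched roughly as follows, using the Data Factory REST API's createRun and pipelineruns endpoints. This is a sketch, not a definitive implementation: the subscription, resource group, factory, and pipeline names are placeholders, and you would supply a real Azure AD bearer token.

```python
# Sketch of a self-hosted trigger: start the pipeline via the ADF REST API,
# then poll the run status to verify it actually started.
# All identifiers (subscription, resource group, factory, pipeline) are
# placeholders to be replaced with your own values.
import json
import urllib.request

API_VERSION = "2018-06-01"
FACTORY_URL = ("https://management.azure.com/subscriptions/{sub}"
               "/resourceGroups/{rg}/providers/Microsoft.DataFactory"
               "/factories/{factory}")

def start_pipeline(token, sub, rg, factory, pipeline):
    """POST .../pipelines/{pipeline}/createRun and return the new run ID."""
    url = (FACTORY_URL + "/pipelines/{pipeline}/createRun"
           "?api-version=" + API_VERSION).format(
               sub=sub, rg=rg, factory=factory, pipeline=pipeline)
    req = urllib.request.Request(
        url, data=b"{}", method="POST",
        headers={"Authorization": "Bearer " + token,
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["runId"]

def get_run_status(token, sub, rg, factory, run_id):
    """GET .../pipelineruns/{runId} and return its status string."""
    url = (FACTORY_URL + "/pipelineruns/{run_id}"
           "?api-version=" + API_VERSION).format(
               sub=sub, rg=rg, factory=factory, run_id=run_id)
    req = urllib.request.Request(
        url, headers={"Authorization": "Bearer " + token})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["status"]

def run_has_started(status):
    """True once the run is queued, underway, or finished."""
    return status in {"Queued", "InProgress", "Succeeded",
                      "Failed", "Cancelling", "Cancelled"}
```

    The cron job would call start_pipeline, wait briefly, then check get_run_status with run_has_started; if the run never appears, it sends the notification itself.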

    If you want to keep everything in the cloud, there are more possible workarounds. You could set up a dummy pipeline that runs on each day the real pipeline does not, so that you can work within the maximum 24-hour evaluation period. Given that a pipeline can be as simple as a single Set Variable activity, the cost of running the dummy pipeline would be minimal.
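    A dummy pipeline of that kind could look something like the following fragment (the pipeline, variable, and activity names here are illustrative assumptions, not anything Azure requires):

```json
{
  "name": "DummyHeartbeatPipeline",
  "properties": {
    "variables": {
      "heartbeat": { "type": "String" }
    },
    "activities": [
      {
        "name": "Heartbeat",
        "type": "SetVariable",
        "typeProperties": {
          "variableName": "heartbeat",
          "value": "ran"
        }
      }
    ]
  }
}
```

    Scheduled on the six days the real pipeline does not run, this gives every 24-hour window an expected run count to alert against.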

    Another possibility would be to create another service which looks for evidence of your pipeline run and sends appropriate messaging, such as email. The service could be another pipeline that runs after the production one, a Logic App, or something else entirely.
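    Whatever hosts that watchdog, its core decision could be as simple as the helper below. It assumes run records shaped like the entries the ADF queryPipelineRuns API returns (a pipelineName and a status field); the pipeline name used in the usage note is a placeholder.

```python
# Watchdog sketch: given the pipeline runs observed in the expected window,
# decide whether a notification (e.g. an email) should be sent.
def needs_alert(runs, pipeline_name):
    """Alert when the pipeline has no successful run in the window.

    `runs` is a list of dicts with "pipelineName" and "status" keys,
    mirroring ADF's queryPipelineRuns response entries. This covers both
    cases from the thread: the run failed, or it never happened at all.
    """
    succeeded = [r for r in runs
                 if r["pipelineName"] == pipeline_name
                 and r["status"] == "Succeeded"]
    return len(succeeded) == 0
```

    For the weekly Monday schedule, the watchdog would query runs from the past seven days and call needs_alert(runs, "CopyCosmosToBlob") with whatever your pipeline is actually named.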

    • Marked as answer by Gus_7 Wednesday, June 12, 2019 7:16 PM
    Wednesday, June 12, 2019 6:40 PM

All replies

  • Hello Gus_7, and thank you for your inquiry. You asked about being alerted if an activity does not run. There is a much-neglected feature of Data Factory (v2) in that regard. I'm sure you know about making activity B run only when activity A succeeds; however, there is also the option to make activity B run when activity A is skipped.

    Did you want an alert for when the pipeline is not run?

    Tuesday, June 11, 2019 12:55 AM
  • Hi Martin, 

    Correct, I am interested in getting an alert when the pipeline does not run. Do you know if there's a way to have this configured using ADF? 

    Thanks!

    Tuesday, June 11, 2019 4:34 PM
  • Hmm that is an interesting challenge.  I will assume you mean top-level runs, not those kicked off by the Execute Pipeline activity.  After some thought, I do have a workaround.

    If you know the exact number of pipeline runs to happen in a day, you could set an alert for when the total number of runs is less than you expect for that day (or other time period).

    Since you probably also want to know about failed runs, you could select 'Succeeded pipeline runs' less than the expected number. This would alert you both when a run fails and when it does not run at all.

    The drawback is that manual pipeline runs would also count towards the total, not just triggered runs. If you want only triggered runs, there is a "Succeeded trigger runs" metric. I don't see a "Failed trigger runs" metric, though.
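    The condition being described can be sketched as a small evaluation function. The per-weekday expected counts below are an assumption tailored to the weekly Monday schedule from the question, not anything Azure Monitor provides directly; they show how a maximum 24-hour window could still express a weekly expectation.

```python
# Sketch of a 'Succeeded pipeline runs < expected' alert condition,
# evaluated per 24-hour window. The expected counts assume the weekly
# Monday schedule from this thread.
EXPECTED_RUNS_BY_WEEKDAY = {
    0: 1,                                 # Monday: the weekly copy should run
    1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0,  # other days: no run expected
}

def alert_fires(weekday, succeeded_runs):
    """True when the window saw fewer successful runs than expected,
    catching both a failed run and a run that never started."""
    return succeeded_runs < EXPECTED_RUNS_BY_WEEKDAY[weekday]
```

    Only the Monday window has a nonzero expectation, which is also why the dummy-pipeline workaround later in the thread pads the other days instead.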

    Does this help?

    Tuesday, June 11, 2019 10:31 PM
  • This might actually help me achieve what I'm trying to accomplish. I do know how many times my pipeline needs to run, which in my case is once per week. However, when trying to configure this alert using the "Succeeded trigger runs" metric, I don't think it's possible, as the maximum evaluation period is 24 hours and I would need an evaluation period of one week. I don't see how to extend it to a week in the alert setup UI.

    On the other hand, I do see the "Failed trigger runs" metric, which I could use to set up an alert. Now, if my trigger is scheduled to run at a given day/time of the week, I should receive an alert if there's an issue. But would it also raise an alert if the scheduled trigger does not run at all? If so, then this works.

    Wednesday, June 12, 2019 5:14 AM
  • Thank you! Your suggestions are greatly appreciated, and either one of them can help me achieve the result I'm looking for.
    • Marked as answer by Gus_7 Wednesday, June 12, 2019 7:15 PM
    • Unmarked as answer by Gus_7 Wednesday, June 12, 2019 7:15 PM
    Wednesday, June 12, 2019 7:15 PM
  • Thank you for your cooperation, patience, feedback, and marking as answered.  I love getting kudos!
    Wednesday, June 12, 2019 9:51 PM