Error while running AutoML experiment in ML Service

  • Question

  • Hi,

I am trying to run an Azure Automated ML experiment, but I am getting the error below.

    Run is Failed.
    AutoML experiment timed out because there was no response from child iterations in given time

    2019-08-20T08:49:36Z Successfully mounted a/an Azure File Shares at /mnt/batch/tasks/shared/LS_root/jobs/aj_poc/azureml/automl_e1259cca-c792-4abc-8acc-723d7f18d108_setup/mounts/workspacefilestore
    2019-08-20T08:49:37Z Mounted //ajpoc5123452290.file.core.windows.net/azureml-filestore-35691af9-0502-4a18-868f-c76c629ba851 at /mnt/batch/tasks/shared/LS_root/jobs/aj_poc/azureml/automl_e4321cca-c792-4eec-8bdd-723d7f18d187_setup/mounts/workspacefilestore
    2019-08-20T08:49:37Z No blob file systems configured
    2019-08-20T08:49:37Z No unmanaged file systems configured
    2019-08-20T08:49:37Z Starting output-watcher...
    Login Succeeded
    Using default tag: latest
    latest: Pulling from azureml/azureml_9baeaefd828387b612345e44f8c5474d
    f7277927d38a: Already exists
    8d3eac894db4: Already exists
    edf72af6d627: Already exists
    3e4f86211d23: Already exists
    f17a38bdd1c4: Pulling fs layer
    d1c505f1fda2: Pulling fs layer

    Tuesday, August 20, 2019 9:26 AM


All replies

  • Hello,

Could you please let us know whether this error occurs for every run of the experiment? Is it possible to retry the run to check whether the problem persists?

    Also, is it possible to share the document or experiment details so we can replicate the issue on our end?


    Wednesday, August 21, 2019 5:10 AM
This is the first time I have created an experiment. I may be wrong somewhere, so I am just trying to find the root cause.

I followed the steps mentioned in the blog below and uploaded the basic IRIS dataset for a classification problem.
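For reference, the kind of Iris CSV such a walkthrough expects can be produced locally with scikit-learn before uploading it to the workspace (a minimal sketch; the file name `iris.csv` and the `species` column name are just examples, not anything the blog prescribes):

```python
from sklearn.datasets import load_iris
import pandas as pd

# Load the classic Iris dataset: 150 rows, 4 numeric features, 3 classes
iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
df["species"] = [iris.target_names[t] for t in iris.target]

# Save to CSV so it can be uploaded as the training dataset
df.to_csv("iris.csv", index=False)
print(df.shape)  # (150, 5)
```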


It looks like something to do with Blob storage, but I am not sure what.

    Wednesday, August 21, 2019 5:20 AM
  • Hello,

Thanks for sharing the link. It looks like you are using a different dataset with similar settings, which might not work for this run. Could you please check whether increasing the training job time in the advanced settings helps?
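If you are driving the experiment from the Python SDK rather than the portal, the equivalent knobs are the timeout parameters on `AutoMLConfig` (a hedged sketch only; the values are illustrative, `X_train`/`y_train` are assumed to be defined already, and the parameter names reflect the 2019-era `azureml-train-automl` SDK):

```python
from azureml.train.automl import AutoMLConfig

# Illustrative settings only -- raise the per-iteration and overall
# timeouts so that slow child iterations are not killed prematurely.
automl_config = AutoMLConfig(
    task="classification",
    primary_metric="accuracy",
    X=X_train,                      # training features (assumed defined)
    y=y_train,                      # training labels (assumed defined)
    iterations=20,
    iteration_timeout_minutes=30,   # time limit per child run
    experiment_timeout_minutes=120, # time limit for the whole experiment
)
```

These correspond to the "Training job time" fields under the advanced settings in the portal UI.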


    Thursday, August 22, 2019 8:22 AM
  • I was able to fix this by re-running the experiment with no changes.
    Tuesday, August 27, 2019 8:53 AM