The system cannot find the path specified

  • Question

  • I am testing my Azure Batch program by repeatedly uploading updated application packages to Azure. I frequently get the error 'The system cannot find the path specified' when I attempt to invoke the package using a command line from the main program. After I re-upload it two or three times, the path is finally found and the program executes. I would like to know why this happens and how to mitigate it, because it is taking me much longer than it should to test the program.

    Wednesday, August 9, 2017 6:52 PM

Answers

  • To confirm: by 'new versions', do you mean you are uploading a new zip file and giving the package itself a different version number - that is, specifying a new version number in the portal (or PowerShell, or whatever)?  From your description I think this is what you're doing, but I wanted to double-check that you weren't just revving the program's internal version number and re-uploading in place.

    It sounds like this might be an issue where you are specifying app packages at the pool level.  Assuming you are creating a new package version in the portal, are you updating the pool spec to reflect the new version?  Such changes will only be applied to new compute nodes joining the pool, or on a reboot.  So existing compute nodes will continue to have the old package installed (and therefore tasks that use the path to the new version will not resolve).

    For your rapid update scenario you might find it better to specify the app package references on the tasks rather than the pool.  E.g.:

    * Create package v0.7 and install it in your account.

    * Create a task with a reference to v0.7 and a command line of %..._MYAPP#0.7%\MyApp.exe.

    * Verify that v0.7 ran.

    * Create package v0.8 and install it in your account.

    * Create a task with a reference to v0.8 and a command line of %..._MYAPP#0.8%\MyApp.exe.

    * Verify that v0.8 ran.

    This avoids any issues with packages being specified to load at node startup and therefore becoming stale during a rapid update workflow.
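    The steps above can be sketched as follows. This is a hypothetical Python helper, not the Batch SDK, and the environment-variable prefix is left as a parameter because the reply elides it ('%..._MYAPP#0.7%'):

```python
# Hypothetical sketch of the task-per-version convention described above.
# The environment-variable prefix is elided in the thread, so it is a
# parameter here rather than a hard-coded name.

def package_env_var(prefix: str, app_id: str, version: str) -> str:
    """Variable that Batch expands to the package's install directory."""
    return f"%{prefix}_{app_id.upper()}#{version}%"

def task_command_line(prefix: str, app_id: str, version: str, exe: str) -> str:
    """Command line that always resolves against the referenced version."""
    return f"{package_env_var(prefix, app_id, version)}\\{exe}"

# Revving the version changes the path the task uses, so a stale
# pool-level install is never consulted:
print(task_command_line("PREFIX", "myapp", "0.7", "MyApp.exe"))
# %PREFIX_MYAPP#0.7%\MyApp.exe
```

    Because each task's command line embeds the version it references, re-running the workflow with v0.8 only requires changing the version string in both places.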

    Note that if you reuse the same package version, you may see the behaviour you do at the moment.  This is because Batch does not check for package updates every time a new task runs (because thousands of compute nodes hitting storage on every little task would cause high storage load and slow down task processing).  We check reasonably frequently (I think currently it's after about 30 seconds or so) at first, but if the package has not changed after a while we back off and check less frequently (after a few minutes I believe).

    If your scenario reuses the same package version, then this caching strategy is probably at the root of your problem; revving package versions should solve it.
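    As a rough illustration of that back-off, assuming a simple doubling schedule with a cap (the 30-second initial interval comes from the reply; the doubling factor and the cap are illustrative assumptions, not Batch's actual schedule):

```python
# Illustrative back-off for package-update checks: poll frequently at
# first, then check less often once the package stops changing.
def next_check_interval(current: float, cap: float = 300.0) -> float:
    """Double the polling interval after each unchanged check, up to a cap."""
    return min(current * 2, cap)

interval = 30.0  # initial check after ~30 seconds, per the reply above
schedule = []
for _ in range(5):
    schedule.append(interval)
    interval = next_check_interval(interval)
print(schedule)  # [30.0, 60.0, 120.0, 240.0, 300.0]
```

    The practical consequence is the one described above: an in-place overwrite may sit unnoticed for minutes once the node has backed off, whereas a new version number forces a fresh download.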

    • Marked as answer by Hackathor Tuesday, August 15, 2017 3:04 PM
    Thursday, August 10, 2017 8:56 PM

All replies

  • Check whether there is any setting related to 32- vs 64-bit mode.
    Wednesday, August 9, 2017 7:54 PM
  • Are you using PaaS (cloudServicesConfiguration) or IaaS (virtualMachineConfiguration) nodes?  This will help us to route the issue internally.

    Could you also let us know more about what you mean by 'repetitively uploading' e.g. are you uploading new versions or overwriting a version in-place, how often are you doing it, is this on a new pool each time or are you continuing to use the same pool, etc.  If you have specific steps that reproduce the issue then those would be really valuable.  Thanks!

    Wednesday, August 9, 2017 8:43 PM
  • By 'repetitively uploading' I meant that I upload new versions.

    I use cloud services configuration and here are the pool properties:

        targetDedicatedComputeNodes: 1,
        virtualMachineSize: "small",
        cloudServiceConfiguration: new CloudServiceConfiguration(osFamily: "4")

    I think the key issue here is not just that the system cannot find the path specified, but that the same behavior repeats after several updates. Even when this error does not occur, old versions of the application were still running after I had deleted the corresponding application package and specified in the task command line to run the new version.

    I checked this by making the program output its version number. Same here: only after several updates does the program finally run the correct version. This happens every single time.

    Hope this helps.

    Thursday, August 10, 2017 2:14 PM