Changes to the Web Service publishing process (Operationalization)

  • Question

  • With the new release of Azure ML features, we are updating the web service publishing workflow. Below is the list of changes:

    • Web Service Input / Output modules: these replace the input and output ports previously used to create web services. You can now drag and drop them onto the experiment from the left pane under Web Services (instead of right-clicking on the module).

    • Generation of a Scoring experiment from a Training experiment: We have automated the steps of creating a Scoring experiment: the Web service input and output modules are added, and the trained model is saved for you. With this feature, you no longer need to manually save a trained model and add it to a Scoring experiment. The "Create Scoring Experiment" button triggers these steps and generates the Scoring experiment for you. The resulting Scoring experiment can be published as a web service as is, or you can modify the generated workflow, including the location of the input/output modules.
    • To use this feature:
        ○ Create a Training experiment (one that includes a learning algorithm, such as Boosted Decision Tree, and a Train Model module)
        ○ Click Run
        ○ Click Create Scoring Experiment
        ○ Review the resulting Experiment, then Click Run
        ○ Then click Publish Web Service (a sample call to the published service is sketched below)
      [Image: a sample Training experiment]
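
    Once published, the web service can be called through its request/response endpoint. The Python sketch below shows the general shape of such a call; the endpoint URL, API key, and input columns are placeholders, and the real values should be copied from the web service dashboard:

    ```python
    import json
    import urllib.request

    # Placeholders: copy the real URL and API key from the web service
    # dashboard after publishing the Scoring experiment.
    URL = ("https://ussouthcentral.services.azureml.net/workspaces/"
           "<workspace-id>/services/<service-id>/execute"
           "?api-version=2.0&details=true")
    API_KEY = "<your-api-key>"

    # One scoring request. The column names must match the schema at the
    # Web service input module in your experiment (these are made up).
    body = {
        "Inputs": {
            "input1": {
                "ColumnNames": ["age", "income", "num_purchases"],
                "Values": [["34", "52000", "7"]],
            }
        },
        "GlobalParameters": {},
    }

    request = urllib.request.Request(
        URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + API_KEY,
        },
    )

    with urllib.request.urlopen(request) as response:
        result = json.loads(response.read().decode("utf-8"))
        print(json.dumps(result, indent=2))
    ```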

    • Publishing a Training experiment as a web service: Similar to the Scoring experiment, we have now automated the steps to create a web service from the Training experiment using the "Prepare web service" command button. This action adds Web service input and output modules to the training experiment, which can then be published as a web service. This approach is needed for setting up the Trained model for retraining using the Retraining APIs (see the sketch after this list), but it is not the recommended approach for creating web services; a Scoring experiment should be used for that purpose.
    • Publishing an experiment as a web service: in cases where the experiment does not involve training, such as when processing, transforming, and cleaning up data, it can also be published as a web service using the "Prepare web service" button.
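
    For those planning to use the Retraining APIs: once the Training experiment is published, retraining jobs are submitted to its batch execution endpoint. The sketch below shows roughly what that looks like in Python; the URL, API key, and blob locations are placeholders, and the exact request shape for your service is shown on its API help page:

    ```python
    import json
    import urllib.request

    # Placeholders: the batch URL and API key come from the *training* web
    # service's dashboard; the blob paths are hypothetical.
    BASE_URL = ("https://ussouthcentral.services.azureml.net/workspaces/"
                "<workspace-id>/services/<training-service-id>/jobs")
    API_KEY = "<training-service-api-key>"
    HEADERS = {"Content-Type": "application/json",
               "Authorization": "Bearer " + API_KEY}

    # Point the job at new training data in blob storage, and ask for the
    # retrained model (an .ilearner file) to be written back out.
    job = {
        "Inputs": {
            "input1": {
                "ConnectionString": "<storage-connection-string>",
                "RelativeLocation": "/mycontainer/new_training_data.csv",
            }
        },
        "Outputs": {
            "output1": {
                "ConnectionString": "<storage-connection-string>",
                "RelativeLocation": "/mycontainer/retrained_model.ilearner",
            }
        },
        "GlobalParameters": {},
    }

    # Submit the job; the response body is the job id.
    submit = urllib.request.Request(
        BASE_URL + "?api-version=2.0",
        data=json.dumps(job).encode("utf-8"),
        headers=HEADERS,
    )
    with urllib.request.urlopen(submit) as response:
        job_id = json.loads(response.read().decode("utf-8"))

    # Start the submitted job.
    start = urllib.request.Request(
        BASE_URL + "/" + job_id + "/start?api-version=2.0",
        data=b"", headers=HEADERS,
    )
    urllib.request.urlopen(start).close()
    print("Retraining job", job_id, "started")
    ```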

    We will publish more detailed documentation for this and other features shortly. Please let us know if you have any questions.

    Thanks,

    The Azure ML Team



    Wednesday, February 11, 2015 1:37 AM


All replies

  • Haven't tried it yet myself, but I was wondering how the "Create Scoring Experiment" feature will behave when I have more than just one trained model in my training experiment (read: I am comparing how different algorithms perform so as to choose the best one).
    Saturday, February 14, 2015 11:10 AM
  • Hi Elenat,

    It will prompt you to select one by clicking on it. 

    thanks,

    Raymond

    


    Saturday, February 14, 2015 4:10 PM
  • thanks!
    Saturday, February 14, 2015 5:30 PM
  • Hey,

    For a boosted decision tree, is it possible to get the web service output to provide the % scored rather than the scored category, and further, is it possible to get a highlight of the factors leading to the score (even like a top 10 or something would be incredibly useful)?

    Thursday, March 12, 2015 8:59 AM
  • Hey Freppas, can you give some more information about your first ask? For classifiers, we already output the scored probability along with the scored label. You can limit the output to just the probability by using the Project Columns module to exclude unwanted columns. The web service is not concerned with domain-specific functionality; it will output exactly what you see being output in Studio when you set up the web service ports.
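
    For example, continuing the request/response sketch from the first post, pulling out just the probability might look like the following (this assumes the default classifier column names, "Scored Labels" and "Scored Probabilities", and `result` being the parsed response from that earlier sketch):

    ```python
    # Continuing the earlier sketch: `result` is the parsed JSON response.
    # The column names below are the classifier defaults and may differ in
    # your experiment.
    table = result["Results"]["output1"]["value"]
    prob_index = table["ColumnNames"].index("Scored Probabilities")

    for row in table["Values"]:
        print("scored probability:", row[prob_index])
    ```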

    The second ask is something we're working on bringing to our product: feature importance. Again, you would have to make it happen in Studio before you publish the web service in order for the web service to have that functionality.

    Thursday, March 12, 2015 4:25 PM
    Moderator
  • OK, so the first question is simply a matter of projecting columns. That should be easy enough, and I really should have thought of that, haha.

    Any idea of when this functionality will come, or if there's any way of replicating it (even an approximation)?

    Thanks for all your help, and the work you guys are putting into Azure ML, it's a really great product!

    Thursday, March 12, 2015 11:56 PM
  • I think you can use a linear regression model and look at the learned weights to get an estimate of feature importance. You can also use the Feature Selection module; the chosen features are probably the top features in your final model as well.
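
    As a rough offline illustration of the first suggestion (this uses scikit-learn and a toy dataset locally, not Studio, where the equivalent is training a Linear Regression module and inspecting the trained model's feature weights):

    ```python
    # Sketch of using linear-model weights as a feature-importance
    # estimate. Toy dataset; its features are already on comparable
    # scales, which is required for coefficient magnitudes to be
    # comparable.
    from sklearn.datasets import load_diabetes
    from sklearn.linear_model import LinearRegression

    data = load_diabetes()
    model = LinearRegression().fit(data.data, data.target)

    # Rank features by absolute coefficient; this is only a linear
    # approximation and need not match what a boosted tree learns.
    ranked = sorted(zip(data.feature_names, model.coef_),
                    key=lambda pair: abs(pair[1]), reverse=True)
    for name, weight in ranked:
        print("{}: {:+.1f}".format(name, weight))
    ```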

    AK

    Sunday, March 15, 2015 3:26 PM
    Moderator
  • But wouldn't the linear regression weights only be applicable to that model, and not necessarily transfer over to others? I haven't checked the Feature Selection module; I'll give it a spin!

    Thanks again for the insights

    Sunday, March 15, 2015 3:39 PM
  • Yes, both suggestions I provided are only reasonable estimates. We're also working on enabling tree visualizations to show exactly what inferences were made at each node in the DAG, as I'd mentioned before. This would be the non-lossy way to recover the important features.

    Regards,

    AK

    Sunday, March 15, 2015 4:29 PM
    Moderator
  • Cool, not sure this will do exactly what I need, but I'll move my question onto a new thread to stop taking this too far off topic.
    Tuesday, March 17, 2015 2:33 AM