Hello Laura,

The Evaluate Model module measures the accuracy of a trained model. You provide a dataset containing scores generated from a model, and the Evaluate Model module computes a set of industry-standard evaluation metrics appropriate to the type of model being evaluated. In your case, that is a classification model. For classification models, the metrics below are reported, and the results are ranked by the metric you select for evaluation.

- **Accuracy** measures the goodness of a classification model as the proportion of true results to total cases.
- **Precision** is the proportion of true results over all positive results.
- **Recall** is the fraction of all correct results returned by the model.
- **F-score** is computed as the weighted average of precision and recall between 0 and 1, where the ideal F-score value is 1.
- **AUC** measures the area under the curve plotted with true positives on the y-axis and false positives on the x-axis. This metric is useful because it provides a single number that lets you compare models of different types.
- **Average log loss** is a single score used to express the penalty for wrong results. It is calculated as the difference between two probability distributions: the true one, and the one in the model.
- **Training log loss** is a single score that represents the advantage of the classifier over a random prediction. The log loss measures the uncertainty of your model by comparing the probabilities it outputs to the known values (ground truth) in the labels. You want to minimize log loss for the model as a whole.
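To make the definitions concrete, here is a minimal Python sketch that computes most of these metrics by hand for a small, made-up set of labels and scores. This is only an illustration of the formulas, not the Evaluate Model module itself, and the example data is hypothetical:

```python
import math

# Hypothetical example: ground-truth labels and model scores (probability of class 1).
y_true = [0, 0, 1, 1]
scores = [0.1, 0.6, 0.8, 0.9]
y_pred = [1 if s >= 0.5 else 0 for s in scores]  # threshold the scores at 0.5

# Confusion-matrix counts.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

accuracy  = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f_score   = 2 * precision * recall / (precision + recall)

# AUC: fraction of (positive, negative) pairs where the positive is scored higher,
# counting ties as half a correct ordering.
pos = [s for t, s in zip(y_true, scores) if t == 1]
neg = [s for t, s in zip(y_true, scores) if t == 0]
auc = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg) / (len(pos) * len(neg))

# Log loss: average negative log-probability the model assigned to the true label.
log_loss = -sum(math.log(s if t == 1 else 1 - s)
                for t, s in zip(y_true, scores)) / len(y_true)

print(accuracy, precision, recall, f_score, auc, round(log_loss, 4))
```

With these toy numbers, one prediction is a false positive, so accuracy is 0.75 and precision is 2/3, while every true positive is found, so recall is 1. In practice you would read these values straight from the Evaluate Model output rather than computing them yourself.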

Could you please check whether the values above appear in your Evaluate Model results? These are the metrics this module reports after the run.

-Rohit