
AI Center

Last updated May 21, 2025

Artifacts

Evaluation report

The evaluation report is a PDF file containing the following information in a human-readable format:

  • N-grams per class
  • Precision-recall diagram
  • Classification report
  • Confusion matrix
  • Best model parameters for hyperparameter search

N-grams per class

This section lists the top 10 n-grams that most influence the model's prediction for each class. There is a separate table for each class that the model was trained on.
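For intuition, the sketch below shows one way such per-class n-grams can be derived from a bag-of-words model's coefficients. The CountVectorizer-plus-LogisticRegression pipeline and the toy data are assumptions for illustration, not AI Center's exact implementation.

```python
# A minimal sketch: rank n-grams by logistic regression coefficient per class.
# The pipeline and data are illustrative assumptions, not AI Center internals.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["refund my order", "cancel the refund order",
         "great product", "love this great product"]
labels = [0, 0, 1, 1]

vectorizer = CountVectorizer(ngram_range=(1, 2))  # unigrams and bigrams
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

features = vectorizer.get_feature_names_out()
# For a binary model, coef_ has shape (1, n_features): positive values favor
# classes_[1], negative values favor classes_[0].
per_class_weights = np.vstack([-model.coef_[0], model.coef_[0]])
for cls, weights in zip(model.classes_, per_class_weights):
    top = features[np.argsort(weights)[::-1][:10]]  # 10 most influential n-grams
    print(f"class {cls}: {list(top)}")
```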

Precision-recall diagram

You can use this diagram and the accompanying table to check the precision-recall trade-off, along with the F1 scores of the model. The thresholds and their corresponding precision and recall values are provided in a table below the diagram. Use this table to choose the threshold to configure in your workflow, which determines when data is sent to Action Center for human-in-the-loop validation; a code sketch after the example table below shows one way to pick it. Note that the higher the chosen threshold, the more data is routed to Action Center for human review.

There is a precision-recall diagram for each class.

For an example of a precision-recall diagram, see the figure below.



For an example of a precision-recall table, see the table below.

Precision             Recall                Threshold
0.8012232415902141    0.6735218508997429    0.30539842728983285
0.8505338078291815    0.6143958868894601    0.37825683923133907
0.9005524861878453    0.4190231362467866    0.6121292357073038
0.9514563106796117    0.2519280205655527    0.7916427288647211
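A minimal sketch of picking such a threshold programmatically, assuming scikit-learn's precision_recall_curve and toy scores; the real curve comes from your model's confidence scores on the test set.

```python
# Choose the smallest threshold that reaches a target precision; predictions
# scoring below it would be routed to Action Center for human review.
# The data here is a toy assumption for illustration.
import numpy as np
from sklearn.metrics import precision_recall_curve

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 1])    # membership of one class
y_score = np.array([0.10, 0.40, 0.35, 0.80, 0.70, 0.50, 0.90, 0.60])

precision, recall, thresholds = precision_recall_curve(y_true, y_score)

target = 0.90
# precision[:-1] aligns element-wise with thresholds.
idx = next(i for i, p in enumerate(precision[:-1]) if p >= target)
print(f"threshold={thresholds[idx]:.3f} "
      f"precision={precision[idx]:.3f} recall={recall[idx]:.3f}")
```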

Classification report

The classification report contains the following information:

  • Label - the class label in the test set
  • Precision - the fraction of predictions for a label that were correct
  • Recall - the fraction of instances of a label that were correctly retrieved
  • F1 score - the harmonic mean of precision and recall; you can use this score to compare two models
  • Support - the number of times a certain label appears in the test set

For an example of a classification report, see the table below.

Label   Precision   Recall   F1 Score   Support
0.0     0.805       0.737    0.769      319
1.0     0.731       0.812    0.77       389
2.0     0.778       0.731    0.754      394
3.0     0.721       0.778    0.748      392
4.0     0.855       0.844    0.85       385
5.0     0.901       0.803    0.849      395
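A minimal sketch of producing the same per-label metrics, assuming scikit-learn's classification_report and toy predictions; the exact AI Center implementation may differ.

```python
from sklearn.metrics import classification_report

# Toy ground truth and predictions, purely for illustration.
y_true = [0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0, 2]

# Prints precision, recall, F1 score, and support per label,
# matching the columns of the report above.
print(classification_report(y_true, y_pred, digits=3))
```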

Confusion matrix
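The confusion matrix shows how predictions are distributed across classes: each row corresponds to a true label and each column to a predicted label, so off-diagonal cells reveal which classes the model confuses. A minimal sketch of computing one, assuming scikit-learn and toy labels:

```python
from sklearn.metrics import confusion_matrix

# Toy ground truth and predictions, purely for illustration.
y_true = [0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0, 2]

# Rows are true labels, columns are predicted labels;
# the diagonal counts correct predictions.
print(confusion_matrix(y_true, y_pred))
```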



Best model parameters for hyperparameter search

When the BOW.hyperparameter_search.enable variable is set to True, the best model parameters picked by the algorithm are displayed in this table. To retrain the model with different parameters not covered by the hyperparameter search, you can also set these parameters manually in the environment variables. For more information, see the Environment variables section (doc:light-text-classification#environment-variables).

For an example of this report, see the table below.

Name

Value

BOW.ngram_range

(1, 2)

BOW.min_df

2

BOW.lr_kwargs.class_weight

balanced

dataset.text_pp_remove_stop_words

True
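For intuition, a search over parameters like these can be sketched with scikit-learn's GridSearchCV; the pipeline, grid, and toy data below are assumptions for illustration, not AI Center's internal search.

```python
# A minimal sketch of a hyperparameter search over bag-of-words parameters.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

pipeline = Pipeline([
    ("bow", CountVectorizer()),
    ("lr", LogisticRegression(max_iter=1000)),
])

# Grid mirroring the parameters in the table above.
param_grid = {
    "bow__ngram_range": [(1, 1), (1, 2)],
    "bow__min_df": [1, 2],
    "lr__class_weight": [None, "balanced"],
}

texts = ["refund my order", "refund the order", "need a refund",
         "great product", "love this product", "nice product",
         "terrible service", "awful service", "bad service"]
labels = [0, 0, 0, 1, 1, 1, 2, 2, 2]

search = GridSearchCV(pipeline, param_grid, cv=3).fit(texts, labels)
print(search.best_params_)  # best model parameters, as in the report
```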

Hyperparameter search report

This report is a PDF file generated only if the BOW.hyperparameter_search.enable parameter is set to True. It contains the best values for the optional variables and a diagram displaying the results.


JSON files

Each section of the evaluation report PDF has a corresponding machine-readable JSON file. You can use these files to pipe the model evaluation into Insights from a workflow.
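A minimal sketch of loading one of these JSON files from a script; the file name classification_report.json and the flat structure are hypothetical, so check your pipeline run's artifacts folder for the actual names.

```python
import json

# Hypothetical artifact name; the real files mirror the sections of the
# evaluation report PDF.
with open("classification_report.json") as f:
    report = json.load(f)

# Structure depends on the artifact; here we assume a top-level object
# and just walk its keys.
for key, value in report.items():
    print(key, value)
```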
