document-understanding
2021.10
OUT OF SUPPORT

Document Understanding User Guide

Automation Cloud | Automation Cloud Public Sector | Automation Suite | Standalone
Last updated Nov 11, 2024

Evaluation Pipelines

An Evaluation Pipeline is used to evaluate a trained ML model.

Evaluate a Trained Model

Configure the evaluation pipeline as follows:

  • In the Pipeline type field, select Evaluation run.
  • In the Choose package field, select the package you want to evaluate.
  • In the Choose package major version field, select a major version for your package.
  • In the Choose package minor version field, select a minor version you want to evaluate.
  • In the Choose evaluation dataset field, select a representative evaluation dataset.
  • In the Enter parameters section, there is one environment variable relevant for Evaluation pipelines that you can use:
  • eval.redo_ocr which, if set to true, reruns OCR when the pipeline runs, so you can assess the impact of OCR on extraction accuracy. This assumes an OCR engine was configured when the ML Package was created.

The Enable GPU slider is disabled by default, in which case the pipeline runs on CPU. We strongly recommend that Evaluation pipelines run only on CPU.

  • Select when the pipeline should run: Run now, Time based, or Recurring.


  • After you configure all the fields, click Create. The pipeline is created.
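
The pipeline itself is created through the AI Center UI, but the configuration above can be summarized as a small parameter sketch for reference. The package name, versions, and dataset path below are hypothetical placeholders; only eval.redo_ocr corresponds to an actual environment variable described in this page.

```python
# Illustrative only: the pipeline is created in the UI, not through code.
# Package name, versions, and dataset path are hypothetical placeholders.
import json

evaluation_pipeline = {
    "pipeline_type": "Evaluation run",
    "package": "MyInvoicesPackage",                   # Choose package
    "package_major_version": 1,                       # Choose package major version
    "package_minor_version": 3,                       # Choose package minor version
    "evaluation_dataset": "datasets/invoices-eval",   # Choose evaluation dataset
    "parameters": {
        # Rerun OCR during evaluation; requires an OCR engine configured
        # when the ML Package was created.
        "eval.redo_ocr": True,
    },
    "enable_gpu": False,     # Evaluation pipelines are recommended to run on CPU only
    "schedule": "Run now",   # or "Time based" / "Recurring"
}

print(json.dumps(evaluation_pipeline, indent=2))
```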

Artifacts

For an Evaluation Pipeline, the Outputs pane also includes an artifacts/eval_metrics folder which contains two files:



  • evaluation_default.xlsx is an Excel spreadsheet with a side-by-side comparison of ground truth versus predicted values for each field predicted by the model, together with a per-document accuracy metric. Documents are sorted by increasing accuracy, so the least accurate documents appear at the top to facilitate diagnosis and troubleshooting.
  • evaluation_metrics_default.txt contains the F1 scores of the fields which were predicted.

    For line items, a global score is obtained for all columns taken together.
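
After downloading the artifacts, you can inspect both files locally. The sketch below is a minimal example, assuming pandas and openpyxl are installed; the "accuracy" column name is an assumption for illustration, since the exact spreadsheet layout depends on the fields defined in your ML Package.

```python
# Minimal sketch for inspecting downloaded evaluation artifacts.
# The "accuracy" column name is an assumption -- check the actual headers first.
import pandas as pd

# Side-by-side ground truth vs. predicted values, ordered by increasing accuracy.
report = pd.read_excel("artifacts/eval_metrics/evaluation_default.xlsx")
print(report.head())  # least accurate documents appear first

# Hypothetical: list the ten least accurate documents if an accuracy column exists.
if "accuracy" in report.columns:
    print(report.nsmallest(10, "accuracy"))

# Per-field F1 scores are stored as plain text; print them as-is.
with open("artifacts/eval_metrics/evaluation_metrics_default.txt") as f:
    print(f.read())
```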

