UiPath Document Understanding

Invoices retrained with one additional field

Intended Audience

The aim of this page is to help first-time users get familiar with Document Understanding.


For scalable production deployments, we strongly recommend using the Document Understanding Process available in UiPath Studio under the Templates section.

This quickstart shows you how to retrain the Invoices out-of-the-box ML model to extract one more field.

Let’s use the same workflow we used for the receipts in the previous quickstart and modify it so it can support invoices.

To do that, we need to perform the following steps in our workflow:

  1. Modify the taxonomy
  2. Add a classifier
  3. Add a Machine Learning Extractor
  4. Label the data
  5. Retrain the Invoices ML model

Now, let's look at each step in detail.

1. Modify the taxonomy


In this step, we need to modify the taxonomy to add the invoice document type.

To do so, open the Taxonomy Manager and create a group named "Semi Structured Documents", a category named "Finance", and a document type named "Invoices". Then create the fields listed below, with user-friendly names and the corresponding data types.

  • name - Text
  • vendor-addr - Address
  • billing-name - Text
  • billing-address - Address
  • shipping-address - Address
  • invoice-no - Text
  • po-no - Text
  • vendor-vat-no - Text
  • date - Date
  • tax - Number
  • total - Number
  • payment-terms - Text
  • net-amount - Number
  • due-date - Date
  • discount - Number
  • shipping-charges - Number
  • payment-addr - Address
  • description - Text
  • items - Table
    • description - Text
    • quantity - Number
    • unit-price - Number
    • line-amount - Number
    • item-po-no - Text
    • line-no - Text
    • part-no - Text
    • billing-vat-no - Text
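The taxonomy above can be sketched as a plain data structure. This is a minimal Python sketch for orientation only; it mirrors the field list above but is not UiPath's actual taxonomy format:

```python
# Illustrative representation of the "Invoices" document type defined above.
# Keys are field ids, values are data types; not an actual UiPath format.
invoices_fields = {
    "name": "Text",
    "vendor-addr": "Address",
    "billing-name": "Text",
    "billing-address": "Address",
    "shipping-address": "Address",
    "invoice-no": "Text",
    "po-no": "Text",
    "vendor-vat-no": "Text",
    "date": "Date",
    "tax": "Number",
    "total": "Number",
    "payment-terms": "Text",
    "net-amount": "Number",
    "due-date": "Date",
    "discount": "Number",
    "shipping-charges": "Number",
    "payment-addr": "Address",
    "description": "Text",
}

# The "items" field is a Table whose columns are themselves typed fields.
items_columns = {
    "description": "Text",
    "quantity": "Number",
    "unit-price": "Number",
    "line-amount": "Number",
    "item-po-no": "Text",
    "line-no": "Text",
    "part-no": "Text",
    "billing-vat-no": "Text",
}

taxonomy = {
    "group": "Semi Structured Documents",
    "category": "Finance",
    "document_type": "Invoices",
    "fields": {**invoices_fields,
               "items": {"type": "Table", "columns": items_columns}},
}
```

In the Taxonomy Manager you build this hierarchy (group > category > document type > fields) through the UI rather than in code.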

2. Add a classifier


In this step, we need to add a classifier so we can process both receipts and invoices with our workflow.

Since our workflow now supports two document types, "Receipts" and "Invoices", we need to add the classifier to differentiate between different document types coming in as input:

  1. Add a Classify Document Scope after the Digitize Document activity and provide the DocumentPath, DocumentText, DocumentObjectModel, and Taxonomy as input arguments and capture the ClassificationResults in a new variable. We need this variable to check what document(s) we are processing.

  2. We also need to specify one or more classifiers. In this example, we are using the Intelligent Keyword Classifier. Add it to the Classify Document Scope activity.
    This page helps you make an informed decision about which classification method to use in different scenarios.

  3. Train the classifier as described here.

  4. Configure the classifier by enabling it for both document types.

  5. Depending on your use case, you might want to validate the classification. You can do that using the Present Classification Station or the Create Document Classification Action and Wait For Document Classification Action And Resume activities.
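Conceptually, the classification step routes each incoming document either to the matching extractor or to human validation. The following Python sketch only illustrates that routing idea; the result shape and threshold are hypothetical, and in Studio this logic lives inside the Classify Document Scope and your workflow's branching:

```python
# Illustrative routing based on a classification result.
# The dict shape and threshold below are assumptions for this sketch,
# not the actual ClassificationResults type from the activities package.
CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff; tune per use case

def route_document(classification):
    doc_type = classification["document_type"]
    confidence = classification["confidence"]
    if confidence < CONFIDENCE_THRESHOLD:
        return "validate"  # send to Classification Station for human review
    if doc_type in ("Receipts", "Invoices"):
        return doc_type    # proceed to the matching ML Extractor branch
    return "unknown"
```

For example, `route_document({"document_type": "Invoices", "confidence": 0.95})` returns `"Invoices"`, while a low-confidence result would be routed to validation.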

3. Add a Machine Learning Extractor


In this step, we need to add a Machine Learning Extractor to the Data Extraction Scope activity and connect it to the Invoices public endpoint.

The procedure is exactly the same as for the Receipts Machine Learning Extractor we added previously:

  1. Add a Machine Learning Extractor activity alongside the Receipts Machine Learning Extractor.

  2. Provide the Invoices public endpoint, namely https://du.uipath.com/ie/invoices/, and an API key to the extractor.

  3. Configure the extractor to work with invoices by mapping the fields created in the Taxonomy Manager to the fields available in the ML model:

  4. Do not forget to use the ClassificationResults variable output by the Classify Document Scope as input to the Data Extraction Scope, instead of specifying a DocumentTypeId.

  5. Run the workflow to test that it works correctly with invoices.
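The field mapping in step 3 pairs each taxonomy field with the corresponding ML model field. A minimal Python sketch of that idea (the mapping is configured in the extractor's wizard in Studio; the names and the helper below are purely illustrative):

```python
# Hypothetical mapping from taxonomy field ids to ML model field names.
# In Studio this pairing is done in the Configure Extractors wizard,
# not in code; this sketch only shows the underlying idea.
field_mapping = {
    "invoice-no": "invoice-no",
    "date": "date",
    "total": "total",
    "vendor-addr": "vendor-addr",
}

def map_fields(extracted, mapping):
    """Translate ML model field names back to taxonomy field ids,
    dropping any model fields that are not mapped."""
    inverse = {model: taxo for taxo, model in mapping.items()}
    return {inverse[k]: v for k, v in extracted.items() if k in inverse}
```

Only mapped fields flow through to the workflow's extraction results, which is why every field you care about must be paired in the wizard.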

4. Label the data


We need to label the data before retraining the base Invoices ML model in order for it to support the new IBAN field.

  1. Collect the requirements and sample invoice documents in sufficient volume for the complexity of the use case you need to solve.
    Label 50 pages, as explained on this documentation page.
    For our use case, you can use these documents.

  2. Gain access to an instance of Data Manager, either on-premises or in AI Center in the Cloud. Make sure you have the permissions to use Data Manager.

  3. Create an AI Center Project and go to Data Labeling > UiPath Document Understanding and create a Data Labeling session.

  4. Configure an OCR engine as described here, try importing a diverse set of your production documents, and make sure that the OCR engine reads the text you need to extract.
    More suggestions are available in this section. Only proceed to the next step after you have settled on an OCR engine.

  5. Create a fresh Data Manager session, and import a Training set and an Evaluation set, while making sure to check the Make this a Test set checkbox when importing the Evaluation set.
    More details about imports here.

  6. Create and configure the IBAN field as described here.
    More advanced guidelines are available in this section.

  7. Label a Training dataset and an Evaluation dataset as described here and here.
    The prelabeling feature of Data Manager described here can make the labeling work a lot easier.

  8. Export first the Evaluation set and then the Training set to AI Center by selecting them from the filter dropdown at the top of the Data Manager view.
    More details about exports here.
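The Training/Evaluation split in steps 5–8 can be pictured as partitioning your labeled pages into two disjoint sets. In practice the split happens by importing separate sets into Data Manager and checking "Make this a Test set" for the evaluation set; this Python sketch only illustrates the underlying idea, with an assumed 80/20 split:

```python
import random

def split_dataset(pages, eval_fraction=0.2, seed=42):
    """Shuffle labeled pages and split them into (training, evaluation).

    The 80/20 ratio and the seed are assumptions for this sketch;
    choose a split appropriate for your document volume.
    """
    shuffled = list(pages)
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * eval_fraction)
    return shuffled[cut:], shuffled[:cut]

# Example: 50 labeled pages, as recommended above.
training, evaluation = split_dataset([f"page_{i}" for i in range(50)])
```

The key property is that the two sets never overlap: the evaluation set must contain documents the model has not trained on, otherwise the evaluation scores are meaningless.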

Next up, let’s create our model, retrain and deploy it.

5. Retrain the Invoices ML model


Now that our workflow supports processing invoices, we need to extract the IBAN from our invoices, which is a field that does not get picked up by default by the out-of-the-box Invoices ML model. That means we need to retrain a new model, starting from the base one.

  1. Create an ML Package as described here. If your document type is different from the ones available out-of-the-box, then choose the DocumentUnderstanding ML Package. Otherwise, use the package closest to the document type you need to extract.

  2. Create a Training Pipeline as described here using the Input dataset which you exported in the previous section from Data Manager.

  3. When the training is done and you have package minor version 1, run an Evaluation Pipeline on this minor version and inspect the evaluation.xlsx side by side comparison.
    Use the detailed guidelines here.

  4. If the evaluation results are satisfactory, go to the ML Skills view and create an ML Skill using the new minor version of the ML Package. If you want to use this skill for prelabeling in Data Manager, you need to click the Modify Current Deployment button at the top right of the ML Skill view and toggle on the Make ML Skill Public option.

  5. After creating the ML Skill, we now need to consume it in Studio. The easiest way to do that is to make the ML Skill public, as described here. Then, simply replace the Invoices ML model public endpoint that we initially added to the Machine Learning Extractor in our workflow with the public endpoint of the ML Skill.

  6. Run the workflow, and you should see the newly added IBAN field being extracted alongside the default invoice fields.
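A quick sanity check for step 6 is to confirm that the extraction output now contains the IBAN in addition to the default fields. The following Python sketch uses a hypothetical result dict; in Studio you would inspect the ExtractionResults variable instead:

```python
# Illustrative check that the retrained model returns the new IBAN field
# alongside the default invoice fields. The dict below is hypothetical;
# it is not the actual ExtractionResults type.
DEFAULT_FIELDS = {"invoice-no", "date", "total"}  # a few of the defaults

def check_iban_extracted(extraction_result):
    fields = set(extraction_result)
    return "iban" in fields and DEFAULT_FIELDS <= fields

result = {"invoice-no": "INV-001", "date": "2021-05-01",
          "total": "100.00", "iban": "DE89370400440532013000"}
assert check_iban_extracted(result)
```

If the IBAN is missing, double-check that the field was exported with the training set and that the workflow points at the new ML Skill endpoint rather than the original public endpoint.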

Download example


Download this sample project using this link. You need to change the Machine Learning Extractor for Invoices from Endpoint mode to your trained ML Skill.
