Document Understanding Modern Projects User Guide
Evaluate project success
You can track the metrics described on this page to assess the success of your projects.
The most relevant metric to follow is Estimated time saved, expressed in hours. It can be tied directly to ROI: it tracks the time saved by having a Document Understanding™ process in place, assuming a human processes one document page in the configured time. The metric uses the following formula:
{Estimated time saved} = {number of documents processed} * {x} (minutes) - {validation time}
- Number of documents processed: the total number of documents processed using the automation.
- x: the time a user would need to process one document without automation.
- Validation time: the time the user spent in Classification Station or Validation Station validating all processed documents. This is considered 0 for documents that do not require validation.
Example: 10 minutes saved = 2 documents * 5 minutes (manual processing) - 0 minutes validation
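The formula above can be sketched in a few lines of Python. This is only an illustration of the arithmetic; the function name and signature are hypothetical, not part of any UiPath API.

```python
def estimated_time_saved(documents_processed: int,
                         minutes_per_document: float,
                         validation_minutes: float) -> float:
    """Estimated time saved, in minutes: the manual processing time
    avoided, minus the time still spent on human validation."""
    return documents_processed * minutes_per_document - validation_minutes

# The example from the text: 2 documents, 5 minutes each, no validation.
print(estimated_time_saved(2, 5, 0))  # 10.0 minutes saved
```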
Besides Estimated time saved, there are other important metrics that are worth tracking to evaluate the performance and success of a project.
| Metric | Description | When to track |
|---|---|---|
| Number of documents processed | The total number of processed documents. | Use this metric to: |
| Validation time | The total time spent validating the classification and extraction results. | Use this metric to: |
| Average handling time | The average time required to process a document, including the time users spend on validation, counted as 0 for documents that do not require validation. The following formula is used: {Average handling time} = sum({time spent processing documents}) / {number of documents processed}. Example: if 10 documents are processed at 2 minutes per document, and 5 of them are validated by a human in the loop adding 2 extra minutes each, the result is (2\*5 + 4\*5) / 10 = 3 minutes average handling time. | Use this metric to track how much time users need to process a document now that you have the automation in place. |
| Field corrections trend | The number of corrected fields, by type of modification (edited value, edited box, marked as missing, and so on), per month. In addition to corrections, the metric also provides a baseline of the total number of fields, so you can establish how many of the total fields were corrected. | Use this metric to: |
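The Average handling time formula can likewise be sketched as a short calculation. This is a minimal illustration of the arithmetic in the table above; the function name is hypothetical and not part of any UiPath API.

```python
def average_handling_time(handling_minutes: list[float]) -> float:
    """Average time per document, given each document's total handling
    time (validation time counted as 0 where no validation happened)."""
    return sum(handling_minutes) / len(handling_minutes)

# The table's example: 10 documents at 2 minutes each; 5 of them get
# 2 extra minutes of human validation (4 minutes total for those).
times = [2] * 5 + [4] * 5
print(average_handling_time(times))  # 3.0 minutes
```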
- To improve Estimated time saved:
  - Increase the volume of processed documents.
  - Improve Validation time and straight-through processing by putting business rules in place.
- To improve Validation time and Average handling time:
  - Define the logic used to decide whether to send documents to a human in the loop for validation.
  - Fine-tune your model and add training data for fields that are not extracted accurately. Act on the field corrections trend to identify which fields require the most attention, and focus your improvements there.
  - Use the Generative Validation feature.