Manual Validation for Digitize Documents
The example below explains how to manually extract data from an image and present the output in a separate file. It uses activities such as Digitize Document and Present Validation Station, which you can find in the UiPath.IntelligentOCR.Activities package.
This is how the automation process can be built:
- Open Studio and create a new process, named Main by default.
  Note: Make sure to add all the needed files (the .json files and all the images) inside the project folder.
- Drag a Sequence container in the Workflow Designer and create the following variables:

  | Variable Name | Variable Type | Default Value |
  | --- | --- | --- |
  | Text | String | |
  | DOM | UiPath.DocumentProcessing.Contracts.Dom.Document | |
  | Data | UiPath.DocumentProcessing.Contracts.Taxonomy.DocumentTaxonomy | |
  | DocumentTaxonomy | UiPath.DocumentProcessing.Contracts.Taxonomy.DocumentTaxonomy | |
  | TaxonomyJSON | String | |
  | HumanValidated | UiPath.DocumentProcessing.Contracts.Results.ExtractionResult | |
- Drag a Read Text File activity inside the sequence.
  - In the Properties panel, add the name of the file, in this case "taxonomy.json", in the FileName field.
  - Add the variable TaxonomyJSON in the Content field.
- Add an Assign activity below the Read Text File activity.
  - Add the variable Data in the To field and the expression DocumentTaxonomy.Deserialize(TaxonomyJSON) in the Value field. This activity builds the taxonomy for extraction.
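The Assign step above turns the raw JSON read from taxonomy.json into a taxonomy object that the extraction activities can consume. Outside of Studio, the same idea can be sketched in Python; note that the keys used below are purely illustrative stand-ins, not the actual UiPath taxonomy schema:

```python
import json

# A minimal taxonomy document, standing in for the contents of taxonomy.json.
# The key names here are illustrative; the real UiPath taxonomy schema differs.
taxonomy_json = """
{
  "DocumentGroups": [
    {
      "Name": "Invoices",
      "DocumentTypes": [
        {"Name": "Invoice",
         "Fields": [{"FieldName": "Total"}, {"FieldName": "Date"}]}
      ]
    }
  ]
}
"""

def deserialize_taxonomy(raw: str) -> dict:
    """Parse the taxonomy JSON string into an object,
    mirroring the role of DocumentTaxonomy.Deserialize(TaxonomyJSON)."""
    return json.loads(raw)

data = deserialize_taxonomy(taxonomy_json)
for group in data["DocumentGroups"]:
    for doc_type in group["DocumentTypes"]:
        print(doc_type["Name"], [f["FieldName"] for f in doc_type["Fields"]])
```

The point of the Assign is exactly this conversion: the Read Text File activity only yields a string, and the rest of the workflow needs a structured taxonomy object.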
- Drag a Digitize Document activity below the Assign activity.
  - In the Properties panel, add the value 1 in the DegreeOfParallelism field.
  - Add the expression "Input\Invoice01.tif" in the DocumentPath field.
  - Add the variable DOM in the DocumentObjectModel field.
  - Add the variable Text in the DocumentText field.
- Drop a Google OCR engine inside the Digitize Document activity.
  - In the Properties panel, add the variable Image in the Image field.
  - Select the check box for the ExtractWords option. This option extracts the on-screen position of all detected words.
  - Add the expression "eng" in the Language field.
  - Select the option Legacy from the Profile drop-down list.
  - Add the value 2 in the Scale field.
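Conceptually, digitization with ExtractWords enabled yields two things: the full document text and the on-screen position of every detected word. The sketch below is a minimal stand-in for that output, not the actual UiPath Dom.Document API:

```python
# Illustrative model of a digitized document: plain text plus word positions.
# The class and member names are assumptions for this sketch only.
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    left: int
    top: int
    width: int
    height: int

@dataclass
class DigitizedDocument:
    text: str      # corresponds to the DocumentText output
    words: list    # word boxes, available because ExtractWords is enabled

doc = DigitizedDocument(
    text="Invoice Total 125.00",
    words=[
        Word("Invoice", 40, 20, 90, 14),
        Word("Total", 40, 60, 55, 14),
        Word("125.00", 110, 60, 70, 14),
    ],
)

# With positions available, a downstream consumer (for example a validation
# UI) can locate a value directly on the page image.
total_word = next(w for w in doc.words if w.text == "125.00")
print(total_word.left, total_word.top)
```

This is why ExtractWords matters for the next step: the Validation Station can highlight values on the page only because each word carries its coordinates.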
- Place a Present Validation Station activity below the Digitize Document activity.
  - In the Properties panel, add the variable DOM in the DocumentObjectModel field.
  - Add the expression "Input\Invoice01.tif" in the DocumentPath field.
  - Add the variable Text in the DocumentText field.
  - Add the variable Data in the Taxonomy field.
  - Add the variable HumanValidated in the ValidatedExtractionResults field.
- Drag a For Each activity under the Present Validation Station activity.
  - In the Properties panel, select the option UiPath.DocumentProcessing.Contracts.Results.ResultsDataPoint from the TypeArgument drop-down list.
  - Add the expression HumanValidated.ResultsDocument.Fields in the Values field.
- Drag a Log Message activity inside the Body of the For Each activity.
  - Select the option Info from the Level drop-down list.
  - Add the expression item.FieldName in the Message field.
- Drag a Log Message activity below the first Log Message activity.
  - Select the option Info from the Level drop-down list.
  - Add the expression item.Values(0).Value.ToString in the Message field.
- Drag a Write Line activity under the Log Message activities.
  - Add the value "" in the Text field. This writes an empty line that separates the output for each field.
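Taken together, the For Each, Log Message, and Write Line steps iterate the validated fields and log each field's name and first value. A rough Python sketch of that loop, using a stand-in data structure in place of the real HumanValidated.ResultsDocument.Fields collection (these class names are assumptions, not the UiPath.DocumentProcessing.Contracts.Results API):

```python
# Stand-in for HumanValidated.ResultsDocument.Fields: each data point carries
# a FieldName and a list of Values. Illustrative only; the real UiPath
# ResultsDataPoint and ResultsValue types have richer members.
from dataclasses import dataclass

@dataclass
class ResultsValue:
    Value: str

@dataclass
class ResultsDataPoint:
    FieldName: str
    Values: list

fields = [
    ResultsDataPoint("Total", [ResultsValue("125.00")]),
    ResultsDataPoint("Date", [ResultsValue("2020-01-15")]),
]

log_lines = []
for item in fields:                    # the For Each activity
    log_lines.append(item.FieldName)   # first Log Message: item.FieldName
    if item.Values:                    # second Log Message: item.Values(0).Value.ToString
        log_lines.append(str(item.Values[0].Value))
    log_lines.append("")               # Write Line with "" separates the entries

print("\n".join(log_lines))
```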
- Run the process. The robot digitizes the document, presents it for manual validation, and logs the validated results.
Download example from here.