Overview
UiPath provides a number of machine learning capabilities out of the box in UiPath® AI Center. A notable example is Document Understanding™. In addition, UiPath-built and open-source models (serving-only and retrainable) are continuously added to AI Center.
The name of an ML Package cannot be a Python reserved keyword such as class, break, from, finally, global, None, etc. Make sure to choose another name. The listed examples are not exhaustive, since the package name is used in class <pkg-name> and import <pkg-name> statements.
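As a quick check, reserved keywords can be detected with Python's built-in keyword module; the helper below is a hypothetical illustration, not part of AI Center.

```python
import keyword

# Hypothetical helper for illustration: flag candidate package names that would
# break the generated "class <pkg-name>" / "import <pkg-name>" statements.
def is_valid_package_name(name: str) -> bool:
    return name.isidentifier() and not keyword.iskeyword(name)

print(is_valid_package_name("class"))            # False - reserved keyword
print(is_valid_package_name("None"))             # False - reserved keyword
print(is_valid_package_name("MyTextClassifier")) # True
```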
The following packages are available in the platform today:
Model | Category | Type | Availability |
---|---|---|---|
Image Classification | UiPath Image Analysis | Custom Training | Preview |
Signature Comparison | UiPath Image Analysis | Pre Trained | General Availability |
Custom Named Entity Recognition | UiPath Language Analysis | Custom Training | General Availability |
Light Text Classification | UiPath Language Analysis | Custom Training | General Availability |
Multilingual Text Classification | UiPath Language Analysis | Custom Training | General Availability |
Semantic Similarity | UiPath Language Analysis | Pre Trained | Preview |
Multilabel Text Classification | UiPath Language Analysis | Custom Training | Preview |
TM Analyzer Model | UiPath Task Mining | Custom Training | General Availability |
Image Moderation | Open-Source Packages - Image Analysis | Pre Trained | N/A |
Object Detection | Open-Source Packages - Image Analysis | Pre Trained and Custom Training | N/A |
English Text Classification | Open-Source Packages - Language Analysis | Custom Training | N/A |
French Text Classification | Open-Source Packages - Language Analysis | Custom Training | N/A |
Japanese Text Classification | Open-Source Packages - Language Analysis | Custom Training | N/A |
Language Detection | Open-Source Packages - Language Analysis | Pre Trained | N/A |
Named Entity Recognition | Open-Source Packages - Language Analysis | Pre Trained | N/A |
Sentiment Analysis | Open-Source Packages - Language Analysis | Pre Trained | N/A |
Text Classification | Open-Source Packages - Language Analysis | Custom Training | N/A |
Question Answering | Open-Source Packages - Language Comprehension | Pre Trained | N/A |
Semantic Similarity | Open-Source Packages - Language Comprehension | Pre Trained | N/A |
Text Summarization | Open-Source Packages - Language Comprehension | Pre Trained | N/A |
English To French Translation | Open-Source Packages - Language Translation | Pre Trained | N/A |
English To German Translation | Open-Source Packages - Language Translation | Pre Trained | N/A |
English To Russian Translation | Open-Source Packages - Language Translation | Pre Trained | N/A |
German To English Translation | Open-Source Packages - Language Translation | Pre Trained | N/A |
MultilingualTranslator | Open-Source Packages - Language Translation | Pre Trained | N/A |
Russian To English Translation | Open-Source Packages - Language Translation | Pre Trained | N/A |
TPOT Tabular Classification | Open-Source Packages - Tabular Data | Custom Training | N/A |
TPOT Tabular Regression | Open-Source Packages - Tabular Data | Custom Training | N/A |
XGBoost Tabular Classification | Open-Source Packages - Tabular Data | Custom Training | N/A |
XGBoost Tabular Regression | Open-Source Packages - Tabular Data | Custom Training | N/A |
The following are example packages that can be immediately deployed and added to an RPA workflow; more can be found in the product.
This is a model for image content moderation based on a deep learning architecture commonly referred to as Inception V3. Given an image, the model outputs one of four classes: 'explicit', 'explicit-drawing', 'neutral', and 'pornographic', together with a normalized confidence score for each class.
It is based on the paper 'Rethinking the Inception Architecture for Computer Vision' by Szegedy et al., which was open-sourced by Google.
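For illustration only, a consumer of this skill might pick the highest-scoring class and flag anything non-neutral for review; the dictionary shape below is an assumed example output, not the documented schema.

```python
# Assumed example output shape: class name -> normalized confidence score.
prediction = {
    "explicit": 0.02,
    "explicit-drawing": 0.01,
    "neutral": 0.95,
    "pornographic": 0.02,
}

top_class = max(prediction, key=prediction.get)
if top_class != "neutral":
    print(f"Flag image for human review ({top_class})")
else:
    print("Image passed moderation")
```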
This model predicts the sentiment of English-language text. It was open-sourced by Facebook Research. Possible predictions are one of "Very Negative", "Negative", "Neutral", "Positive", and "Very Positive". The model was trained on Amazon product review data, so predictions may be unexpected on text from other data distributions. A common use case is to route unstructured language content (e.g. emails) based on the sentiment of the text.
It is based on the research paper "Bag of Tricks for Efficient Text Classification" by Joulin, et al.
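As a sketch of the routing use case, the snippet below sends an email body to a deployed skill and picks a queue from the predicted label. The endpoint URL, request body, and response key are assumptions for illustration; the actual contract is defined by the deployed ML Skill.

```python
import requests

# Hypothetical public endpoint of a deployed Sentiment Analysis ML Skill.
SKILL_URL = "https://<your-ai-center-host>/ml-skills/sentiment-analysis/predict"

def route_email(body: str) -> str:
    response = requests.post(SKILL_URL, json={"text": body}, timeout=30)
    response.raise_for_status()
    sentiment = response.json().get("sentiment", "Neutral")  # assumed response key
    # Escalate strongly negative messages, handle the rest normally.
    if sentiment in ("Negative", "Very Negative"):
        return "escalation-queue"
    return "standard-queue"

print(route_email("The product arrived broken and support never replied."))
```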
This model predicts the answer to a question posed in English, given a paragraph of context. It was open-sourced by ONNX. A common use case is in KYC or when processing financial reports, where a common question can be applied to a standard set of semi-structured documents. It is based on the state-of-the-art BERT (Bidirectional Encoder Representations from Transformers) model. It applies Transformers, a popular attention model, to language modeling to produce an encoding of the input, and then trains on the task of question answering.
It is based on the research paper “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding”.
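As an illustration of the KYC/financial-report use case, a request would pair a fixed question with each document's text as context. The field names below are assumptions; check the package's input description in the product for the exact schema.

```python
# Assumed request payload: a question plus the paragraph to search for the answer.
payload = {
    "question": "What is the reported quarterly revenue?",
    "context": (
        "The company reported quarterly revenue of $4.2 million, "
        "up 12% year over year, driven by subscription growth."
    ),
}
# A deployed skill would return the answer span found in the context,
# e.g. something like {"answer": "$4.2 million", "score": 0.93} (illustrative only).
print(payload)
```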
This model predicts the language of a text input. Possible predictions are one of the following 176 languages:
Languages |
---|
af als am an ar arz as ast av az azb ba bar bcl be bg bh bn bo bpy br bs bxr ca cbk ce ceb ckb co cs cv cy da de diq dsb dty dv el eml en eo es et eu fa fi fr frr fy ga gd gl gn gom gu gv he hi hif hr hsb ht hu hy ia id ie ilo io is it ja jbo jv ka kk km kn ko krc ku kv kw ky la lb lez li lmo lo lrc lt lv mai mg mhr min mk ml mn mr mrj ms mt mwl my myv mzn nah nap nds ne new nl nn no oc or os pa pam pfl pl pms pnb ps pt qu rm ro ru rue sa sah sc scn sco sd sh si sk sl so sq sr su sv sw ta te tg th tk tl tr tt tyv ug uk ur uz vec vep vi vls vo wa war wuu xal xmf yi yo yue zh |
It was open-sourced by Facebook Research. The model was trained on data from Wikipedia, Tatoeba, and SETimes used under the Creative Commons Attribution-Share-Alike License 3.0. A common use case is to route unstructured language content (e.g. emails) to an appropriate responder based on the language of the text.
It is based on the research paper "Bag of Tricks for Efficient Text Classification" by Joulin, et al.
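A minimal routing sketch based on the predicted language code is shown below; the mapping and the assumption that the skill returns a bare code such as 'en' or 'fr' are illustrative only.

```python
# Illustrative mapping from detected language code to responder queue.
RESPONDERS = {"en": "english-team", "fr": "french-team", "ja": "japanese-team"}

def route_by_language(predicted_code: str) -> str:
    # Fall back to a generic queue for any of the other supported codes.
    return RESPONDERS.get(predicted_code, "multilingual-team")

print(route_by_language("fr"))  # french-team
print(route_by_language("sw"))  # multilingual-team
```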
This model delivers translation directly between any pair of 200+ languages. You can find the full list of languages and the corresponding code to use each of them here.
This is the Hugging Face integration of the No Language Left Behind (NLLB) model, open-sourced by Meta AI Research. The model was published under the following license: License.
Input description

- text: the text to be translated.
- from_lang: the language code of the text to be translated.
- to_lang: the language code of the target language.
{"text" : "UN Chief says there is no military solution in Syria", "from_lang" : "eng_Latn", "to_lang" : "fra_Latn" }"
{"text" : "UN Chief says there is no military solution in Syria", "from_lang" : "eng_Latn", "to_lang" : "fra_Latn" }"
Output description
"Le chef de l'ONU dit qu'il n'y a pas de solution militaire en Syrie"
"Le chef de l'ONU dit qu'il n'y a pas de solution militaire en Syrie"
This is a Sequence-to-Sequence machine translation model that translates English to French. It was open-sourced by Facebook AI Research (FAIR).
It is based on the paper "Convolutional Sequence to Sequence Learning" by Gehring, et al.
This is a Sequence-to-Sequence machine translation model that translates English to German. It was open-sourced by Facebook AI Research (FAIR).
It is based on the paper "Facebook FAIR's WMT19 News Translation Submission" by Ng, et al.
This is a Sequence-to-Sequence machine translation model that translates English to Russian. It was open-sourced by Facebook AI Research (FAIR).
It is based on the paper "Facebook FAIR's WMT19 News Translation Submission" by Ng, et al.
This is a Sequence-to-Sequence machine translation model that translates German to English. It was open-sourced by Facebook AI Research (FAIR).
It is based on the paper "Facebook FAIR's WMT19 News Translation Submission" by Ng, et al.
This is a Sequence-to-Sequence machine translation model that translates Russian to English. It was open-sourced by Facebook AI Research (FAIR).
It is based on the paper "Facebook FAIR's WMT19 News Translation Submission" by Ng, et al.
This model returns a list of entities recognized in text. The 18 named entity types it recognizes follow the same output classes as OntoNotes5, which is commonly used for benchmarking this task in academia. The model is based on the paper 'Approaching nested named entity recognition with parallel LSTM-CRFs' by Borchmann et al., 2018.
The 18 classes are the following:
Entity | Description |
---|---|
PERSON | People, including fictional. |
NORP | Nationalities or religious or political groups. |
FAC | Buildings, airports, highways, bridges, etc. |
ORG | Companies, agencies, institutions, etc. |
GPE | Countries, cities, states. |
LOC | Non-GPE locations, mountain ranges, bodies of water. |
PRODUCT | Objects, vehicles, foods, etc. (Not services.) |
EVENT | Named hurricanes, battles, wars, sports events, etc. |
WORK_OF_ART | Titles of books, songs, etc. |
LAW | Named documents made into laws. |
LANGUAGE | Any named language. |
DATE | Absolute or relative dates or periods. |
TIME | Times smaller than a day. |
PERCENT | Percentage, including "%". |
MONEY | Monetary values, including unit. |
QUANTITY | Measurements, as of weight or distance. |
ORDINAL | "first", "second", etc. |
CARDINAL | Numerals that do not fall under another type. |
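As an illustration of consuming these entity types, the snippet below keeps only monetary amounts and dates, e.g. for invoice triage; the list-of-dicts response shape is an assumption, not the documented schema.

```python
# Assumed example output: a list of recognized entities with their types.
entities = [
    {"text": "UiPath", "type": "ORG"},
    {"text": "$1,200", "type": "MONEY"},
    {"text": "March 3, 2023", "type": "DATE"},
]

# Keep only monetary amounts and dates.
relevant = [e for e in entities if e["type"] in {"MONEY", "DATE"}]
print(relevant)
```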
The following are example packages that can be trained by adding data to AI Center storage and starting a pipeline; more models can be found in the product.
This is a generic, re-trainable model for English text classification. Common use cases include email classification, service ticket classification, and custom sentiment analysis, among others. See English Text Classification for more details.
This is a generic, re-trainable model for French text classification. Common use cases include email classification, service ticket classification, and custom sentiment analysis, among others. See French Text Classification for more details.
This is the preview version of a generic, re-trainable model for text classification. It supports the top 100 Wikipedia languages listed here. This ML Package must be trained; if deployed without training first, the deployment fails with an error stating that the model is not trained. It is based on BERT, a self-supervised method for pretraining natural language processing systems. A GPU is recommended, especially during training, as it delivers a ~5-10x improvement in speed.
This preview model allows you to bring your own dataset tagged with entities you want to extract. The training and evaluation datasets need to be in CoNLL format.
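For reference, a CoNLL-style file typically lists one token per line with its tag and separates sentences with blank lines; the snippet below is a minimal sketch of that layout, and the exact columns expected by the package are described in its dataset documentation.

```
Acme B-ORG
Corp I-ORG
signed O
the O
contract O
on O
12 B-DATE
March I-DATE
. O
```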
This is a generic, re-trainable model for tabular data (e.g. CSV, Excel) classification. That is, given a table of feature columns and a target column, it searches for a model that fits the data. See TPOT AutoML Classification for more details.
This is a generic, re-trainable model for tabular data (e.g. CSV, Excel) classification. That is, given a table of feature columns and a target column, it finds a model (based on XGBoost) for that data. See TPOT XGBoost Classification for more details.
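As a sketch of the expected dataset shape, the snippet below writes a small CSV with feature columns plus a target column, ready to upload to an AI Center dataset; the column names and file name are illustrative only.

```python
import pandas as pd

# Illustrative training table: feature columns plus a target column ("churned").
df = pd.DataFrame(
    {
        "age": [34, 52, 29, 41],
        "plan": ["basic", "premium", "basic", "premium"],
        "monthly_usage_gb": [12.5, 48.0, 3.2, 22.7],
        "churned": ["no", "yes", "no", "yes"],
    }
)
df.to_csv("churn_training_data.csv", index=False)
print(df.head())
```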
In summary, the out-of-the-box packages covered on this page are:

- Ready-to-deploy
- Image Moderation
- Sentiment Analysis
- Question Answering
- Language Identification
- Multilingual Translator
- English To French
- English To German
- German To English
- English To Russian
- Russian To English
- NamedEntityRecognition
- Re-trainable
- English Text Classification
- French Text Classification
- Multilingual Text Classification
- Custom Named Entity Recognition
- Tabular Classification AutoML - TPOT
- Tabular Classification - TPOT XGBoost