Pipelines
This section covers frequently encountered errors related to pipelines.
Pipeline failed due to ML package issue
A pipeline run can fail due to an ML Package issue.
A potential cause for this error is choosing the wrong minor version when running the pipeline.
Choosing the correct minor version when running pipelines
When you deploy a new package, the only minor version available is 0, because no pipelines have been executed on this package yet.
Training pipelines
If you are deploying a training pipeline, we strongly recommend you always use minor version 0.
Full pipelines
If you are deploying a full pipeline, we strongly recommend you always use minor version 0.
Evaluation pipelines
Evaluation pipelines are used to evaluate a trained ML model. You can execute an evaluation pipeline on any version of the ML Package to get the corresponding evaluation scores. This is similar to grading an ML model against the evaluation dataset.
- Pre-trained packages (out of the box packages): since these packages are pre-trained, run the evaluation pipeline on minor version 0. To evaluate the model after training on specific data, choose the minor version you want to evaluate (the trained version).
- Document Understanding:
- Since these are generic, retrainable models, they need to be trained first. Only run an evaluation pipeline once the models are trained and a new minor version of the package is available.
- Select the most recent minor version, or any other minor version (except 0), for which the evaluation scores can be obtained.
Pipeline killed automatically
Pipelines are automatically killed after seven days to avoid being stuck for long periods and consuming licenses. To reduce run time, use the following recommendations:
- Enable GPU.
- Apply dataset optimization techniques.
- Reduce the number of epochs.
Pipeline running for too long
A pipeline that runs for too long can be in one of the following statuses:
- Waiting for License
- Running
- Failed
- Killed

Check the sections below for more details on each status. To reduce the overall run time:
- Enable GPU.
- Optimize the dataset. For detailed information on Document Understanding datasets, check the Training High Performing Models page from the Document Understanding guide.
Waiting for licenses
- Open Automation Cloud™.
- Go to the Admin > Licenses page.
- Check if the corresponding AI Units are available.
Running
- Select the stuck pipeline.
- Check the Logs section.
  - If the logs are recent and streaming, the pipeline is in progress.
  - If the last log was generated a long time ago, download the logs using the Download button and share them with our Support department. If the Download button is not visible or is disabled, copy the logs from the Logs section and share them with our Support department.
Failed
If the pipeline run is in the Failed state, check the following possible reasons.
Check that document type data is in dataset folder and follows folder structure
The following error occurs:
Error: Document type data not valid, check that document type data is in dataset
folder and follows folder structure.
The folder provided for training must follow the expected dataset format.
- Make sure that the provided dataset is correct.
- Make sure that the provided dataset is exported from Document Manager. For more information on datasets related to Document Understanding, check the Export Documents page from the Document Understanding guide.
- For scheduled pipelines used in an automatic retraining loop, select the folder containing the exports from the Data Labeling sessions and the latest.txt file.
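If you stage the dataset locally before uploading it, a quick sanity check can confirm the selected folder actually contains export subfolders and latest.txt before scheduling the retraining pipeline. A minimal sketch (folder names here are placeholders, not AI Center APIs):

```python
from pathlib import Path

def check_retraining_folder(folder: str) -> list[str]:
    """Flag common problems with a folder intended for a scheduled retraining pipeline."""
    problems = []
    root = Path(folder)
    if not root.is_dir():
        return [f"{folder} is not a directory"]
    # The automatic retraining loop expects latest.txt alongside the exports.
    if not (root / "latest.txt").is_file():
        problems.append("latest.txt is missing from the dataset folder")
    # There should be at least one export subfolder from a Data Labeling session.
    if not any(p.is_dir() for p in root.iterdir()):
        problems.append("no export subfolders found in the dataset folder")
    return problems
```

An empty result means the folder at least has the expected top-level pieces; anything returned names what is missing.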
Images or directory does not exist or is empty for invoices dataset
The pipeline run fails because the images directory does not exist or is empty for the invoices dataset. This means the path provided for the training or evaluation dataset is empty.
To fix this, update the dataset path for evaluation or training, according to the pipeline type.
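If you keep a local copy of the dataset before uploading it, a check like the following can catch an empty or missing directory early. This is a sketch; the path is a placeholder:

```python
from pathlib import Path

def dataset_is_usable(path: str) -> bool:
    """True if the dataset directory exists and contains at least one file."""
    root = Path(path)
    return root.is_dir() and any(p.is_file() for p in root.rglob("*"))

# Example (placeholder path):
# dataset_is_usable("invoices/train")  # should be True before you upload
```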
Unschedulable nodes are available
If you receive the Unschedulable 0/n nodes are available error, contact our Support department with your Automation Cloud™ tenant information.
No space left on device
If you receive the No space left on device error, contact our Support department with your Automation Cloud™ tenant information.
Killed
The Killed status is usually displayed when the pipeline was killed by the user. For more information on managing pipelines, check the Managing pipelines page.
If the pipeline status is Killed without any user intervention, the most common reason is that pipelines are automatically killed after seven days. For more information on pipeline statuses, check the About pipelines page.
Pipelines failed due to dataset issues
A pipeline run can fail due to issues with the dataset structure, input parameters, paths, folders, or the evaluation dataset.
Wrong dataset format
The following error occurs:
#Error: Training and / or test set is empty, verify that training / test split is correctly set in split.csv
This error is usually caused by a wrong dataset format or an incorrect ratio between the training and validation sets in split.csv. Check the Training Dataset page for general guidelines on how to create a training dataset.
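One way to verify the split locally is to count the rows per subset in split.csv. The sketch below assumes the file has a header row and a column naming the subset per document; the column name "subset" is an assumption, so match it to what your export actually contains:

```python
import csv
from collections import Counter

def subset_counts(split_csv_path: str) -> Counter:
    """Count how many documents split.csv assigns to each subset."""
    with open(split_csv_path, newline="") as f:
        reader = csv.DictReader(f)
        # "subset" is an assumed column name; adjust to your file's header.
        return Counter((row.get("subset") or "").strip().upper() for row in reader)

# Example usage (placeholder path):
# counts = subset_counts("split.csv")
# if not counts.get("TRAIN") or not counts.get("VALIDATE"):
#     print("training and/or validation set is empty:", dict(counts))
```

If either subset count is zero, the pipeline has nothing to train or validate on, which matches the error above.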
Evaluation set not provided
The following error occurs:
#Error: Training failed for pipeline type: FULL_TRAINING, error: Full / evaluation pipelines require an evaluation dataset. Please re-run the pipeline providing an evaluation dataset
This error typically occurs when an evaluation dataset was not provided. Check the Training Dataset page for general guidelines on how to create a training dataset.
Pipeline failed due to insufficient licenses error
When running a pipeline, it can occasionally fail because of an Insufficient Licenses error in the Pipeline Data page.
This error does not imply the absence of an actual license. Rather, it signals that all available AI units, whether at the tenant or organization level, have been consumed.
Use this procedure for AI units at the organization level.
- Navigate to Admin > Licenses > Consumables > AI Units to access the AI Units page. The blue graphic on this page displays the consumption of AI units for all tenants in the organization.
- Hover your mouse over the blue bar to check the consumption, or select View Usage for a monthly usage breakdown.
Note: Do not select Refresh in this section, as it refreshes the API Key.
- If the Consumed count indicates that all available AI units have been consumed, take a screenshot of the AI unit usage and save the image.
- File a licensing query to get in touch with the UiPath® Licensing team for purchasing additional AI units. Make sure to include a copy of the saved image in your query.
Use this procedure for AI units at the tenant level.
- Navigate to the Admin page and select the desired tenant on the left side of the page.
- Navigate to Licenses > Consumables > AI Units to access the AI Units page. The blue graphic on this page displays the consumption of AI units for the chosen tenant.
- If the AI units are fully consumed at the tenant level, but there is still an available balance at the organization level, select Edit Allocation from the upper-right section of the screen.
- Allocate more AI units at the tenant level by providing a new total AI unit amount.
File not found error
[Error]
FileNotFoundError: [Errno 2] No such file or directory:
'/workspace/model/microservice/models/multi_task_base/network.p
This error can occur when the dataset contains too few documents. The training pipeline needs enough documents in the dataset so that it can be split between the training and validation subsets.
For example, if the dataset contains only one document, that document is allocated to training, leaving no documents for validation.
- Review the pipeline logs or the Split.csv file in the dataset to check how the documents are split between the TRAIN and VALIDATE subsets. If the pipeline logs or the Split.csv file only list TRAIN documents, additional documents are needed in the dataset. After the documents are labeled, perform a new export from Document Manager so that a proper split between TRAIN and VALIDATE can take place.
- Check the number of documents in the dataset. You can review the pipeline log to find the number of documents available in each subset. You can also review the Split.csv file after the dataset has been exported from Document Manager to confirm how many documents are allocated to TRAIN and how many to VALIDATE.
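AI Center's exact split ratio is not documented here, but the arithmetic behind the problem can be illustrated with an assumed 80/20 split: with a single document, everything lands in TRAIN and VALIDATE stays empty.

```python
def split_sizes(n_docs: int, train_frac: float = 0.8) -> tuple[int, int]:
    """Illustrative only: (train, validate) counts under an assumed fractional split."""
    n_train = max(1, round(n_docs * train_frac))
    return n_train, n_docs - n_train

print(split_sizes(1))   # (1, 0) -- no documents left for VALIDATE
print(split_sizes(10))  # (8, 2)
```

This is why adding and labeling more documents, then re-exporting from Document Manager, resolves the error.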