Marketplace User Guide
Last updated Sep 5, 2024

Standards for Quality Content

All listings on the Marketplace should meet the following general guidelines:

Modular and Reusable Components
  • Solution Accelerators should be designed with modularity in mind, offering thought-out architecture for process templates, machine learning models, reusable libraries, and more.

  • These modular components should be easy to integrate and combine, whether to build customized processes for the specific use case originally in mind or to transfer reusable components across processes.

  • This reduces duplication of effort and promotes consistency with development guidelines across different processes.

Configurable Parameters and Settings
  • Solution Accelerators should provide configurable parameters and settings that can be adjusted to match the requirements of the end user and their business needs.

  • Configurations can include but are not limited to variables, thresholds, timeouts, endpoints, machine learning models, human-in-the-loop assignees, and other adjustable parameters that allow for customization and adaptability.

  • The process Configuration file, Orchestrator Assets, and process arguments are some of the means you can use to achieve configurability.
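
As a rough illustration of these options (C#; the file name, keys, and the idea of resolving asset values earlier in the workflow are assumptions, not a prescribed structure), a process might merge defaults from a configuration file with environment-specific values stored as Orchestrator Assets:

  // Minimal sketch: merging file-based settings with Orchestrator Asset overrides.
  // The file name, keys, and override source below are hypothetical examples.
  using System.Collections.Generic;
  using System.IO;
  using System.Text.Json;

  public static class ConfigLoader
  {
      // Load static settings (thresholds, endpoints, timeouts) from a JSON file.
      public static Dictionary<string, string> LoadFileSettings(string path)
      {
          var json = File.ReadAllText(path);
          return JsonSerializer.Deserialize<Dictionary<string, string>>(json)
                 ?? new Dictionary<string, string>();
      }

      // Overlay environment-specific values (for example, values kept as
      // Orchestrator Assets and resolved earlier in the workflow).
      public static Dictionary<string, string> Merge(
          Dictionary<string, string> fileSettings,
          Dictionary<string, string> assetOverrides)
      {
          var config = new Dictionary<string, string>(fileSettings);
          foreach (var kv in assetOverrides)
              config[kv.Key] = kv.Value;   // asset values win over file defaults
          return config;
      }
  }

For example, a hypothetical "PollingIntervalSeconds" key could default to 30 in the file and be overridden per environment through an asset, without any change to the workflows themselves.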

Scalable and Adaptable Architecture
  • Architecture design should be scalable, adaptable, capable of handling diverse automation requirements, and allow for easy expansion with more robots or more processes as business needs evolve or new use cases emerge.

  • Architecture should allow integration with various systems, applications, and technology found within the industry of the use case. Users should be able to easily swap various Solution Accelerator components or integrate new ones to suit their organization’s needs.

Integration Capabilities
  • Pre-built Integration Service connectors, or connections built using Connector Builder, should be used whenever possible. They enable API automation through an out-of-the-box library of connectors while also providing a standard way to set up and manage connections with standardized authentication.

  • Seamless integration with external systems and applications streamlines data exchange and enables Solution Accelerators to work with existing IT infrastructure, minimizing the need for extensive modifications or custom development.

  • If API automation is not possible, UI automation should be contained within a GUI Integration Layer / Application Layer as described further below. This should utilize Object Repository within an Application Library.

Extensibility and Customization
  • Solution Accelerators should be designed to be extensible and customizable, allowing Solution Accelerator users to tailor the automation workflow to their organization’s specific needs.

  • Solution Accelerators should be developed with the mindset that organizations should use the specific Solution Accelerator as a foundation for the use case while having the ability to adapt them to their unique business needs.

Note:

Through these areas, a Solution Accelerator should provide a structured but flexible framework for building an efficient and scalable automation solution. In summary, your Solution Accelerator should promote modularity, adaptability, best practice compliance, and easy integration with systems and applications to facilitate rapid development and deployment of automation.

Attention:

For a listing to be published on UiPath Marketplace, you must include in the listing's Description all details about the UiPath products that are used in the automation or that are compatible with your automation, and the role that they play.

Partners may not include the names of third parties or third parties' apps or other third party products in the text of their listing or product description on UiPath Marketplace without express authorization from the third party.

Standards for Solution Accelerators

1. Layered Architecture

A Solution Accelerator’s design should implement a separation between the business logic layer (implementation layer) and the application layer (GUI interaction layer, if necessary). This can be achieved directly within the various process workflows or, if reusability is required, by using libraries.
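
As a rough C# sketch of that separation (the class and method names are illustrative and not part of any UiPath API), the business logic layer can depend on an application-layer contract, so a GUI-based or API-based implementation can be swapped in without touching the process logic:

  // Illustrative only: the interface isolates application interaction
  // from business logic, so implementations can be swapped.
  public interface IInvoiceSystem                      // application layer contract
  {
      decimal GetInvoiceTotal(string invoiceId);
      void MarkAsPaid(string invoiceId);
  }

  public class InvoiceApiClient : IInvoiceSystem       // API-based implementation
  {
      public decimal GetInvoiceTotal(string invoiceId) { /* call the API */ return 0m; }
      public void MarkAsPaid(string invoiceId) { /* call the API */ }
  }

  public class InvoiceApprover                         // business logic layer
  {
      private readonly IInvoiceSystem _system;
      public InvoiceApprover(IInvoiceSystem system) => _system = system;

      // The business rule lives here and never touches UI or API details.
      public bool Approve(string invoiceId, decimal threshold)
      {
          var total = _system.GetInvoiceTotal(invoiceId);
          if (total > threshold) return false;         // route to human review instead
          _system.MarkAsPaid(invoiceId);
          return true;
      }
  }

The same InvoiceApprover logic then works unchanged whether the application layer talks to a GUI through Object Repository or to an API.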

2. Business Logic Layer (Implementation Layer)

  • This layer is responsible for implementing the core business logic and automation workflows of the Solution Accelerator.

  • It focuses on the specific tasks and processes that the Solution Accelerator aims to automate within a given domain or use case.

  • The business logic layer defines the rules, conditions, and actions required to achieve the desired automation outcomes.

  • It may involve data manipulation, decision-making, integration with external systems, and other processing tasks.

  • This layer is designed to be modular and customizable, allowing organizations to adapt the Solution Accelerator to their specific business requirements.

  • It typically utilizes UiPath automation capabilities, such as activities, workflows, and variables, to orchestrate the automation flow.

3. Application Layer

  • The application layer serves as the interface between the automation workflow and the GUI (graphical user interface) or API (application programming interface) of the various applications and/or systems participating in the automated process.

  • This layer could handle the interactions with user interface elements, such as buttons, fields, menus, and dialogs, within the target applications or systems. It could also handle interaction with the target applications or systems through their application programming interfaces, for example performing data entry through API calls instead of through the application’s user interface.

  • This layer can include activities and components that enable UI automation, such as screen scraping, data input/output, and navigation. It can also include the application logic that implements the same operations programmatically.

  • The application layer is designed to be adaptable to the specific target application – any updates to APIs or to user interfaces should be made in one place and then propagate across all implementations of that component.

  • The GUI Interaction Layer should provide flexibility to handle variations in UI elements, screen layouts, and navigation paths.

  • Within the API Interaction Layer, the output of any workflow should be consistent with the workflow's objective. For example, if your workflow is called “Retrieve All Users”, one should expect a collection of User objects to be returned and not a raw JSON payload that would then need to be parsed to extract the required user data (see the sketch after this list).

  • Keep API call duration to a minimum by making use of pagination and by applying relevant filters whenever these are supported by the target API.
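
A minimal C# sketch of that principle (the endpoint, field names, and paging scheme are hypothetical): the API-layer component parses the raw responses and pagination internally and returns typed User objects, so callers never handle JSON.

  // Hypothetical example: the API layer hides pagination and JSON parsing
  // and returns ready-to-use objects to the business logic layer.
  using System.Collections.Generic;
  using System.Net.Http;
  using System.Text.Json;
  using System.Threading.Tasks;

  public record User(string Id, string Name, string Email);
  public record UserPage(List<User> Items, string? NextPageUrl);

  public class UserApiLayer
  {
      private readonly HttpClient _http = new HttpClient();

      public async Task<List<User>> RetrieveAllUsersAsync(string baseUrl)
      {
          var users = new List<User>();
          string? next = $"{baseUrl}/users?page=1";     // pagination keeps each call short
          while (next != null)
          {
              var json = await _http.GetStringAsync(next);
              var page = JsonSerializer.Deserialize<UserPage>(json);
              if (page == null) break;
              users.AddRange(page.Items);
              next = page.NextPageUrl;                  // null when there are no more pages
          }
          return users;                                 // callers receive User objects, not JSON
      }
  }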

Note:

A separation between a business layer and an application layer ensures a clear distinction between the automation and process logic versus the application specific details. This enables a modular and scalable architecture where changes in the GUI or API can be managed separately from the core business logic. Easier maintenance, reusability within the Solution Accelerator or transferring to other processes, and customization of the Solution Accelerator is enabled through this separation. The end user of the Solution Accelerator can easily replace the application layer to accommodate any changes in target applications without affecting the underlying process logic. Similarly, the business logic can be modified or extended independently from the application to cater to evolving business needs.

Standard Architecture Types

1. Transactional / Queue-Based Processes

This is the standard Dispatcher-Performer Model. The Robotic Enterprise Framework should be utilized for the implementation of a straightforward transaction-based process.

The Robotic Enterprise Framework is a project template based on State Machines. It is created to fit all the best practices regarding logging, exception handling, application initialization, and others, being ready to tackle a complex business scenario. The template contains several pre-made State containers for initializing applications, retrieving input data, processing it and ending the transaction. All these states are connected through multiple transitions which cover almost every need in a standard automation scenario. There are also multiple invoked workflows, each handling particular aspects of the project.

The Dispatcher and Performer model is a pre-designed solution to separate the two main stages of a process by placing a queue in between. This way, the production of transaction items is entirely independent from their consumption. This asynchronism breaks the time dependency between the Dispatcher and the Performer.

In this standard approach, a Dispatcher is an automation that loads data into a UiPath queue. It extracts data from one or multiple sources and uses that data to create queue items for Performer robots to process. Information is pushed to one or more queues, allowing the Dispatcher to use a common format for all data stored in queue items. This data is then processed later by another automation, the Performer, which handles queue items one at a time and applies error handling and retry mechanisms to each processed item. A major advantage of the Performer is its scalability: multiple Performers can work from a single queue.
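
As a rough sketch of the dispatcher side (C#; the queue name, folder ID, and token handling are placeholders, and inside Studio the Add Queue Item activity would normally be used instead), source data is normalized into a common SpecificContent shape and pushed to an Orchestrator queue through the Orchestrator REST API:

  // Sketch of a dispatcher pushing normalized transaction items to a queue
  // through the Orchestrator REST API. Authentication and folder resolution
  // are simplified; adjust the endpoint to your Orchestrator setup.
  using System.Net.Http;
  using System.Net.Http.Headers;
  using System.Text;
  using System.Text.Json;
  using System.Threading.Tasks;

  public class Dispatcher
  {
      private readonly HttpClient _http = new HttpClient();

      public async Task AddQueueItemAsync(
          string orchestratorUrl, string bearerToken, string folderId,
          string queueName, object specificContent, string reference)
      {
          var body = new
          {
              itemData = new
              {
                  Name = queueName,
                  Priority = "Normal",
                  Reference = reference,              // lets Performers and reports find the item
                  SpecificContent = specificContent   // common format shared by all data sources
              }
          };

          var request = new HttpRequestMessage(
              HttpMethod.Post,
              $"{orchestratorUrl}/odata/Queues/UiPathODataSvc.AddQueueItem")
          {
              Content = new StringContent(JsonSerializer.Serialize(body),
                                          Encoding.UTF8, "application/json")
          };
          request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", bearerToken);
          request.Headers.Add("X-UIPATH-OrganizationUnitId", folderId);

          var response = await _http.SendAsync(request);
          response.EnsureSuccessStatusCode();
      }
  }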

2. Document Understanding Processes

Most processes that work with documents also need to "understand" their content. Hence, a dedicated framework specialized in document understanding has been put in place – the Document Understanding (DU) Process Framework.

This framework is essentially a UiPath Studio Project template based on a typical document processing flowchart. The process provides logging, error handling, retry mechanisms, and all the methods that should be used in a DU workflow. The workflow has an architecture decoupled from other connected automations:

  • It does not matter where the files to be processed come from or what triggers the execution – this is the responsibility of an upstream process.

  • It does not matter where the extracted information should be used – this is the downstream process's responsibility.

  • The framework’s architecture is common for both attended and unattended robots:

    • Document understanding logic (digitization, classification, data extraction)

    • Human-in-the-loop logic (validation) using Action Center for unattended robots, or a local Validation Station for attended ones

As a result of these mechanisms and architecture, the great majority of automations using Document Understanding typically combine a Dispatcher-Performer model with a Document Understanding framework in between:

  • The Dispatcher gathers documents to be processed from the upstream application or system.

  • The Document Understanding Process extracts the necessary information from each document, processing one document at a time and scaling out thanks to the dispatcher model.

  • Finally, the Performer utilizes the extracted data from the document to complete the process.

3. Transactional Processes with Action Center

This architecture consists of Dispatcher – Performer – Finalizer processes with a human-in-the-loop (long-running workflow) process somewhere in the middle. The standard template for long-running workflows is the Orchestration Process Template. Long-running workflows have best practices that need to be followed to support service orchestration, human intervention, and long-running transactions in unattended environments.

This architecture is utilized when human intervention is needed to approve or monitor the automation. As a result, any flows after Action Center tasks must account for both acceptance and rejection.
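
As a minimal sketch of that branching requirement (the outcome values and routing targets are illustrative, not tied to any specific Action Center task definition):

  // Illustrative only: after an Action Center task completes, the workflow
  // must handle both approval and rejection. The outcome strings and routing
  // targets below are placeholders for whatever the task actually exposes.
  public static class ActionRouting
  {
      public static string RouteAfterAction(string actionOutcome)
      {
          switch (actionOutcome)
          {
              case "Approve":
                  return "Finalizer";          // continue to the downstream/finalizer process
              case "Reject":
                  return "NotifyRequester";    // record the rejection and inform stakeholders
              default:
                  // Unexpected outcomes should surface explicitly,
                  // not silently fall through.
                  throw new System.ArgumentException(
                      $"Unhandled action outcome: {actionOutcome}");
          }
      }
  }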

4. Further Architectures

Other architectural decisions can exist and be appropriate based upon the automation needs:

  • A finalizer can always be considered for any cleanup needed within a process.

  • A reporter that runs infrequently or ad hoc can be considered to send automation statistics to the necessary stakeholders.

  • Extract, Transform, and Load (ETL) processes could combine data from multiple sources into a large central repository.

  • Other automation frameworks such as the UiPath Attended Framework can be considered if applicable to the process.

Process Best Practices

It’s necessary to follow these best practices when developing any UiPath process for a Solution Accelerator:

  • Follow the out-of-the-box Workflow Analyzer rules. When analyzed with this tool, your project should raise few, if any, warnings. Naming conventions, design best practices, maintainability and readability rules, and usage rules need to be followed. Some key examples of those rules:

    • There should be no hard coded delays.

    • No activities should have default names.

    • No two activities should have the same name within a workflow.

    • Arguments need to follow the in_, out_, and io_ naming convention.

    • Deeply nested activities should be avoided; deep nesting is a strong argument for dividing the workflow into smaller components.

  • Before starting the development, thoroughly analyze the process requirements and design a solution that addresses the specific needs. Break down the process into smaller tasks and identify dependencies to ensure a clear and efficient workflow.

  • Identify reusable components or workflows that can be easily maintained and reused in other projects and separate them from an early stage. This modular approach improves reusability, simplifies debugging, and promotes scalability.

  • Implement robust error handling mechanisms to gracefully handle exceptions and failures. Use try-catch blocks and provide informative error messages to aid in troubleshooting and enhance the stability of the process.

    • Errors should be specific and display a relevant error message. For example, if a string that should be populated from an application’s return value is empty, a null reference exception should not be thrown – the situation should be classified as a business rule exception caused by the application (see the sketch after this list).

  • Incorporate configurable settings in your process, such as input parameters, to allow flexibility and adaptability. This enables users to easily customize the process based on their specific requirements without modifying the core workflow.

  • Validate inputs to ensure they meet the required criteria and handle exceptions for invalid or unexpected data. Implement proper data handling techniques, such as data cleansing and transformation, to ensure accurate and reliable processing.

  • Incorporate logging mechanisms to capture relevant information during the execution of the process. This helps troubleshooting and provides valuable insights for process optimization. Use debugging tools to identify and resolve issues efficiently.

    • Logging mechanisms for reporting and UiPath Insights should also be considered.

  • Thoroughly test the process to ensure its functionality and reliability. Use test cases and data to validate the expected outcomes and handle edge cases. This helps identify and fix any errors or inconsistencies before deployment.

  • Regularly review and enhance your processes based on feedback, evolving requirements, and technological advancements. Continuously seek opportunities to optimize the process, improve efficiency, and incorporate new features or functionalities.
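
For the error-classification example above, a minimal C# sketch (the method and message are hypothetical; BusinessRuleException is assumed to be available from the UiPath.Core namespace in a UiPath project):

  // Sketch: classify a missing application return value as a business rule
  // exception rather than letting a NullReferenceException escape later.
  // Assumes a UiPath project where UiPath.Core.BusinessRuleException is available.
  public static class ApplicationResponses
  {
      public static string GetCustomerId(string applicationReturnValue)
      {
          if (string.IsNullOrWhiteSpace(applicationReturnValue))
          {
              // Specific, actionable message instead of a null reference error.
              throw new UiPath.Core.BusinessRuleException(
                  "Customer ID was not returned by the billing application; " +
                  "verify that the record exists before retrying.");
          }
          return applicationReturnValue.Trim();
      }
  }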

Library Best Practices

It’s necessary to follow these best practices when developing any UiPath Library for a Solution Accelerator:

  • Follow the out-of-the-box Workflow Analyzer rules. Your project should be run against the Workflow Analyzer and raise few, if any, warnings. Naming conventions, design best practices, maintainability and readability rules, and usage rules need to be followed. Some key examples of those rules:

    • There should be no hard coded delays.

    • No activities should have default names.

    • No two activities should have the same name within a workflow.

    • Arguments should NOT follow the in_, out_, and io_ naming convention as those arguments will appear as properties when the Library is being created. The default workflow analyzer rule for invalid argument names can be ignored while creating a Library.

    • Deeply nested activities should be avoided; deep nesting is a strong argument for dividing the workflow into smaller components.

  • Any UI interaction should occur only through Object Repository.

  • Break down your library into smaller, modular components that focus on specific tasks or functionalities. This promotes reusability and allows for easier maintenance and updates.

  • Supply comprehensive documentation for your reusable library, including usage instructions, input/output descriptions, and any dependencies or prerequisites. Clear documentation helps users understand how to effectively use the library.

  • Implement robust error handling mechanisms within the library to handle exceptions and failures gracefully. Use try-catch blocks and supply informative error messages to aid in troubleshooting.

    • Errors should be caught within the processes of the Solution Accelerator and not handled within the Library.

    • Business errors should throw a business rule exception; application errors should throw a system exception.

  • Validate inputs and handle edge cases to ensure the library functions correctly and prevents unexpected errors or undesired outcomes. Proper input validation enhances the reliability and stability of the library.

    • This applies to any API automation output as well.

  • Thoroughly test the library to ensure its functionality and reliability. Use test cases and data to validate the expected outcomes and handle edge cases. This helps identify and fix any errors or inconsistencies before deployment.

  • Regularly review and update your library to incorporate feedback, address bugs, and enhance functionality based on evolving requirements. Continuous improvement ensures that the library remains relevant and effective over time.

  • When updating libraries, design your next update with backward compatibility in mind.

Example

Non-breaking change: Extending a library with a new workflow.

Potentially breaking change: Adjusting an existing workflow.

If unsure, test backward compatibility with an intermediate release and, if needed, move the update to a new workflow or library that can be consumed separately by only those processes requiring the update. In time, the old workflow can be marked as obsolete.
