UiPath Activities

About the UIAutomation Activities Pack

The UIAutomation activities package contains all the basic activities used for creating automation projects. These activities enable the robots to:

  • Simulate human interaction, such as performing mouse and keyboard commands or typing and extracting text, for basic UI automation.
  • Use technologies such as OCR or Image recognition to perform Image and Text Automation.
  • Create triggers based on UI behavior, thus enabling the Robot to execute certain actions when specific events occur on a machine.
  • Perform browser interaction and window manipulation.

As of v2018.3, the UiPath.Core.Activities package was split into the UIAutomation and System packs. Find out more about the Core Activities Split.

Certain scenarios require you to manage strict UIAutomation dependency versions. For example, a language for the Tesseract OCR engine must be manually installed per UiPath.Vision version, so processes that use that language must reference the corresponding UIAutomation activities package version. You can find out more on this page.
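UiPath projects record their package dependencies in the project.json file, which is one place a strict version can be pinned. A minimal sketch (the project name is illustrative; the bracketed value is the exact-version syntax used for strict dependencies):

```json
{
  "name": "MyProcess",
  "dependencies": {
    "UiPath.UIAutomation.Activities": "[19.10.1]"
  }
}
```

In practice, the same pinning is usually done through the Manage Packages window in Studio rather than by editing project.json by hand.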

The UIAutomation activities package contains the following internally-developed dependencies:

  • UiPath.Vision - enables the functionality of OCR and Computer Vision engines.
  • UiPath - an essential library for UIAutomation activities.

Release Notes

The table below lists the dependencies shipped with each version of the UiPath.UIAutomation.Activities package:

| UiPath.UIAutomation.Activities | UiPath.Vision | UiPath |
| --- | --- | --- |
| 18.3.6877.28298 | 1.1.0 | 9.0.6877.24355 |
| 18.3.6897.22543 | 1.1.0 | 9.0.6893.27943 |
| 18.3.6962.28967 | 1.1.0 | 9.0.6962.24417 |
| 18.4.2 | 1.2.0 | 10.0.6913.22031 |
| 18.4.3 | 1.2.0 | 10.0.6929.25268 |
| 18.4.4 | 1.2.0 | 10.0.6992.20526 |
| 18.4.5 | 1.2.0 | 10.0.7020.22745 |
| 18.4.6 | 1.2.0 | 10.0.7194.26789 |
| 19.1.0 | 1.2.0 | 10.0.6957.21531 |
| 19.2.0 | 1.3.0 | 10.0.6957.21531 |
| 19.3.0 | 1.4.0 | 10.0.7004.31775 |
| 19.4.1 | 1.5.0 | 19.4.7054.14370 |
| 19.4.2 | 1.5.0 | 19.4.7068.19937 |
| 19.5.0 | 1.6.0 | 19.5.7079.28746 |
| 19.6.0 | 1.6.0 | 19.6.7108.25473 |
| 19.7.0 | 1.6.1 | 19.7.7128.27029 |
| 19.8.0-ce | 1.7.0 | 19.8.7173.31251-ce |
| 19.10.1 | 1.8.1 | 19.10.7243.31457 |

Computer Vision

Important!

Because the Computer Vision activities moved to the UIAutomation pack in v19.10, installing the UIAutomation v19.10.1 pack in a project that already contains a version of the Computer Vision pack throws an error.

The AI Computer Vision pack contains refactored fundamental UIAutomation activities such as Click, Type Into, or Get Text. The main difference between the CV activities and their classic counterparts is their usage of the Computer Vision neural network developed in-house by our Machine Learning department. The neural network is able to identify UI elements such as buttons, text input fields, or check boxes without the use of selectors.

Created mainly for automation in virtual desktop environments, such as Citrix machines, these activities bypass the issue of nonexistent or unreliable selectors: they send an image of the window you are automating to the neural network, which analyzes it and identifies and labels all UI elements according to what they are. Smart anchors are used to pinpoint the exact location of the UI element you are interacting with, ensuring the action you intend to perform is successful.
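The anchor idea can be illustrated with a toy sketch: among several detected elements that share the same label, the one closest to a distinctive anchor element is chosen. This is a conceptual illustration only, not UiPath's actual algorithm; the element structure and labels are invented for the example.

```python
from math import hypot

def find_target(elements, target_label, anchor_label):
    """Toy anchor-based disambiguation: among elements sharing
    target_label, return the one nearest the anchor element.
    (Conceptual sketch only -- not UiPath's implementation.)"""
    anchors = [e for e in elements if e["label"] == anchor_label]
    targets = [e for e in elements if e["label"] == target_label]
    if not anchors or not targets:
        return None
    ax, ay = anchors[0]["pos"]
    # Pick the candidate with the smallest Euclidean distance to the anchor.
    return min(targets, key=lambda e: hypot(e["pos"][0] - ax, e["pos"][1] - ay))

# Two text fields share the same label; the "Cash In" anchor
# disambiguates which one is meant.
ui = [
    {"label": "text_field", "pos": (100, 50)},
    {"label": "text_field", "pos": (100, 120)},
    {"label": "Cash In",    "pos": (20, 115)},
]
print(find_target(ui, "text_field", "Cash In")["pos"])  # → (100, 120)
```

Distance to a single anchor is the simplest possible heuristic; a production system would combine several anchors and other visual cues.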

To use the Computer Vision activities in the current project, you need an ApiKey, which can be obtained from the Cloud Platform as detailed here. The ApiKey must then be entered in the ApiKey field of the Computer Vision Project Settings property category. See the Project Settings page for more information.

Note:

The settings regarding the server connection are project-wide, and are reflected in all subsequent CV Screen Scope activities.

Using the Computer Vision Activities

All of the activities in this pack function only inside a CV Screen Scope activity, which establishes the actual connection to the neural network server, thus enabling you to analyze the UI of the apps you want to automate. Any workflow using the Computer Vision activities must therefore begin with dragging a CV Screen Scope activity to the Designer panel. Once this is done, the Indicate on screen button in the body of the scope activity can be used to select the area of the screen that you want to work in.

Note:

Double-clicking the informative screenshot displays the image that has been captured and highlights in purple all of the UI elements that have been identified by the neural network and OCR engine.

Note:

Area selection can also be used to indicate only a portion of the UI of the application you want to automate. This is especially useful in situations where there are multiple text fields that have the same label and cannot be properly identified.

Once a CV Screen Scope activity is properly configured, you can start using all of the other activities in the pack to build your automation.

Indicating On Screen

The activities that perform actions on UI elements, such as CV Click, CV Hover, and CV Type Into, can be configured at design time by using the Indicate On Screen button present in the body of the activities.

Clicking the Indicate On Screen (hotkey: I) button opens the helper wizard.

The CV Click, CV Hover, and CV Type Into activities also feature a Relative To button in the helper wizard, which enables you to configure the target as being relative to an element.

The Indicate field specifies what you are indicating at the moment. When the helper is opened for the first time, the Target needs to be indicated. For each possible target, the wizard automatically selects an anchor, if one is available.

After successfully indicating the Target, the wizard closes and the activity is configured with the target you selected.

If no unique anchor is automatically identified, the Indicate field informs you of this fact, enabling you to indicate additional Anchors, which make the target easier to find.

The Show Elements (hotkey: s) button in the wizard highlights all UI elements that have been identified by the Computer Vision analysis, making it easier for you to choose what to interact with.

The Refresh Scope (hotkey: F5) button can be used at design time, in case something changes in the target app, enabling you to send a new picture to the CV server to be analyzed again.

The Refresh After Delay (hotkey: F2) button performs a refresh of the target app after waiting 3 seconds.

Important!

Whenever you submit errors in the behavior of the neural network, you help it learn and indirectly help us give you a better product. Submit as many issues as you can, as this gives us the opportunity to acknowledge and fix them.
