About the UIAutomation Activities Pack

The UIAutomation activities package contains all the basic activities used for creating automation projects.

❗️

Important!

Automation processes that use UIAutomation activities cannot run under a locked screen.

📘

Note:

  • Starting with v2020.10, the UIAutomationNext package has been deprecated and the existing UIAutomation package has been expanded to include all the modern features previously available in UIAutomationNext. You are also able to install the unified UIAutomation activities package even on Studio versions 2020.4.1 and lower. This displays all the classic and modern activities in the Activities pane. Read more about the Modern Design Experience.

  • Starting with UiPath.UIAutomation.Activities v19.11, all Abbyy related activities have been moved to a separate package. Install the UiPath.Abbyy.Activities package if you want to use its activities for OCR, Cloud OCR, classification, and data extraction.

These activities enable the robots to:

  • Simulate human interaction, such as performing mouse and keyboard commands or typing and extracting text, for basic UI automation.
  • Use technologies such as OCR or image recognition to perform image and text automation.
  • Create triggers based on UI behavior, enabling the Robot to execute certain actions when specific events occur on the machine.
  • Perform browser interactions and window manipulation.

📘

Note:

The UiPath.Vision dependency package includes third-party libraries. These external dependencies are used exclusively to implement specific activities in the UiPath.UIAutomation.Activities package.

Here are some examples:
  • AbbyyOnlineSdk.dll - used exclusively in the Abbyy Cloud OCR activity, at run-time, as a wrapper over the Abbyy online service calls.
  • Interop.FREngine.v11.dll - used exclusively in the Abbyy OCR activity, at run-time, as a wrapper over the Abbyy FineReader Engine calls.
  • Interop.MODI.dll - used exclusively in the Microsoft OCR activity, at run-time, when executed on a Windows 7 or Windows Server machine.

As of v2018.3, the UiPath.Core.Activities package was split into the UIAutomation and System packs. Find out more about the Core Activities Split.

Particular scenarios might require strict management of UIAutomation dependency versions. For example, a language pack for the Tesseract OCR engine must be manually installed per UiPath.Vision version, which means that processes using that language need the corresponding UIAutomation activities package version. You can find out more on this page.
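As an illustration only, such a strict dependency can be pinned directly in the project's project.json file. The sketch below is a minimal, hypothetical example; the project name and the second dependency are placeholders, and Studio normally manages this file for you.

```json
{
  "name": "TesseractLanguageProcess",
  "main": "Main.xaml",
  "dependencies": {
    "UiPath.System.Activities": "[20.10.1]",
    "UiPath.UIAutomation.Activities": "[20.10.9]"
  }
}
```

In this sketch, the bracketed versions follow NuGet range notation, which resolves each package to exactly that version, so the process always restores that UIAutomation package and, with it, the UiPath.Vision version listed in the table below. Whether your project needs to pin versions this strictly depends on the Tesseract language packs it relies on.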

The UIAutomation activities package contains the following dependencies, developed in-house:

  • UiPath.Vision - enables the functionality of the OCR and Computer Vision engines.
  • UiPath - library required for the UIAutomation activities.

Release Notes

Dependencies

The table below lists the dependencies shipped with each version of the UiPath.UIAutomation.Activities package:

| UiPath.UIAutomation.Activities | UiPath.Vision | UiPath |
| --- | --- | --- |
| 18.3.6877.28298 | 1.1.0 | 9.0.6877.24355 |
| 18.3.6897.22543 | 1.1.0 | 9.0.6893.27943 |
| 18.3.6962.28967 | 1.1.0 | 9.0.6962.24417 |
| 18.4.2 | 1.2.0 | 10.0.6913.22031 |
| 18.4.3 | 1.2.0 | 10.0.6929.25268 |
| 18.4.4 | 1.2.0 | 10.0.6992.20526 |
| 18.4.5 | 1.2.0 | 10.0.7020.22745 |
| 18.4.6 | 1.2.0 | 10.0.7194.26789 |
| 18.4.7 | 1.2.1 | 10.0.7445.17204 |
| 19.1.0 | 1.2.0 | 10.0.6957.21531 |
| 19.2.0 | 1.3.0 | 10.0.6957.21531 |
| 19.3.0 | 1.4.0 | 10.0.7004.31775 |
| 19.4.1 | 1.5.0 | 19.4.7054.14370 |
| 19.4.2 | 1.5.0 | 19.4.7068.19937 |
| 19.5.0 | 1.6.0 | 19.5.7079.28746 |
| 19.6.0 | 1.6.0 | 19.6.7108.25473 |
| 19.7.0 | 1.6.1 | 19.7.7128.27029 |
| 19.8.0-ce | 1.7.0 | 19.8.7173.31251-ce |
| 19.10.0-ce | 1.8.1 | 19.10.7230.26901 |
| 19.10.1 | 1.8.1 | 19.10.7243.31457 |
| 19.11.0 | 2.0.0 | 19.10.7275.19994 |
| 19.11.1 | 2.0.0 | 19.10.7312.25504 |
| 19.11.2 | 2.0.1 | 19.10.7312.25504 |
| 19.11.3 | 2.0.1 | 19.10.7452.28108 |
| 20.4.1 | 2.0.3 | 20.4.7422.14731 |
| 20.4.2 | 2.0.3 | 20.4.7472.17184 |
| 20.4.3 | 2.0.3 | 20.4.7537.15740 |
| 20.10.5 | 2.2.0 | 20.10.7585.27318 |
| 20.10.6 | 2.2.0 | 20.10.7585.27318 |
| 20.10.7 | 2.2.0 | 20.10.7641.24102 |
| 20.10.8 | 2.2.0 | 20.10.7641.24102 |
| 20.10.9 | 2.2.0 | 20.10.7641.24102 |
| 21.4.3 | 3.0.1 | 21.4.23.31065 |

Computer Vision

❗️

Important!

Because the Computer Vision activities were moved to the UIAutomation pack in v19.10, installing the UIAutomation v19.10.1 pack in a project that already contains a version of the Computer Vision pack throws an error.

The AI Computer Vision pack contains refactored fundamental UIAutomation activities such as Click, Type Into, or Get Text. The main difference between the CV activities and their classic counterparts is their usage of the Computer Vision neural network developed in-house by our Machine Learning department. The neural network is able to identify UI elements such as buttons, text input fields, or check boxes without the use of selectors.

We created these activities primarily for automating virtual desktop environments such as Citrix machines, helping you work around situations where selectors are missing or unreliable. They do this by sending an image of the window that is being automated to the neural network, which analyzes it, identifies all the UI elements in the image, and labels them according to their content. Smart anchors are then used to pinpoint the exact location of the UI element you are interacting with, ensuring that the action you want to perform is executed successfully.

To use the Computer Vision activities in the current project, you need an API key, which can be obtained from Automation Cloud as detailed here. The key must then be inserted in the ApiKey field in the Computer Vision category of the Project Settings. See the Project Settings page for more information.

📘

Note:

The settings related to the server connection apply to the entire project and are reflected in all subsequent CV Screen Scope activities.

Using the Computer Vision Activities

All the activities in this pack work only inside a CV Screen Scope activity, which establishes the actual connection to the neural network server, enabling you to analyze the UI of the application you want to automate. Any workflow that uses Computer Vision activities must therefore start by dragging a CV Screen Scope activity into the Designer panel. Once this is done, you can use the Indicate on screen button in the body of the scope activity to select the screen region you want to work with.

📘

Note:

Double-clicking the informative screenshot displays the captured image, with all the UI elements identified by the neural network and the OCR engine highlighted in purple.

📘

Note:

Region selection can also be used to indicate only the part of the application UI you want to automate. This is particularly useful when multiple text fields have the same label and cannot be correctly identified.

Once a CV Screen Scope activity is properly configured, you can start using all of the other activities in the pack to build your automation.

Indicating On Screen

The activities that perform actions on UI elements can be configured at design time by using the Indicate On Screen button present in the body of the activities. The activities that have this feature are:

Clicking the Indicate On Screen (hotkey: I) button opens the helper wizard.

The CV Click, CV Hover, and CV Type Into activities also feature a Relative To button in the helper wizard, which enables you to configure the target as being relative to an element.

The Indicate field specifies what you are indicating at the moment. When the helper is opened for the first time, the Target needs to be indicated. For each possible target, the wizard automatically selects an anchor, if one is available.

After successfully indicating the Target, the wizard closes and the activity is configured with the target you selected.

If no unique anchor is automatically identified, the Indicate field informs you of this fact, enabling you to indicate additional Anchors, which make the target easier to find.

The Show Elements (hotkey: s) button in the wizard highlights all UI elements that have been identified by the Computer Vision analysis, making it easier for you to choose what to interact with.

The Refresh Scope (hotkey: F5) button can be used at design time, in case something changes in the target app, enabling you to send a new picture to the CV server to be analyzed again.

The Refresh After Delay (hotkey: F2) button performs a refresh of the target app after waiting 3 seconds.

❗️

Important!

Keep in mind that whenever you choose to report faulty behavior of the neural network, you are helping it learn, and indirectly helping us provide you with a better product. Please submit as many issues as you can, so that we have the chance to investigate and fix them.
