Orchestrator offers a multi-tenancy option. By using more than one tenant, users can split a single instance of Orchestrator into multiple deployment environments, each with its own Robots, processes, logs, and so on. All tenants share the same Orchestrator database.
This enables you to isolate resources from the rest of the organization: automation resources are accessible only from within their own tenant.
Folders enable you to limit access to the administration of the automation (i.e., who can create Robots, access certain Processes, etc.) while sharing the automation itself across the necessary departments of your organization.
Use meaningful names and descriptions for each provisioned Robot. Every time a new Robot is provisioned, choose the appropriate Robot type (Attended, Unattended, and so on).
For Unattended Robots, Windows credentials are required so that Orchestrator can run unattended jobs on their machines.
For Attended Robots, credentials are not needed because the jobs are triggered manually by human agents, directly on the machine where the Robots are installed.
The next step after registering the Attended Robot to Orchestrator is to check if its status is Available on the Robots page.
Once in a while, old process versions that are no longer used should be deleted. Versions can be deleted individually, by selecting them and clicking the Delete button, or in bulk with the Delete Inactive button, which removes all versions not used by any process.
It’s recommended to keep at least one old version so you can roll back if something goes wrong with the latest process version.
If the Robot needs to run multiple processes without interruption, trigger all the jobs one after another, even while the Robot is busy. These jobs are queued with the Pending status, and when the Robot becomes available again, Orchestrator launches the next job.
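The behavior described above can be sketched as a simple first-in, first-out queue. This is only an illustrative simulation of how Pending jobs drain once a Robot frees up; the job names are hypothetical:

```python
from collections import deque

# Jobs triggered while the Robot was busy sit in Pending status, in order.
pending = deque(["JobA", "JobB", "JobC"])

executed = []
while pending:               # the Robot becomes available again
    job = pending.popleft()  # Orchestrator launches the next Pending job
    executed.append(job)

print(executed)  # jobs run in the order they were triggered
```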
It’s better to stop a job than to kill it.
To stop a job, include the Should Stop activity in the process workflow. It returns a Boolean indicating whether the Stop button was clicked, so the workflow can finish its current work and exit gracefully.
The Kill button sends a Kill command to the Robot. Use it only when necessary, because the Robot might be interrupted right in the middle of an action, leaving the automation in an inconsistent state.
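The cooperative-stop pattern behind Should Stop can be sketched in Python. This is not UiPath code; `run_process`, the transaction list, and the stop signal are hypothetical stand-ins for real workflow steps:

```python
def run_process(transactions, should_stop):
    """Process items one by one, exiting cleanly if a stop was requested."""
    processed = []
    for item in transactions:
        if should_stop():  # analogous to checking the Should Stop activity
            break          # finish gracefully instead of being killed mid-action
        processed.append(item.upper())  # hypothetical per-item work
    return processed

# Simulate Orchestrator's Stop signal arriving after two items.
calls = {"n": 0}
def should_stop():
    calls["n"] += 1
    return calls["n"] > 2

result = run_process(["a", "b", "c", "d"], should_stop)
print(result)  # only the items handled before the stop signal
```

The key design point is that the check happens between transactions, never inside one, so each item is either fully processed or untouched.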
Besides the obvious functionality, triggers can be used to keep a Robot running 24/7. Jobs can be scheduled one after another, at least one minute apart. If no Robot is available when a process should start, the job is added to the jobs queue and is executed as soon as a Robot becomes available.
Use a meaningful name and description for each queue created.
At the end of each transaction's life cycle, it is mandatory to set the result of the item processing. Otherwise, transactions left in the In Progress status are automatically transitioned to Abandoned after 24 hours.
Using the Set Transaction Status activity, a queue item's status can be set to Successful or Failed. Keep in mind that only Failed items with the Application ErrorType are retried (when auto-retry is configured for the queue); items failed with a Business error are not.
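The retry rule can be summarized as a small predicate. This is a sketch of the decision, not Orchestrator's implementation; the function name and parameters are illustrative:

```python
def should_retry(status, error_type, retry_count, max_retries):
    """Return True if a transaction is eligible for another attempt.

    Only items that Failed with an Application error are retried,
    and only while the queue's configured retry limit is not reached.
    """
    return (
        status == "Failed"
        and error_type == "Application"
        and retry_count < max_retries
    )

print(should_retry("Failed", "Application", 0, 3))  # eligible for retry
print(should_retry("Failed", "Business", 0, 3))     # business errors are final
```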
When the same Robots should process two or more types of items, there are at least two ways to manage them using queues:
- Create one queue per item type, plus a dispatcher process that checks each queue in sequence; whichever queue has new items triggers the corresponding process.
- Create a single queue for all the items and add a field such as “Type” or “Process” to each item's specific content. Based on this value, the Robot decides which process to invoke.
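The second option amounts to a simple dispatch table keyed on the item's type field. A minimal sketch, assuming a hypothetical "Type" field and made-up process names:

```python
# Hypothetical mapping from the item's "Type" field to the process to invoke.
PROCESS_BY_TYPE = {
    "Invoice": "ProcessInvoice",
    "Report": "GenerateReport",
}

def dispatch(queue_item):
    """Pick the process to invoke based on the item's Type field."""
    item_type = queue_item["SpecificContent"]["Type"]
    try:
        return PROCESS_BY_TYPE[item_type]
    except KeyError:
        raise ValueError(f"No process registered for type {item_type!r}")

item = {"SpecificContent": {"Type": "Invoice", "Amount": 120}}
print(dispatch(item))
```

Failing loudly on an unknown type is deliberate: a mistyped field value should surface as a Failed transaction rather than silently skip the item.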
The Add Transaction Item activity brings all the transaction functionality without dispatching items to a queue first, but the queue itself must still be created beforehand. This activity adds an item to the queue and sets its status to In Progress. Start using the item right away, and don't forget to call the Set Transaction Status activity at the end of your process.
The Add Log Fields activity adds custom fields to Robot logs for better log management. After it is used in the workflow, the Log Message activity also logs the previously added fields.
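A rough Python analog of this behavior is the standard `logging` module's `LoggerAdapter`, which attaches extra fields once so every subsequent record carries them. This is not UiPath code; the field name `transaction_id` is illustrative:

```python
import logging
import io

# Capture log output in a string so the effect is visible.
logger = logging.getLogger("robot")
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(
    logging.Formatter("%(levelname)s %(message)s TransactionID=%(transaction_id)s")
)
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# LoggerAdapter plays the role of Add Log Fields: the extra field is
# injected into every record without repeating it at each call site.
log = logging.LoggerAdapter(logger, {"transaction_id": "TX-001"})

log.info("Item processed")  # the record now includes the added field
print(stream.getvalue().strip())
```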