UiPath Orchestrator

The UiPath Orchestrator Guide

About Logs

The Logs page displays logs generated by Robots. To make it easier to sift through all the generated data, you can view it in a filtered manner, as follows:

  • all logs generated by an indicated Robot, from the Robots page;
  • all logs generated by a Robot within an indicated job, from the Jobs page.

If Orchestrator is unavailable, logs are stored in a local database (C:\Windows\SysWOW64\config\systemprofile\AppData\Local\UiPath\Logs\execution_log_data), within the available disk space, until the connection is restored. When the connection is restored, the logs are sent in batches, in the order in which they were generated.



The database is not deleted after the logs have been successfully sent to Orchestrator.

The status of a job is stored in the memory of the UiPath Robot service. When Orchestrator becomes available again, the job status is synced between the two. However, if you restart the UiPath Robot service while Orchestrator is unavailable, this information is lost, and the job is executed again once Orchestrator becomes available.

For more information, see the Managing Logs in Orchestrator page.

Logs can be sent to ElasticSearch and/or to a local SQL database, thus enabling you to have non-repudiation logs. The two are independent of each other, and as such, an issue encountered in one does not affect the other.

Configure where logs are stored by changing the value of the writeTo parameter in web.config.
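As a hedged sketch, the relevant piece of web.config is an NLog logger rule; the target names below (database, robotElasticBuffer) are common defaults, but verify them against the targets actually defined in your web.config:

```xml
<nlog>
  <rules>
    <!-- Send Robot logs to the SQL database only -->
    <logger name="Robot.*" writeTo="database" final="true" />
    <!-- To send Robot logs to both SQL and Elasticsearch, list both
         targets, e.g. writeTo="database,robotElasticBuffer" -->
  </rules>
</nlog>
```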

The Logs page displays the entries that Robots send to Orchestrator if the logs are sent to Elasticsearch or an SQL database. If logs are sent to both Elasticsearch and SQL, then the Logs page displays only the entries sent to Elasticsearch.



If you accumulate more than 2 million Robot logs per week in the SQL database, performance might degrade after a few months unless older logs are cleaned up. For volumes this large, we recommend using Elasticsearch. See the database maintenance procedures regarding logs.

If you use Elasticsearch to store your Robot logs, note that, in certain circumstances, only 10,000 items can be returned per query.
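That ceiling corresponds to Elasticsearch's default index.max_result_window setting of 10,000. As a hedged workaround sketch (the Robot-log index name varies per deployment), the window can be raised with a settings update, e.g. PUT &lt;index&gt;/_settings with the body below; for paging deeply through results, Elasticsearch's search_after mechanism is the recommended alternative to a large window:

```json
{
  "index": {
    "max_result_window": 50000
  }
}
```

Raising the window increases memory use per search, so prefer search_after for routine deep paging.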

Messages are logged on the following levels: Trace, Debug, Info, Warn, Error and Fatal.

Custom messages can also be sent to this page from Studio, with the Log Message activity. The messages can be logged at all the levels described above and should be used for diagnostic purposes.
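In a workflow's underlying XAML, a Log Message activity looks roughly like the following; the namespace prefix and attribute names here are illustrative assumptions and may differ between versions of the UiPath activities packages:

```xml
<!-- Illustrative sketch of a Log Message activity; attribute names
     are assumptions, not a guaranteed schema -->
<ui:LogMessage DisplayName="Log Message"
               Level="Fatal"
               Message="Payment reconciliation failed for invoice batch" />
```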

For example, you can log a custom message at the Fatal severity level.

All logs can be exported to a .csv file by clicking the Export button. Any filters applied to the page are taken into account when the file is generated. For example, if you choose to view only logs from the last 30 days with an Info severity level, only the entries that meet these criteria are downloaded.
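As a hedged illustration, the same filtering can be expressed as an OData query against the Orchestrator API; the /odata/RobotLogs endpoint and its Level and TimeStamp field names are assumptions based on common Orchestrator API conventions, so check the API reference for your version:

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlencode

def build_log_query(level: str, days: int) -> str:
    """Build an OData query string mirroring the Logs page filters.

    Note: the /odata/RobotLogs endpoint and field names are assumptions;
    verify them against your Orchestrator version's API reference.
    """
    since = datetime.now(timezone.utc) - timedelta(days=days)
    # OData filter: match the severity level and the time window
    flt = (f"Level eq '{level}' and "
           f"TimeStamp gt {since.strftime('%Y-%m-%dT%H:%M:%SZ')}")
    return "/odata/RobotLogs?" + urlencode({"$filter": flt})

# Example: only Info-level entries from the last 30 days
query = build_log_query("Info", 30)
```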

For performance reasons, the exported entries are not guaranteed to be in reverse chronological order.

Logs may appear out of order only when both of the following conditions are met:

  • There are two or more robot log entries with almost identical timestamps: they match down to the millisecond (the time expressed as yyyy-MM-dd HH:mm:ss.fff is the same) but differ in the sub-millisecond digits (the last four digits of yyyy-MM-dd HH:mm:ss.fffffff differ).
  • The logs are viewed in Orchestrator with the default sort order in the grid (sort by Time descending).

However, the database and exported .csv file are not affected.
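The first condition can be demonstrated with a short sketch: two timestamps that agree down to the millisecond but differ in the sub-millisecond digits produce identical sort keys at the grid's millisecond precision, so their relative order under the default sort is undefined. The tick values below are illustrative:

```python
# Two log timestamps as .NET-style 100-nanosecond ticks within a second.
# Both read ".123" at millisecond precision, but differ below it.
t1 = 1230000   # fractional second .1230000 (the .fffffff digits)
t2 = 1234567   # fractional second .1234567

def to_millis(ticks: int) -> int:
    """Truncate a 100 ns tick count to whole milliseconds."""
    return ticks // 10_000

# At full precision the entries are distinct...
assert t1 != t2
# ...but the grid's millisecond-precision sort key cannot tell them apart,
# so their relative order under "sort by Time descending" is arbitrary.
assert to_millis(t1) == to_millis(t2) == 123
```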



Both server exceptions from Orchestrator and the stack trace in the Job Details window are logged in English, regardless of the language chosen by the user.

