Orchestrator API Guide
Rate limits and large data field usage optimization
Rate limits benefit you in several ways:

- They ensure a predictable system: knowing the API call limits helps you design and maintain your applications better, minimizing surprises due to unexpected limit breaches.
- They improve performance: by controlling the traffic on our servers, we ensure optimal performance and quicker responses, significantly improving your product experience.
- They enhance security: the limits outlined below act as an additional layer of security, protecting your system from potential cyber threats.
- They ensure fair usage: our rate limits ensure equitable resource allocation to all users and smooth operation even during peak use periods.
The limits and large data field optimizations outlined below require some adjustments on your end, but we are confident that they will bring long-term benefits.
These are the limits we enforce (endpoint names and examples missing below were lost in extraction and are left blank, except where the surrounding text makes them unambiguous):

| Endpoint | Limits | Effective since | Examples |
| --- | --- | --- | --- |
| GET /odata/Jobs | 100 API requests/minute/tenant for non-automation usage¹; 1,000 API requests/minute/tenant for automation usage² | Community, Canary, and Enterprise tenants: July 2024 | |
| GET /odata/QueueItems | 100 API requests/minute/tenant for non-automation usage¹; 1,000 API requests/minute/tenant for automation usage² | Community, Canary, and Enterprise tenants: July 2024 | |
| | 100 API requests/day/tenant | Community, Canary, and Enterprise tenants: October 2024 | N/A |
| | 100 API requests/day/tenant | Community, Canary, and Enterprise tenants: October 2024 | N/A |
| | 100 API requests/day/tenant | Community, Canary, and Enterprise tenants: October 2024 | N/A |
| | 100 API requests/day/tenant | Community, Canary, and Enterprise tenants: October 2024 | N/A |
¹ Non-automation usage refers to API calls originating from API integrations outside of processes, such as PowerShell scripts and third-party monitoring tools.
² Automation usage refers to API calls originating from the Get Queue Items, Get Jobs, and Orchestrator Http Request activities.
Retrieving a single job by ID through GET /odata/Jobs(<job_id>) is not rate limited.
These limits do not apply to adding queue items and processing jobs. As such, there is no impact on adding a queue item, removing it from a queue, setting its status, or on starting and processing any number of jobs.
You can check your API usage per month or day on the tenant-level API audit tab in the Monitoring window.
| Header | Description | Example |
| --- | --- | --- |
| Retry-After | All requests beyond the aforementioned limits receive an HTTP 429 response that includes this header. It displays the number of seconds that you need to wait until the endpoint is available to you again. | Retry-After: 10 means that the rate limit on the endpoint expires in 10 seconds. Any retries within these 10 seconds result in a 429 response. |
| X-RateLimit-Remaining | The number of calls remaining in the current time range. | X-RateLimit-Remaining: 30 means that you have 30 calls remaining in the current time range. |

Note: If the number of remaining requests per minute is below 10, it is rendered as 0.
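As a sketch of how a client might honor the Retry-After header, the helper below retries a call when it receives a 429, waiting the number of seconds the header announces. The send_request callable is a placeholder for however you actually issue the Orchestrator request (for example, with an HTTP library); it is assumed to return a (status_code, headers, body) tuple.

```python
import time

def call_with_retry(send_request, max_retries=3, sleep=time.sleep):
    """Retry a rate-limited request, honoring the Retry-After header.

    send_request: zero-argument callable returning (status_code, headers, body).
    sleep: injectable for tests; defaults to time.sleep.
    """
    for attempt in range(max_retries + 1):
        status, headers, body = send_request()
        if status != 429:
            return status, headers, body
        if attempt < max_retries:
            # Retry-After holds the seconds until the limit on the endpoint expires.
            sleep(int(headers.get("Retry-After", "1")))
    return status, headers, body
```

The injectable sleep parameter keeps the helper testable without actually waiting out the rate-limit window.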
The following activities are impacted by these limits:

- Get Job
- Get Queue Items
- Orchestrator Http Request (when used to call the GET /odata/Jobs or GET /odata/QueueItems endpoints)
These activities honor the Retry-After response header, meaning that they perform automatic retries of Orchestrator operations. Please make sure to always use the latest version of System activities to benefit from this.
This is what we recommend you do to make sure that you both comply with our limits and take full advantage of them:

- Review your API usage patterns and the information you retrieve from our previously mentioned GetAll-type endpoints.
- Adjust your API call frequency and data extraction procedures to align with these limits where necessary.
- Use the Insights real-time data export option for real-time exports.
- See the Exporting jobs and Exporting queue items sections for examples on how to retrieve jobs and queue items data for reporting and archiving purposes only.
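One way to adjust your API call frequency is a small client-side guard that tracks calls in a sliding 60-second window and reports how long to wait so the next call stays inside a per-minute budget (for example, 100 requests/minute for non-automation usage). This is a minimal sketch, not an official client; the injectable clock exists only to make it testable.

```python
import time

class MinuteBudget:
    """Sliding-window guard keeping client calls under a per-minute budget.

    now: injectable clock for testing; defaults to time.monotonic.
    """
    def __init__(self, limit_per_minute, now=time.monotonic):
        self.limit = limit_per_minute
        self.now = now
        self.calls = []  # timestamps of calls made in the last 60 seconds

    def seconds_until_allowed(self):
        """Return 0.0 if a call may be made now, else how long to wait."""
        current = self.now()
        self.calls = [t for t in self.calls if current - t < 60]
        if len(self.calls) < self.limit:
            return 0.0
        # The oldest call in the window frees a slot once it turns 60s old.
        return 60 - (current - self.calls[0])

    def record_call(self):
        self.calls.append(self.now())
```

Before each Orchestrator call, check seconds_until_allowed(), sleep that long if it is positive, then record_call() after issuing the request.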
Important:

- These endpoints are limited to 100 API requests/day/tenant. Once that limit is exceeded, a #4502 error is displayed, stating that the daily limit per tenant has been reached. The limit resets at 00:00 UTC.
- Do not use these endpoints for real-time data retrieval.
- Make sure you always use the latest version of System activities.
- Get in touch with your account manager or our support team if you have any questions or need further clarification.
These alerts, available in the API Rate Limits section of your alerting settings, inform you when the limits are exceeded and provide valuable information about the impacted endpoint.

- Request rate exceeded the limit in the last day - Warn severity:
  - It is sent daily, in the application and via email.
  - You are subscribed to it by default.
  - It includes the name of the endpoint for which the number of requests was exceeded.
  - It includes a link to the tenant-level API audit monitoring window, focused on the daily view.
  - It requires the Audit - View permission.
- Request rate exceeded the limit - Error severity:
  - It is sent every ten minutes, in the application and via email.
  - You are unsubscribed from it by default.
  - It includes the name of the endpoint for which the number of requests was exceeded.
  - It includes a link to the tenant-level API audit monitoring window, focused on the detailed 10-minute view. Note: There might be a 10-minute delay between the moment the limit is exceeded and the time the alert is sent.
  - It requires the Audit - View permission.
Alert scenarios
You are alerted in the following scenarios:
- When you exceed 100 API requests/minute/tenant through non-automation usage.
- When you exceed 1,000 API requests/minute/tenant through automation usage.
The API endpoints used for retrieving lists of jobs and queue items can prove problematic when used for real-time monitoring and data export. For example:
- When requesting up to 1,000 items, with each item carrying up to 1 MB of large data, the response to a single API call can be 1 GB in size. Since some intermediaries do not allow responses of this size, such requests fail.
- When using complex filters and then paginating a queue with millions of queue items, requests might start timing out after a few dozen pages, due to the amount of data that needs to be retrieved from the database.
To address this, certain large data fields are omitted from Jobs - GetAll endpoint responses. These are the impacted fields:
| Endpoint | Omitted fields | What you can use instead | Effective since |
| --- | --- | --- | --- |
| GET /odata/Jobs | | | Community and Canary tenants: March 2024; Enterprise tenants: July 2024 |

(The omitted fields and their replacements were lost in extraction and are left blank above.)
If you use the GET /odata/Jobs endpoint, either via API or via the Get Jobs, Get Queue Items, or Orchestrator HTTP Request activities, you need to find out whether you use any of the listed fields. If you do, please be aware that the content of these fields will be returned as null.

We recommend that you test your processes in your Canary tenants to assess the impact.
The GET /odata/QueueItems endpoint is optimized by applying these size limitations to its fields:
| Field | Limit | Effective since | How to tell that you are impacted | How to address this |
| --- | --- | --- | --- | --- |
| Progress | 1,048,576 characters | Community and Canary tenants: April 2024; Enterprise tenants: May 2024 | A specific error message is returned if the data you are trying to upload exceeds these limits. | We recommend that you use storage buckets and/or Data Service blob storage if you need to store more data. |
| | 104,857 characters | All tenants: September 2024 | | |
| AnalyticsData/Analytics | 5,120 characters | Community and Canary tenants: June 2024; Enterprise tenants: September 2024 | | |
| OutputData/Output | 51,200 characters | | | |
| SpecificContent/SpecificData | 256,000 characters | | | |
| ProcessingException - Reason | 102,400 characters | | | |
| ProcessingException - Details | 102,400 characters | | | |

(Blank cells correspond to merged cells in the original table.)
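Since an error is returned when uploaded data exceeds these limits, it can help to check field sizes client-side before submitting a queue item. The helper below is a hypothetical sketch, not part of any UiPath SDK: the field names and the mapping shape are assumptions for illustration, and values are assumed to be already serialized to strings.

```python
# Character limits taken from the table above; the key names here are an
# assumption for illustration and may not match your payload exactly.
QUEUE_ITEM_FIELD_LIMITS = {
    "Progress": 1_048_576,
    "AnalyticsData": 5_120,
    "OutputData": 51_200,
    "SpecificContent": 256_000,
}

def oversized_fields(item, limits=QUEUE_ITEM_FIELD_LIMITS):
    """Return the names of fields whose serialized value exceeds its limit.

    item: mapping of field name -> already-serialized string value.
    """
    return [name for name, value in item.items()
            if name in limits and len(value) > limits[name]]
```

If oversized_fields returns anything, move the large payload to a storage bucket or Data Service blob storage and store a reference in the queue item instead.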
These limits are calculated based on UTF-16 encoding, which is what SQL Server mainly uses to store data.

Information is stored in SQL Server via data types like NVARCHAR. In these data types, each character, including widely used characters from languages like Chinese, Japanese, and Korean, is stored using 2 bytes. This may be misleading when you check the data payload in Notepad or in UTF-8, since these display 1 byte per character for primarily ASCII text (characters 0-127, such as abc123).

For instance, if you were to store a Chinese character like 文 in a text file with UTF-8 encoding, it would be stored as a 3-byte sequence (E6 96 87), thus consuming more storage space. This difference between encoding styles makes raw byte counts an unreliable way to estimate the number of characters, which is why the limits are expressed in characters.
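The difference is easy to see in code. A minimal way to measure length the way NVARCHAR does is to count UTF-16 code units:

```python
def utf16_length(text):
    """Number of UTF-16 code units, which is how SQL Server's NVARCHAR
    measures length: each unit is 2 bytes, and characters outside the
    Basic Multilingual Plane take two units (a surrogate pair)."""
    return len(text.encode("utf-16-le")) // 2

# The same text occupies different amounts of space per encoding:
assert len("文".encode("utf-8")) == 3   # 3 bytes in UTF-8
assert utf16_length("文") == 1          # 1 two-byte unit in UTF-16
assert utf16_length("abc123") == 6      # ASCII: one unit per character
assert utf16_length("𐍈") == 2           # astral character: surrogate pair
```

When validating payloads against the character limits above, counting UTF-16 code units like this is safer than counting bytes in whatever encoding your tooling happens to display.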
The following filter is also limited for performance purposes:
| Filter | Limit | Effective since | How to tell that you are impacted | How to address this |
| --- | --- | --- | --- | --- |
| $top | If you do not use the $top filter, you receive 100 records by default. If you use the $top filter, you receive a maximum of 100 records; anything exceeding 100 triggers a 400 Bad Request error. | Community and Canary tenants: June 2024; Enterprise tenants: September 2024 | Enterprise: We aim to send an email notification to administrators if we detect the usage of this filter in API calls. However, we ask that you keep a close eye on your end as well. | We recommend that you modify your process or API usage logic accordingly if you expect to exceed this limit. |
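If your logic currently requests more than 100 records at a time, the usual adjustment is to page with $top and $skip, keeping $top at the 100-record maximum. The sketch below is an assumption-laden illustration: get_page stands in for the actual HTTP call (e.g., GET /odata/QueueItems?$top=100&$skip=200) and is assumed to return a list of records.

```python
def fetch_all(get_page, page_size=100):
    """Collect all records by paging with $top/$skip, keeping $top at the
    100-record maximum enforced by the endpoint.

    get_page: callable taking (top, skip) and returning a list of records.
    """
    records, skip = [], 0
    while True:
        page = get_page(top=page_size, skip=skip)
        records.extend(page)
        if len(page) < page_size:  # a short page means we reached the end
            return records
        skip += page_size
```

Keep in mind the earlier caveat: paging a queue with millions of items under complex filters can still time out, so for bulk export the Exporting jobs / Exporting queue items approaches or Insights remain preferable.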
To retrieve data held in the Jobs and QueueItems fields, we recommend the following:

- See the Exporting jobs and Exporting queue items sections for examples on how to retrieve jobs and queue items data.
- Use the Insights real-time data export option.
- Get in touch with your account manager or our support team if the previous methods do not work for you.
Rate limit and large data field changes will not be implemented in on-premises environments.

If you are using standalone Orchestrator and are considering moving to the cloud, you can use the IIS request logs to determine the request rate for the impacted endpoints. The analysis depends on how you aggregate the logs; you can use, for instance, Microsoft Log Parser.

To assess the impact of the large data field changes, we recommend testing your processes in Canary tenants.