Automation Suite on OpenShift installation guide
Last updated Nov 13, 2025

500 errors and rate limiting on S3 requests in ODF
Services that send S3 requests through OpenShift Data Foundation (ODF) can encounter rate limiting or 500 Internal Server Error responses. In ODF, object storage is managed by NooBaa. When the number of requests surges beyond a threshold, NooBaa allocates additional memory. If that usage exceeds the memory limit configured on the NooBaa deployment, the pod can be terminated by the out-of-memory (OOM) killer, and an undersized CPU limit throttles the pod under the same load. The result is service interruptions, request throttling, and error responses.
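Before changing any limits, you can confirm that this is what is happening by checking whether the NooBaa pods are being OOM-killed. The following is a minimal diagnostic sketch using standard oc commands; it assumes ODF runs in the default openshift-storage namespace, and <noobaa-pod-name> is a placeholder for one of the pods returned by the first command:
# List the NooBaa pods and look for restarts or CrashLoopBackOff
oc get pods -n openshift-storage | grep noobaa
# An OOM-killed container reports Reason: OOMKilled (exit code 137) in its last termination state
oc describe pod <noobaa-pod-name> -n openshift-storage | grep -A 3 'Last State'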
To address the issue, increase the CPU and memory limits and requests for the NooBaa deployment so that it can absorb workload spikes without being terminated. Raising the limits is the primary fix; raising the requests also helps the scheduler reserve enough capacity for the pod.
Take the following steps:
- Retrieve the relevant BackingStore by running the following command:
oc get backingstores.noobaa.io -n openshift-storage
- Patch the BackingStore to increase the CPU and memory resource limits and requests by running the following command (a verification sketch follows these steps):
oc patch BackingStore -n openshift-storage <backing-store-name> --type='merge' -p '{ "spec": { "pvPool": { "resources": { "limits": { "cpu": "1000m", "memory": "4000Mi" }, "requests": { "cpu": "500m", "memory": "500Mi" } } } } }'
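After the patch is applied, the NooBaa operator should reconcile the backing store and recreate its pool pod with the new resource settings. The following is a minimal verification sketch, reusing the <backing-store-name> placeholder from the patch command:
# Confirm that the new limits and requests are present in the BackingStore spec
oc get backingstores.noobaa.io <backing-store-name> -n openshift-storage -o jsonpath='{.spec.pvPool.resources}'
# Watch the pods in the namespace until the pool pod is recreated and reaches the Running state
oc get pods -n openshift-storage -w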