Automation Suite
2023.10
Automation Suite on EKS/AKS Installation Guide
Last updated Apr 19, 2024

Migrating from Automation Suite on Linux to Automation Suite on EKS/AKS

You can migrate from Automation Suite deployed on a Linux machine to Automation Suite on EKS/AKS. To do that, you must move your data from one Automation Suite flavor to another using uipathctl.

One of the advantages of this migration process is that you can attempt to perform it multiple times with no impact on your existing cluster.

Important:

You can only migrate to a new installation of Automation Suite on EKS/AKS. Migrating to an existing installation of Automation Suite on EKS/AKS is currently not supported.

Requirements

To migrate from Automation Suite on Linux to Automation Suite on EKS/AKS, you must meet the following requirements:

  • You must establish connectivity between the two environments.

  • You must have an external objectstore configured in your source cluster. If you use in-cluster storage, see Migrating in-cluster objectstore to external objectstore.

  • The version of your Automation Suite on Linux must be 2022.10 or newer.

  • Offline-only requirements: You must hydrate the source cluster.

Process overview

  1. Mandatory. Download uipathctl. For download instructions, see uipathctl.

  2. Mandatory. Download versions.json. For download instructions, see versions.json.

  3. Optional. Prepare the Docker images for both the source and the target cluster. If your deployment is offline or if you use a private OCI registry, make sure the required images are available.

  4. Prepare the target cluster:
      1. Create the input.json file.
      2. Run the prerequisites check.

  5. Run the migration and move the data.

     The migration executes pods on both the source and target clusters. The external object storage configured for the source cluster, specifically the Platform bucket, is used as an intermediate migration storage location.

     Source cluster:
       • general-migration-* pods are responsible for exporting Kubernetes objects from the source cluster into the target cluster.
       • volume-migration-* pods are responsible for copying PVC data into the intermediate external storage.

     Target cluster:
       • inbound-pvc-migration-* pods are responsible for creating PVCs in the target cluster and copying the source data into them.

  6. Run the installation of Automation Suite on AKS or EKS.
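During step 5, you can follow progress by listing the migration pods on each cluster, for example with kubectl get pods -n uipath --kubeconfig kubeconfig.source (the uipath namespace is an assumption; adjust it to your deployment). The snippet below is a minimal sketch of the name-prefix filter, demonstrated on a mock pod listing rather than a live cluster:

```shell
# Mock pod listing standing in for `kubectl get pods` output (not from a live cluster)
sample='general-migration-abc12     1/1   Running
volume-migration-def34      1/1   Running
istiod-xyz                  1/1   Running'

# Keep only the migration pods, matching the name prefixes described above
migrationPods=$(echo "$sample" | grep -E '^(general|volume|inbound-pvc)-migration-')
echo "$migrationPods"
```

Against a real cluster, you would pipe the output of kubectl get pods through the same grep filter.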

Data migration and responsibilities

  • SQL
    Status: Retained
    Responsibility: Customer
    You have two options:
      1. Reuse the same databases for the new installation. Point the cluster configuration's SQL connection strings to the existing database server.
      2. Clone your databases and use the clones instead.

  • Docker registry
    Status: Not migrated
    Responsibility: Customer
    If you use a private registry, you must hydrate the target registry. If you use registry.uipath.com for the target cluster, no further steps are needed.

  • FQDN
    Status: Optional
    Responsibility: Customer
    You must choose a new FQDN for the new cluster. Optionally, you can revert to the previous FQDN if needed.

  • Certificates
    Status: Not migrated
    Responsibility: Customer
    You must bring certificates as part of the new cluster installation.

  • Cluster configuration
    Status: Not migrated
    Responsibility: Customer
    You must generate the new input.json applicable to the target cluster type (AKS or EKS).

  • Custom alerts and dashboards created by users
    Status: Not migrated
    Responsibility: Customer
    You must reconfigure the custom alerts and dashboards post-migration.

  • Application logs / Prometheus streaming configuration created by users
    Status: Not migrated
    Responsibility: Customer
    You must reconfigure application log and Prometheus streaming.

  • Dynamic workloads
    Status: Depends on the application. AI Center training jobs are lost; skills are retained.
    Responsibility: UiPath® for skills (a script must be executed after the migration); Customer for training jobs.

  • Objectstore
    Status: External objectstore: Retained
    Responsibility: Customer for migrating from in-cluster to external objectstore; UiPath® for the external objectstore.
    For an external objectstore, you have two options:
      1. Reuse the existing external objectstore and connect it to the new environment.
      2. Create a replica of your current objectstore and use it for the new setup.
    Important: If you use an in-cluster objectstore, you must migrate it to an external objectstore before the migration.

  • Insights
    Status: Retained
    Responsibility: UiPath®

  • MongoDB data
    Status: Retained
    Responsibility: UiPath®
    MongoDB data is moved to the target SQL database.

  • RabbitMQ
    Status: Not needed
    Responsibility: UiPath®

  • Monitoring (data)
    Status: Not needed
    Responsibility: N/A
    Monitoring data does not apply to the new cluster.

Preparation

Preparing the input.json file

Note:

Do not modify the source cluster after starting the migration process.

To prepare the input.json file, take the following steps:
  1. Download the targeted version of uipathctl on the source cluster and generate the input.json file by running uipathctl manifest get-revision.
  2. Based on the previously generated input.json file, modify the input.json file of the target cluster. For instructions, see Configuring input.json.

    You must transfer the Orchestrator-specific configuration, including the per-tenant encryption key and the Azure/Amazon S3 storage bucket settings.

  3. Validate the prerequisites in the target cluster by running the following command:
    uipathctl prereq run input-target.json --kubeconfig kubeconfig.target --versions versions.json
  4. Clone the SQL databases from the source deployment to the target deployment.
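When carrying settings from the source-generated input.json into the target input.json (step 2 above), jq can merge whole sections without hand-editing. This is a minimal sketch: the section and key names below (orchestrator, encryption_key_per_tenant) are hypothetical placeholders, so check your generated file for the actual names.

```shell
# Mock source and target files (section/key names are hypothetical placeholders)
cat > /tmp/input-source.json <<'EOF'
{ "orchestrator": { "encryption_key_per_tenant": true } }
EOF
cat > /tmp/input-target.json <<'EOF'
{ "fqdn": "as.example.com" }
EOF

# Slurp both files; keep the target object and add the orchestrator section from the source
jq -s '.[1] + { orchestrator: .[0].orchestrator }' \
    /tmp/input-source.json /tmp/input-target.json > /tmp/input-merged.json
cat /tmp/input-merged.json
```

The merged file keeps all target-cluster settings and overlays only the transferred section.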

Private registry without internet access requirements

The migration process requires the latest uipathcore Docker image tag to be available for both the source and target clusters. If your source cluster is offline, make the image available by taking the following steps:
  1. Follow the steps to hydrate the registry used by the target cluster with the offline bundle in Option B: Hydrating the registry with the offline bundle.
  2. Copy the uipathctl binary and versions.json file on a VM with access to the source cluster.
  3. Run the following command:
    jq -r '.[][] | select(.name=="uipath/uipathcore") | .ref + ":" + .version' "/path/to/versions.json" > images.txt
  4. Add the uipathcore image to your private registry using the uipathctl binary:
    ./uipathctl registry seed --tag-file ./images.txt \
                --source-registry "target.registry.fqdn.com" \
                --source-username "target-registry-username" \
                --source-password "target-registry-password" \
                --dest-registry "<source.registry.fqdn.com>" \
                --dest-username "<source-registry-username>" \
                --dest-password "<source-registry-password>"
    Note: Make sure to replace the registry FQDN, username, and password placeholders with the proper values for the private registry used by your source offline installation.
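To sanity-check the jq filter from step 3 before pointing it at the real file, you can run it against a minimal mock. The structure below (a top-level object whose values are arrays of name/ref/version entries) is inferred from the filter itself, not taken from an actual versions.json:

```shell
# Minimal mock of versions.json (assumed structure; the real file has many more entries)
cat > /tmp/mock-versions.json <<'EOF'
{
  "images": [
    { "name": "uipath/uipathcore", "ref": "registry.uipath.com/uipath/uipathcore", "version": "23.10.0" },
    { "name": "uipath/other", "ref": "registry.uipath.com/uipath/other", "version": "1.0.0" }
  ]
}
EOF

# The same filter as in step 3, applied to the mock: only uipathcore is selected
jq -r '.[][] | select(.name=="uipath/uipathcore") | .ref + ":" + .version' /tmp/mock-versions.json > /tmp/images.txt
cat /tmp/images.txt
```

The resulting images.txt should contain exactly one line, the uipathcore image reference with its tag.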

Private registry with internet access requirements

If you use a private registry, you must seed it. For instructions, see Configuring the OCI-compliant registry.

Offline with in-cluster registry requirements

If you use an in-cluster registry in your offline environment, take the following steps:

  1. Download as.tar.gz on the source cluster.
  2. Hydrate your registry by running the configureUiPathAS.sh script:
    cd /opt/UiPathAutomationSuite/{version}/installer
    
    ./configureUiPathAS.sh registry upload --offline-bundle /uipath/{version}/as.tar.gz --offline-tmp-folder /uipath/tmp

Execution

To migrate to Automation Suite on EKS/AKS, take the following steps:

  1. Execute the migration by running the following command:
    uipathctl cluster migration run input-target.json --kubeconfig kubeconfig.source --target-kubeconfig kubeconfig.target --versions versions-target.json
  2. Complete the installation of Automation Suite on AKS/EKS on the target cluster by running the following command:
    uipathctl manifest apply input-target.json --kubeconfig kubeconfig.target --versions versions-target.json

AI Center skill migration

The steps in this section apply only if you enabled AI Center on both the source and target clusters. The instructions assume that AI Center on the target cluster points to the database that contains the skill data needed for running the skills.

After completing the migration, you must sync the AI Center skills so that you can use them again.

Checking the skill migration status

To retrieve the status of the skills on the target Automation Suite on EKS/AKS cluster, take the following steps:
  1. Set up the variables for executing the next commands.
    aicJobsImage=$(kubectl -n uipath get configmap aic-jobs-config -o "jsonpath={.data['AIC_JOBS_IMAGE']}")
    podName="skillstatuspod"
  2. Clean up any skillstatuspod that might be running before retrieving the skill status again. The following command deletes the pod from the previous iteration, so use it carefully.
    kubectl -n uipath delete pod "$podName" --force
  3. Create the skillstatuspod to get the skill status. The pod may take some time to pull the image and run, typically less than 30 seconds.
    kubectl -n uipath run "$podName" --image="$aicJobsImage" --restart=Never --labels="app.kubernetes.io/component=aicenter" --overrides='{ "metadata": { "annotations": {"sidecar.istio.io/inject": "false"}}}' -- /bin/bash -c "curl -sSL -XPOST -H 'Content-Type: application/json' 'ai-deployer-svc.uipath.svc.cluster.local/ai-deployer/v1/system/mlskills:restore-status' | jq -r '([\"SKILL_ID\",\"SKILL_NAME\", \"STATUS\"] | (., map(length*\"-\"))), (.data[] | [.skillId, .skillName, .syncStatus]) | @tsv' | column -ts $'\t'; exit"
  4. Check the output of the skill status.
    kubectl -n uipath logs -f "$podName" -c "$podName"
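When polling the status repeatedly, it can help to count how many skills are still syncing. The sketch below applies a column filter to mock output shaped like the table printed by step 3 (column order SKILL_ID, SKILL_NAME, STATUS); the skill IDs and names are placeholders:

```shell
# Mock of the status table produced by the command in step 3 (values are placeholders)
sample='SKILL_ID   SKILL_NAME   STATUS
skill-1    demo-a       COMPLETED
skill-2    demo-b       IN_PROGRESS'

# Count rows whose third column is IN_PROGRESS; poll until this reaches 0
inProgress=$(echo "$sample" | awk '$3 == "IN_PROGRESS"' | grep -c .)
echo "$inProgress"
```

Against a live cluster, you would pipe the output of the logs command through the same awk filter.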

Running the skill migration

To run the skill migration, take the following steps:

  1. Set up the variables for executing the next commands.
    aicJobsImage=$(kubectl -n uipath get configmap aic-jobs-config -o "jsonpath={.data['AIC_JOBS_IMAGE']}")
    podName="skillsyncpod"
  2. Clean up any skillsyncpod that might be running before retrieving the skill status again. The following command deletes the pod from the previous iteration, so use it carefully.
    kubectl -n uipath delete pod "$podName" --force
  3. Initiate the skill sync. The pod may take some time to pull the image and run, typically less than 30 seconds.
    kubectl -n uipath run "$podName" --image="$aicJobsImage" --restart=Never --labels="app.kubernetes.io/component=aicenter" --overrides='{ "metadata": { "annotations": {"sidecar.istio.io/inject": "false"}}}' -- /bin/bash -c "curl -sSL -XPOST -H 'Content-Type: application/json' 'ai-deployer-svc.uipath.svc.cluster.local/ai-deployer/v1/system/mlskills:restore-all'; exit"
  4. Check the output of the skill sync status.
    kubectl -n uipath logs -f "$podName" -c "$podName"
  5. The operation may take a long time, depending on the number of skills to sync. Check the skill migration status periodically until no skill is in the IN_PROGRESS state.
Note:
When checking the skill migration status or running the skill migration, the operations cover all skills at the same time. Alternatively, you can perform them only for select skills by passing -d "[skill_id1, skill_id2, .... ]" as an extra argument to curl in step 3.
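If you only want to sync a subset of skills, the -d payload described in the note above is a JSON array. Assuming the endpoint accepts an array of skill ID strings, you can build and validate the payload with jq before passing it to curl; the IDs below are hypothetical placeholders:

```shell
# Hypothetical skill IDs; take real ones from the SKILL_ID column of the status output
printf '%s\n' skill-id-1 skill-id-2 | jq -R . | jq -s -c . > /tmp/skills-payload.json
cat /tmp/skills-payload.json
```

You would then append -d "$(cat /tmp/skills-payload.json)" to the curl invocation in step 3.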
