Automation Suite
2021.10
Automation Suite Installation Guide
Last updated Apr 19, 2024

Migrating Longhorn physical disk to LVM

Note: This step is optional but highly recommended when upgrading Automation Suite.

Overview

In the 2021.10.0 release, you needed to bring a physical disk for block storage (datadisk). However, with a physical disk, the size of a volume/PVC you could create was limited to the size of the underlying disk, and only vertical scaling was possible. That is why Longhorn strongly recommends using LVM to aggregate all data disks into a single partition, so that block storage can easily be extended in the future. For more information, see Longhorn | Best Practices.

If you allocated 2 TiB of storage for Longhorn and your storage requirements are low, we recommend migrating to LVM.
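
To illustrate why LVM helps, the following sketch aggregates two data disks into a single volume group and later extends the logical volume when a third disk is attached. The device and volume names are placeholders; on Automation Suite machines, do not run these commands manually. Instead, use the disk partitioning script mentioned in the Prerequisites below.

    # Aggregate two data disks into a single volume group and logical volume
    pvcreate /dev/sdb /dev/sdc
    vgcreate datadisk_vg /dev/sdb /dev/sdc
    lvcreate -l 100%FREE -n datadisk_lv datadisk_vg

    # Later, extend the same logical volume by adding a third disk
    pvcreate /dev/sdd
    vgextend datadisk_vg /dev/sdd
    lvextend -l +100%FREE /dev/datadisk_vg/datadisk_lv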

Prerequisites

  • Your cluster should be a multi-node HA-ready production cluster, i.e. the cluster should contain a minimum of three server nodes.
  • Make sure that none of the AI family workloads (AI Center, DU, TM) are running at the time of node rotation; otherwise, those workloads fail abruptly.
  • You must upgrade Automation Suite to 2021.10.1.
  • While setting up the cluster, the fixed_rke2_address field in cluster_config.json must be set to the load balancer URL instead of being hardcoded to the IP or FQDN of the first machine (see the sketch after this list).
  • Provision three standby machines that will replace your original server nodes. The hardware configuration of these machines should be the same as that of your existing server nodes. Machines should be placed under the same VPC, subnet, network security group, etc., and the number of disks attached and their size should also be the same.
  • Make sure that all ports are accessible on the machines. See Configuring the machines for details.
  • Do not create the disk partitions manually on new machines. Instead, use the disk partitioning script documented in Configuring the disk.
  • Make sure that the hostnames of the machines are identical. For example, if your old servers were named server0, server1, and server2, give the same hostnames to the new server nodes as well.
  • Copy the installer folder, along with cluster_config.json, from the existing first server to all three newly created machines (see the example after this list).
  • Before proceeding with the server rotation, run this health check script from any of the existing servers. The script should not throw any errors, and should prompt you with the following message: All Deployments are Healthy.
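
As a quick sanity check for the fixed_rke2_address prerequisite, you can inspect cluster_config.json on an existing server. This is a minimal sketch; it assumes jq is installed and uses a placeholder path for the installer folder:

    # Should print the load balancer URL, not the IP or FQDN of the first machine
    jq -r '.fixed_rke2_address' /path/to/installer/cluster_config.json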
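
Similarly, a minimal sketch for the hostname and copy prerequisites; server0, the user name, and the paths below are placeholders for your environment:

    # On the new machine, reuse the hostname of the server it replaces
    sudo hostnamectl set-hostname server0

    # From the existing first server, copy the installer folder
    # (including cluster_config.json) to each new machine
    scp -r /path/to/installer admin@new-server0:/path/to/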

Node rotation process

  • Server nodes should be rotated one by one. Note that the node rotation process does not apply to agent nodes.
  • Shut down the old server-N node so that the workloads running on it are gracefully deleted (N denotes the Nth server node, e.g. server0); see the sketch after this list.
  • Remove the server from the cluster by running the following command:

    # where N is the Nth server node, e.g. server0
    kubectl delete node server-N
  • Remove server-N from the load balancer backend pools, i.e. from both the server pool and the node pool. See Configuring the load balancer for details.
  • On the new server-N node, install Kubernetes and configure the new node as a server. See Adding a new node to the cluster for details.
  • Once the Kubernetes installation is successful, run kubectl get nodes and verify that the new node has indeed joined the original cluster (see the example after this list).
  • Run the health check script from the newly added node to monitor the health of the cluster. The script should display the following message: All Deployments are Healthy.
  • Once the health check script returns success, add the new server node to the server and node pools under the Load Balancer. See Configuring the load balancer for details.
  • Repeat the node rotation process for the other server nodes, i.e. server1, server2, and so on through server-N.
  • Once all the server nodes are rotated, you can delete the older server nodes that are in shutdown state.
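
A minimal sketch of the shutdown step above, assuming SSH access to the old server node (old-server0 and the user name are placeholders):

    # On the old server node, shut the machine down so the workloads
    # running on it are gracefully deleted
    ssh admin@old-server0 'sudo shutdown -h now'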
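
And a sketch of the verification step: after installing Kubernetes on the new node, confirm from any server that it has joined the cluster and is Ready (server0 is again a placeholder):

    # The replacement node should appear in the list with STATUS Ready
    kubectl get nodes -o wide

    # Inspect the replacement node in detail
    kubectl describe node server0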
