Important!
Storage optimization is only needed for the multi-node HA-ready production deployment; it is not required for the single-node evaluation deployment.
In versions prior to 2022.4, Ceph used the replicated type of data pool, which takes 900 GiB of space across the cluster to store 50 GiB of objects. Opting for an erasure-coded pool instead of the replicated type reduces the storage space required for the same 50 GiB of objects from 900 GiB to 450 GiB, because erasure coding stores data and parity chunks rather than full copies of each object.
You can optimize Objectstore storage only if the Ceph version running within your Automation Suite cluster is 15.x.
The scenarios where your Automation Suite cluster hosts Ceph 15.x are the following:
- Fresh installation of Automation Suite 2022.4.0;
- Existing installations of Automation Suite 2021.10.0, 2021.10.1, 2021.10.2;
- Upgrade from Automation Suite 2021.10.0, 2021.10.1, 2021.10.2 to 2022.4.0.
Automation Suite 2021.10.3 and 2021.10.4 host Ceph 16.x, where you cannot take advantage of optimized Objectstore storage. We are working on providing a solution for this.
Important!
If you use Ceph 15.x, storage optimization comes at the cost of reduced fault tolerance to data corruption. With optimized storage, the cluster can only tolerate the corruption of a single storage replica at a time. If you lose more than one storage replica, there is a high chance of data loss, and the only way to recover is by restoring from backup data (provided that you have configured a backup).
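To see how many simultaneous chunk losses the erasure-coded pool can tolerate, you can inspect its erasure-code profile from the Rook toolbox. The following is a sketch that assumes a rook-ceph-tools deployment is available in the rook-ceph namespace and that you know the profile name used by the data pool; both may differ in your cluster. The m value in the profile is the number of chunks that can be lost at one time without data loss.

```
# List the erasure-code profiles defined in the cluster
# (the rook-ceph-tools toolbox deployment name is an assumption)
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd erasure-code-profile ls

# Show the k (data chunks) and m (coding chunks) values of a given profile;
# m=1 means at most one chunk/replica can be lost at a time
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd erasure-code-profile get <profile-name>
```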
To check the installed Ceph version, run the following command on any server node:
```
kubectl -n rook-ceph get deployment -l rook_cluster=rook-ceph -o jsonpath='{range .items[*]}{"ceph-version="}{.metadata.labels.ceph-version}{"\n"}{end}' | sort | uniq
```
The following sample output shows a Ceph version that is not supported for storage optimization:
```
ceph-version=16.2.7-0
```
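For comparison, a cluster on which storage optimization is supported reports a 15.x version; the exact build number below is only illustrative:

```
ceph-version=15.2.17-0
```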
To optimize the Objectstore storage, you need to migrate the Ceph data pool from the replicated type to the erasure-coded type. The following table shows the ways you can migrate the data.
| Migration method | Scenario |
|---|---|
| Automated: Migrating the Ceph data pool from replicated to erasure-coded type | Fresh installation of the Automation Suite cluster |
| Manual: Migrating the Ceph data pool from replicated to erasure-coded type | Available storage in Ceph is less than 35%, and you can bring an additional temporary SSD for the migration |
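Regardless of the method, you can confirm the type of the Objectstore data pool before and after the migration from the Rook toolbox. This is a sketch assuming the rook-ceph-tools deployment is available; the deployment and pool names in your cluster may differ:

```
# List all Ceph pools with their type; replicated pools show "replicated size <n>",
# while erasure-coded pools show "erasure profile <profile-name>"
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd pool ls detail
```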