Automation Suite
2022.4
Automation Suite Installation Guide
Last updated Apr 24, 2024
Backup failed due to TooManySnapshots error
During a backup, Longhorn backs up each volume by taking a volume snapshot and sending it to a remote location. If snapshot creation issues cause the snapshot count for a volume to exceed 248, the backup fails.
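As a quick, optional check (not part of the official procedure), you can count the snapshots that a volume currently holds by querying the Longhorn backend API, which the cleanup script later in this section also uses. Replace <volume-name> with the Longhorn volume you want to inspect:
# Get the Longhorn backend address (clusterIP:port of the longhorn-backend service)
LONGHORN_URL=$(kubectl get svc -n longhorn-system longhorn-backend -o json | jq -r '.spec | (.clusterIP|tostring) + ":" + (.ports[0].port|tostring)')
# Count the snapshots currently held by the volume
curl -s -X POST "${LONGHORN_URL}/v1/volumes/<volume-name>?action=snapshotList" | jq '.data | length'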
You can verify if the backup failed due to the TooManySnapshots error in the following ways:
- By checking the Velero logs:
kubectl logs -n velero -l app.kubernetes.io/name=velero -c velero |grep "Waiting for volumesnapshotcontents"
Sample output:
time="2023-12-15T08:15:59Z" level=info msg="Waiting for volumesnapshotcontents snapcontent-f073b88e-bd01-40a8-a645-ec929e276cef to have snapshot handle. Retrying in 5s" backup=velero/daily-2 cmd=/plugins/velero-plugin-for-csi logSource="/go/src/velero-plugin-for-csi/internal/util/util.go:182" pluginName=velero-plugin-for-csi
- By checking the Longhorn pod logs:
kubectl logs -n longhorn-system -l app=csi-snapshotter --tail=-1 |grep "too many snapshots created"
Sample output:
I1215 08:39:41.707351 1 snapshot_controller.go:291] createSnapshotWrapper: CreateSnapshot for content snapcontent-f073b88e-bd01-40a8-a645-ec929e276cef returned error: rpc error: code = Internal desc = Bad response statusCode [500]. Status [500 Internal Server Error]. Body: [code=Server Error, detail=, message=failed to create snapshot: proxyServer=10.42.7.56:8501 destination=10.42.7.56:10004: failed to snapshot volume: rpc error: code = Unknown desc = failed to create snapshot snapshot-f073b88e-bd01-40a8-a645-ec929e276cef for volume 10.42.7.56:10004: rpc error: code = Unknown desc = too many snapshots created] from [http://longhorn-backend:9500/v1/volumes/pvc-7d89efa4-3d60-4837-a632-f190cd3cd9ed?action=snapshotCreate]
If you encounter the TooManySnapshots error, you must clean up the snapshots for all the volumes by running the following script. For details on how to automate this operation, see How to automatically clean up Longhorn snapshots.
#!/bin/bash
set -e

# Longhorn backend URL
url=
# By default, snapshots older than 10 days will be deleted
days=10

function display_usage() {
    echo "usage: $(basename "$0") [-h] -u longhorn-url -d days"
    echo "  -u    Longhorn URL"
    echo "  -d    Number of days (should be > 0). By default, the script deletes snapshots older than 10 days."
    echo "  -h    Print help"
}

while getopts 'hd:u:' flag "$@"; do
    case "${flag}" in
        u)
            url=${OPTARG}
            ;;
        d)
            days=${OPTARG}
            [ "$days" ] && [ -z "${days//[0-9]}" ] || { echo "Invalid number of days=$days"; exit 1; }
            ;;
        h)
            display_usage
            exit 0
            ;;
        :)
            echo "Invalid option: ${OPTARG} requires an argument."
            exit 1
            ;;
        *)
            echo "Unexpected option ${flag}"
            exit 1
            ;;
    esac
done

[[ -z "$url" ]] && echo "Missing longhorn URL" && exit 1

# Check if the Longhorn backend URL is reachable
curl -s --connect-timeout 30 ${url}/v1 >> /dev/null || { echo "Unable to connect to longhorn backend"; exit 1; }

echo "Deleting snapshots older than $days days"

# Fetch the list of Longhorn volumes
vols=$( (curl -s -X GET ${url}/v1/volumes | jq -r '.data[].name') )

# Delete the given snapshot of the given volume
function delete_snapshot() {
    local vol=$1
    local snap=$2
    [[ -z "$vol" || -z "$snap" ]] && echo "Error: delete_snapshot: Empty argument" && return 1
    curl -s -X POST ${url}/v1/volumes/${vol}?action=snapshotDelete -d '{"name": "'$snap'"}'
    echo "Snapshot=$snap deleted for volume=$vol"
}

# Perform cleanup for the given volume
function cleanup_volume() {
    local vol=$1
    local deleted_snap=0
    [[ -z "$vol" ]] && echo "Error: cleanup_volume: Empty argument" && return 1

    # Fetch the list of user-created snapshots
    snaps=$( (curl -s -X POST ${url}/v1/volumes/${vol}?action=snapshotList | jq -r '.data[] | select(.usercreated==true) | .name') )
    for i in ${snaps[@]}; do
        if [[ $i == "volume-head" ]]; then
            continue
        fi

        # Calculate the age of the snapshot in days
        snapTime=$(curl -s -X POST ${url}/v1/volumes/${vol}?action=snapshotGet -d '{"name":"'$i'"}' | jq -r '.created')
        currentTime=$(date "+%s")
        timeDiff=$(( ($currentTime - $(date -d "$snapTime" "+%s")) / 86400 ))
        if [[ $timeDiff -lt $days ]]; then
            echo "Ignoring snapshot $i, since it is only $timeDiff days old"
            continue
        fi

        # Trigger deletion for the snapshot
        delete_snapshot $vol $i
        deleted_snap=$((deleted_snap+1))
    done

    if [[ "$deleted_snap" -gt 0 ]]; then
        # Trigger purge for the volume
        curl -s -X POST ${url}/v1/volumes/${vol}?action=snapshotPurge >> /dev/null
    fi
}

for i in ${vols[@]}; do
    cleanup_volume $i
done
When executing the aforementioned script, you must pass the following arguments:
- -u - The Longhorn backend URL. To get the Longhorn backend URL, run the following command:
kubectl get svc -n longhorn-system longhorn-backend -o json | jq -r '.spec | (.clusterIP|tostring) + ":" + (.ports[0].port|tostring)'
- -d - The retention period, in days. Snapshots older than this number of days are deleted (default: 10).
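For example, assuming the script is saved as longhorn-snapshot-cleanup.sh (the file name is arbitrary), a typical run might look like this:
# Fetch the Longhorn backend URL (clusterIP:port of the longhorn-backend service)
LONGHORN_URL=$(kubectl get svc -n longhorn-system longhorn-backend -o json | jq -r '.spec | (.clusterIP|tostring) + ":" + (.ports[0].port|tostring)')
# Delete user-created snapshots older than 10 days on all Longhorn volumes
bash longhorn-snapshot-cleanup.sh -u "$LONGHORN_URL" -d 10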
Note:
To get a list of volumes affected by the TooManySnapshots error, run the following command:
kubectl get volumes -n longhorn-system -o json | jq -r '.items[] | select(([ .status.conditions[] | select(.type == "toomanysnapshots" and .status == "True") ] | length ) == 1 ) | .metadata.name'