Automation Suite Installation Guide
Last updated December 19, 2024
Degraded MongoDB or business applications after cluster restore
Occasionally, after a cluster restore or rollback to version 2022.4.x or 2021.10.x, MongoDB or business application (Apps) pods can get stuck in the Init state. This happens because the volume required to attach the PVC to a pod is missing.
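Assuming the affected PVCs are backed by Longhorn, as the fix below assumes, a quick way to check whether the backing volumes exist and are in a usable state is to compare the PersistentVolume names bound to the claims with the Longhorn volume objects. The commands below are only a sketch using standard kubectl calls:

kubectl -n mongodb get pvc
# the NAME column of the next command should contain the PV names referenced by the claims above
kubectl -n longhorn-system get volumes.longhorn.io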
- Check whether the issue is indeed related to the MongoDB volume attachment problem:
# fetch all mongodb pods
kubectl -n mongodb get pods

# describe pods stuck in init state
# kubectl -n mongodb describe pods mongodb-replica-set-<replica index number>
kubectl -n mongodb describe pods mongodb-replica-set-0

If the issue is related to the MongoDB volume attachment, the following events are displayed:
Events:
  Type     Reason              Age                   From                     Message
  ----     ------              ----                  ----                     -------
  Warning  FailedAttachVolume  3m9s (x65 over 133m)  attachdetach-controller  AttachVolume.Attach failed for volume "pvc-66897693-e52d-4b89-aac6-ca0cc5ae9e07" : rpc error: code = Aborted desc = volume pvc-66897693-e52d-4b89-aac6-ca0cc5ae9e07 is not ready for workloads
  Warning  FailedMount         103s (x50 over 112m)  kubelet                  (combined from similar events): Unable to attach or mount volumes: unmounted volumes=[logs-volume], unattached volumes=[hooks mongodb-replica-set-keyfile tls-secret data-volume healthstatus tls-ca kube-api-access-45qcl agent-scripts logs-volume automation-config]: timed out waiting for the condition
- Fix the problematic MongoDB pods by running the following script (a short usage note follows the script):
#!/bin/bash
# Restores the Longhorn-backed PVC volumes of the StatefulSets in a given namespace
# from their Longhorn backups, then scales the StatefulSets back up.
set -eu

FAILED_PVC_LIST=""
STORAGE_CLASS_SINGLE_REPLICA="longhorn-backup-single-replica"
STORAGE_CLASS="longhorn-backup"
LOCAL_RESTORE_PATH="restoredata"
mkdir -p ${LOCAL_RESTORE_PATH}
export LOCAL_RESTORE_PATH="${LOCAL_RESTORE_PATH}"

function delete_pvc_resources(){
  local pv_name=$1
  local volumeattachment_name=$2

  echo "deleting pv & volumes forcefully"
  delete_pv_forcefully "${pv_name}"
  sleep 2
  delete_longhornvolume_forcefully "${pv_name}"
  sleep 2
  if [ -n "${volumeattachment_name}" ]; then
    echo "deleting volumeattachments forcefully"
    delete_volumeattachment_forcefully "${volumeattachment_name}"
    sleep 5
  fi
}

function delete_longhornvolume_forcefully() {
  local pv_name=$1

  if ! kubectl -n longhorn-system get volumes.longhorn.io "${pv_name}" >/dev/null 2>&1 ; then
    echo "Volume ${pv_name} not found, skip deletion."
    return
  fi

  kubectl -n longhorn-system delete volumes.longhorn.io "${pv_name}" --grace-period=0 --force &
  local success=0
  local try=0
  local maxtry=10
  while (( try < maxtry )); do
    local result=""
    # update finalizer field to null
    result=$(kubectl -n longhorn-system get volumes.longhorn.io "${pv_name}" -o=json | jq '.metadata.finalizers = null' | kubectl apply -f -) || true
    echo "${result}"

    result=$(kubectl -n longhorn-system get volumes.longhorn.io "${pv_name}") || true
    echo "${result}"
    if [[ -z "${result}" ]]; then
      success=1
      break;
    fi

    kubectl -n longhorn-system delete volumes.longhorn.io "${pv_name}" --grace-period=0 --force &
    echo "Waiting to delete volume ${pv_name} ${try}/${maxtry}..."; sleep 10
    try=$(( try + 1 ))
  done

  if [ "${success}" -eq 0 ]; then
    echo "Failed to delete volume ${pv_name}."
  fi
}

function delete_pv_forcefully() {
  local pv_name=$1

  kubectl delete pv "${pv_name}" --grace-period=0 --force &

  local success=0
  local try=0
  local maxtry=10
  while (( try < maxtry )); do
    # update finalizer field to null
    result=$(kubectl get pv "${pv_name}" -o=json | jq '.metadata.finalizers = null' | kubectl apply -f -) || true
    echo "${result}"

    result=$(kubectl get pv "${pv_name}") || true
    echo "${result}"
    if [[ -z "${result}" ]]; then
      success=1
      break;
    fi

    kubectl delete pv "${pv_name}" --grace-period=0 --force &
    echo "Waiting to delete pv ${pv_name} ${try}/${maxtry}..."; sleep 10
    try=$(( try + 1 ))
  done

  if [ "${success}" -eq 0 ]; then
    echo "Failed to delete pv ${pv_name}."
  fi
}

function validate_pv_backup_available(){
  local pv_name=$1
  local validate_pv_backup_available_resp=0
  local backup_resp=""
  local resp_status_code=""
  local resp_message=""

  local try=0
  local maxtry=3
  while (( try != maxtry )) ; do
    backup_resp=$(curl --noproxy "*" "${LONGHORN_URL}/v1/backupvolumes/${pv_name}?") || true
    echo "Backup Response: ${backup_resp}"
    if { [ -n "${backup_resp}" ] && [ "${backup_resp}" != " " ]; }; then
      resp_status_code=$(echo "${backup_resp}"| jq -c ".status")
      resp_message=$(echo "${backup_resp}"| jq -c ".message")
      if [[ -n "${resp_status_code}" && "${resp_status_code}" != " " && "${resp_status_code}" != "null" && (( resp_status_code -gt 200 )) ]] ; then
        validate_pv_backup_available_resp=0
      else
        resp_message=$(echo "${backup_resp}"| jq -c ".messages.error")
        if { [ -z "${resp_message}" ] || [ "${resp_message}" = "null" ]; }; then
          echo "PVC Backup is available for ${pv_name}"
          # shellcheck disable=SC2034
          validate_pv_backup_available_resp=1
          break;
        fi
      fi
    fi
    try=$((try+1))
    sleep 10
  done
  export IS_BACKUP_AVAILABLE="${validate_pv_backup_available_resp}"
}

function store_pvc_resources(){
  local pvc_name=$1
  local namespace=$2
  local pv_name=$3
  local volumeattachment_name=$4

  if [[ -n "${volumeattachment_name}" && "${volumeattachment_name}" != " " ]]; then
    result=$(kubectl get volumeattachments "${volumeattachment_name}" -o json > "${LOCAL_RESTORE_PATH}/${volumeattachment_name}".json) || true
    echo "${result}"
  fi

  kubectl get pv "${pv_name}" -o json > "${LOCAL_RESTORE_PATH}/${pv_name}"-pv.json
  kubectl -n longhorn-system get volumes.longhorn.io "${pv_name}" -o json > "${LOCAL_RESTORE_PATH}/${pv_name}"-volume.json
  kubectl get pvc "${pvc_name}" -n "${namespace}" -o json > "${LOCAL_RESTORE_PATH}/${pvc_name}"-pvc.json
}

function wait_pvc_bound() {
  local namespace=$1
  local pvc_name=$2

  try=0
  maxtry=30
  while (( try < maxtry )); do
    local status=""
    status=$(kubectl -n "${namespace}" get pvc "${pvc_name}" -o json| jq -r ".status | select( has(\"phase\")).phase")
    if [[ "${status}" == "Bound" ]]; then
      echo "PVC ${pvc_name} Bound successfully."
      break;
    fi
    echo "waiting for PVC to bound...${try}/${maxtry}"; sleep 30
    try=$((try+1))
  done
}

function create_volume() {
  local pv_name=$1
  local backup_path=$2
  local response
  create_volume_status="succeed"
  echo "Creating Volume with PV: ${pv_name} and BackupPath: ${backup_path}"

  local accessmode
  local replicacount
  accessmode=$(< "${LOCAL_RESTORE_PATH}/${pv_name}-volume.json" jq -r ".spec.accessMode")
  replicacount=$(< "${LOCAL_RESTORE_PATH}/${pv_name}-volume.json" jq -r ".spec.numberOfReplicas")

  # create volume from backup pv
  response=$(curl --noproxy "*" "${LONGHORN_URL}/v1/volumes" -H 'Accept: application/json' -H 'Content-Type: application/json;charset=UTF-8' --data-raw "{\"name\":\"${pv_name}\", \"accessMode\":\"${accessmode}\", \"fromBackup\": \"${backup_path}\", \"numberOfReplicas\": ${replicacount}}")
  sleep 5

  if [[ -n "${response}" && "${response}" != "null" && "${response}" != " " && -n $(echo "${response}"|jq -r ".id") ]]; then
    # wait for volume to be detached , max wait 4hr
    local try=0
    local maxtry=480
    local success=0
    while (( try < maxtry ));do
      status=$(kubectl -n longhorn-system get volumes.longhorn.io "${pv_name}" -o json |jq -r ".status.state") || true
      echo "volume ${pv_name} status: ${status}"
      if [[ "${status}" == "detached" ]]; then
        # update label
        kubectl -n longhorn-system label volumes.longhorn.io/"${pv_name}" recurring-job-group.longhorn.io/uipath-backup=enabled
        success=1
        echo "Volume ready to use"
        break
      fi
      echo "waiting for volume to be ready to use${try}/${maxtry}..."; sleep 30
      restore_status_url="${LONGHORN_URL}/v1/volumes/${pv_name}?action=snapshotList"
      try=$(( try + 1 ))
    done

    if [ "${success}" -eq 0 ]; then
      create_volume_status="failed"
      echo "${pv_name} Volume is not ready to use with status ${status}"
    fi
  else
    create_volume_status="failed"
    kubectl create -f "${LOCAL_RESTORE_PATH}/${pv_name}"-volume.json || true
    echo "${pv_name} Volume creation failed ${response} "
  fi
  echo "${create_volume_status}"
}

function restore_with_backupvolume() {
  local pvc_name=$1
  local namespace=$2
  local pv_name=$3
  local volumeattachment_name=$4
  local backup_path=$5
  create_volume_status="succeed"

  store_pvc_resources "${pvc_name}" "${namespace}" "${pv_name}" "${volumeattachment_name}"
  sleep 5
  delete_pvc_resources "${pv_name}" "${volumeattachment_name}"
  sleep 5
  create_volume "${pv_name}" "${backup_path}"
  sleep 10

  kubectl create -f "${LOCAL_RESTORE_PATH}/${pv_name}"-pv.json

  if [[ -n "${create_volume_status}" && "${create_volume_status}" != "succeed" ]]; then
    echo "Backup volume restore failed for pvc ${pvc_name} in namespace ${namespace}"
    restore_pvc_status="failed"
  else
    local pvc_uid=""
    pvc_uid=$(kubectl -n "${namespace}" get pvc "${pvc_name}" -o json| jq -r ".metadata.uid")

    # update pv with pvc uid
    kubectl patch pv "${pv_name}" --type json -p "[{\"op\": \"add\", \"path\": \"/spec/claimRef/uid\", \"value\": \"${pvc_uid}\"}]"

    wait_pvc_bound "${namespace}" "${pvc_name}"
    echo "${result}"
  fi
  echo ${restore_pvc_status}
}

function restore_pvc(){
  local pvc_name=$1
  local namespace=$2
  local pv_name=$3
  local volumeattachment_name=$4
  local backup_path=$5

  restore_pvc_status="succeed"
  restore_with_backupvolume "${pvc_name}" "${namespace}" "${pv_name}" "${volumeattachment_name}" "${backup_path}"
  echo ${restore_pvc_status}
  # attach_volume "${pv_name}"
}

function get_backup_list_by_pvname(){
  local pv_name=$1
  local get_backup_list_by_pvname_resp=""
  local backup_list_response=""
  local try=0
  local maxtry=3
  while (( try != maxtry )) ; do
    backup_list_response=$(curl --noproxy "*" "${LONGHORN_URL}/v1/backupvolumes/${pv_name}?action=backupList" -X 'POST' -H 'Content-Length: 0' -H 'Accept: application/json')

    if [[ -n "${backup_list_response}" && "${backup_list_response}" != " " && -n $( echo "${backup_list_response}"|jq ".data" ) && -n $( echo "${backup_list_response}"|jq ".data[]" ) ]]; then
      echo "Backup List Response: ${backup_list_response}"
      # pick first backup data with url non empty
      # shellcheck disable=SC2034
      get_backup_list_by_pvname_resp=$(echo "${backup_list_response}"|jq -c '[.data[]|select(.url | . != null and . != "")][0]')
      break;
    fi
    try=$((try+1))
    sleep 10
  done
  export PV_BACKUP_PATH="${get_backup_list_by_pvname_resp}"
}

function get_pvc_resources() {
  get_pvc_resources_resp=""

  local PVC_NAME=$1
  local PVC_NAMESPACE=$2

  # PV have one to one mapping with PVC
  PV_NAME=$(kubectl -n "${PVC_NAMESPACE}" get pvc "${PVC_NAME}" -o json|jq -r ".spec.volumeName")

  VOLUME_ATTACHMENT_LIST=$(kubectl get volumeattachments -n "${PVC_NAMESPACE}" -o=json|jq -c ".items[]|\
{name: .metadata.name, pvClaimName:.spec.source| select( has (\"persistentVolumeName\")).persistentVolumeName}")

  VOLUME_ATTACHMENT_NAME=""
  for VOLUME_ATTACHMENT in ${VOLUME_ATTACHMENT_LIST};
  do
    PV_CLAIM_NAME=$(echo "${VOLUME_ATTACHMENT}"|jq -r ".pvClaimName")
    if [[ "${PV_NAME}" = "${PV_CLAIM_NAME}" ]]; then
      VOLUME_ATTACHMENT_NAME=$(echo "${VOLUME_ATTACHMENT}"|jq -r ".name")
      break;
    fi
  done

  BACKUP_PATH=""
  validate_pv_backup_available "${PV_NAME}"
  local is_backup_available=${IS_BACKUP_AVAILABLE:- 0}
  unset IS_BACKUP_AVAILABLE

  if [ "${is_backup_available}" -eq 1 ]; then
    echo "Backup is available for PVC ${PVC_NAME}"
    get_backup_list_by_pvname "${PV_NAME}"
    BACKUP_PATH=$(echo "${PV_BACKUP_PATH}"| jq -r ".url")
    unset PV_BACKUP_PATH
  fi

  get_pvc_resources_resp="{\"pvc_name\": \"${PVC_NAME}\", \"pv_name\": \"${PV_NAME}\", \"volumeattachment_name\": \"${VOLUME_ATTACHMENT_NAME}\", \"backup_path\": \"${BACKUP_PATH}\"}"
  echo "${get_pvc_resources_resp}"
}

function scale_ownerreferences() {
  local ownerReferences=$1
  local namespace=$2
  local replicas=$3

  # no operation required
  if [[ -z "${ownerReferences}" || "${ownerReferences}" == "null" ]]; then
    return
  fi

  ownerReferences=$(echo "${ownerReferences}"| jq -c ".[]")
  for ownerReference in ${ownerReferences};
  do
    echo "Owner: ${ownerReference}"
    local resourceKind
    local resourceName
    resourceKind=$(echo "${ownerReference}"| jq -r ".kind")
    resourceName=$(echo "${ownerReference}"| jq -r ".name")

    if kubectl -n "${namespace}" get "${resourceKind}" "${resourceName}" >/dev/null 2>&1; then
      # scale replicas
      kubectl -n "${namespace}" patch "${resourceKind}" "${resourceName}" --type json -p "[{\"op\": \"replace\", \"path\": \"/spec/members\", \"value\": ${replicas} }]"
    fi
  done
}

function scale_down_statefulset() {
  local statefulset_name=$1
  local namespace=$2
  local ownerReferences=$3

  echo "Start Scale Down statefulset ${statefulset_name} under namespace ${namespace}..."

  # validate and scale down ownerreference
  scale_ownerreferences "${ownerReferences}" "${namespace}" 0

  local try=0
  local maxtry=30
  success=0
  while (( try != maxtry )) ; do
    result=$(kubectl scale statefulset "${statefulset_name}" --replicas=0 -n "${namespace}") || true
    echo "${result}"

    scaledown=$(kubectl get statefulset "${statefulset_name}" -n "${namespace}"|grep 0/0) || true
    if { [ -n "${scaledown}" ] && [ "${scaledown}" != " " ]; }; then
      echo "Statefulset scaled down successfully."
      success=1
      break
    else
      try=$((try+1))
      echo "waiting for the statefulset ${statefulset_name} to scale down...${try}/${maxtry}";sleep 30
    fi
  done

  if [ ${success} -eq 0 ]; then
    echo "Statefulset ${statefulset_name} scaled down failed"
  fi
}

function scale_up_statefulset() {
  local statefulset_name=$1
  local namespace=$2
  local replica=$3
  local ownerReferences=$4

  # Scale up statefulsets using PVCs
  echo "Start Scale Up statefulset ${statefulset_name}..."

  # validate and scale up ownerreference
  scale_ownerreferences "${ownerReferences}" "${namespace}" "${replica}"

  echo "Waiting to scale up statefulset..."
  local try=1
  local maxtry=15
  local success=0
  while (( try != maxtry )) ; do
    kubectl scale statefulset "${statefulset_name}" --replicas="${replica}" -n "${namespace}"
    kubectl get statefulset "${statefulset_name}" -n "${namespace}"

    scaleup=$(kubectl get statefulset "${statefulset_name}" -n "${namespace}"|grep "${replica}"/"${replica}") || true
    if ! { [ -n "${scaleup}" ] && [ "${scaleup}" != " " ]; }; then
      try=$((try+1))
      echo "waiting for the statefulset ${statefulset_name} to scale up...${try}/${maxtry}"; sleep 30
    else
      echo "Statefulset scaled up successfully."
      success=1
      break
    fi
  done

  if [ ${success} -eq 0 ]; then
    echo "Statefulset scaled up failed ${statefulset_name}."
  fi
}

function restore_pvc_attached_to_statefulsets() {
  local namespace
  namespace="${1}"
  local statefulset_list

  # list of all statefulset using PVC
  statefulset_list=$(kubectl get statefulset -n "${namespace}" -o=json | jq -r ".items[] | select(.spec.volumeClaimTemplates).metadata.name")

  for statefulset_name in ${statefulset_list};
  do
    local replica
    local ownerReferences
    local pvc_restore_failed
    local try=0
    local maxtry=5
    local status="notready"
    pvc_restore_failed=""
    restore_pvc_status=""

    # check if statefulset is ready
    while [[ "${status}" == "notready" ]] && (( try < maxtry )); do
      echo "fetch statefulset ${statefulset_name} metadata... ${try}/${maxtry}"
      try=$(( try + 1 ))

      replica=$(kubectl get statefulset "${statefulset_name}" -n "${namespace}" -o=json | jq -c ".spec.replicas")
      if [[ "${replica}" != 0 ]]; then
        status="ready"
      else
        echo "statefulset ${statefulset_name} replica is not ready. Wait and retry"; sleep 30
      fi
    done

    if [[ "${status}" != "ready" ]]; then
      echo "Failed to restore pvc for Statefulset ${statefulset_name} in namespace ${namespace}. Please retrigger volume restore step."
    fi

    # Fetch ownerReferences and claim name
    ownerReferences=$(kubectl get statefulset "${statefulset_name}" -n "${namespace}" -o=json | jq -c ".metadata.ownerReferences")
    claimTemplatesName=$(kubectl get statefulset "${statefulset_name}" -n "${namespace}" -o=json | jq -c ".spec | select( has (\"volumeClaimTemplates\") ).volumeClaimTemplates[].metadata.name " | xargs)

    echo "Scaling down Statefulset ${statefulset_name} with ${replica} under namespace ${namespace}"
    scale_down_statefulset "${statefulset_name}" "${namespace}" "${ownerReferences}"

    for name in ${claimTemplatesName}; do
      local pvc_prefix
      pvc_prefix="${name}-${statefulset_name}"

      for((i=0;i<"${replica}";i++)); do
        local pvc_name
        pvc_name="${pvc_prefix}-${i}"

        pvc_exist=$(kubectl -n "${namespace}" get pvc "${pvc_name}") || true
        if [[ -z "${pvc_exist}" || "${pvc_exist}" == " " ]]; then
          echo "PVC not available for the statefulset ${statefulset_name}, skipping restore."
          continue;
        fi

        local pvc_storageclass
        pvc_storageclass=$(kubectl -n "${namespace}" get pvc "${pvc_name}" -o json| jq -r ".spec.storageClassName")
        if [[ ! ( "${pvc_storageclass}" == "${STORAGE_CLASS}" || "${pvc_storageclass}" == "${STORAGE_CLASS_SINGLE_REPLICA}" ) ]]; then
          echo "backup not available for pvc ${pvc_name}, storageclass: ${pvc_storageclass} "
          continue;
        fi

        # get pv, volumeattachments for pvc
        get_pvc_resources_resp=""
        get_pvc_resources "${pvc_name}" "${namespace}"

        local pv_name
        local volumeattachment_name
        local backup_path
        pv_name=$(echo "${get_pvc_resources_resp}"| jq -r ".pv_name")
        volumeattachment_name=$(echo "${get_pvc_resources_resp}"| jq -r ".volumeattachment_name")
        backup_path=$(echo "${get_pvc_resources_resp}"| jq -r ".backup_path")
        if [[ -z "${backup_path}" || "${backup_path}" == " " || "${backup_path}" == "null" ]]; then
          pvc_restore_failed="error"
          FAILED_PVC_LIST="${FAILED_PVC_LIST},${pv_name}"
          continue;
        fi

        restore_pvc_status="succeed"
        restore_pvc "${pvc_name}" "${namespace}" "${pv_name}" "${volumeattachment_name}" "${backup_path}"
        if [[ -n "${restore_pvc_status}" && "${restore_pvc_status}" != "succeed" ]]; then
          pvc_restore_failed="error"
          FAILED_PVC_LIST="${FAILED_PVC_LIST},${pv_name}"
          continue;
        fi
      done
    done

    sleep 10
    scale_up_statefulset "${statefulset_name}" "${namespace}" "${replica}" "${ownerReferences}"
    sleep 5

    if [[ -n "${pvc_restore_failed}" && "${pvc_restore_failed}" == "error" ]]; then
      echo "Failed to restore pvc for Statefulset ${statefulset_name} in namespace ${namespace}."
    fi
  done
}

LONGHORN_URL=$(kubectl -n longhorn-system get svc longhorn-backend -o jsonpath="{.spec.clusterIP}"):9500

restore_pvc_attached_to_statefulsets "mongodb"