
AI Center Installation Guide

Last updated June 6, 2024

Configuring the GPU

Note: The GPU can only be installed on agent nodes, not on server nodes. Do not use or modify the gpu_support flag in cluster_config.json. Instead, follow the instructions below to add a dedicated agent node with GPU support to the cluster.

Currently, Automation Suite supports only NVIDIA GPU drivers. See the list of GPU-supported operating systems.

You can find the cloud-specific instance types for the nodes here:

Follow the steps for adding a new node to the cluster to make sure the agent node is added correctly.

For more examples of how to deploy NVIDIA CUDA on a GPU, see this page.

Installing the GPU driver

  1. Run the following commands to install the GPU driver on the agent node:
    sudo yum install kernel kernel-tools kernel-headers kernel-devel
    sudo reboot
    sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
    sudo sed 's/$releasever/8/g' -i /etc/yum.repos.d/epel.repo
    sudo sed 's/$releasever/8/g' -i /etc/yum.repos.d/epel-modular.repo
    sudo yum config-manager --add-repo http://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/cuda-rhel8.repo
    sudo yum install cuda
  2. Run the following commands to install the container toolkit:
    distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
              && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | sudo tee /etc/yum.repos.d/nvidia-docker.repo
    sudo dnf clean expire-cache
    sudo yum install -y nvidia-container-runtime.x86_64

Verifying that the driver is installed correctly

Run the sudo nvidia-smi command on the node to verify that the driver was installed correctly.
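If you want to script this verification, the sketch below simply combines the same tests that the gpu_containerd.sh script further down relies on (nvidia-smi and which nvidia-container-runtime); the file name and messages are illustrative only.
#!/bin/bash
# check_gpu_prereqs.sh (hypothetical helper): verify the GPU driver and container toolkit on the agent node.
if ! nvidia-smi &>/dev/null; then
  echo "GPU driver check failed: nvidia-smi is not available."
  exit 1
fi
if ! which nvidia-container-runtime &>/dev/null; then
  echo "Container toolkit check failed: nvidia-container-runtime is not installed."
  exit 1
fi
echo "GPU driver and NVIDIA container runtime are both installed."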


Note: After configuring the cluster, additional steps are required to configure the provisioned GPUs.

At this point, the GPU drivers are installed and the GPU node has been added to the cluster.

Adding the GPU to the agent node

Run the following two commands to update the agent node's containerd configuration.
cat <<EOF > gpu_containerd.sh
if ! nvidia-smi &>/dev/null;
then
  echo "GPU Drivers are not installed on the VM. Please refer the documentation."
  exit 0
fi
if ! which nvidia-container-runtime &>/dev/null;
then
  echo "Nvidia container runtime is not installed on the VM. Please refer the documentation."
  exit 0 
fi
grep "nvidia-container-runtime" /var/lib/rancher/rke2/agent/etc/containerd/config.toml &>/dev/null && info "GPU containerd changes already applied" && exit 0
awk '1;/plugins.cri.containerd]/{print "  default_runtime_name = \\"nvidia-container-runtime\\""}' /var/lib/rancher/rke2/agent/etc/containerd/config.toml > /var/lib/rancher/rke2/agent/etc/containerd/config.toml.tmpl
echo -e '\
[plugins.linux]\
  runtime = "nvidia-container-runtime"' >> /var/lib/rancher/rke2/agent/etc/containerd/config.toml.tmpl
echo -e '\
[plugins.cri.containerd.runtimes.nvidia-container-runtime]\
  runtime_type = "io.containerd.runc.v2"\
  [plugins.cri.containerd.runtimes.nvidia-container-runtime.options]\
    BinaryName = "nvidia-container-runtime"' >> /var/lib/rancher/rke2/agent/etc/containerd/config.toml.tmpl
EOF
sudo bash gpu_containerd.sh
Now run the following command to restart the rke2-agent:
[[ "$(sudo systemctl is-enabled rke2-server 2>/dev/null)" == "enabled" ]] && systemctl restart rke2-server
[[ "$(sudo systemctl is-enabled rke2-agent 2>/dev/null)" == "enabled" ]] && systemctl restart rke2-agent[[ "$(sudo systemctl is-enabled rke2-server 2>/dev/null)" == "enabled" ]] && systemctl restart rke2-server
[[ "$(sudo systemctl is-enabled rke2-agent 2>/dev/null)" == "enabled" ]] && systemctl restart rke2-agent

Enabling the GPU driver after installation

Run the following commands from any of the server nodes.

Navigate to the UiPathAutomationSuite folder.
cd /opt/UiPathAutomationSuite

Enabling in an online installation

DOCKER_REGISTRY_URL=$(cat defaults.json | jq -er ".registries.docker.url")
sed -i "s/REGISTRY_PLACEHOLDER/${DOCKER_REGISTRY_URL}/g" ./Infra_Installer/gpu_plugin/nvidia-device-plugin.yaml
kubectl apply -f ./Infra_Installer/gpu_plugin/nvidia-device-plugin.yaml
kubectl -n kube-system rollout restart daemonset nvidia-device-plugin-daemonset

Enabling in an offline installation

DOCKER_REGISTRY_URL=localhost:30071
sed -i "s/REGISTRY_PLACEHOLDER/${DOCKER_REGISTRY_URL}/g" ./Infra_Installer/gpu_plugin/nvidia-device-plugin.yaml
kubectl apply -f ./Infra_Installer/gpu_plugin/nvidia-device-plugin.yaml
kubectl -n kube-system rollout restart daemonset nvidia-device-plugin-daemonset
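Whether you used the online or the offline variant, you can optionally wait for the device plugin rollout to finish before continuing; this sketch only reuses the daemonset name from the commands above.
# Wait until the restarted NVIDIA device plugin daemonset is fully rolled out.
kubectl -n kube-system rollout status daemonset nvidia-device-plugin-daemonset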

GPU taints

GPU workloads are scheduled on GPU nodes automatically when a workload requests them. However, normal CPU workloads may also be scheduled on these nodes, reserving their capacity. If you want only GPU workloads to be scheduled on these nodes, you can add taints to them by running the following commands from the first node.

  • nvidia.com/gpu=present:NoSchedule - Non-GPU workloads are not scheduled on this node unless explicitly specified.
  • nvidia.com/gpu=present:PreferNoSchedule - This makes it a preferred condition rather than a hard requirement like the first option.
In the following command, replace <node-name> with the name of the corresponding GPU node in your cluster and <taint-name> with one of the two options above.
kubectl taint node <node-name> <taint-name>
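As an illustration only, assuming a hypothetical GPU node named gpu-node-1, applying the hard taint and removing it again later would look like this (a trailing dash removes a taint in kubectl):
# Keep non-GPU workloads off the hypothetical node gpu-node-1.
kubectl taint node gpu-node-1 nvidia.com/gpu=present:NoSchedule
# Remove the taint later if the node should accept normal workloads again (note the trailing "-").
kubectl taint node gpu-node-1 nvidia.com/gpu=present:NoSchedule-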

Verifying the GPU node configuration

To make sure the GPU node was added successfully, run the following command in a terminal. The output should show nvidia.com/gpu alongside the CPU and RAM resources.
kubectl describe node <node-name>
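If you only need the GPU-related lines from that output, a simple filter such as the one below is enough; replace <node-name> as before.
# Show only the nvidia.com/gpu entries from the node's Capacity and Allocatable sections.
kubectl describe node <node-name> | grep -i "nvidia.com/gpu"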
