AI Center
2020.10
Last updated Mar 11, 2024

4. Run the AI Fabric Infrastructure Installer

Note: For AKS installations, skip to Step 5. Run the AI Fabric Application Installer.

Run the AI Fabric infrastructure installer. Completing this step deploys the Kots admin console, where you can manage application updates and configuration, monitor resource usage (CPU/memory pressure), and download support bundles to troubleshoot any issues.

Important: Do not kill the process or shut down the machine while this step is running. This step completes in 15-25 minutes. If you accidentally terminate the process partway through, the machine must be re-provisioned and brand-new disks attached.

Online installation

The first step is to download the installer archive and move it to the AI Fabric server. Alternatively, you can download it directly from the machine using the following command.

Important:

The script downloads some files locally as part of the installation process; make sure you have at least 4 GB available in the directory where you are executing the script.

By default, Azure RHEL VMs have only 1 GB available in the home directory, which is the default directory.
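As a quick pre-check, the available space can be verified with df. This is a minimal sketch; the 4 GB threshold comes from the note above:

```shell
# Verify the current directory's filesystem has at least 4 GB free
# before running the installer (threshold from the note above).
avail_kb=$(df -Pk . | awk 'NR==2 {print $4}')
need_kb=$((4 * 1024 * 1024))
if [ "$avail_kb" -ge "$need_kb" ]; then
  echo "OK: ${avail_kb} KB available"
else
  echo "Not enough space: ${avail_kb} KB available, ${need_kb} KB required" >&2
fi
```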

wget https://download.uipath.com/aifabric/online-installer/v2020.10.5/aifabric-installer-v20.10.5.tar.gz

Then untar the file and change into the main folder using the following commands:

tar -xvf aifabric-installer-v20.10.5.tar.gz
cd ./aifabric-installer-v20.10.5

You can then run the AI Fabric installer:

./setup.sh

The first step is to accept the license agreement by pressing Y. The script then asks what type of platform you want to install; enter onebox and press Enter:



You are then asked whether a GPU is available for your setup; answer Y or N depending on your hardware. Make sure the drivers are already installed.



Important: Only NVIDIA GPUs are supported, and the drivers must be installed before the AI Fabric installation.

Depending on your system, you might be asked to press Y a few times for the installation to complete.

This step takes 15-25 minutes to complete. Upon completion, the terminal output shows the message Installation Complete.
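After the Installation Complete message appears, one way to confirm the embedded cluster is healthy is to reload the shell and query it with kubectl. This is a suggested sanity check, not part of the official procedure; it assumes kubectl lands on the PATH after reloading the shell, as the installer output indicates:

```shell
# Reload the login shell so kubectl is on the PATH, then check cluster health.
bash -l -c '
  if command -v kubectl >/dev/null; then
    kubectl get nodes      # the node should report STATUS Ready
    kubectl get pods -A    # core pods should be Running or Completed
  else
    echo "kubectl not found; re-run after the installer finishes" >&2
  fi
'
```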

Airgapped installation

On a local machine with access to a browser (e.g. a Windows server), download the install bundle using the link provided by your account manager.

Extract the contents of the downloaded file using 7-Zip from Windows File Explorer, or with tar -zxvf aifabric-installer-v2020.10.5.tar.gz on a machine that supports tar.

This will create two files:

  • aif_infra_20.10.5.tar.gz containing infrastructure components (about 3.6 GB)
  • ai-fabric-v2020.10.5.airgap containing application components (about 8.7 GB). This is uploaded to the UI in step 5. Run the AI Fabric Application Installer.

Copy the file aif_infra_20.10.5.tar.gz to the airgapped AI Fabric machine.
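Because large archives can be corrupted during transfer to an airgapped machine, it can be worth comparing a SHA-256 checksum computed on both machines. This is a suggested pre-check, not part of the official procedure:

```shell
# Compute the archive's SHA-256 on the download machine, then run the same
# command on the airgapped machine and compare the two hashes.
f=aif_infra_20.10.5.tar.gz
if [ -f "$f" ]; then
  sha256sum "$f"
else
  echo "place $f in the current directory first" >&2
fi
```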

Then run the following commands to start the infrastructure installer:

tar -zxvf aif_infra_20.10.5.tar.gz
cd aif_infra_20.10.5
sudo ./setup.sh

Admin console access

In both cases, a successful installation outputs the address and password of the Kots admin UI:

...
Install Successful:
configmap/kurl-config created
                Installation
                  Complete ✔
Kotsadm: http://13.59.108.17:8800
Login with password (will not be shown again): NNqKCY82S
The UIs of Prometheus, Grafana and Alertmanager have been exposed on NodePorts 30900, 
30902 and 30903 respectively.
To access Grafana use the generated user:password of admin:msDX5VZ9m .
To access the cluster with kubectl, reload your shell:
    bash -l
    
...
Note: Take note of the address of the kotsadm UI; it is at <machine-ip>:8800. In some cases it may display the internal IP instead of the public IP; make sure to use the public IP if you are accessing it from outside.
Note: The login password is shown on the line below the address. Make a note of this password. You can regenerate it if it is lost or if you would like to reset it:
bash -l
kubectl kots reset-password -n default

Adding GPU after installation

If the GPU was not available during installation but is later added to the machine, you need to complete the following steps to make it accessible to AI Fabric.
  • Check if the GPU drivers are correctly installed by running the following command:
    nvidia-smi

    If the GPU drivers are installed correctly, your GPU information should be displayed. If an error occurs, it means that the GPU is not accessible or the drivers are not installed correctly. This issue must be fixed before proceeding.

  • Check if the NVIDIA Container Runtime is correctly installed by running the following command:
    /usr/bin/nvidia-container-runtime
  1. Download the two available scripts for adding the GPU from the following link: GPU scripts.
  2. Run a script to add the GPU to the cluster so that Pipelines and ML Skills can use it. Depending on your installation, choose one of the following options:
    • In case of online installation, run the following script:
      # navigate to where you untarred the installer (or re-extract it if you have removed it)
      cd ./aicenter-installer-v21.4.0/infra/common/scripts
      ./attach_gpu_drivers.sh
    • In case of an airgapped installation, first create the attach_gpu_drivers.sh file in the aif_infra directory, making sure nvidia-device-plugin.yaml is located in the same folder.
      To create the file, paste in the content of the attach_gpu_drivers.sh file downloaded at Step 1. Then run the script:
      ./attach_gpu_drivers.sh
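After running the script, one way to confirm that the cluster now advertises the GPU is to inspect the node's resources with kubectl. This is a sketch, assuming kubectl access as set up during installation:

```shell
# The NVIDIA device plugin exposes GPUs as the resource nvidia.com/gpu; a
# non-zero count under Capacity/Allocatable means workloads can be scheduled on it.
if command -v kubectl >/dev/null; then
  kubectl describe nodes | grep -i 'nvidia.com/gpu'
else
  echo "kubectl not found on this machine" >&2
fi
```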

Troubleshooting

The infrastructure installer is not idempotent. This means that running the installer again (after you have already run it once) will not work. If this installer fails, you will need to reprovision a new machine with fresh disks.

The most common sources of error are that the bootdisk becomes full during the install or that the external data disks are mounted/formatted. Remember to only attach the disks, not format them.

If the installation fails even with unformatted disks and a sufficiently large boot disk, contact our support team and include a support bundle in your email. A support bundle can be generated by running these commands:

curl https://krew.sh/support-bundle | bash
kubectl support-bundle https://kots.io

Alternatively, if you don't have access to the internet, you can create a file support-bundle.yaml with the following content:

apiVersion: troubleshoot.replicated.com/v1beta1
kind: Collector
metadata:
  name: collector-sample
spec:
  collectors:
    - clusterInfo: {}
    - clusterResources: {}
    - exec:
        args:
          - "-U"
          - kotsadm
        collectorName: kotsadm-postgres-db
        command:
          - pg_dump
        containerName: kotsadm-postgres
        name: kots/admin_console
        selector:
          - app=kotsadm-postgres
        timeout: 10s
    - logs:
        collectorName: kotsadm-postgres-db
        name: kots/admin_console
        selector:
          - app=kotsadm-postgres
    - logs:
        collectorName: kotsadm-api
        name: kots/admin_console
        selector:
          - app=kotsadm-api
    - logs:
        collectorName: kotsadm-operator
        name: kots/admin_console
        selector:
          - app=kotsadm-operator
    - logs:
        collectorName: kotsadm
        name: kots/admin_console
        selector:
          - app=kotsadm
    - logs:
        collectorName: kurl-proxy-kotsadm
        name: kots/admin_console
        selector:
          - app=kurl-proxy-kotsadm
    - secret:
        collectorName: kotsadm-replicated-registry
        includeValue: false
        key: .dockerconfigjson
        name: kotsadm-replicated-registry
    - logs:
        collectorName: rook-ceph-agent
        selector:
          - app=rook-ceph-agent
        namespace: rook-ceph
        name: kots/rook
    - logs:
        collectorName: rook-ceph-mgr
        selector:
          - app=rook-ceph-mgr
        namespace: rook-ceph
        name: kots/rook
    - logs:
        collectorName: rook-ceph-mon
        selector:
          - app=rook-ceph-mon
        namespace: rook-ceph
        name: kots/rook
    - logs:
        collectorName: rook-ceph-operator
        selector:
          - app=rook-ceph-operator
        namespace: rook-ceph
        name: kots/rook
    - logs:
        collectorName: rook-ceph-osd
        selector:
          - app=rook-ceph-osd
        namespace: rook-ceph
        name: kots/rook
    - logs:
        collectorName: rook-ceph-osd-prepare
        selector:
          - app=rook-ceph-osd-prepare
        namespace: rook-ceph
        name: kots/rook
    - logs:
        collectorName: rook-ceph-rgw
        selector:
          - app=rook-ceph-rgw
        namespace: rook-ceph
        name: kots/rook
    - logs:
        collectorName: rook-discover
        selector:
          - app=rook-discover
        namespace: rook-ceph
        name: kots/rook

Then create the support bundle using the following command:

kubectl support-bundle support-bundle.yaml

This will create a file called supportbundle.tar.gz which you can upload when raising a support ticket.
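Before attaching the bundle to a ticket, you can list its contents to review what is being shared. A minimal sketch, assuming the file name shown above:

```shell
# List the files inside the generated support bundle without extracting it.
if [ -f supportbundle.tar.gz ]; then
  tar -tzf supportbundle.tar.gz
else
  echo "run the support-bundle command first" >&2
fi
```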
