Managing AI Trust Layer
The Settings tab on the AI Trust Layer page allows you to manage third-party AI model usage for your organization, tenants, or user groups through AI Trust Layer policies. This helps you control user access to Generative AI features and ensures appropriate governance across your organization.
The tab gives you an overview of all active policies and their current statuses. In addition, you can view and manage policies and their deployments, as follows:
- When you select the policy name, you are redirected to the respective AI Trust Layer policy in Automation Ops™ > Governance. Here you can view the policy details and, if necessary, make changes. For details, refer to Settings for AI Trust Layer Policies.
- When you select Manage Deployments, you are redirected to Automation Ops™ > Governance > Deployment, where you can review all your policy deployments. For details, refer to Deploy Policies at Tenant Level.
The Context Grounding tab on the AI Trust Layer page allows you to manage and govern the data you use as context with UiPath® GenAI features. For more details about context grounding, see About Context Grounding.
You can check which Context Grounding indexes are available across specific tenants, along with data about each index's data source, update time, and last query time.
Moreover, you can perform the following operations:
- Add a new index. For details, refer to Adding a new index.
- View details for the index attributes, using the See more actions menu.
- Permanently delete the index, including all embeddings and data representations within the index. You can delete the index from the See more actions menu.
Note: If GenAI Activities or Autopilot for Everyone uses an index that is deleted, you must reconfigure them with a new index; otherwise, they fail.
- Sync the index to update it with the most recent data from the data source. The sync operation overwrites the embeddings and captures only the data currently available in the data source, as illustrated in the sketch after this list. You can sync the index from the See more actions menu.
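Conceptually, a sync is a rebuild rather than an incremental update. The following minimal sketch illustrates that behavior under stated assumptions: none of the names below are part of the UiPath API, and the embedding function is a stand-in for a real model.

```python
from __future__ import annotations

from dataclasses import dataclass, field


def fake_embed(text: str) -> list[float]:
    # Placeholder: a real system would call an embedding model here.
    return [float(len(text)), float(sum(map(ord, text)) % 97)]


@dataclass
class Index:
    name: str
    embeddings: dict[str, list[float]] = field(default_factory=dict)

    def sync(self, data_source: dict[str, str]) -> None:
        # Overwrite semantics: previous embeddings are discarded and only the
        # documents currently present in the source are (re-)embedded.
        self.embeddings = {doc_id: fake_embed(text) for doc_id, text in data_source.items()}


source = {"policy.pdf": "Expense policy v1", "faq.txt": "How to submit a claim"}
index = Index("hr-docs")
index.sync(source)

# A document removed from the source disappears from the index at the next sync.
del source["faq.txt"]
index.sync(source)
assert "faq.txt" not in index.embeddings
```

Because the index is rebuilt from whatever the source currently contains, documents removed from the data source no longer have embeddings after the sync.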
The Audit tab on the AI Trust Layer page offers a comprehensive view of AI-related operations, with details about requests and actions, the products and features initiating requests, as well as the models used and their locations. You can monitor all AI-related operations and ensure their compliance with your established guidelines and policies. Note that you can view log entries created in the last 60 days.
The audit data is displayed as a table, with each column providing specific information about the AI-related operations:
- Date (UTC): The exact date and time when each operation was requested. This allows accurate chronological tracking of requests, facilitating timely audits.
- Product: The specific product that initiated each operation. This visibility allows tracing any operation back to its originating product for enhanced understanding and accountability.
- Feature: The specific product feature that initiated the operation, facilitating issue traceability to particular features, if any issues occur.
- Tenant: The specific tenant within your organization that initiated the operation. This insight enables a more detailed overview and helps recognize patterns or issues at the tenant level.
- User: The individual user within the tenant who initiated the operation. This allows tracing activities at a granular user level, enhancing oversight capabilities.
- Model Used: The specific AI model employed to process each operation. This insight provides a better understanding of which models are handling which types of requests.
- Model Location: The location where the used model is hosted. This information can assist with troubleshooting or audit requirements related to model performance in specific locations.
- Status: The status of each operation, showing whether it was successful, failed, or blocked. This quick way of identifying operational issues is crucial for maintaining a smooth, efficient environment.
Additionally, the filtering capability allows you to narrow down your audit based on criteria such as date, product, model used, or status.
Furthermore, when you select an entry from the Audit table, you can access a Details section for a more in-depth review.
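If you also want to review audit data outside the UI, the sketch below shows one way to apply the same kind of filters programmatically. It assumes the audit entries can be exported to a CSV file whose headers match the columns described above; the export step, the file name, the status values, and the timestamp format are assumptions for illustration, not documented behavior.

```python
import csv

# Hypothetical export file; the UI filter criteria (date, product, model used,
# status) map onto CSV columns named after the Audit table above.
AUDIT_EXPORT = "ai_trust_layer_audit.csv"


def matches(row: dict, product=None, model=None, status=None, since=None) -> bool:
    """Return True when an audit row satisfies every filter that was provided."""
    if product and row["Product"] != product:
        return False
    if model and row["Model Used"] != model:
        return False
    if status and row["Status"].lower() != status.lower():
        return False
    # ISO-8601 "Date (UTC)" strings compare correctly as plain text.
    if since and row["Date (UTC)"] < since:
        return False
    return True


with open(AUDIT_EXPORT, newline="", encoding="utf-8") as f:
    blocked = [row for row in csv.DictReader(f) if matches(row, status="Blocked", since="2024-01-01")]

for row in blocked:
    print(row["Date (UTC)"], row["Tenant"], row["User"], row["Feature"], row["Model Used"])
```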
The Usage Summary tab on the AI Trust Layer page provides an overview of model usage and restrictions across different regions. It represents the historical data from your audit log and reflects the settings of your governance policies.
You can view the data broken down by the following criteria:
- Total LLM Actions per Status: Enables you to monitor the status of different models across regions. To customize the data visualization, you can filter by region, model, and status.
- Total LLM Actions per Product: Allows you to monitor AI feature adoption within your organization. To customize the data visualization, you can filter by tenant and product. A sketch of both aggregations follows this list.
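As a rough illustration of what these two views aggregate, the sketch below rebuilds per-status and per-product counts from the same assumed CSV export of the audit log used in the previous example; the export and its column names remain assumptions.

```python
import csv
from collections import Counter

per_status = Counter()    # mirrors "Total LLM Actions per Status"
per_product = Counter()   # mirrors "Total LLM Actions per Product"

with open("ai_trust_layer_audit.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # The per-status view can be narrowed by model location ("region"),
        # model, and status, as in the UI filters.
        per_status[(row["Model Location"], row["Status"])] += 1
        # The per-product view can be narrowed by tenant and product.
        per_product[(row["Tenant"], row["Product"])] += 1

print("LLM actions per location and status:")
for (location, status), count in per_status.most_common():
    print(f"  {location} / {status}: {count}")

print("LLM actions per tenant and product:")
for (tenant, product), count in per_product.most_common():
    print(f"  {tenant} / {product}: {count}")
```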
The Autopilot for everyone tab on the AI Trust Layer page allows you to manage Autopilot for everyone usage across your organization.
You can perform the following actions: