Automation Ops user guide
The AI Trust Layer policy governs the use of third-party AI models across products. The policy sets rules for your organization at the tenant, group, or user level.
Select the Product Toggles tab to enable or disable the use of third-party generative AI features at the product level.
Governance policies are cached for five minutes. This means that any modification made to a policy, including toggle changes, takes effect only after the five-minute cache period expires.
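The caching behavior above can be sketched with a simple time-based TTL cache. This is an illustrative model only, not UiPath's implementation; the class name, the `fetch` callback, and the injectable clock are all assumptions made for the example.

```python
import time

POLICY_CACHE_TTL = 300  # five minutes, in seconds


class PolicyCache:
    """Illustrative TTL cache: a policy read within the TTL returns the
    cached value, so a policy edit becomes visible only after the cached
    entry expires."""

    def __init__(self, ttl=POLICY_CACHE_TTL, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock
        self._entries = {}  # key -> (value, fetched_at)

    def get(self, key, fetch):
        now = self.clock()
        entry = self._entries.get(key)
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]   # still fresh: recent edits are not seen yet
        value = fetch(key)    # expired or missing: re-read the policy
        self._entries[key] = (value, now)
        return value
```

With a controllable clock, you can see that a toggle flipped mid-window is served from cache until the TTL elapses:

```python
t = [0.0]
cache = PolicyCache(clock=lambda: t[0])
store = {"ai_trust_layer": {"enable_calls": True}}

first = cache.get("ai_trust_layer", lambda k: dict(store[k]))
store["ai_trust_layer"]["enable_calls"] = False   # admin flips the toggle
second = cache.get("ai_trust_layer", lambda k: dict(store[k]))
# second still shows enable_calls == True: the edit is hidden by the cache
t[0] = 301.0                                      # five minutes pass
third = cache.get("ai_trust_layer", lambda k: dict(store[k]))
# third shows enable_calls == False: the edit is now in effect
```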
For the AI Trust Layer policy, the following settings are available:
- Enable calls to AI models through AI Trust Layer – By default, this option is enabled. When disabled, it impacts all products that use third-party generative AI models, with the exception of Communications Mining™, where you need to disable third-party generative AI features at the product level.
- Enable Coded Agents – By default, this option is set to Yes. When set to No, it disables LLM functionality for coded agents.
- Enable Autopilot for Everyone – By default, this option is set to Yes. When set to No, it prevents you from conversing with Autopilot for Everyone.
- Enable Test Manager features – By default, this option is set to Yes, to leverage the AI-powered testing capabilities of Test Manager. When set to No, it disables the following:
  - the ability to automatically generate test cases from a requirement;
  - the ability to generate concise insights on the test execution results.
- Enable UiPath GenAI activities – By default, this option is set to Yes. When set to No, it disables any calls to third-party AI models initiated by GenAI activities.
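The settings above can be summarized as a map of toggles with their defaults. The key names and the lookup helper below are illustrative only; they are not part of any UiPath API.

```python
# Illustrative defaults for the AI Trust Layer policy settings described
# above; every toggle defaults to enabled.
AI_TRUST_LAYER_DEFAULTS = {
    "enable_ai_model_calls": True,       # calls through AI Trust Layer
    "enable_coded_agents": True,
    "enable_autopilot_for_everyone": True,
    "enable_test_manager_features": True,
    "enable_genai_activities": True,
}


def is_feature_allowed(policy, feature):
    """Fall back to the default when a policy does not set the toggle."""
    return policy.get(feature, AI_TRUST_LAYER_DEFAULTS[feature])
```

For example, a policy that only sets `enable_coded_agents` to `False` still allows every other feature through the defaults.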
AI Trust Layer ensures that your data is never stored outside of UiPath®, nor is it used to train third-party models.
The Feature Toggles tab allows admins to control how GenAI interactions are handled within a policy, specifically around auditability and privacy protection.
Input/output saving for audit
This setting determines whether prompt inputs and LLM-generated outputs are saved and displayed in the Audit tab of the AI Trust Layer. By default, this toggle is set to Yes, meaning all GenAI interactions governed by the policy are stored for auditing purposes.
This setting is particularly important for organizations that need to demonstrate compliance or investigate model behavior over time.
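As a rough sketch of what the toggle gates, the helper below records an interaction event either with or without the prompt and response bodies. This is an assumption made for illustration (including the assumption that an event is still logged when saving is off); it is not UiPath's audit schema.

```python
import datetime


def build_audit_record(prompt, output, save_io):
    """Illustrative only: when input/output saving is disabled, the
    interaction event is still recorded (an assumption), but without
    the prompt and response bodies."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    if save_io:
        record["input"] = prompt    # shown in the Audit tab when saved
        record["output"] = output
    return record
```

With the toggle off, the resulting record contains no `input` or `output` fields, which is the privacy-protection side of the trade-off described above.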