Automation Cloud admin guide
The AI Trust Layer is UiPath’s governance and control framework for all generative AI activity across products and services. It provides centralized control, security, and observability for how both UiPath-managed and third-party LLMs are used.
AI Trust Layer ensures every generative AI request follows organization-wide security, privacy, and usage rules. It applies these policies consistently across products by acting as a shared control point for all LLM traffic.
The AI Trust Layer governs GenAI usage across the following:
- UiPath products and services such as Autopilot™, Studio, Test Manager, Process Mining, and more.
- UiPath-managed LLMs and third-party LLMs, including bring-your-own-LLM configurations.
- Model selection, access controls, data privacy, and usage tracking.
- LLM gateway: The LLM gateway is the interface between UiPath services and third-party LLM providers. It serves as the central entry point for all LLM traffic, applying governance and configuration logic before requests reach external models. Capabilities specific to the LLM gateway include:
  - Managing AI Trust Layer policies: Review existing AI Trust Layer policies and their deployments in Automation Ops > Governance.
  - LLM configuration: Integrate your existing AI subscriptions while keeping the governance framework provided by UiPath, either by replacing the UiPath LLM subscription or by bringing your own LLM.
- Context Grounding (also referred to as Enterprise Context Service, or ECS): Context Grounding enhances prompts with relevant context before they are sent to an LLM. You can use Context Grounding to create and manage the data indexes used by UiPath GenAI features and agents. Index management is available in Orchestrator.
- LLM observability: LLM observability provides monitoring, usage visibility, and audit logging for all LLM interactions across UiPath services. Capabilities specific to LLM observability include:
  - Usage summary: Tracks LLM usage per tenant, service, and model.
  - Audit logging: Captures details of LLM interactions for traceability, compliance, and debugging.
  - PII masking: Ensures personally identifiable information is pseudonymized before it reaches the LLMs used in generative AI features.
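To make the pseudonymization step concrete, the following is a minimal illustrative sketch in Python of how PII masking of this kind typically works: matched PII spans are replaced with stable placeholder tokens, and a mapping is kept so responses can be re-identified later. The patterns, token format, and function name are assumptions for illustration only, not the AI Trust Layer's actual implementation, which runs as a managed service.

```python
import re

# Hypothetical patterns; a production masker covers many more PII types.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
}

def pseudonymize(text: str) -> tuple[str, dict[str, str]]:
    """Replace each PII match with a token like <EMAIL_1> and return the
    masked text plus a token-to-original mapping for re-identification."""
    mapping: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        counter = 0
        def repl(match, label=label):
            nonlocal counter
            counter += 1
            token = f"<{label}_{counter}>"
            mapping[token] = match.group(0)
            return token
        text = pattern.sub(repl, text)
    return text, mapping

masked, mapping = pseudonymize("Contact jane.doe@example.com or +1 555 010 1234.")
# masked == "Contact <EMAIL_1> or <PHONE_1>."
```

Because only the placeholder tokens reach the model, the LLM never sees the raw identifiers, while the mapping allows the calling service to restore them in the model's response if needed.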
While the AI Trust Layer includes these sub-services, it also governs flows involving other services and components, including:
- UiPath generative AI features that rely on accessing LLMs, such as features provided by Autopilot™, IXP, Autopilot for Everyone, and Agents
- Specialized LLMs, such as CommPath and DocPath
- GenAI Connectors
These services interact with the AI Trust Layer, but may have their own configuration spaces. For example, Autopilot for Everyone is governed by the AI Trust Layer in terms of the LLMs it uses, but is managed from a separate tab at the organization and tenant levels.
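Conceptually, the components described above form a pipeline: the gateway enforces model-access policy, PII masking pseudonymizes the prompt, Context Grounding enriches it from a configured index, and observability records the interaction, before the request is forwarded to the provider. The sketch below illustrates that ordering only; the `Policy` fields, function names, and audit-log shape are assumptions for illustration, not UiPath APIs (real policies are configured in Automation Ops > Governance).

```python
from dataclasses import dataclass

@dataclass
class Policy:
    allowed_models: set      # models a tenant may call
    mask_pii: bool = True    # whether to run the PII-masking step
    context_index: str = ""  # Context Grounding index name, if any

def route_request(prompt: str, model: str, policy: Policy, audit_log: list) -> str:
    # 1. Gateway: enforce model-access policy before anything leaves the platform.
    if model not in policy.allowed_models:
        raise PermissionError(f"model {model!r} is blocked by policy")
    # 2. PII masking (placeholder substitution standing in for the real service).
    if policy.mask_pii:
        prompt = prompt.replace("jane@example.com", "<EMAIL_1>")
    # 3. Context Grounding: prepend context retrieved from the configured index.
    if policy.context_index:
        prompt = f"[context from index '{policy.context_index}']\n{prompt}"
    # 4. Observability: record the interaction for usage tracking and audit.
    audit_log.append({"model": model, "prompt": prompt})
    return prompt  # a real gateway would now forward this to the LLM provider
```

The key design point the pipeline captures is that policy enforcement and masking happen centrally, before any request reaches an external model, so every product that calls an LLM inherits the same rules.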