
Agent release notes
September 2025
September 25, 2025
Introducing file support in Agents
UiPath Agents now support native file handling through the new Analyze Attachments built-in tool. This enables agents to process files such as images and documents directly in their workflows.
With this capability, agents can accept files as input arguments and leverage LLMs to analyze their content. Based on natural language instructions, agents can extract information and interpret visual elements, returning structured, context-aware responses to the agent's context. The following file types are currently supported: GIF, JPE, JPEG, PDF, PNG, WEBP.
File support unlocks a variety of new use cases, including image comparison (e.g., spotting differences in marketing assets or product images) and signature verification (e.g., assisting in fraud detection by comparing scanned signatures), among many others.
This feature is currently in public preview.
For details, refer to Built-in tools.
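As a quick illustration of the supported-type list above, here is a minimal Python sketch of a pre-flight check you might run before attaching a file to an agent input argument. The helper name is hypothetical and this is not part of the UiPath API; only the extension list is taken from this note.

```python
# Illustrative pre-flight check (hypothetical helper, not a UiPath API):
# only attach files whose type the Analyze Attachments tool currently supports.
from pathlib import Path

SUPPORTED_EXTENSIONS = {".gif", ".jpe", ".jpeg", ".pdf", ".png", ".webp"}

def is_supported_attachment(file_path: str) -> bool:
    """Return True if the file extension is in the supported list."""
    return Path(file_path).suffix.lower() in SUPPORTED_EXTENSIONS

if __name__ == "__main__":
    for candidate in ("contract_scan.pdf", "logo_v2.webp", "notes.txt"):
        verdict = "supported" if is_supported_attachment(candidate) else "not supported"
        print(f"{candidate}: {verdict}")
```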
September 24, 2025
Runs and evaluations panel for agents
New bottom panel for agent design time in Studio Web
Use the bottom panel to test and debug while designing, view traces in real time, and evaluate agents with datasets created from your test runs or from purpose-built sets.
Fetching runtime traces into evaluation sets
You can now fetch runtime traces directly into evaluation sets, making it easy to turn production feedback into actionable test cases. After running an agent and reviewing traces in Orchestrator or the Agent Instance Management page, use the Fetch runtime traces option in Evaluations to pull those runs into a set. From there, you can edit the inputs and expected outputs, save them, and immediately use them for ongoing evaluation. Once added, these traces are clearly labeled as runtime runs, so you can distinguish them from design-time tests. They also contribute to your agent’s overall evaluation score, giving you instant visibility into how real-world feedback impacts performance. For details, refer to Evaluations.
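To make the scoring idea above concrete, the following Python sketch models an evaluation set that mixes design-time cases with fetched runtime runs and averages their results. All class and field names are hypothetical; this is an illustration of the concept, not UiPath's data model.

```python
# Conceptual sketch only: hypothetical names, not UiPath's schema.
from dataclasses import dataclass

@dataclass
class EvaluationCase:
    input_payload: dict      # agent inputs; editable after fetching a runtime trace
    expected_output: dict    # expected result; editable after fetching
    source: str              # "design-time" or "runtime", so origins stay distinguishable
    passed: bool             # outcome of the latest evaluation run

def overall_score(cases: list[EvaluationCase]) -> float:
    """Share of passing cases; runtime and design-time cases count equally."""
    return sum(case.passed for case in cases) / len(cases) if cases else 0.0

cases = [
    EvaluationCase({"invoice_id": 42}, {"status": "approved"}, "design-time", True),
    EvaluationCase({"invoice_id": 77}, {"status": "rejected"}, "runtime", False),
]
print(f"Overall evaluation score: {overall_score(cases):.0%}")
```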