
Agents release notes
September 2025
September 25, 2025
Introducing file support in Agents
UiPath Agents now support native file handling through the new Analyze Attachments built-in tool. This enables agents to process files, such as images and documents, directly in their workflows.
With this capability, agents can accept files as input arguments and leverage LLMs to analyze their content. Based on natural language instructions, agents can extract information and interpret visual elements, returning structured, context-aware responses into the agent's context. The following file types are currently supported: GIF, JPE, JPEG, PDF, PNG, WEBP.
File support unlocks a variety of new use cases, including image comparison (e.g., spotting differences in marketing assets or product images) and signature verification (e.g., assisting in fraud detection by comparing scanned signatures), among many others.
This feature is currently in public preview.
For details, refer to Built-in tools.
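To make the flow concrete, the sketch below models what an agent invocation with file attachments and its structured response could look like. The dictionary shapes, field names, and the comparison instruction are assumptions for illustration only, not the actual UiPath Agents payload schema.

```python
# Hypothetical illustration only: these dict shapes and field names are
# assumptions, not the actual UiPath Agents payload schema.

# An agent run that receives two image files as input arguments and a
# natural-language instruction for the Analyze Attachments tool.
agent_input = {
    "instruction": "Compare the two product images and list any visual differences.",
    "attachments": [
        {"name": "asset_v1.png", "content_type": "image/png"},
        {"name": "asset_v2.png", "content_type": "image/png"},
    ],
}

# A structured, context-aware response the agent could return into its context.
agent_output = {
    "differences": [
        {"region": "logo", "change": "color shifted from blue to teal"},
        {"region": "footer", "change": "legal text updated"},
    ],
    "confidence": 0.87,
}

if __name__ == "__main__":
    print(f"Analyzing {len(agent_input['attachments'])} attachments...")
    for diff in agent_output["differences"]:
        print(f"- {diff['region']}: {diff['change']}")
```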
September 24, 2025
Design runs and Evaluations panel for agents
New bottom panel for agent design time in Studio Web
Use the bottom panel to test and debug at design time, view live traces, and evaluate agents with datasets created from your test runs or purpose-built sets.
Fetch runtime traces into evaluation sets
You can now fetch runtime traces directly into evaluation sets, making it easy to turn production feedback into actionable test cases. After running an agent and reviewing traces in Orchestrator or the Agent Instance Management page, use the Fetch runtime traces option in Evaluations to pull those runs into a set. From there, you can edit the inputs and expected outputs, save them, and immediately use them for ongoing evaluation. Once added, these traces are clearly labeled as runtime runs, so you can distinguish them from design-time tests. They also contribute to your agent’s overall evaluation score, giving you instant visibility into how real-world feedback impacts performance. For details, refer to Evaluations.
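As a rough illustration of how a runtime trace could be turned into an evaluation case, the sketch below uses hypothetical field names (run_id, input, expected_output, source) that are assumptions for illustration, not the actual Evaluations data model.

```python
# Hypothetical sketch: field names and structure are assumptions for
# illustration, not the actual UiPath Evaluations data model.

# A trace fetched from a production run, before review.
runtime_trace = {
    "run_id": "run-0042",
    "input": {"invoice_number": "INV-1021", "amount": 450.00},
    "output": {"approved": False, "reason": "missing PO reference"},
}

# Turning that trace into an evaluation case: edit the input or expected
# output as needed, then label it as a runtime run so it can be told apart
# from design-time tests and contribute to the overall evaluation score.
evaluation_case = {
    "source": "runtime",                         # distinguishes it from design-time tests
    "input": runtime_trace["input"],
    "expected_output": runtime_trace["output"],  # adjust after review if the run was wrong
}

if __name__ == "__main__":
    print(f"Added evaluation case from {runtime_trace['run_id']} "
          f"(source={evaluation_case['source']})")
```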