
ScreenPlay user guide

Last updated November 24, 2025

Running and inspecting the execution results

After configuring your ScreenPlay prompt, take the following steps:

  1. Run the automation in Debug mode.
  2. Once the run completes, you can review how the agent interpreted and executed your prompt:
    • Go to Debug, then select Open Logs in Studio.
    • Open the ScreenPlay folder.
    • Inspect the most recent .html files.

Each HTML file contains an execution trace that details how the ScreenPlay agent reasoned about the prompt, identified UI targets, and executed each action. This trace provides visibility into the decision-making process and helps validate or troubleshoot ScreenPlay behavior.
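If you review traces often, a small helper can replace the manual navigation above. The following is a minimal sketch, assuming the ScreenPlay folder sits under the Studio execution logs directory; the exact path on your machine is the one that Open Logs in Studio reveals. It uses only the Python standard library to find and open the newest trace.

import os
import webbrowser
from pathlib import Path

# Folder that "Open Logs" reveals in Studio, plus the ScreenPlay subfolder.
# The path below is an assumption -- substitute the location shown on your machine.
screenplay_dir = Path(os.path.expandvars(r"%LOCALAPPDATA%\UiPath\Logs")) / "ScreenPlay"

# Sort the execution traces newest-first by modification time.
traces = sorted(
    screenplay_dir.glob("*.html"),
    key=lambda p: p.stat().st_mtime,
    reverse=True,
)

if traces:
    print(f"Found {len(traces)} trace file(s); opening the newest: {traces[0].name}")
    webbrowser.open(traces[0].resolve().as_uri())  # open in the default browser
else:
    print(f"No .html traces found in {screenplay_dir}")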


The ScreenPlay Execution Trace HTML file

The ScreenPlay Execution Trace HTML file provides a complete visual and diagnostic record of a ScreenPlay automation run. It captures prompt input, AI reasoning, on-screen actions, and timing metrics in a structured, interactive format.

You can use this file to inspect and validate how ScreenPlay interpreted a prompt, which UI elements were targeted, and how each automation step executed across the interface.

The file is generated automatically by default. You can change this setting in Project Settings, under UIAutomation Modern, then ScreenPlay, where you can also set the number of days that trace files are stored.

Enable trace files

Overview

When you run a ScreenPlay automation, the execution trace is automatically generated as an .html file. This file combines:

  • Natural language prompt data.
  • Step-by-step UI snapshots with bounding boxes.
  • Timing and token metrics.
  • Diagnostic sections for reasoning and errors.

You can open the HTML file in any modern web browser to review or share the complete execution sequence.

File structure

Each ScreenPlay trace file follows a consistent internal structure with the following top-level sections.

Section | Description
Header | Displays the prompt, trace ID, and timestamp of execution.
Overall Metrics (Grand Totals) | Summarizes total runtime, processing times, and token usage.
Player Container | Contains the visual replay component: screenshots, highlights, and step navigation.
Iteration Blocks | Each iteration (step) in the agent's reasoning or execution cycle. Includes screenshots, reasoning, and metrics.
Diagnostic Sections | Optional panels for AI reasoning, activity data, and error messages.

Header fields

The following table describes the header fields, with examples:

Field | Example | Description
Prompt | create a random RPA supplier | The natural language instruction that initiated the automation.
Trace ID | 3b97584d-7fc0-43f6-830b-fc45c21811b3 | A unique identifier for this execution trace, used for reference or comparison.
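If you want to index or compare runs in bulk, the header fields can also be read programmatically. Because the internal markup of the header is not documented here, the sketch below only extracts the trace ID by matching the GUID format shown in the example above; the file name is hypothetical.

import re
from pathlib import Path

# Hypothetical trace file path -- point this at one of the .html files in the ScreenPlay folder.
trace_path = Path("screenplay_trace.html")
html = trace_path.read_text(encoding="utf-8", errors="ignore")

# The trace ID in the header is a GUID (e.g. 3b97584d-7fc0-43f6-830b-fc45c21811b3),
# so a GUID pattern is enough to find it without knowing the exact markup.
guid_pattern = re.compile(
    r"\b[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}\b"
)
match = guid_pattern.search(html)
print("Trace ID:", match.group(0) if match else "not found")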

Overall metrics

The Grand Totals section summarizes key performance data from the execution.

The following table describes the overall metrics:

Metric | Description
Total duration | The total time taken to complete the execution, including perception, reasoning, and action phases.
Cache / DOM / Server / Actions (tooltip) | Breakdown of elapsed time in each subsystem (e.g., cached responses, DOM scanning, reasoning, and UI actions).
Total tokens | Tracks token input/output if the trace includes AI language model reasoning. Useful for debugging LLM usage.

Step frames

Each step frame, or execution step, represents one iteration of reasoning and action by the ScreenPlay agent.

The following table describes each element and its purpose:

Element | Purpose
Step number (data-step="1") | Identifies the sequence order of the step.
Screenshot | A captured image of the application window or desktop at the moment of execution.
Canvas coordinates (data-coordinates) | JSON-encoded bounding boxes for detected or interacted elements.
Iteration header | Displays the step title, a preview of the reasoning, and a duration summary.
Iteration content | Contains expanded detail, including reasoning text and any execution metadata.
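Because the step number and bounding boxes are exposed as data-step and data-coordinates attributes, they can be pulled out with a standard HTML parser. The following is a minimal sketch, assuming the JSON in data-coordinates parses directly and using a hypothetical file name.

import json
from html.parser import HTMLParser
from pathlib import Path

class StepFrameParser(HTMLParser):
    """Collects data-step / data-coordinates attributes from a ScreenPlay trace."""

    def __init__(self):
        super().__init__()
        self.steps = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "data-step" in attrs:
            coords_raw = attrs.get("data-coordinates")
            try:
                # data-coordinates holds JSON-encoded bounding boxes.
                coords = json.loads(coords_raw) if coords_raw else None
            except json.JSONDecodeError:
                coords = None
            self.steps.append({"step": attrs["data-step"], "coordinates": coords})

# Hypothetical path to a trace file.
parser = StepFrameParser()
parser.feed(Path("screenplay_trace.html").read_text(encoding="utf-8", errors="ignore"))

for frame in parser.steps:
    print(f"Step {frame['step']}: {frame['coordinates']}")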

Diagnostic sections

Each iteration may include one or more expandable sections, as described in the following table:

Section name | Purpose
Thinking | Displays AI reasoning and intent interpretation (if reasoning is enabled).
Step info | Shows contextual information about the targeted element, selector, or detected UI control.
Error | Indicates a failure, with visual cues in red; includes error messages, exception traces, or fallback actions.
Activity data | Displays structured execution data, including activity type, arguments, and targeted applications.

Screenshot viewer and player controls

ScreenPlay traces include a built-in player for navigating through each captured frame.

The following table describes each control.

Control | Description
Next / Previous buttons | Navigate between sequential steps.
Step range slider | Jump directly to a step in the sequence.
Toggle highlights | Overlay the bounding boxes defined in the data-coordinates field to visualize clicked or typed areas.
Screenshot container | Displays the rendered image, or a placeholder if no image is available.
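The Toggle highlights control renders these boxes inside the player, but you can reproduce the overlay outside the browser if you export a step screenshot and its bounding boxes. The following is a hedged sketch using the third-party Pillow library; the (x, y, width, height) coordinate format and the file names are assumptions, so adapt them to what your data-coordinates field actually contains.

from PIL import Image, ImageDraw

def draw_highlights(screenshot_path, boxes, output_path="highlighted.png"):
    """Overlay bounding boxes (x, y, width, height) on an exported step screenshot.

    The coordinate format is an assumption: adjust it to match the JSON in
    the data-coordinates field of your trace.
    """
    image = Image.open(screenshot_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    for x, y, w, h in boxes:
        # Red rectangles mimic the player's click/type highlight overlays.
        draw.rectangle([x, y, x + w, y + h], outline="red", width=3)
    image.save(output_path)
    return output_path

# Example usage with a hypothetical exported screenshot and one bounding box.
print(draw_highlights("step_1.png", [(120, 80, 200, 40)]))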

Visual overlays

Each step can include overlays to indicate UI interaction types, as follows:

  • Click – Rectangle drawn over a button or clickable area.
  • Type – Highlight around input fields.
  • Hover / Drag – Outline showing movement or cursor position.
  • Error marker – Red overlay on a failed element action.
