Perform dry run. Open a performance scenario and select Dry run.
Tip:
A dry run executes each load group with a single robot to validate automation stability or detect infrastructure misconfigurations. The dry run calculates the required resources before full execution.
Run a full execution. Open a performance scenario for which you already performed a dry run. Select Full execution. The execution screen opens automatically.
Monitor the dashboard in real time and check the execution status. The progress bar displays four sequential phases.
Loading test configuration - The system validates the scenario setup and loads the configuration details (test cases, load groups, thresholds, and data sources).
Provisioning resources - The required execution resources are allocated.
For cloud robots, this means provisioning serverless robots and consuming Platform Units.
For on-premises robots, this means verifying that the correct machines and runtimes are available.
Preparing virtual users - Virtual users are initialized based on the defined load group settings. This includes connecting robots, assigning test cases, and preparing the execution environment.
Full execution - The actual performance test runs according to the configured load profile (ramp-up, peak, ramp-down). Real-time monitoring of metrics (response times, error rates, infrastructure usage) becomes available at this stage.
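The configured load profile determines how many virtual users are active at each moment of the run. As a rough illustration, the ramp-up, peak, and ramp-down phases can be sketched like this (durations and user counts below are hypothetical values, not product defaults):

```python
# Hypothetical sketch of how a load profile maps elapsed time to active
# virtual users. All durations and counts are illustrative.

def active_virtual_users(elapsed_s: float,
                         ramp_up_s: float = 60.0,
                         peak_s: float = 300.0,
                         ramp_down_s: float = 60.0,
                         peak_users: int = 50) -> int:
    """Return the number of active virtual users at a point in the run."""
    if elapsed_s < 0:
        return 0
    if elapsed_s < ramp_up_s:            # ramp-up: users increase linearly
        return round(peak_users * elapsed_s / ramp_up_s)
    if elapsed_s < ramp_up_s + peak_s:   # peak: constant load
        return peak_users
    end = ramp_up_s + peak_s + ramp_down_s
    if elapsed_s < end:                  # ramp-down: users decrease linearly
        return round(peak_users * (end - elapsed_s) / ramp_down_s)
    return 0

print(active_virtual_users(30))    # halfway through ramp-up -> 25
print(active_virtual_users(200))   # during peak -> 50
```

Real-time metrics collected during the full execution phase are plotted against this same timeline, so dips or spikes can be correlated with the load level at that moment.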
Consult the execution overview. The dashboard shows the summary of a performance test execution.
Load groups: Active load groups currently executing in parallel.
Virtual users: Currently active virtual users for the entire scenario.
Errors: Errors that have occurred so far during the run (HTTP and automation errors), across all groups.
Average response time: Average and maximum response times detected across all groups.
Graph: Load profile with a visual representation of the progress.
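The overview values above roll up per-load-group numbers into scenario-wide totals. A minimal sketch of that aggregation, using invented field names and sample data (not the product's actual data model):

```python
# Illustrative roll-up of per-load-group stats into the dashboard summary.
# Group names, field names, and numbers are assumptions for the sketch.

groups = [
    {"name": "checkout", "virtual_users": 20, "errors": 3, "avg_ms": 850, "max_ms": 1900},
    {"name": "search",   "virtual_users": 30, "errors": 1, "avg_ms": 420, "max_ms": 1100},
]

total_users = sum(g["virtual_users"] for g in groups)
total_errors = sum(g["errors"] for g in groups)
# Weight each group's average by its virtual users for a scenario-wide average.
overall_avg = sum(g["avg_ms"] * g["virtual_users"] for g in groups) / total_users
overall_max = max(g["max_ms"] for g in groups)

print(total_users, total_errors, round(overall_avg), overall_max)
```

A user-weighted average is used here so that larger load groups contribute proportionally more to the overall figure; a simple mean of group averages would over-represent small groups.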
Consult the metrics. The histogram represents the overall average response time for the currently selected load group. You can resize and move the highlighted bar to zoom into a specific time range. Several charts are also provided.
The Load Profile chart section shows how many virtual users were active at a given time. This reflects the configured ramp-up, peak, and ramp-down phases.
The HTTP Response Time (ms) chart section tracks the average response time of HTTP requests over the selected period. Compare against thresholds (e.g., 1,000 ms) to see where performance degrades.
The HTTP Errors chart section displays the percentage of HTTP-level errors (e.g., 404, 503). This helps identify if server or network issues are causing instability.
The Automation Step Duration (ms) chart section measures how long individual automation steps take to execute. Spikes may indicate inefficiencies or issues in the automation design.
The Automation Errors (%) chart section shows the percentage of automation-level errors (e.g., failed selectors, exceptions). This helps differentiate system errors from automation issues.
The Infrastructure – Executing Robots CPU (%) chart section monitors CPU usage of the robots executing the load. High or sustained CPU usage can indicate a resource bottleneck.
The Infrastructure – Executing Robots Memory (%) chart section tracks memory consumption of executing robots. This is useful for spotting memory leaks or excessive usage over time.
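When comparing the HTTP Response Time chart against a threshold such as 1,000 ms, you are effectively looking for sample windows where the average exceeds the configured limit. A hedged sketch of that check, over illustrative sample data:

```python
# Sketch: flag chart samples where the average HTTP response time breaches
# a configured threshold (e.g., 1,000 ms). Sample data is illustrative.

THRESHOLD_MS = 1000

# (timestamp_s, avg_response_ms) pairs as they might appear on the chart
samples = [(0, 420), (30, 760), (60, 1150), (90, 1320), (120, 640)]

breaches = [(t, ms) for t, ms in samples if ms > THRESHOLD_MS]
for t, ms in breaches:
    print(f"threshold breach at t={t}s: {ms} ms > {THRESHOLD_MS} ms")
```

Clusters of consecutive breaches usually matter more than isolated spikes: sustained degradation during the peak phase points to a capacity limit rather than transient noise.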
Percentile metrics such as P50, P90, and P95 are shown to help you understand the distribution of response times and identify outliers that may impact user experience. These are available for metrics like HTTP Response Time, HTTP Errors, Automation Step Duration, and Automation Errors.
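To see what a percentile means in practice: P90 is the value below which roughly 90% of samples fall, so a P90 far above the average signals a heavy tail of slow requests. A minimal nearest-rank computation over illustrative response times (the product computes these for you):

```python
# Sketch of nearest-rank percentiles (P50/P90/P95) over response-time samples.
# The sample values are illustrative only.
import math

def percentile(values, p):
    """Smallest value with at least p% of the samples at or below it."""
    ordered = sorted(values)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

response_times_ms = [120, 130, 150, 160, 180, 200, 240, 300, 450, 900]
for p in (50, 90, 95):
    print(f"P{p} = {percentile(response_times_ms, p)} ms")
```

Note how the single 900 ms outlier leaves P50 untouched but dominates P95, which is why percentiles expose tail latency that averages hide.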
Monitor issues during execution. Check the application log and its severity levels on the right side of the execution screen.
Info – general information, such as resource allocation
Warning – threshold breaches or potential risk conditions