- Getting started
- Project management
- Documents
- Working with change impact analysis
- Creating test cases
- Assigning test cases to requirements
- Cloning test cases
- Exporting test cases
- Linking test cases in Studio to Test Manager
- Delete test cases
- Manual test cases
- Importing manual test cases
- Documenting test cases with Task Capture
- Parameters
- Applying filters and views
- Importing test sets from Orchestrator
- Creating test sets
- Adding test cases to a test set
- Assigning default users in test set execution
- Enabling activity coverage
- Enabling Healing Agent
- Configuring test sets for specific execution folders and robots
- Overriding parameters
- Cloning test sets
- Exporting test sets
- Applying filters and views
- Executing manual tests
- Executing automated tests
- Running test cases without a test set
- Running mixed tests
- Creating pending executions
- Applying an execution order
- Re-running test executions
- Scheduling executions
- Troubleshooting automated executions
- FAQ: feature parity: Test Manager versus Orchestrator
- Accessibility testing for Test Cloud
- Searching with Autopilot
- Project operations and utilities
- Test Manager settings
- Tenant-level settings
- User and group access management
- Autopilot Search
- Custom fields
- Prompt library
- General project settings
- Automation project settings
- My notifications
- Customer-managed key encryption
- Audit logs
- ALM tool integration
- API integration
- Troubleshooting

Test Manager user guide
Use Autopilot Search directly inside the chat by typing full, partial, or fuzzy terms for what you are looking for. The agent retrieves all matching objects. If you make a typo, the chat auto-corrects common mistakes.
After results appear, you can expand them or open the filtered view in the artifact table, without leaving the chat.
Query examples:
- Find all requirements where custom field Sprint is set to 123.
- Find all the test cases that failed in the last 5 days.
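To illustrate the matching behavior described above (full, partial, and fuzzy terms with typo tolerance), here is a minimal local sketch using Python's standard-library `difflib`. The artifact names and the matching logic are assumptions for illustration, not Test Manager's actual server-side search:

```python
import difflib

# Hypothetical artifact names; Test Manager's real search index is server-side.
artifacts = [
    "Submit Loan Application",
    "Validate Loan Amount",
    "Reject Incomplete Application",
]

def fuzzy_search(term, names, cutoff=0.6):
    """Match a full, partial, or misspelled term against artifact names."""
    term = term.lower()
    results = []
    for name in names:
        words = name.lower().split()
        # Exact/partial hit, or a close match against any word (typo tolerance).
        if term in name.lower() or difflib.get_close_matches(term, words, n=1, cutoff=cutoff):
            results.append(name)
    return results

print(fuzzy_search("Laon", artifacts))  # the typo "Laon" still matches the loan artifacts
```

The `cutoff` parameter controls how aggressively misspellings are forgiven; a similar trade-off presumably governs the chat's auto-correction.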
Use the Autopilot Charts functionality to generate a visual representation of your data in the form of bar charts, line charts, and pie charts.
Query examples:
- Show me attachment distribution by file types.
- How many requirements are fully, partially, or not tested?
- Show failed test cases grouped by requirement.
- Show failed tests without linked defects grouped by requirements.
- Show test case distribution by label.
- Show distribution of requirements with custom field 'Sprint'.
- Which test sets take the longest to execute?
- Show me trends of test cases that haven’t been executed recently.
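As a sketch of what one of these chart queries computes, the aggregation behind "Show test case distribution by label" can be reproduced locally over exported data. The record shape below is an assumption for illustration, not Test Manager's actual export format:

```python
from collections import Counter

# Hypothetical exported test cases; the field names are illustrative only.
test_cases = [
    {"name": "TC-1", "labels": ["smoke"], "last_result": "Failed"},
    {"name": "TC-2", "labels": ["smoke", "regression"], "last_result": "Passed"},
    {"name": "TC-3", "labels": ["regression"], "last_result": "Failed"},
]

def distribution_by_label(cases):
    """Aggregate counts per label, as a bar chart would display them."""
    counts = Counter()
    for case in cases:
        counts.update(case["labels"])
    return dict(counts)

print(distribution_by_label(test_cases))  # → {'smoke': 2, 'regression': 2}
```

A plotting library could then render these counts as the bar, line, or pie charts that Autopilot Charts produces inline.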
Use the Evaluate Quality functionality to assess the clarity, completeness, and testability of requirements, ensuring higher-quality inputs before test design begins.
To invoke a requirement evaluation from Autopilot Chat, reference the requirement by name or ID. Autopilot Chat understands the context, opens the Requirement Evaluation interface, and displays a detailed analysis with improvement suggestions.
After the requirement evaluation is completed, review the outcome, check the identified issues, and refine the requirement without leaving the chat.
Query examples:
- Evaluate the quality of the ‘Submit Loan’ requirement.
- Evaluate the quality of UIB:24.
- Enter a query indicating which requirement needs to be evaluated.
Figure 3. Autopilot Chat - Evaluate Requirement query
- Select Configure and edit the fields: add documents or select which documents to include in the analysis, add or edit a prompt, select an AI model, and then either Accept or Reject the operation.
Figure 4. Autopilot Chat - Evaluate Requirement configuration
- If you select Reject, the operation is not performed. If you select Accept, Autopilot works behind the scenes to provide the results.
Figure 5. Autopilot Chat - Evaluate Requirements result review
- (Optional) Enter a query about the most immediate action that needs to be performed next.
Use the Generate Test Cases functionality to automatically create high-quality, structured test cases based on requirement details, user documents, RAG, or user prompts.
To invoke Generate Test Cases from Autopilot Chat, reference a requirement by name or ID. Autopilot Chat interprets your intent, identifies the relevant requirement, and launches the Generate Test Cases tool where you can provide more context (labels, custom fields).
After generation completes, review the generated test cases.
Query examples:
- Generate test cases for the ‘Submit Loan’ requirement.
- Generate test cases for UIB:24.
- Enter a query that generates test cases for a specific requirement.
Figure 6. Autopilot Chat - Test Generation query
- Select Configure and edit the fields: add documents or select which documents to include in the analysis, add or edit a prompt, select an AI model, and then either Accept or Reject the operation.
- If you select Reject, the operation is not performed. If you select Accept, review the results.
Figure 7. Autopilot Chat - Generate Test result review
Ask natural-language questions to learn how to use Test Manager. Autopilot Chat retrieves the information directly from the official documentation, complete with source links for further reading.
This makes onboarding new teams much faster and eliminates the need to switch between documentation and product screens.
Query examples:
- How do I create a requirement?
- How do I execute a test set?
Use the Find Obsolete Tests functionality to maintain a clean, up-to-date test repository by automatically identifying outdated or redundant test cases linked to requirements.
To invoke Find Obsolete Tests from Autopilot Chat, reference a requirement by name or ID. Autopilot Chat interprets your intent, identifies the relevant requirement, and launches the Find Obsolete Tests tool, where you can provide more context.
Autopilot Chat analyzes the relationship between requirements and their associated test cases to detect obsolete test cases caused by:
- Updated or deprecated requirements
- Redundant coverage of the same functionality
- Outdated test environments or dependencies
- Misaligned or unsupported test steps
Finding obsolete test cases allows testers to focus only on relevant, executable test assets, improving both test accuracy and maintenance efficiency.
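Of the signals listed above, redundant coverage of the same functionality is the easiest to approximate outside the product. A minimal sketch follows, flagging pairs of test cases whose step descriptions are near-duplicates; the test data, field shape, and similarity threshold are assumptions, not Autopilot's actual analysis:

```python
import difflib
from itertools import combinations

# Hypothetical test cases linked to one requirement; the steps are illustrative.
tests = {
    "TC-10": "Open loan form. Enter amount. Submit. Verify confirmation.",
    "TC-11": "Open loan form. Enter amount. Submit. Check confirmation.",
    "TC-12": "Open loan form. Enter invalid amount. Verify error message.",
}

def redundant_pairs(cases, threshold=0.9):
    """Flag pairs of test cases whose steps are near-duplicates,
    one signal of redundant coverage of the same functionality."""
    pairs = []
    for (a, sa), (b, sb) in combinations(cases.items(), 2):
        ratio = difflib.SequenceMatcher(None, sa, sb).ratio()
        if ratio >= threshold:
            pairs.append((a, b, round(ratio, 2)))
    return pairs

print(redundant_pairs(tests))  # TC-10 and TC-11 cover the same flow
```

A human (or the tool) would then decide which of the flagged near-duplicates to keep and which to retire.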
When a query falls outside the scope of documentation or Autopilot Search, Autopilot Chat automatically switches to web search mode, but only within the context of Test Automation.
Autopilot Chat searches trusted public sources for topics related to testing frameworks, automation strategies, or QA methodologies, and then summarizes the most relevant insights.
Users receive contextual, automation-focused guidance while staying within the Test Manager domain.
Query examples:
- What are the best practices for writing automated regression tests?
- How should I design data-driven test cases in automation?