Validation
The Validation page shows users detailed information on the performance of their model, for both labels and entities.
In the 'Labels' tab users can see their overall label Model Rating, including a detailed breakdown of the factors that make up their rating, and other metrics on their dataset and the performance of individual labels.
In the 'Entities' tab, users can see statistics on the performance of entity predictions for all of the entities enabled in the dataset.
Labels
The 'Factors' tab (as shown above) shows:
- The four key factors that contribute to the Model Rating: balance, coverage, average label performance, and the performance of the worst-performing labels
- A score for each factor, with a breakdown of the contributors to that score
- Clickable recommended next-best actions to improve each factor's score
The 'Metrics' tab (as shown below) shows:
- The training set size – i.e. the number of messages on which the model was trained
- The test set size – i.e. the number of messages on which the model was evaluated
- Number of labels – i.e. the total number of labels in your taxonomy
- Mean precision at recall – a graph showing the average precision at a given recall value across all labels
- Mean average precision – a statistic showing the average precision across all labels
- A chart plotting, for each label, its average precision against its training set size
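As a rough illustration of the 'Mean average precision' statistic above, the sketch below computes average precision per label from predictions ranked by model confidence, then averages across labels. This is a simplified, hypothetical example, not the platform's implementation, and all ground-truth values are invented.

```python
# Simplified, hypothetical sketch of average precision (AP) per label
# and the mean across labels. Invented example data throughout.

def average_precision(ranked_truths):
    """AP for one label, given ground truth for predictions sorted by
    descending model confidence."""
    total_positives = sum(ranked_truths)
    ap = 0.0
    true_positives = 0
    for rank, is_positive in enumerate(ranked_truths, start=1):
        if is_positive:
            true_positives += 1
            ap += true_positives / rank  # precision at each recall step
    return ap / total_positives

# One ranked ground-truth list per label (invented values)
label_results = [
    [True, True, False, True],   # label A
    [True, False, False, True],  # label B
]
mean_average_precision = sum(map(average_precision, label_results)) / len(label_results)
print(f"Mean average precision: {mean_average_precision:.2f}")  # → 0.83
```

A label whose true examples sit near the top of the confidence ranking scores close to 1.0; true examples buried low in the ranking pull its average precision down.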
The Validation page also allows users to select individual labels from their taxonomy and drill down into their performance.
After selecting a label, users can see its average precision, as well as its precision vs. recall at a given confidence threshold (which users can adjust).
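To sketch the effect of adjusting the confidence threshold described above: raising it generally increases precision while reducing recall, because fewer but surer predictions are accepted. The example below is an illustrative toy, not the platform's code; the confidence scores and ground truth are invented.

```python
# Toy illustration (invented data, not the platform's implementation) of
# how the confidence threshold trades precision against recall.

# (model confidence, whether the label truly applies)
predictions = [
    (0.95, True), (0.90, True), (0.80, False), (0.70, True),
    (0.60, False), (0.40, True), (0.30, False), (0.20, False),
]
total_positives = sum(1 for _, truth in predictions if truth)

def precision_recall(threshold):
    # Keep only predictions at or above the threshold, then score them.
    accepted = [truth for conf, truth in predictions if conf >= threshold]
    true_positives = sum(accepted)
    precision = true_positives / len(accepted) if accepted else 1.0
    recall = true_positives / total_positives
    return precision, recall

for threshold in (0.50, 0.75):
    p, r = precision_recall(threshold)
    print(f"threshold={threshold:.2f}: precision={p:.2f}, recall={r:.2f}")
# threshold=0.50: precision=0.60, recall=0.75
# threshold=0.75: precision=0.67, recall=0.50
```

In this toy data, moving the threshold from 0.50 to 0.75 drops two false positives but also loses a true one, so precision rises while recall falls.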
To understand more about how Validation works for labels and how to use it, see here.
Entities
The 'Entities' tab (as shown above) shows:
- The number of entities in the train set – i.e. the number of annotated entities on which the validation model was trained
- The number of entities in the test set – i.e. the number of annotated entities on which the validation model was evaluated
- The number of messages in the train set – i.e. the number of messages that have annotated entities in the train set
- The number of messages in the test set – i.e. the number of messages that have annotated entities in the test set
- Average precision – the average precision score across all entities
- Average recall – the average recall score across all entities
- Average F1 score – the average F1 score across all entities (the F1 score is the harmonic mean of precision and recall, and weights them equally)
- The same statistics but for each individual entity
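The harmonic-mean relationship between precision, recall and F1 described above can be sketched as follows; the precision and recall values are invented for illustration.

```python
# Minimal sketch of the F1 score as the harmonic mean of precision and
# recall. Example values are invented for illustration.

def f1_score(precision, recall):
    # The harmonic mean weights both equally and is dominated by the
    # lower of the two values.
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f"{f1_score(0.90, 0.60):.2f}")  # → 0.72
```

Note that the result (0.72) sits below the arithmetic mean (0.75): high precision cannot fully compensate for lower recall, which is why F1 is a useful single summary of entity performance.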
To understand more about how Validation works for entities and how to use it, see here.