Validation
The Validation page shows users detailed information on the performance of their model, for both labels and general fields.
In the 'Labels' tab, users can see their overall label Model Rating, a detailed breakdown of the factors that make up that rating, and other metrics on the dataset and the performance of individual labels.
In the 'General fields' tab, users can see statistics on the performance of the general field predictions for each of the general fields enabled in the dataset.
Labels
The 'Factors' tab shows:
- The four key factors that contribute to the Model Rating: balance, coverage, average label performance, and the performance of the worst-performing labels
- A score for each factor, plus a breakdown of the contributors to that score
- Clickable recommended next-best actions to improve each factor's score
The 'Metrics' tab shows:
- The training set size – i.e. the number of messages on which the model was trained
- The test set size – i.e. the number of messages on which the model was evaluated
- Number of labels – i.e. the total number of labels in your taxonomy
- Mean precision at recall – a graph showing the average precision at a given recall value across all labels
- Mean average precision – a statistic showing the average precision across all labels (see the sketch after this list)
- A chart showing, for each label, its average precision plotted against its training set size
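As a rough illustration of how average precision and mean average precision relate, the sketch below computes a per-label average precision and then takes the unweighted mean across labels. It uses scikit-learn's average_precision_score; the label names, ground-truth arrays, and confidence scores are entirely hypothetical, and the platform's own computation may differ in detail.

```python
import numpy as np
from sklearn.metrics import average_precision_score

# For each (hypothetical) label: ground truth for the test-set messages
# (1 = label applies) and the model's predicted confidence per message.
test_set = {
    "Request": (np.array([1, 0, 1, 1, 0]), np.array([0.92, 0.40, 0.75, 0.66, 0.10])),
    "Complaint": (np.array([0, 1, 0, 0, 1]), np.array([0.05, 0.88, 0.30, 0.12, 0.71])),
}

# Average precision summarises the whole precision-recall curve for one label.
per_label_ap = {
    label: average_precision_score(y_true, y_score)
    for label, (y_true, y_score) in test_set.items()
}

# Mean average precision is the unweighted mean across all labels.
mean_ap = np.mean(list(per_label_ap.values()))
print(per_label_ap, round(mean_ap, 3))
```

Because average precision summarises performance across all confidence thresholds, mean average precision is a useful single-number summary that does not depend on any particular threshold choice.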
The Validation page also allows users to select individual labels from their taxonomy to drill down into their performance.
After selecting a label, users can see its average precision, as well as its precision and recall at a given confidence threshold, which they can adjust themselves. The sketch below illustrates this trade-off.
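To make the threshold trade-off concrete, here is a minimal, self-contained sketch in plain Python. The ground-truth labels and confidence scores are invented for illustration; this is not how the platform computes these figures internally.

```python
import numpy as np

# Hypothetical ground truth (1 = label truly applies) and model confidences
y_true = np.array([1, 0, 1, 1, 0, 1])
y_score = np.array([0.95, 0.52, 0.58, 0.81, 0.30, 0.45])

def precision_recall_at(threshold):
    """Precision and recall if we accept every prediction at or above `threshold`."""
    predicted = y_score >= threshold
    tp = np.sum(predicted & (y_true == 1))   # correct positive predictions
    fp = np.sum(predicted & (y_true == 0))   # incorrect positive predictions
    fn = np.sum(~predicted & (y_true == 1))  # true instances the model missed
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Raising the threshold typically trades recall for precision
for t in (0.4, 0.6, 0.8):
    p, r = precision_recall_at(t)
    print(f"threshold={t}: precision={p:.2f}, recall={r:.2f}")
```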
To understand more about how Validation for labels actually works and how to use it, see here.
General Fields
The 'General fields' tab shows:
- The number of general fields in the train set – i.e. the number of annotated general fields on which the validation model was trained
- The number of general fields in the test set – i.e. the number of annotated general fields on which the validation model was evaluated
- The number of messages in the train set – i.e. the number of messages that have annotated general fields in the train set
- The number of messages in the test set – i.e. the number of messages that have annotated general fields in the test set
- Average precision – the average precision score across all general fields
- Average recall – the average recall score across all general fields
- Average F1 score – the average F1 score across all general fields (the F1 score is the harmonic mean of precision and recall, weighting them equally; see the worked example after this list)
- The same statistics, broken down for each individual general field
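To illustrate why the harmonic mean is used rather than a simple average: with a hypothetical precision of 0.9 and recall of 0.5, the F1 score comes out noticeably below the arithmetic mean of 0.7, because the harmonic mean penalises imbalance between the two:

```latex
F_1 = \frac{2 \cdot \text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}
    = \frac{2 \times 0.9 \times 0.5}{0.9 + 0.5} = \frac{0.9}{1.4} \approx 0.64
```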
To understand more about how Validation for general fields actually works and how to use it, see here.