Communications Mining User Guide
Training using Check label and Missed label
User permissions required: 'View Sources' AND 'Review and annotate'.
Previously, the 'Teach' function, when filtered to reviewed messages, would show messages where the platform thought that the selected label may have been either misapplied or missed. 'Check label' and 'Missed label' split these into two separate views: 'Check label' displays messages where the label was potentially misapplied, and 'Missed label' shows messages that may be missing the selected label.
Introduction to using 'Check label' and 'Missed label'
Using the 'Check label' and 'Missed label' training modes is the part of the Refine phase where you try to identify any inconsistencies or missed labels in the messages that have already been reviewed. This differs from the 'Teach label' step, which focuses on unreviewed messages that have predictions made by the platform, rather than assigned labels.
'Check label' shows you messages where the platform thinks the selected label may have been misapplied - i.e. it potentially should not have been applied.
'Missed label' shows you messages that the platform thinks may be missing the selected label - i.e. it potentially should have been applied but wasn't. Here the selected label will typically appear as a suggestion, as shown in the image below.
The suggestions from the platform in either mode are not necessarily correct; they are simply the instances where the platform is unsure, based on the training completed so far. If you disagree with a suggestion after reviewing it, you can ignore it.
Using these training modes is a very effective way of finding occurrences where labels may not have been applied consistently. Correcting these occurrences improves the performance of the label.
When to use 'Check label' and 'Missed label'
The simplest answer is to use either training mode when it appears as one of the 'Recommended actions' in the Model Rating section or in the specific label view of the Validation page (see here).
As a rule of thumb, any label that has a significant number of pinned examples but low average precision (which can be indicated by red label warnings in the Validation page or in the label filter bars) will likely benefit from some corrective training in either 'Check label' or 'Missed label' mode.
When validating the performance of a model, the platform will determine whether it thinks a label has often been applied incorrectly, or where it thinks it's been regularly missed, and will prioritise whichever corrective action it thinks would be most beneficial for improving a label's performance.
'Missed label' is also a very useful tool if you've added a new label to an existing taxonomy with lots of reviewed examples. Once you've provided some initial examples for the new label concept, 'Missed label' can quickly help you identify any examples in the previously reviewed messages where it should also apply. See here for more detail.
How to use 'Check label' and 'Missed label'
To reach either of these training modes, there are two main options:
- If it is a recommended action in Validation for a label, the action card acts as a link that takes you directly to that training mode for the selected label
- Alternatively, you can select either training mode from the dropdown menu at the top of the page in Explore, and then select a label to sort by (see image above for example)
Please Note: You must first select a label before either 'Check label' or 'Missed label' will appear in the dropdown menu. Both of these modes also disable the ability to filter between reviewed and unreviewed messages, as they apply exclusively to reviewed messages.
In each mode, the platform will show you up to 20 examples per page of reviewed messages where it thinks the selected label may have been applied incorrectly ('Check label') or may be missing the selected label ('Missed label').
'Check label'
In 'Check label', review each of the examples on the page to confirm that they are genuine examples of the selected label. If they are, move on without taking action. If they are not, remove the label (by clicking the 'X' when hovering over it) and ensure you apply the correct label(s) instead.
Review as many pages of reviewed messages as necessary to identify any inconsistencies in the reviewed set and improve the model's understanding of the label. Correcting labels that were added in error can have a major impact on a label's performance, by ensuring that the model has correct and consistent examples from which to make predictions for that label.
'Missed label'
In 'Missed label', review each of the examples on the page to see whether the selected label has in fact been missed. If it has, click the label suggestion (as shown in the image above) to apply the label. If it has not, ignore the suggestion and move on.
Just because the platform 'suggests' a label on a reviewed message does not mean the model considers it to be a prediction, nor will the suggestion count towards any statistics on the number of labels in a dataset. If a suggestion is wrong, you can simply ignore it.
Review as many pages of reviewed messages as necessary to identify any examples in the reviewed set that should have the selected label but do not. Partially annotated messages can be very detrimental to the model's ability to predict a label: when you do not apply a label to a message, you essentially tell the model 'this is not an example of this label concept'. If it is in fact a correct example, this can be very confusing for the model, particularly if there are other very similar examples that DO have the label applied.
Adding labels that have been missed can therefore have a major impact on the performance of a label, by ensuring that the model has correct and consistent examples from which to make predictions for that label.
Once the model has had time to retrain after your corrective training in these modes, you can check back in Validation to see the positive impact your actions have had on the Model Rating and the performance of the specific labels you've trained.