Adding new labels to existing taxonomies
User permissions required: 'View Sources' AND 'Review and annotate'.
If you have a pre-existing, mature taxonomy with many reviewed messages, adding a new label requires some additional training to bring it in line with the rest of the labels in the taxonomy.
When adding a new label to a well-trained taxonomy, make sure you apply it to any previously reviewed messages where it is relevant.
If you do not, the model will effectively have been taught that the new label should not apply to those messages, and it will struggle to predict the new label confidently.
The more reviewed examples there are in the dataset, the more training adding a new label will require (unless the label covers an entirely new concept that appears only in recent data and not in older data).
Key steps:
Create the new label when you find an example where it should apply
Find other examples where it should apply using a few different methods:
- You can search for key terms or phrases using the search function in Discover to find similar instances - this way you can apply the label in bulk if there are lots of similar examples in the search results
- Alternatively, you can search for key terms or phrases in Explore - this is potentially a better method, as you can filter to 'Reviewed' messages, and searching in Explore returns an approximate count of the messages that match your search terms
- You can also select labels that you think will often appear alongside your new label, and review the pinned examples for those labels to find messages where your new label should be applied
- Once you have a few pinned examples, check whether the new label starts to be predicted in 'Label' mode - if it does, add more examples using this mode
- Lastly, if you're annotating in a sentiment-enabled dataset and your new label is typically either positive or negative, you can also filter by positive or negative sentiment when looking at reviewed examples (though at present you cannot combine a text search with both the 'Reviewed' filter and a sentiment filter)
Then use 'Missed label' to find more messages where the platform thinks the new label should have been applied:
- Once you have annotated quite a few examples using the methods above and the model has had time to retrain, use the 'Missed label' functionality in Explore by selecting your label and then selecting 'Missed label' from the dropdown menu
- This shows you reviewed messages where the model thinks the selected label may have been missed during previous review
- In these instances, the model shows the label as a suggestion
- Apply the label to every message where the model's suggestion is correct
- Keep training on this page until you have annotated all of the correct examples and this mode no longer shows you messages where the label should actually apply
Finally, check how the new label performs on the Validation page (once the model has had time to retrain and calculate new validation statistics) and see whether more training is required.