Training using Teach label (Refine)
User permissions required: 'View Sources' AND 'Review and annotate'
For more details, check the Check label and Missed label page.
If the platform is struggling to predict a label accurately, and you're happy that the examples already pinned for it are consistent (as discussed in the previous article), then it most likely needs more varied (and consistent) training examples for that label.
The platform typically suggests this mode as a recommended action for the labels that would benefit from it most, both under the Model Rating factors and in the recommended actions for specific labels that you can select in Validation.
The best way to train the platform on the instances where it struggles to predict whether a label applies is to use 'Teach' mode on unreviewed messages.
As this mode shows you predictions for a label with confidence scores ranging outwards from 50% (or 66% in the case of a sentiment-enabled dataset), accepting or correcting these predictions sends much more powerful training signals to the model than if you were to accept predictions with confidence scores of 90% or more. In this way, you can quickly improve the performance of a label by providing varied training examples that the platform was previously unsure about.
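Conceptually, this works like uncertainty sampling in active learning: the messages whose predicted confidence sits closest to the decision boundary are the ones where a confirmation or correction teaches the model the most. The sketch below is purely illustrative and is not the platform's implementation; the message structure, the 'predictions' field, and the 'rank_for_teach' helper are hypothetical.

```python
# Illustrative only: a minimal uncertainty-sampling sketch of the idea behind
# Teach mode. Field names and data structures are assumptions, not the
# platform's actual API or implementation.

def rank_for_teach(unreviewed_messages, label, decision_boundary=0.5):
    """Order unreviewed messages so that predictions closest to the decision
    boundary (the ones the model is least sure about) come first.

    For a sentiment-enabled dataset, the boundary would be 0.66 instead of 0.5.
    """
    def uncertainty(message):
        # 'predictions' is a hypothetical mapping of label name -> confidence.
        confidence = message["predictions"].get(label, 0.0)
        return abs(confidence - decision_boundary)

    return sorted(unreviewed_messages, key=uncertainty)


# Example usage with made-up data:
messages = [
    {"id": 1, "predictions": {"Complaint": 0.93}},
    {"id": 2, "predictions": {"Complaint": 0.52}},
    {"id": 3, "predictions": {"Complaint": 0.31}},
]

# Message 2 (52% confidence) is surfaced first: confirming or correcting it
# gives the model far more new information than re-confirming the 93%
# prediction on message 1.
for m in rank_for_teach(messages, "Complaint"):
    print(m["id"], m["predictions"]["Complaint"])
```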
The actual process of annotating in this mode is discussed in the Explore phase.