Reviewing label predictions
User permissions required: 'View Sources' AND 'Review and annotate'.
After the Discover phase, the model will start making predictions for many of the labels in your taxonomy.
The purpose of the Explore phase is to review the model's predictions for each label, confirming them where they are correct and correcting them where they are not. This provides many more training examples for the model.
There are two key actions when reviewing label predictions:
- Where the predictions are correct, you should confirm/accept them by simply clicking on them
- Where they are incorrect, you should either dismiss/ignore them or add the correct label(s) that do apply. To add a different label, click the ‘+’ button and type it in. This is how to correct a wrong prediction: add the correct label rather than clicking on the incorrectly predicted one
The images below show how predictions look in Communications Mining for data with and without sentiment. Hovering your mouse over a label will also show the model's confidence that the specific label applies.
The transparency of the predicted label provides a visual indicator of the model's confidence. The darker the colour, the higher the confidence, and vice versa:
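As an illustration of this relationship, the sketch below shows one way a confidence score could be converted into a display opacity. The function and the minimum-opacity value are hypothetical and are not part of Communications Mining itself; they only mirror the "darker colour = higher confidence" behaviour described above.

```python
# Illustrative only: a hypothetical mapping from a prediction's confidence
# to a label opacity, mirroring "darker colour = higher confidence".
def confidence_to_opacity(confidence: float, min_opacity: float = 0.3) -> float:
    """Map a model confidence in [0, 1] to an opacity in [min_opacity, 1]."""
    confidence = max(0.0, min(1.0, confidence))  # clamp to the valid range
    return min_opacity + (1.0 - min_opacity) * confidence

# Example: a 90% confident prediction renders much darker than a 35% one.
print(confidence_to_opacity(0.9))   # 0.93
print(confidence_to_opacity(0.35))  # 0.545
```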
To delete a label that you applied in error, hover over it and an ‘X’ will appear. Click this to remove the label.