Improving entity performance
User permissions required: 'Review and label'.
Like label training, entity training is the process by which a user teaches the platform which entities apply to a given message, using the various training modes.
As with labels, the 'Teach', 'Check', and 'Missed' modes are available to help train and improve the performance of entities. They can be accessed either 1) on the Explore page using the training dropdown, or 2) by following the recommended actions on the Entity tab of the Validation page.
If a specific entity has a performance warning, the platform recommends the next best actions it thinks will help address that warning, listed in order of priority. These are shown when you select a specific entity from the taxonomy or the 'All Entity' chart.
The next best action suggestions act as links: click one to go directly to the training view that the platform suggests for improving the entity's performance. The suggestions are intelligently ordered, with the highest-priority action listed first.
This is the most important tool for understanding the performance of your entities, and it should be used regularly as a guide when trying to improve entity performance.
The following table summarises when the platform recommends each entity training mode:
| Teach Entity | Check Entity | Missed Entity |
| --- | --- | --- |
| Shows predictions for an entity where the model is most confused about whether it applies. Used for training entities on unreviewed messages. | Shows messages where the platform thinks the entity may have been misapplied. Used for training entities on reviewed messages, to find and correct any inconsistencies. | Shows messages that the platform thinks may be missing the selected entity. Used for training entities on reviewed messages, to find and correct any inconsistencies. |
Using 'Teach Entity' boosts entity performance because the model is given new information on messages it is unsure about, rather than on ones it already has highly confident predictions for.
The platform recommends 'Teach Entity' when:
- There is a performance warning next to an entity (as seen below, when the minimum of 25 examples has not been provided)
- The F1 score on a given entity is low
- There is not always obvious context within the text for an entity, or there is a lot of variation in the entity values for a given type
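Since the recommendations above reference the F1 score, the sketch below shows how it is conventionally computed as the harmonic mean of precision and recall. This is the standard formula, not something specific to the platform, and the numbers are purely illustrative:

```python
def f1_score(precision: float, recall: float) -> float:
    # Harmonic mean of precision and recall: low when either is low,
    # which is why one weak metric drags the whole score down.
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# A balanced entity vs one dragged down by poor recall (hypothetical values):
print(f"{f1_score(0.9, 0.9):.2f}")  # 0.90
print(f"{f1_score(0.9, 0.3):.2f}")  # 0.45
```

Because the harmonic mean punishes imbalance, an entity with excellent precision can still show a low F1 score if its recall is poor, which is exactly when 'Teach Entity' training is recommended.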
Using 'Check Entity' helps identify inconsistencies in the reviewed set while improving the model's understanding of the entity, by ensuring the model has correct and consistent examples to make predictions from. This will improve the recall of the entity.
The platform recommends 'Check Entity' when:
- There is low recall, but high precision
- The predictions the platform makes are very accurate, but it fails to catch many of the instances where the entity should be applied
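The low recall, high precision situation described above can be made concrete with the standard precision and recall formulas. The counts below are hypothetical, purely to illustrate the pattern:

```python
def precision(tp: int, fp: int) -> float:
    # Of everything the model predicted, what fraction was correct?
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    # Of everything that should have been found, what fraction was caught?
    return tp / (tp + fn)

# Hypothetical counts: 8 correct predictions, 1 false positive,
# and 12 occurrences of the entity the model missed entirely.
tp, fp, fn = 8, 1, 12

print(f"precision = {precision(tp, fp):.2f}")  # 0.89 -> predictions are accurate
print(f"recall    = {recall(tp, fn):.2f}")     # 0.40 -> many occurrences missed
```

In this pattern the model's predictions are trustworthy but sparse, which is the signature that prompts the 'Check Entity' recommendation.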
(For more details on calculations for entity validation, please see here)
Using 'Missed Entity' helps find examples in the reviewed set that should have the selected entity applied but do not. It also helps identify partially labelled messages, which can be detrimental to the model's ability to predict an entity. This will improve the precision of the entity and ensure the model has correct and consistent examples to make predictions from.
The platform recommends 'Missed Entity' when:
- There is high recall, but low precision
- The platform frequently predicts the entity where it should not, but when it does predict correctly, it catches many of the examples that should be there
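This is the mirror image of the 'Check Entity' scenario: high recall but low precision. Again using hypothetical counts purely for illustration:

```python
def precision(tp: int, fp: int) -> float:
    # Of everything the model predicted, what fraction was correct?
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    # Of everything that should have been found, what fraction was caught?
    return tp / (tp + fn)

# Hypothetical counts: 18 correct predictions, 12 false positives,
# and only 2 occurrences of the entity missed.
tp, fp, fn = 18, 12, 2

print(f"precision = {precision(tp, fp):.2f}")  # 0.60 -> many spurious predictions
print(f"recall    = {recall(tp, fn):.2f}")     # 0.90 -> most occurrences caught
```

Here the model casts too wide a net: it finds almost everything, but at the cost of many false positives, which is the signature that prompts the 'Missed Entity' recommendation.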
(For more details on calculations for entity validation, please see here)