Model Rating
Communications Mining User Guide
Last updated Apr 18, 2024
The platform helps users train models by calculating a holistic Model Rating, which assesses the overall health and performance of a model based on a number of key contributing factors.
This rating is a proprietary score designed to ensure that users build models that perform well in all of the most important areas.
The four main factors that the rating takes into account are:
- Balance - assesses whether the training data is a balanced and representative sample of the dataset as a whole
- Underperforming Labels - assesses the performance of the 10% of labels with the most significant warnings
- Coverage - assesses how well the dataset as a whole is covered by predictions for informative labels
- All Labels - assesses the average performance across every label in the taxonomy
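The exact formula behind the Model Rating is proprietary, but the idea of combining several factor scores into one headline number can be sketched as follows. This is purely illustrative: the equal weighting, the 0-100 scale, and the function name are assumptions, not the platform's actual calculation.

```python
# Illustrative sketch only: the real Model Rating formula is proprietary.
# Assumes each of the four factors is scored on a 0-100 scale and that
# the overall rating is a simple equal-weighted average (an assumption).

def model_rating(balance: float,
                 underperforming_labels: float,
                 coverage: float,
                 all_labels: float) -> float:
    """Combine the four factor scores into a single holistic rating."""
    factors = [balance, underperforming_labels, coverage, all_labels]
    if not all(0 <= f <= 100 for f in factors):
        raise ValueError("factor scores must be between 0 and 100")
    return sum(factors) / len(factors)

print(model_rating(90, 70, 80, 85))  # → 81.25
```

A model strong in three areas but weak in one (for example, low Coverage) would still see its headline rating pulled down, which matches the stated goal of rewarding models that perform well in all of the most important areas.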
Example Model Rating in Validation