Communications Mining User Guide
Pinning and tagging a model version
User permissions required: ‘View Sources’ AND ‘View Labels’.
Every time you train the platform on your data (i.e. by annotating messages), a new version of the model associated with your dataset is created. Because these models are large and complex, previous versions are not automatically stored in our databases; retaining every version would require an enormous amount of storage.
The latest version of the model is always readily available, but users can 'pin' a specific model version that they want to keep. They can also 'tag' pinned models as 'Live' or 'Staging'.
There are two main reasons to pin a model version:
- Pinning a model gives you determinism over predictions, particularly when you are using Streams. This means you can be confident in the precision and recall scores for this version of the model, as future training will not alter them (for better or worse); see the sketch after this list
- On the Validation page, users can see the validation scores of previously pinned model versions, allowing you to compare scores over time and see how your training has improved your model
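As a rough illustration of that determinism, the sketch below requests predictions against one specific pinned model version rather than the latest model, so the same input always yields the same predictions regardless of ongoing training. This is a minimal sketch only: the base URL, endpoint path, and request/response fields are assumptions for illustration, so check your tenant's API reference for the exact routes and schemas.

```python
# Minimal sketch: requesting predictions from a pinned model version.
# Endpoint paths and field names are assumptions for illustration only.
import os

import requests

API_BASE = "https://cloud.uipath.com/<org>/<tenant>/reinfer_/api/v1"  # assumed base URL
PROJECT = "my-project"        # hypothetical project name
DATASET = "support-emails"    # hypothetical dataset name
PINNED_MODEL_VERSION = 7      # the model version you pinned on the Models page


def predict_with_pinned_model(texts):
    """Request label predictions from one specific (pinned) model version,
    so results stay deterministic even as the dataset continues to be trained."""
    response = requests.post(
        f"{API_BASE}/datasets/{PROJECT}/{DATASET}"
        f"/labellers/{PINNED_MODEL_VERSION}/predict",      # assumed endpoint shape
        headers={"Authorization": f"Bearer {os.environ['CM_API_TOKEN']}"},
        json={"documents": [{"text": t} for t in texts]},  # assumed request schema
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    print(predict_with_pinned_model(["Please update my billing address."]))
```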
To pin a model version:
- Navigate to the Models page via the top navigation bar
- Use the 'pin' toggle to save the current model version
To update the tag for a model version:
- Click the arrow next to 'Tags' on any pinned model
- Select 'Live' or 'Staging', depending on the status of the pinned model in any downstream deployments
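Downstream deployments typically consume predictions through a Stream that is backed by a pinned model version, which is why the 'Live' and 'Staging' tags are useful for signalling which version each environment should use. The sketch below shows how such a consumer might fetch and acknowledge batches from a Stream; again, the endpoint paths, field names (such as "sequence_id"), and stream name are assumptions for illustration rather than a definitive implementation.

```python
# Minimal sketch of a downstream consumer reading from a Stream that is backed
# by a pinned model version. Routes and response fields are assumptions.
import os

import requests

API_BASE = "https://cloud.uipath.com/<org>/<tenant>/reinfer_/api/v1"  # assumed base URL
PROJECT, DATASET, STREAM = "my-project", "support-emails", "live-automation"  # hypothetical names
HEADERS = {"Authorization": f"Bearer {os.environ['CM_API_TOKEN']}"}


def fetch_batch(size=16):
    """Fetch the next batch of messages and predictions from the stream.
    Because the stream is tied to a pinned model version, the predictions (and
    their validated precision/recall) do not change when the model is retrained."""
    resp = requests.post(
        f"{API_BASE}/datasets/{PROJECT}/{DATASET}/streams/{STREAM}/fetch",  # assumed route
        headers=HEADERS,
        json={"size": size},
        timeout=30,
    )
    resp.raise_for_status()
    batch = resp.json()

    # Acknowledge the batch so the next fetch returns new messages
    # ("sequence_id" is an assumed field name).
    requests.post(
        f"{API_BASE}/datasets/{PROJECT}/{DATASET}/streams/{STREAM}/advance",  # assumed route
        headers=HEADERS,
        json={"sequence_id": batch.get("sequence_id")},
        timeout=30,
    ).raise_for_status()
    return batch
```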