Training using Shuffle
User permission required: 'View sources' AND 'Review and label'.
'Shuffle' is the first step in Explore, and its purpose is to provide users with a random selection of messages to review. In Shuffle mode, the platform shows you messages with predictions covering all of your labels, as well as messages with no predictions at all. Shuffle therefore differs from the other steps in Explore in that it doesn't focus on training a specific label, but covers them all.
Why is training using 'Shuffle' mode so important?
Using Shuffle mode is very important because it ensures that you provide your model with sufficient training examples that are representative of the dataset as a whole, rather than examples biased towards very specific areas of the data.
Overall, at least 10% of the training you complete in your dataset should be in Shuffle mode.
Annotating in Shuffle mode helps ensure that your taxonomy covers the data within your dataset well, and prevents you from creating a model that can make accurate predictions on only a small fraction of the data in the dataset.
Looking through messages in Shuffle mode is therefore an easy way to get a sense of how the overall model is performing, and can be referred to throughout the training process. In a well-trained taxonomy, you should be able to go through any unreviewed messages in Shuffle and simply accept the predictions to further train the model. If you find that many of the predictions are incorrect, you can see which labels require more training.
Going through multiple pages on Shuffle later on in the training process is also a good way to check if there are intents or concepts that have not been captured by your taxonomy and should have been. You can then add existing labels where required, or create new ones if needed.
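As a rough illustration of this spot-check, the sketch below tallies correct versus incorrect predictions per label across a random (Shuffle-style) sample. The `sample` structure and label names are assumptions made for illustration only; they are not a real platform export format.

```python
from collections import Counter

# Hypothetical spot-check: for a random sample of messages, count per label
# how often the model's prediction matched what a reviewer would apply.
# The data below is illustrative only, not a real platform export.
sample = [
    {"predicted": {"Order > Cancellation"}, "actual": {"Order > Cancellation"}},
    {"predicted": {"Invoice > Query"},      "actual": {"Invoice > Dispute"}},
    {"predicted": set(),                    "actual": {"Order > Amendment"}},
]

correct, incorrect = Counter(), Counter()
for message in sample:
    for label in message["predicted"] | message["actual"]:
        if label in message["predicted"] and label in message["actual"]:
            correct[label] += 1
        else:
            incorrect[label] += 1  # wrong or missing prediction

for label in sorted(correct.keys() | incorrect.keys()):
    total = correct[label] + incorrect[label]
    print(f"{label}: {correct[label]}/{total} correct in this sample")
```

Labels with a low proportion of correct predictions in a sample like this are the ones to prioritise for further training.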
- Select 'Shuffle' from the drop-down menu to be presented with 20 random messages
- Filter to unreviewed messages
- Review each message and any associated predictions
- If there are predictions, you should either confirm or reject them. Confirm the correct ones by clicking on them
- Remember to also add any other labels that apply
- If you reject the prediction(s), apply all of the correct label(s) instead - don't leave the message with no labels applied
- You can also hit the refresh button to get a new set of messages, or click to the next page (at the bottom)
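If it helps to see the review loop written out, the sketch below mirrors the steps above in code. The `client` object and its `fetch_random_unreviewed`, `confirm_labels`, `apply_labels` and `reject_labels` methods are assumptions for illustration; they do not correspond to a documented Communications Mining API.

```python
# Hypothetical sketch of one Shuffle-style review pass. The 'client' object and
# its methods are illustrative assumptions, not a documented API.

def review_random_batch(client, batch_size: int = 20) -> None:
    """Review one page of randomly sampled unreviewed messages."""
    batch = client.fetch_random_unreviewed(batch_size)
    for message in batch:
        predicted = set(message.predicted_labels)
        correct = set(decide_correct_labels(message))  # human judgement goes here
        if predicted and predicted <= correct:
            # Predictions are right: confirm them, then add anything missing.
            client.confirm_labels(message.id, predicted)
            client.apply_labels(message.id, correct - predicted)
        else:
            # Predictions are wrong or absent: reject the incorrect ones and
            # apply the full correct set, so the message is never left with
            # no labels applied.
            client.reject_labels(message.id, predicted - correct)
            client.apply_labels(message.id, correct)

def decide_correct_labels(message):
    # Placeholder for the human review decision; in practice a person makes
    # this call in the UI rather than in code.
    raise NotImplementedError
```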
We'd recommend annotating at least 10 pages' worth of messages in Shuffle. In large datasets with many training examples, this could be much more.
You should aim to complete approximately 10% or more of all training in Shuffle mode.
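As a quick sanity check on these guidelines, assuming 20 messages per Shuffle page, 10 pages works out to roughly 200 reviewed messages, and you can compare your Shuffle-reviewed count against the 10% target as sketched below (the counts are placeholders, not real figures).

```python
# Hypothetical sanity check against the training guidelines above.
MESSAGES_PER_PAGE = 20   # one Shuffle page shows 20 random messages
MIN_SHUFFLE_PAGES = 10   # recommended minimum number of Shuffle pages

shuffle_reviewed = 240   # placeholder: messages you annotated in Shuffle mode
total_reviewed = 2000    # placeholder: all reviewed messages in the dataset

min_shuffle_messages = MESSAGES_PER_PAGE * MIN_SHUFFLE_PAGES  # 200
shuffle_share = shuffle_reviewed / total_reviewed             # 0.12 here

print(f"Shuffle messages: {shuffle_reviewed} (minimum guideline: {min_shuffle_messages})")
print(f"Shuffle share of training: {shuffle_share:.0%} (target: at least 10%)")
```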