Concept drift
In predictive analytics and machine learning, 'concept drift' (also called 'data drift') means that the statistical properties of the target variables the model is trying to predict (i.e. the themes and concepts underlying each of the labels) change over time in unforeseen ways.
Essentially, more recent data coming into the dataset will, over time, become increasingly different from the original data on which the model was trained.
This causes problems: predictions become less accurate as time passes, because the concepts the model is trying to predict diverge further and further from its training data.
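The effect can be illustrated with a toy simulation (a hypothetical sketch, not Communications Mining code): a trivial threshold classifier is "trained" on one data distribution, and its accuracy is then measured on fresh data from that same distribution and on data whose underlying concept has shifted.

```python
import random

random.seed(0)

def sample(boundary):
    """Draw (feature, label) pairs where the true class boundary sits at `boundary`."""
    xs = [random.gauss(boundary, 1.0) for _ in range(2000)]
    ys = [1 if x > boundary else 0 for x in xs]
    return xs, ys

# "Train" a trivial threshold model on the original data (boundary at 0.0).
train_x, _ = sample(0.0)
threshold = sum(train_x) / len(train_x)  # learned decision boundary, roughly 0.0

def accuracy(xs, ys):
    preds = [1 if x > threshold else 0 for x in xs]
    return sum(p == y for p, y in zip(preds, ys)) / len(ys)

# Score on fresh data from the ORIGINAL distribution...
old_x, old_y = sample(0.0)
# ...and on "drifted" data whose underlying concept has moved (boundary now 1.5).
new_x, new_y = sample(1.5)

print(f"accuracy on original distribution: {accuracy(old_x, old_y):.2f}")
print(f"accuracy after concept drift:      {accuracy(new_x, new_y):.2f}")
```

On the original distribution the fixed threshold stays close to the true boundary, so accuracy is high; once the boundary moves, every point between the old and new boundaries is misclassified and accuracy drops sharply, even though the model itself has not changed.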
Concept drift is one of the key reasons why it's important to properly maintain models used in production use cases (e.g. automations) by doing a small amount of exception training on a scheduled basis.