Analytics vs. automation use cases
Each use case will typically fall into one of two categories, based on the intended outcomes (i.e. objectives): analytics and monitoring, or automation (though sometimes it can be both).
These intended outcomes dictate how we annotate our data and structure our taxonomies.
If your objective is to get detailed analytics for a communication channel, you will structure and train your model quite differently than if your objective were to auto-route inbound requests into different workflow queues.
Before building a taxonomy to meet either analytics-focused or automation-focused objectives, it's worth understanding the differences between the two:
Analytics and monitoring use cases
Objectives
- The objective of an analytics and monitoring dataset is usually to gain a detailed understanding of the various processes, issues, and sentiments within one or more communication channels.
- These datasets provide initial insights once the model is trained, and an ongoing ability to monitor changes and trends within the dataset over time.
- They continuously help to identify, quantify, and prioritise opportunities to make improvements within the communication channel, whether to improve efficiency, customer experience, or control.
- They also reduce the risk of change investments failing to deliver the expected ROI, by quantifying opportunities up front.
Examples
- Accurately identify the most valuable change opportunities, driving tighter ROI for specific initiatives and reducing the risk of not delivering expected benefits.
- Improve customer / client satisfaction and service quality by identifying and driving impactful improvements in products and services.
- Reduce client-impacting issues and internal cost-to-serve.
- Better target potential customers and enable proactive customer retention by measuring the drivers of customer lifetime value (CLTV).
- Increase visibility and control of risks hidden in communication channels through monitoring and alerts, ensuring participants receive the data they need when they need it and enabling proactive remediation.
- Provide quality assurance across customer support teams by monitoring effective agent resolution.
- Empower managers to address performance issues proactively.
Labelling
- Given their purpose, they typically have detailed, extensive taxonomies.
- Despite having higher numbers of labels, they usually have fewer pinned examples per label than automation-focused datasets.
- As they are intended to capture more specific labels across an entire dataset, they typically sacrifice some prediction accuracy in order to achieve detailed coverage across a broad range of topics (see the illustrative sketch below).
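To make this contrast concrete, below is an illustrative sketch of the shape of an analytics-focused taxonomy: many specific child labels grouped under a few parent themes, each needing relatively few pinned examples. All label names here are invented for illustration, not taken from a real dataset.

```python
# Hypothetical analytics-focused taxonomy: broad and hierarchical, with many
# specific child labels grouped under a small number of parent themes.
analytics_taxonomy = {
    "Complaint": ["Delay", "Fees", "Staff conduct"],
    "Request": ["Statement copy", "Address change", "Account closure"],
    "Enquiry": ["Product features", "Pricing", "Opening hours"],
}

# Flatten into "Parent > Child" strings, a common way of writing out
# hierarchical labels.
labels = [
    f"{parent} > {child}"
    for parent, children in analytics_taxonomy.items()
    for child in children
]
print(f"{len(labels)} leaf labels, e.g. {labels[0]}")
# -> 9 leaf labels, e.g. Complaint > Delay
```

An automation-focused taxonomy, by contrast, would keep far fewer labels (often just the routing outcomes) and invest the training effort in pinning many examples for each one.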
Automation use cases
Objectives
- Common objectives and success measures are to make efficiency gains, free up FTE capacity for value-add work, and improve customer experience (CX) by reducing processing times and error rates.
- Additional objectives and benefits can include bringing control, visibility, and standardisation to processes.
Examples
- Reduce FTE effort by 5-10% through auto-triaging.
- Reduce turnaround time for automated tasks by 100%.
- Eliminate process issues due to incorrect classification, prioritisation, and misrouting.
- Eliminate capacity constraints and volume sensitivity.
- Enable expansion to end-to-end automation of processes / queries.
- Reduce risk around business processes through increased controls.
- Improve client satisfaction (CSAT or NPS) and service quality through reduced process latency.
Labelling
- These typically have small taxonomies with higher numbers of pinned examples for every label.
- More examples are needed per label to ensure high precision and recall and to catch various edge cases in the dataset.
- Each label involved in an automation should be trained to maximise both precision and recall (depending on the use case, you might optimise one slightly over the other). It is rarely possible for both to reach 100%: there will almost always be some exceptions, so it's important to have a proper exception-handling process in place for any automation use case. A worked example of the precision and recall calculation follows this list.
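As a quick refresher on that calculation, a label's precision and recall can be computed from its true positive, false positive, and false negative counts on the validation set. The counts in this sketch are hypothetical:

```python
# Minimal sketch: precision and recall for a single label, computed from
# hypothetical validation counts.
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    precision = tp / (tp + fp)  # of everything predicted, how much was right
    recall = tp / (tp + fn)     # of everything relevant, how much was found
    return precision, recall

# e.g. a routing label with 92 true positives, 4 false positives,
# and 9 false negatives:
p, r = precision_recall(tp=92, fp=4, fn=9)
print(f"precision={p:.2%}, recall={r:.2%}")
# -> precision=95.83%, recall=91.09%
```

For a routing automation, a shortfall in either number translates directly into misrouted or missed messages, which is why each label needs enough pinned examples to cover its edge cases.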
It's important to remember that datasets trained to meet automation objectives can still provide a lot of analytical insight! It just may not be as granular as the insight from datasets trained to answer more detailed questions.
To learn how to turn your objectives, whether analytical or automation-focused, into labels and an appropriate taxonomy, see the following articles.