Communications Mining User Guide
Turning your objectives into labels
Once you have defined your objectives, you can start turning them into labels. Labels should contain all the concepts and intents you want to capture in the dataset to meet your specific objectives. Typical groups of labels that you may include are:

- Processes / Request types
- Root cause & Exceptions
- Quality of service / Failure demand
- Sentiments
- Customer / Client experiences
- Products
- Systems & Data
These are typical label groups used by our customers, regardless of their use case or industry. Not all of them may be applicable to your model, and you may have other types of labels that are important to meet your objectives.
Each of these label types, including what it captures and what it helps to answer, is covered in more detail in the table below.
Label Type | What does it capture? | What does it help answer? |
---|---|---|
Processes / Request types | These typically capture the core processes or inbound requests that a team has to handle. They often match directly to a ‘service catalogue’ of tasks owned by the team, and are arranged in a hierarchy capturing added levels of specificity for sub-processes / requests (an illustrative hierarchy sketch follows this table). | These are foundational labels for your model, helping provide insight, monitoring and action across the entire channel. To help identify process improvement opportunities, or make processes more efficient by enabling automation, the platform needs to be able to identify the processes themselves. For analytics, they’re typically combined with all the other label types to generate insights focused on root causes, sentiments, quality of service, etc. Segmenting the data using metadata helps to further understand the nature and source of these requests. For automation, they’re crucial for auto-routing and automating processes end-to-end. |
Root cause & Exceptions | These labels are intended to capture the root causes of problems, or types of exceptions, that drive teams or customers to get in contact, e.g. ‘Missing trade details’ for a financial service operations team. | These are fundamental to identifying process improvement opportunities. Mapping root cause labels to process / request type labels provides a clear picture of the problems existing in the communication channel. |
Quality of service / Failure demand | These capture concepts relating to the level of service within a communication channel, or demand generated by failures in process or service, e.g. ‘Chaser’ and ‘Escalation’. | These help answer questions such as: “Where are customers experiencing the worst pain points?”, “What processes do we repeatedly make mistakes or miss SLAs on?”, “What areas of the channel are driving the most negative customer sentiment?” Conversely, they can also identify areas of strength and strong performance. Importantly, they can also be used within the platform's Quality of Service monitoring feature - a powerful analytics tool that aggregates channel performance into a single QoS metric, tracks it over time, and allows it to be benchmarked and compared against other channels / teams. |
Sentiments | If training a model without sentiment analysis enabled (the recommendation for B2B comms channels), you can use labels that capture the sentiments expressed in the comms instead, e.g. ‘customer frustration’ or ‘customer delight’. | These are typically targeted at providing insights relating to client, customer and even employee experience. By mapping the sentiments expressed to the other concepts predicted, you can find the key pain points in processes and customer journeys that have the greatest negative (and positive) impacts. |
Customer / Client experiences | These relate to specific experiences had by clients / customers, and often go hand in hand with labels capturing inbound request types, e.g. ‘Item never arrived’ for a B2C retail company. | These are the ultimate drivers of why clients / customers are contacting a business, and therefore provide powerful insights. They may overlap with ‘root cause’ related labels, though they’re focused on the experience of the sender, and potentially not the upstream root cause. |
Products | These capture the different products that a team / channel deals with, whether as a customer, servicer, or seller, e.g. ‘ETFs’ or ‘Property Insurance’. | These labels can be combined in analytics with other label types to provide deeper insights on which products relate to which process / request types, or root causes / exceptions. |
Systems & Data | Every team interacts with a number of systems and data sources during their day-to-day work, not just Outlook. These labels capture references to these, e.g. ‘Salesforce’ or ‘SAP’. | Like the product labels above, these can typically be combined with other labels to provide more granular insights. Combining system and data related labels with processes and exception types can help identify priority improvement opportunities upstream. |
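To make the hierarchy concrete, below is a minimal illustrative sketch of how the label types above might be combined into a hierarchical taxonomy. The label names are hypothetical examples drawn from the table, not a prescribed taxonomy, and they use a parent > child naming pattern to show nesting:

```python
# Illustrative sketch only: hypothetical label names based on the examples in
# the table above, arranged with a parent > child naming pattern to show how
# a hierarchical taxonomy might combine the different label types.
example_taxonomy = [
    "Request > Trade settlement",                 # Processes / Request types
    "Root cause > Missing trade details",         # Root cause & Exceptions
    "Quality of service > Chaser",                # Quality of service / Failure demand
    "Quality of service > Escalation",
    "Sentiment > Customer frustration",           # Sentiments
    "Customer experience > Item never arrived",   # Customer / Client experiences
    "Product > Property Insurance",               # Products
    "System > Salesforce",                        # Systems & Data
]
```

Each parent groups one of the label types from the table, and each child captures a specific concept within it; in practice, your parent groupings should follow your own objectives and the taxonomy design best practices covered elsewhere in this guide.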
Once you’ve defined your labels and your target taxonomy structure, it’s important to define the key data points (i.e. general fields) you want to extract from your comms data. These are typically used to facilitate downstream automation, but can also be useful for analytics. For guidance on defining and setting up your general fields correctly, please see our training guide here.
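As a minimal sketch, assuming a hypothetical ‘Missing trade details’ request, the kind of key data points (general fields) you might choose to extract alongside the predicted label could look like the following. The field names and values are illustrative assumptions only, not a required schema:

```python
# Illustrative sketch only: hypothetical general fields for a message predicted
# as "Root cause > Missing trade details". The exact fields you define should be
# driven by what your downstream automation or analytics actually needs.
example_extraction = {
    "predicted_label": "Root cause > Missing trade details",
    "general_fields": {
        "trade_id": "TR-123456",         # hypothetical identifier
        "trade_date": "2024-03-15",      # hypothetical date
        "counterparty": "Example Bank",  # hypothetical counterparty name
    },
}
```

Downstream, an automation could route or enrich a case using values like these, while analytics could aggregate them alongside the label predictions.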