Defining and setting up your entities
It’s important to define the key data points (i.e. entities) you want to extract from your communications data. These are typically used to facilitate downstream automation, but can also be useful for analytics, particularly in assessing the potential success rate and benefit of automation opportunities.
Ultimately, entity predictions, combined with labels, can facilitate automation by providing the structured data points needed to complete a specific task or process. It’s much more time-efficient to train entities in your dataset in conjunction with labels, rather than focusing on one and then the other (i.e. training entities after training a full taxonomy of labels).
For example:
If we’re looking to automate ‘Address Change’ requests, a label would be used to capture the request type, whilst entities would capture the various components of the address (i.e. Address Line, City, Postcode / Zip Code, etc.). Each prediction is made available via the API, enabling every message to be acted upon.
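To make this concrete, here is a minimal sketch of how a downstream consumer might turn a prediction into the structured data an automation needs. The payload shape, entity kinds, and confidence threshold below are illustrative assumptions, not the platform’s actual API schema:

```python
# Hypothetical prediction payload, for illustration only -- the real
# Communications Mining API response format may differ.
prediction = {
    "labels": [{"name": "Address Change", "confidence": 0.97}],
    "entities": [
        {"kind": "address-line-1", "value": "10 Main Street"},
        {"kind": "city", "value": "Springfield"},
        {"kind": "zip-code", "value": "62704"},
    ],
}

def to_structured_request(prediction, label_name, threshold=0.9):
    """Return the extracted entity values if the label fires above the
    confidence threshold, otherwise None (route to a human instead)."""
    fires = any(
        lbl["name"] == label_name and lbl["confidence"] >= threshold
        for lbl in prediction["labels"]
    )
    if not fires:
        return None
    return {e["kind"]: e["value"] for e in prediction["entities"]}

print(to_structured_request(prediction, "Address Change"))
```

The key design point is that the label gates the process (is this an address change at all?) while the entities supply the fields the downstream system actually writes.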
Once entities are set up and trained to a suitable level of performance, they can help generate important insights into which request types could be in scope for automation.
To understand how, let’s continue with the same example: ‘Address Change’.
We’ve identified that ‘Address Change’ requests are a high-volume, transactional, and highly-manual task, and want to understand the proportion of them that we could automate.
To do so, we need to know that the label for identifying the request can perform well. We also need to understand the proportion of the address change requests received that contain the necessary data points (i.e. the entities) required to process the change.
In this instance, this could be ‘Address Line 1’, ‘Town / City’, ‘Zip Code’, ‘State’. Within the platform we can easily assess the proportion of ‘Address Change’ requests that contain all or some of the required entities using combined filters. This helps us understand the proportion that could be successfully automated end-to-end, and which would require more information or a human in the loop to complete.
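The same assessment can be sketched in code. The message shapes and entity names below are toy assumptions standing in for the platform’s combined label + entity filters, which you would normally apply in the UI or via exported data:

```python
# Toy assessment of automation coverage: what fraction of 'Address Change'
# messages contain every entity needed for straight-through processing?
# Data shapes here are illustrative, not the real API schema.
REQUIRED_ENTITIES = {"address-line-1", "town-city", "zip-code", "state"}

def straight_through_rate(messages, label, required=REQUIRED_ENTITIES):
    """Fraction of messages carrying `label` that contain all required entities."""
    in_scope = [m for m in messages if label in m["labels"]]
    if not in_scope:
        return 0.0
    complete = [m for m in in_scope if required <= m["entities"]]
    return len(complete) / len(in_scope)

sample = [
    {"labels": {"Address Change"},
     "entities": {"address-line-1", "town-city", "zip-code", "state"}},
    {"labels": {"Address Change"},
     "entities": {"address-line-1", "town-city"}},  # missing state + zip
    {"labels": {"Policy Query"}, "entities": set()},
]

print(straight_through_rate(sample, "Address Change"))
```

Here one of the two in-scope messages is complete, so the rate is 0.5; messages below 1.0 completeness are the ones needing a request for more information or a human in the loop.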
If 80% of our address change requests contain the required entities, we know this is a great candidate for automation. If only 20% contain the entities we need, this may be a less significant opportunity (depending on overall volumes).
Please note: entities must be performing well before you assess these proportions; otherwise the platform could miss many requests that could be automated end-to-end, purely through lack of training.
The example above illustrates how the platform can be used to better understand any automation opportunity within your communications channels. By pulling this data from the platform, and feeding it into your automation opportunity pipeline, you can effectively identify and prioritise the opportunities that have the biggest potential success rate, and ultimately the highest ROI.