Communications Mining User Guide
Data upload & management FAQs
What forms of communications do you handle?
The platform supports multiple forms of conversational data, i.e. communications in which one person talks to another through a digitally mediated channel. Examples include emails, case management tickets, chats, call transcripts, surveys, reviews, and case notes, amongst others.
How do you handle communications with attachments?
The platform interprets the core conversational content of each communication. For email conversations, the subject, body text, and thread are all considered, but the contents of attachments are not. The platform can identify when emails have attachments, along with their names, file types, and sizes. Attachment names can be displayed in the UI and can form part of the body of text that the platform's models train on.
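To make the attachment behaviour concrete, here is a minimal sketch of what an ingested email with attachment metadata might look like. The field names are illustrative assumptions, not the platform's exact schema; the point is that attachment metadata (name, file type, size) travels with the message while attachment contents do not.

```python
# Illustrative only: field names are assumptions, not the platform's exact schema.
# Attachment *metadata* (name, type, size) accompanies the message,
# while attachment *contents* are not ingested or interpreted.
email_message = {
    "id": "msg-0001",
    "subject": "Re: Claim 44812 - supporting documents",
    "body": "Hi team, please find the signed claim form attached.",
    "attachments": [
        {
            "name": "signed_claim_form.pdf",     # name can be shown in the UI
            "content_type": "application/pdf",   # file type is identified
            "size_bytes": 182_044,               # size is identified
            # no "content" field: attachment contents are not interpreted
        }
    ],
}
```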
What is the objective of training a model?
The objective of training a model is to create a set of training data that is as representative as possible of the dataset as a whole, so that the platform can accurately and confidently predict the relevant labels and general fields for each message. The labels and general fields within a dataset should be intrinsically linked to the overall objectives of the use case and provide significant business value.
Can I upload data to the platform myself?
Yes. If you have sufficient permissions, you can use our APIs to add data to the platform, or add data to a source via CSV upload.
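As an illustration, below is a hedged sketch of adding messages to a source over HTTP from Python, assuming a bearer-token API and a sync-style endpoint. The base URL, endpoint path, and payload fields are assumptions for illustration; consult the API documentation for the exact contract.

```python
# A minimal sketch of uploading messages via the API, under assumed endpoint
# and payload conventions. Not the platform's verbatim API: check the API docs.
import requests

API_BASE = "https://example.communications-mining.example/api"  # assumed base URL
API_TOKEN = "YOUR_API_TOKEN"  # issued to a user with upload permissions

response = requests.post(
    f"{API_BASE}/sources/my-project/my-source/sync",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "messages": [
            {
                "id": "msg-0001",
                "timestamp": "2024-01-15T09:30:00Z",
                "body": "Please update the delivery address on order 1234.",
            }
        ]
    },
    timeout=30,
)
response.raise_for_status()  # surface permission or validation errors early
```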
What volumes of data can the platform support and is there a limit?
Data storage in the platform can be scaled to suit the needs of our clients, and the allowed volume depends on the agreed licence terms. Usage within the maximum volume agreed in the licence is completely acceptable; exceeding the maximum volume will require a discussion and may incur additional cost.
How long does the platform store my data for?
The platform does not automatically delete historical data. Older data can be removed by your Communications Mining administrator if required.
How can I export my data from the platform so that I can use it elsewhere?
You can export your data from the platform via CSV or using the platform's APIs. Detailed explanations of how to do this are given in our how-to guides as well as our API documentation.
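Below is a hedged sketch of an API-based export to CSV, assuming a paginated messages endpoint with a continuation cursor. The endpoint path, parameters, and response fields are illustrative assumptions; the how-to guides and API documentation give the real details.

```python
# A sketch of exporting messages to CSV via a paginated HTTP API.
# Endpoint, pagination parameter, and response fields are assumed for
# illustration; see the API documentation for the actual interface.
import csv
import requests

API_BASE = "https://example.communications-mining.example/api"  # assumed
API_TOKEN = "YOUR_API_TOKEN"

with open("export.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "timestamp", "body"])
    continuation = None
    while True:
        params = {"limit": 100}
        if continuation:
            params["continuation"] = continuation  # hypothetical cursor param
        resp = requests.get(
            f"{API_BASE}/datasets/my-project/my-dataset/messages",  # hypothetical
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            params=params,
            timeout=30,
        )
        resp.raise_for_status()
        page = resp.json()
        for msg in page.get("messages", []):
            writer.writerow([msg["id"], msg["timestamp"], msg["body"]])
        continuation = page.get("continuation")
        if not continuation:  # no cursor returned means the last page
            break
```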
How do I create my own datasets?
Once you have logged in, you will be taken to the Datasets page, where you can create your own dataset if you have the permissions to do so. See here for a detailed explanation of how to do this.
How can I connect to the API?
You can access our API documentation here.
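As a starting point, here is a minimal connectivity check from Python, assuming bearer-token authentication. The base URL and endpoint are placeholders; the API documentation linked above provides the actual values.

```python
# A minimal "can I reach the API?" check under assumed conventions.
import requests

API_BASE = "https://example.communications-mining.example/api"  # assumed
API_TOKEN = "YOUR_API_TOKEN"

resp = requests.get(
    f"{API_BASE}/me",  # hypothetical "who am I" endpoint
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=10,
)
print(resp.status_code, resp.json() if resp.ok else resp.text)
```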