CN115039116A - Method and system for active customer relationship analysis - Google Patents

Method and system for active customer relationship analysis

Info

Publication number
CN115039116A
Authority
CN
China
Prior art keywords
data
customer
vector
case
implemented method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080079645.2A
Other languages
Chinese (zh)
Inventor
P·萨尼
B·斯莱普科
C·麦克勒斯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rimini Street Co
Original Assignee
Rimini Street Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rimini Street Co
Publication of CN115039116A publication Critical patent/CN115039116A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3065Monitoring arrangements determined by the means or processing involved in reporting the monitored data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3003Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F11/3006Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is distributed, e.g. networked systems, clusters, multiprocessor systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/542Event management; Broadcasting; Multicasting; Notifications
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/10Machine learning using kernel methods, e.g. support vector machines [SVM]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00Computing arrangements based on specific mathematical models
    • G06N7/01Probabilistic graphical models, e.g. probabilistic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0631Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06311Scheduling, planning or task assignment for a person or group
    • G06Q10/063114Status monitoring or status determination for a person or group
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06395Quality analysis or management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06398Performance of employee with respect to a job function
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/01Customer relationship services

Abstract

A service provider system receives case data for a customer from a customer service system. Vector data is generated from the case data by integration and aggregation. Anomaly and opinion signals are detected from the integrated and aggregated vector data by machine learning. These signals are validated, integrated, and associated with case, contact, and customer object types. A user interface, which includes dashboards, notifications, and indicators, presents the validated and integrated signals to a user, who can then take proactive action based on the signals.

Description

Method and system for active customer relationship analysis
Background
Outsourcing of non-core services to third parties is a standard component of most modern business and organizational models. Thus, many organizations utilize third party service providers to perform various functions and businesses for the organizations.
As a specific illustrative example, many organizations rely on multiple software systems in their daily business. In many cases, these organizations use one or more enterprise application software (EAS) systems, also referred to simply as "enterprise software." EAS systems are intended to provide software capabilities that address the overall needs of an organization or enterprise, rather than individual needs within an organization. EAS systems are therefore typically highly complex. Given this complexity, organizations often turn to software service providers to support the various software systems they use. Typically, these organizations, the customers of the software service provider, rely on the supported software to create revenue and manage costs. The services provided by software service providers are therefore often critical to their customers, and the implementation, maintenance, and problem-solving services provided often need to be performed very quickly and correctly.
To establish and maintain the trust of their customers, service providers typically utilize one or more customer service systems or customer relationship management systems to track and collect data regarding the various jobs the service providers perform. One specific example of a customer service system is Salesforce™. Data collected by existing customer service systems allows an agent (typically a human agent) of a service provider to track work progress so that the agent can efficiently and effectively manage the relationship with the customer.
With existing customer service systems, cases are created based on customer work or issues. Each case is tracked and data is collected until the case is resolved or completed. When tracking a case, various case performance data is typically generated throughout the life cycle of the case. Using existing customer service systems, case performance data can then be analyzed to determine the performance of the service provider in handling cases for customers using various methods.
Existing customer service systems can be quite efficient and are powerful tools for tracking customer work and maintaining customer relationships. However, existing customer service systems are primarily reactive: they alert the service provider's agents to a problem only after the problem has occurred, or at least after it has become serious or escalated. For example, with existing customer service systems, alerts are issued to agents only when a customer reports a problem or a significant deviation in the data indicates that a problem exists. In other cases, managers become aware of and react to agent performance only by reviewing historical analyses of the activities performed by the agents for which they are responsible. For example, a manager may study the average length of time an agent needs to resolve a problem and react when that agent's average resolution time exceeds the average of other agents. In this example, the manager's reaction may be to provide additional training for the agent, or to transfer the case to a different agent if the problem is not resolved in time. As another example, a manager may perceive that a customer is dissatisfied only when the customer submits an evaluation of a service or of the agent performing the service.
This reactive mode of operation is a serious problem because, with existing customer service systems, problems often occur, or escalate to the point of serious customer dissatisfaction, before the service provider even becomes aware of them, and the relationship with the customer deteriorates. The customer's trust and confidence may thus be severely compromised before the problem can be resolved. Since, as mentioned above, such trust and confidence is in many cases critical to the customer/service provider relationship, the reactive nature of existing customer service systems often leads to this relationship being damaged, which in turn often leads to the customer selecting a different service provider.
Thus, while existing customer management systems may effectively discover historical problems with the customer support performed by service providers, those problems may already have damaged the customer relationship so severely that the damage is irreversible and there is no time to correct them. In these cases, historical analysis of a lost customer becomes meaningless, at least for that customer.
There is therefore a long-standing need for a technical solution that gives customer management systems the ability to identify or predict customer problems early, before they become significant, allowing the service provider to proactively resolve a problem before it adversely affects the service provider/customer relationship.
Disclosure of Invention
Embodiments of the present disclosure provide a technical solution to the technical problem of providing a more predictive and proactive customer management system.
In one embodiment, the disclosed solution includes collecting historical case data from one or more customer service systems and using the historical case data to train one or more machine learning anomaly detection models to detect anomalies in case data that indicate potential customer dissatisfaction. Once the one or more machine learning anomaly detection models are trained, current case data is provided to the trained models and any anomalies in the current case data are identified. When one or more anomalies are detected in the case data for a particular case, the case and the detected anomalies are brought to the attention of a service provider agent or manager, who can then take proactive action to resolve or correct the anomalies before the customer's dissatisfaction escalates.
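The train-on-history, score-current-data flow described above can be sketched as follows. This is a minimal illustration only, assuming scikit-learn's IsolationForest in place of the patent's unspecified anomaly detection models; the feature columns (messages per week, average response hours, escalations) are hypothetical.

```python
# Sketch: fit an anomaly detector on historical case metrics, then flag
# anomalies in current case data. Feature columns are hypothetical:
# [messages_per_week, avg_response_hours, escalation_count]
import numpy as np
from sklearn.ensemble import IsolationForest

historical = np.array([
    [30, 4.0, 0],
    [28, 5.5, 0],
    [35, 3.5, 1],
    [31, 4.5, 0],
    [29, 6.0, 0],
])

model = IsolationForest(contamination=0.1, random_state=0)
model.fit(historical)

# Current case data: the second row deviates sharply (e.g., 300 messages).
current = np.array([
    [32, 4.2, 0],
    [300, 40.0, 3],
])
flags = model.predict(current)  # +1 = normal, -1 = anomaly
print(flags)
```

A flagged case would then be surfaced to an agent or manager through the reporting interfaces described later.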
In one embodiment, the disclosed technical solution includes collecting case data from a customer service system, including unstructured conversation data representing communications between a customer and the service provider. Machine learning methods (e.g., natural language processing methods) are then used to identify customer opinions in those communications. A detected opinion may indicate customer satisfaction or dissatisfaction with how a case is being handled, or the urgency of a need to intervene in the case. Once one or more opinions are detected in the case data of a particular case, the case and the detected opinions may be brought to the attention of a service provider agent or manager, who may then take proactive action to resolve the detected opinions before the customer's dissatisfaction escalates.
In one embodiment, the disclosed solution includes collecting historical case data from a customer service system and using the historical case data to train one or more machine learning anomaly detection models to detect anomalies in the case data that represent potential customer dissatisfaction. Once the one or more machine-learned anomaly detection models are trained, the current case data is provided to the trained one or more machine-learned anomaly detection models, and any anomalies in the current case data are identified. Additionally, current case data, including unstructured conversation data representing communications between customers and service providers, is also processed using machine learning language processing methods to identify any customer opinions in the conversation data.
Once the current case data has been processed using the machine learning anomaly detection model and the machine learning language processing methods, any anomalies and/or opinions detected in the data for a specific case are collected into reports and provided as notifications, reminders, signals, user interface displays, and other report formats as discussed herein or known in the art at the time of filing or developed or made available after the time of filing. The detected anomalies and opinions draw the attention of service provider agents or managers, who may then take proactive action to resolve them before the customer's dissatisfaction escalates. It should be understood that the anomalies and/or opinions collected in a report may be collected individually, as a compilation over a period of time, or at other time intervals, depending on the application.
Using the disclosed embodiments, machine learning is used to monitor the customer's satisfaction with the support provided by the service provider in near real time, so that any required corrective action may be taken before the customer's level of dissatisfaction rises to an unacceptable level, as may be defined by key performance indicators or other organizational indicators. Accordingly, the disclosed embodiments represent a technical solution to the long-standing technical problem of providing a customer management system capable of identifying or predicting customer problems before they escalate. Thus, using the disclosed embodiments, a service provider can proactively address a problem before it adversely affects the service provider/customer relationship.
Drawings
FIG. 1 is a high-level block diagram of an application environment for implementing an active customer relationship analysis system.
FIG. 2A is a block diagram of an application environment for active customer relationship analysis, including a more detailed block diagram of a customer service system and a vector collector module.
FIG. 2B shows an illustrative and non-exhaustive example user interface for case data for active customer relationship analysis.
Fig. 2C and 2D together show an illustrative and non-exhaustive list of vector data for active customer relationship analysis.
FIG. 3 is a block diagram of an application environment for active customer relationship analysis, including a more detailed block diagram of a signal processor module.
FIG. 4 is a block diagram of an application environment for active customer relationship analysis, including a more detailed block diagram of a verification and integration module.
FIG. 5A is a block diagram of an application environment for active customer relationship analysis, including a more detailed block diagram of a user interface module.
FIG. 5B shows an illustrative example of a signal report generated by the user interface module of FIG. 5A.
FIG. 5C shows an illustrative example of a signal report generated by the user interface module of FIG. 5A.
FIG. 5D shows an illustrative and non-exhaustive example user interface for opinion report data identified based on opinion signals and used for proactive customer relationship analysis.
Fig. 6 shows an illustrative and non-exhaustive example user interface for signal reporting data for active customer relationship analysis.
FIG. 7 is a vector control example table for active customer relationship analysis.
FIG. 8 is a flow chart of a process for active customer anomaly detection.
FIG. 9 is a flow chart of a process for proactive customer opinion detection.
FIG. 10 is a flow chart of a process for proactive customer relationship analysis.
Common reference numerals are used throughout the figures and the detailed description to indicate like elements. Those skilled in the art will readily recognize that the above-described figures are examples, and that other architectures, modes of operation, orders of operation, elements and functions can be provided and implemented without departing from the characteristics and features of the invention, as set forth in the claims.
Detailed Description
Embodiments will now be discussed with reference to the accompanying drawings, in which one or more exemplary embodiments are described. Embodiments may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein, shown in the drawings, and/or described below. Rather, these exemplary embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the principles of the invention to those skilled in the art, and are set forth in the appended claims.
As discussed in more detail below, embodiments of the present disclosure represent a technical solution to the technical problem of providing a more predictive and proactive customer management system. To this end, the disclosed embodiments include proactive customer signal detection: vector data generated from customer support system data is analyzed using machine learning models to detect customer signals. In one embodiment, the vector data represents a list of data fields on which vector algebraic operations can be performed. The vector data may be associated with interrelated objects such as, but not limited to, a case, the customer associated with the case, and the customer's contact associated with the case (e.g., the customer's contact representative). It should be understood that the vector data may be associated with other objects as discussed herein or known in the art at the time of filing or developed or made available after the time of filing.
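The idea of a "list of data fields on which vector algebraic operations can be performed" can be illustrated with a short sketch. The field names and values here are illustrative assumptions, not taken from the patent.

```python
# Sketch: case data flattened into a numeric vector so element-wise algebraic
# operations (e.g., differences between observation periods) can be applied.
# Fields (hypothetical): [num_communications, lifecycle_days,
#                         avg_response_hours, num_escalations]
import numpy as np

case_vector_week1 = np.array([12.0, 14.0, 6.5, 0.0])
case_vector_week2 = np.array([40.0, 21.0, 18.0, 2.0])

# Element-wise change per field between the two periods.
delta = case_vector_week2 - case_vector_week1
print(delta)
```

Downstream models can then consume such vectors directly, whether they are associated with case, customer, or contact objects.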
As discussed in more detail below, a customer signal may be negative or positive, according to various embodiments. When performing proactive analysis, it is advantageous not only to detect negative signals early, to prevent damage to customer relationships, but also to detect positive signals early, to strengthen those relationships. For example, a detected negative signal may indicate that a case requires the attention of an agent or manager, while a detected positive signal may indicate that a best practice has been found that can be shared with other agents and managers. Proactive detection of negative and positive signals in accordance with the disclosed embodiments also allows managers to assess the overall condition of the service provider organization in order to continue developing the organization strategically.
As discussed in more detail below, in various embodiments, a signal may be detected from an anomaly within structured data. In one embodiment, an anomaly is an observed result that differs from the expected result. An anomaly may be a point anomaly, a trend anomaly, or another anomaly type as discussed herein or known in the art at the time of filing or developed or made available after the time of filing. According to the disclosed embodiments, trend anomalies are discovered by analyzing historical data for changes along a trend line over time. For example, if a customer records thirty cases per week on average and then records three hundred cases in one week, a trend anomaly may be detected from the change in the data trend. Point anomalies are discovered when a single data point deviates from the statistics (e.g., the average) of the other data points. For example, a customer may submit a survey score of two out of five, which deviates from previous scores of four out of five.
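The two anomaly types above can be distinguished with a simple statistical sketch. This is an illustration under the assumption of basic z-score-style rules, not the patent's actual detection logic; the thresholds and data are hypothetical.

```python
# Sketch distinguishing point anomalies (one value deviating from the
# statistics of the others) from trend anomalies (the series breaking
# from its historical trend). Thresholds are illustrative.
from statistics import mean, stdev

def point_anomaly(history, value, k=3.0):
    """Flag a single data point that deviates from the other points' statistics."""
    return abs(value - mean(history)) > k * stdev(history)

def trend_anomaly(series, k=3.0):
    """Flag the newest step in the series deviating from the prior trend
    (a naive trend estimate: the mean of earlier step-to-step changes)."""
    steps = [b - a for a, b in zip(series, series[1:])]
    latest_step = steps[-1]
    prior_steps = steps[:-1]
    return abs(latest_step - mean(prior_steps)) > k * (stdev(prior_steps) or 1.0)

# A 2-of-5 survey score stands out against previous 4s and 5s.
survey_scores = [4, 4, 5, 4, 4]
print(point_anomaly(survey_scores, 2))

# Weekly case counts jumping from roughly thirty to three hundred.
weekly_cases = [29, 31, 30, 32, 30, 31, 300]
print(trend_anomaly(weekly_cases))
```

Both checks print `True` for these inputs; in a real system, either kind of flag would feed the signal validation step described later.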
As discussed in more detail below, in accordance with the disclosed embodiments, signals may also be detected from opinions within unstructured data. In one embodiment, the unstructured data is conversation data representing communications between the customer and the agent. For example, the conversation data may represent text messages sent between the customer and the agent, where each text message is a piece of text that constitutes unstructured data. Opinions are found in such unstructured data based on polarity scores ranging from negative one to positive one.
As discussed in more detail below, a positive polarity score represents a positive opinion; for example, a positive opinion can be found from the word "happy" in unstructured data. A negative polarity score represents a negative opinion; for example, a negative opinion can be found from the phrase "very frustrating." In accordance with the disclosed embodiments, the words and phrases within the unstructured data form a corpus. In one embodiment, a corpus is a body of text that can be analyzed in a natural language processing context, as is known in the art. In one embodiment, an opinion may be an urgency opinion, where the unstructured data is determined to contain an indication that a case needs to be escalated to a more experienced agent, such as the phrase "vital to our production environment." Such a corpus is associated with customer-expressed urgency indicating a likely future need for escalation. Cases can thus be classified according to the urgency found in the defined corpus.
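A minimal lexicon-based sketch of the polarity and urgency ideas above follows. The word lists and phrases are illustrative assumptions; a production system would use a trained NLP model rather than fixed lexicons.

```python
# Sketch: score opinion polarity on a -1..+1 scale from positive/negative
# word lexicons, and detect urgency via phrase matching. Lexicons are
# hypothetical examples, not the patent's actual corpus.
POSITIVE = {"happy", "great", "helpful", "resolved"}
NEGATIVE = {"frustrating", "unhappy", "slow", "broken"}
URGENCY_PHRASES = ("vital to our production environment", "production is down")

def polarity(text):
    """Average of +1/-1 lexicon hits; 0.0 when no opinion words are found."""
    words = text.lower().split()
    hits = [1 for w in words if w in POSITIVE] + [-1 for w in words if w in NEGATIVE]
    return sum(hits) / len(hits) if hits else 0.0

def is_urgent(text):
    """True when any urgency phrase appears in the message."""
    t = text.lower()
    return any(p in t for p in URGENCY_PHRASES)

print(polarity("we are happy with the fix"))       # 1.0 -> positive opinion
print(polarity("this delay is very frustrating"))  # -1.0 -> negative opinion
print(is_urgent("This system is vital to our production environment."))  # True
```

An urgency hit would mark the case for escalation review, while sustained negative polarity across a conversation would raise an opinion signal.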
As discussed in more detail below, embodiments of the present disclosure proactively detect signals in the case data of cases currently being serviced by a service provider. The service provider records the work done to solve a problem as case data in the service provider's customer service system. The case data is received by the service provider's detection management system. Control data is generated that provides instructions for processing the received case data into vector data and for validating the signal data generated from that vector data. The received case data is processed into vector data according to the control data's processing instructions, in preparation for analysis of the vector data by machine-learning-based techniques.
As discussed in more detail below, in one embodiment, an anomaly signal processor module detects anomaly data within the vector data and an opinion signal processor module detects opinion data within the vector data. The anomaly data and opinion data are validated based on control data containing validation rules. The validated signal data is displayed to a user, such as a manager of the service provider, via a user interface.
For purposes of illustration, specific examples are provided herein in which the customer is an enterprise or organization that utilizes one or more enterprise application software (EAS) systems, i.e., an enterprise software customer, and the service provider is a software service provider responsible for implementing and maintaining the EAS systems for the customer. However, one of ordinary skill in the art will readily recognize that the disclosed embodiments may be used with other types of customer/service provider relationships. Accordingly, the specific illustrative example of the enterprise software customer/software service provider relationship does not limit the scope of the invention as set forth in the claims.
In one embodiment, a customer of a service provider is using enterprise software to manage the customer's business. In one embodiment, the customer engages a service provider with rich experience in enterprise software support to address the customer's issues. Because the customer relies on the enterprise software running without errors, the customer has high expectations that the service provider will resolve problems in a timely manner, typically measured in days given the complexity of enterprise software. To help track all of the problems of the customers they service, service providers typically use customer service systems to track all customer cases. In a typical scenario, when a service provider is assigned a job, or is alerted to a problem with the customer's enterprise software, a case is recorded in the customer service system and one or more agents (typically people) of the service provider are assigned to resolve it.
As an agent performs work to complete a case, new data is added to the case data in the customer service system. Case data may be in a structured format that populates data fields. By way of non-limiting example, structured data has a defined length and format and can be organized and stored in a database (such as a relational database). Due to its consistent format, structured data can be analyzed computationally in general, and by machine-learning-based anomaly detection in particular.
As discussed in more detail below, the structured data associated with a given case may include data representing various case information, such as, but not limited to: the number of communications between the customer and the service provider regarding the case; the life cycle of the case; case-related response times; the number of documents or other requests the customer has made in the case; the number of times the customer has requested an update on the case; the average update time of the case; the number of changes of service provider agent handling the case; the escalation history of the customer associated with the case, i.e., how frequently the customer escalates cases due to dissatisfaction; the lower and upper limits of the range of evaluations submitted by the customer associated with the case; the lower and upper limits of the range of evaluations submitted by the specific customer contact on the case; the lower and upper limits of the range of customer evaluations of the service provider agents associated with the case; the priority of the case; the service contract renewal time of the customer associated with the case; the start time of the customer service associated with the case; the strategic value and capabilities of the customer associated with the case; and any other case-related structured data, as discussed herein or known in the art at the time of filing or developed or made available after the time of filing, that is considered likely to indicate customer satisfaction with the services provided by the service provider.
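Structured case data of the kind listed above can be represented as a typed record. The following sketch is a hypothetical schema for illustration only; the field names and the selection of fields are assumptions, not the patent's actual data model.

```python
# Sketch: a structured case record with fields like those listed above.
# The schema and field names are hypothetical.
from dataclasses import dataclass, asdict

@dataclass
class CaseRecord:
    communications_count: int   # messages exchanged on the case
    lifecycle_days: int         # days since the case was opened
    avg_response_hours: float   # average time to respond to the customer
    update_requests: int        # times the customer asked for an update
    agent_changes: int          # times the assigned agent changed
    escalation_count: int       # times the customer escalated the case
    priority: str               # e.g., "low", "medium", "high"
    latest_survey_score: int    # e.g., on a 1-5 scale

case = CaseRecord(
    communications_count=12,
    lifecycle_days=14,
    avg_response_hours=6.5,
    update_requests=3,
    agent_changes=1,
    escalation_count=0,
    priority="high",
    latest_survey_score=4,
)
print(asdict(case)["priority"])
```

A record like this maps directly onto the numeric vector representation described earlier, with categorical fields such as priority encoded numerically before analysis.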
As discussed in more detail below, in addition to structured data, information about a case may be in an unstructured format that populates comment fields. An example of unstructured data is any text-based conversation between a customer and an agent of the service provider. Typical customer service systems enable conversational interactions between customers and agents in the form of text messages, email, transcribed recordings, and the like. These conversational communications generate unstructured conversation data. As discussed in more detail below, using the disclosed embodiments, one or more types of natural language processing (NLP) machine learning systems may be used to analyze the unstructured conversation data to detect customer opinions and escalation urgency associated with a case.
As discussed in more detail below, in accordance with the disclosed embodiments, a detection management system is used to analyze active or current case data from a customer service system. In one embodiment, the detection management system may include an anomaly detection module and/or an NLP module to determine whether a customer is receiving quality service from the service provider. Accordingly, the customer service system transmits case data to the service provider's detection management system. In one embodiment, the detection management system processes and formats the case data, through integration and aggregation, into vector data for analysis by machine learning models.
As discussed in more detail below, the detection management system may include an anomaly detector machine learning model for analyzing structured data for anomalies, and an opinion detector machine learning model for analyzing unstructured data for opinions. An anomaly is structured data that is unexpected relative to other structured data. An opinion is a textual expression indicating whether the customer views the service provided by the service provider positively (e.g., as generally helpful) or negatively (e.g., as generally unhelpful).
Thus, the detection management system disclosed herein actively determines whether a customer becomes happy or unhappy as early as possible before significant damage is done to the customer/service provider relationship. This allows an administrator or other service provider agent to take proactive action based on this determination.
FIG. 1 is a high-level block diagram of an application environment 100 for active customer relationship analysis. It should be understood that the schematic diagram of fig. 1 is for exemplary purposes and is not intended to be limiting. In FIG. 1, application environment 100 includes a service provider computing environment 110 that includes a detection management system 111. In one embodiment, the application environment 100 is a production environment. In other embodiments, the application environment 100 is a development environment, a quality assurance environment, a combination of the foregoing, and any other environment as discussed herein or known in the art at the time of submission or developed or made available after the time of submission. The detection management system 111 includes: vector collector module 120, signal processor module 130, verification and integration module 140, and user interface module 150.
In fig. 1, detection management system 111 includes a processor 115 and a memory 116. Memory 116 includes a detection management database 190 that stores data associated with services provided to customers. The detection management database 190 includes training data 191, control data 192, vector data 193, and signal data 194. The memory 116 includes instructions stored therein that, when executed by the processor 115, perform processes for proactive customer relationship analysis.
The application environment 100 includes instructions representing, among other things, the processes of the vector collector module 120, the signal processor module 130, the verification and integration module 140, and the user interface module 150. As previously described, some embodiments of the present invention may be performed using a training environment, a testing environment, or a development environment instead of a production environment, depending on the opinion and anomaly detection signals desired for determining case, customer, and contact objects.
In one embodiment, training data 191 includes historical case data from one or more customer service systems. In one embodiment, historical case data is used to train one or more machine-learned anomaly detection models to detect anomalies in the case data that indicate potential customer dissatisfaction.
Any of various known anomaly detection models, or any other suitable known model, may be used as the machine learning-based anomaly detection model. As a specific illustrative example, the machine learning-based anomaly detection model may be one or more of a Gaussian distribution, interquartile range (IQR), or Support Vector Machine (SVM) machine learning-based anomaly detection model. In other cases, the machine learning-based anomaly detection model may be any anomaly detection model as discussed herein or known in the art at the time of submission or developed or made available after the time of submission.
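As an illustrative sketch only, and not the claimed implementation, the following plain-Python example shows how an IQR-based detector can flag a point anomaly in structured case data; the sample values and the conventional k = 1.5 fence multiplier are assumptions:

```python
def iqr_anomalies(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR] as point anomalies."""
    ordered = sorted(values)
    n = len(ordered)

    def quartile(q):
        # Linear interpolation between the closest ranks.
        pos = q * (n - 1)
        lo, hi = int(pos), min(int(pos) + 1, n - 1)
        frac = pos - lo
        return ordered[lo] * (1 - frac) + ordered[hi] * frac

    q1, q3 = quartile(0.25), quartile(0.75)
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lower or v > upper]

# Hypothetical back-and-forth message counts for comparable cases;
# 175 stands out as a point anomaly.
counts = [42, 50, 48, 55, 47, 51, 175, 49]
print(iqr_anomalies(counts))
```

In a production setting a library implementation (e.g., from a statistics package) would normally replace this hand-rolled quartile computation.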
In one embodiment, training data 191 includes corpus data used to train machine learning-based natural language processing models. As discussed in more detail below, such corpus data includes data representing keywords, phrases, or stems used to detect positive opinions and negative opinions. As described below, in one embodiment, the user interface module 150 enables an agent or manager of the service provider to provide feedback to refine and retrain the machine learning model in a feedback loop to obtain more accurate results in anomaly and/or opinion detection.
The machine learning models discussed herein may be trained using supervised learning methods (e.g., classification and/or regression), unsupervised learning methods (e.g., clustering and/or association), semi-supervised learning methods (e.g., supervised and unsupervised), and other learning methods.
As discussed in more detail below with respect to FIG. 2A, the vector collector module 120 collects case data from one or more case data sources (e.g., customer service system 180). It should be appreciated that although one customer service system 180 is depicted in FIG. 1, any number of customer service systems 180 may be coupled to detection management system 111 via one or more communication channels (e.g., communication channel 118). Customer service system 180 includes case data regarding cases or jobs handled by service providers on behalf of customers, as well as various communications that agents of service providers have with customers in order to resolve issues.
As is known in the art, the customer service system 180, sometimes referred to as a Customer Relationship Management (CRM) system, is typically Software as a Service (SaaS) provided in the cloud using cloud computing technology. Thus, case data may be used by all service provider users of the customer service system 180. Accordingly, coordinated case escalations may be provided within an organization of agents that provide support services. Additionally, case data may be made available to the detection management system 111 in a processed format (e.g., summary data) or an unprocessed format (e.g., raw data).
As discussed in more detail below with respect to fig. 3, the signal processor module 130 includes machine learning based models or algorithms for processing the vector data collected by the vector collector module 120. In one embodiment, a model or algorithm of the signal processor module 130 is used to detect anomalous data in the vector data, such as current case data. In one embodiment, if several anomalies are detected for objects such as cases, contacts, or customers, the signal processor module 130 ranks and/or normalizes the anomalies.
As discussed in more detail below with respect to fig. 3, in one embodiment, the model or algorithm of the signal processor module 130 includes a machine learning based language/text processing model or algorithm. In one embodiment, a machine learning based language/text processing model or algorithm is used to detect signal data, such as customer opinion data, in unstructured conversational data representing communications between a customer and a service provider.
As discussed in more detail below with respect to fig. 3, in one embodiment, the models or algorithms of the signal processor module 130 include both a machine learning based anomaly detection model or algorithm and a machine learning based language/text processing model or algorithm, to detect both anomaly signal data and customer opinion signal data.
As discussed in more detail below with respect to fig. 4, in one embodiment, the verification and integration module 140 verifies and integrates the detected anomaly and/or customer opinion signal data. In one embodiment, if an anomaly is determined by the signal processor module 130, the detected anomaly is verified by the verification and integration module 140 to be a true anomaly. For example, if a case is managed on a twenty-four-hour basis, the verification and integration module 140 will verify that assigning the case to each agent is appropriate, because the case is worked on around the clock, and that the case has not migrated between too many agents.
After the signal data 194 is processed by the verification and integration module 140, the signal data 194 is presented to the user through the user interface module 150.
As discussed in more detail below with respect to FIGS. 5A-5C, in one embodiment, the user interface module 150 provides a variety of data and/or reports to agents and managers of the service provider, including, but not limited to, data and/or reports indicating abnormal conditions detected in case data for a particular case and data and/or reports indicating any opinions detected in case data for a particular case. Further, as described above, in one embodiment, the user interface module 150 enables an agent or manager of the service provider to provide feedback to improve and retrain the machine learning model in a feedback loop to obtain more accurate results in anomaly and opinion detection.
As discussed in more detail below with respect to fig. 5A-5C, in one embodiment, the user interface module 150 includes a dashboard module 510 for signal information within the customer service system 180, a notification module 520 for signal information, and an indicator module 530 for signal information.
FIG. 2A illustrates a more detailed block diagram of the application environment 100 for active customer relationship analysis, including a customer service system 180 and a vector collector module 120 of the detection management system 111. It should be understood that the chart of FIG. 2A is for exemplary purposes and is not intended to be limiting. Referring to both fig. 1 and 2A, the application environment 100 includes a service provider computing environment 110, the service provider computing environment 110 including a customer service system 180 and a detection management system 111.
The customer service system 180 includes a customer service module 281, which customer service module 281 allows for the creation of case data 282. Case data 282 includes field data about the customer, field data about the case, field data about a customer survey, text data about a customer survey, and session data representing a session between the customer and an agent of the service provider. Case data 282 includes structured data and unstructured data. The structured data includes data fields for classifying case characteristics. The structured data also includes information for categorizing characteristics of customers, such as customers who provide positive references to service providers in the marketplace, the extent of cross-country, cross-regional implementation of the customers' EAS systems, and the module and contract values associated with particular customers. Unstructured data includes textual comments associated with the case, such as textual conversations, customer survey comments, and other textual comments as discussed herein or known in the art at the time of submission or developed or made available after the submission.
FIG. 2B shows an illustrative and non-exhaustive example user interface 240 for case data 282 for proactive customer relationship analysis.
As shown in FIG. 2B, the user interface 240 depicts case data 282, which includes contact information 241, case information 242, and conversation information 243. Those of ordinary skill in the art will readily recognize that FIG. 2B is merely a specific illustrative example of case data 282, and that many other types and arrangements of such data are possible and contemplated by the inventors. Therefore, specific illustrative examples of the type and arrangement of case data 282 of FIG. 2B should not be construed as limiting the embodiments set forth in the claims.
Returning to FIG. 2A, case data 282 is received by the vector collector module 120. In one embodiment, the customer service system 180 transmits the case data 282 in a raw format. In another embodiment, the customer service system 180 transmits the case data 282 in an integrated format. For example, case data 282 in integrated format may be an average of multiple values of a structured field, such as an average over the past six months. As another example, case data 282 in integrated format may be an average of the number of documents or of the customer survey scores associated with a case. In this example, the detection management system 111 does not receive the documents themselves. It should be appreciated that case data 282 may be received from any system that includes information about a customer.
The data collection module 230 of the vector collector module 120 collects case data 282 from the customer service system 180. The data collection module 230 uses control data 192, which includes instructions for receiving the case data 282 in a desired format. Structured case data 282 can contain point anomalies (a single data point that lies too far from the rest of the data), contextual anomalies (data that is anomalous in its context, typically in time-series data), collective anomalies (a collection of data points that together indicate a problem, such as a large number of cases logged by a customer), and other anomalies as discussed herein or known in the art at the time of submission or developed or made available after the time of submission. For example, in various instances, case data 282 may be received as raw data, analyzed data, calculated data, and other types of data as discussed herein or known in the art at the time of submission or developed or made available after the time of submission. It should be appreciated that the format of the received case data 282 is determined based on the desired signal. In one embodiment, case data 282 is selected from the customer service system 180 or received as raw data, and may be used as historical case vector data and current case vector data, respectively.
The vector collector module 120 also includes a vector data integrator module 210 that integrates the collected case data 282 and a vector data aggregator module 220 that analyzes the integrated collected case data 282 for use as vector data 193. The vector collector module 120 also includes a control configuration module 235 that allows the control data 192 to be modified for machine learning model training, as described further below. It should be understood that while the control configuration module 235 is depicted in the vector collector module 120, it may also be included in the signal processor module 130 and/or the verification and integration module 140.
The vector data integrator module 210 receives the formatted case data 282 through the data collection module 230. The vector data integrator module 210 integrates the case data 282 into an integrated format. Examples of integrations performed by the vector data integrator module 210 using Hypertext Preprocessor (PHP) scripts are case history data integration, case owner change data integration, case attachment data integration, case comment count data integration, case comment data integration, case status history data integration, integration of the last three consecutive customer survey reports for a case, and other integrations as discussed herein or known in the art at the time of submission or developed or made available after the time of submission. Such integration may be performed hourly, daily, or at other predetermined times as desired, or as discussed herein or known in the art at the time of submission or developed or made available after the time of submission.
The vector data aggregator module 220 receives the integrated case data 282 from the vector data integrator module 210 and aggregates the integrated case data 282 to generate the vector data 193. Examples of aggregations performed by the vector data aggregator module 220 using procedures, such as SQL procedures and other query language procedures as non-limiting examples, are vector aggregation procedures and other aggregations as discussed herein or known in the art at the time of submission or developed or made available after the time of submission. Such aggregation may be performed hourly, daily, or at other predetermined times as discussed herein or known in the art at the time of submission or developed or made available after the time of submission. It should be understood that the vector data 193 may be generated from the vector data integrator module 210 alone, from the vector data aggregator module 220 alone, or from both the vector data integrator module 210 and the vector data aggregator module 220.
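As a hedged illustration of the integration and aggregation steps described above, the following sketch de-duplicates hypothetical raw case records and then derives per-case vector rows; all field names and values are assumptions rather than an actual schema, and the embodiments may instead use PHP scripts and SQL procedures as noted above:

```python
from datetime import date

# Hypothetical raw case records collected from the customer service system.
raw_cases = [
    {"case_id": "C-100", "opened": date(2020, 1, 6), "closed": date(2020, 2, 10),
     "messages": 48, "owner_changes": 2, "documents": 4},
    {"case_id": "C-100", "opened": date(2020, 1, 6), "closed": date(2020, 2, 10),
     "messages": 48, "owner_changes": 2, "documents": 4},  # duplicate feed entry
    {"case_id": "C-101", "opened": date(2020, 2, 1), "closed": date(2020, 2, 4),
     "messages": 9, "owner_changes": 0, "documents": 1},
]

def integrate(records):
    """Integration step: de-duplicate records keyed by case id."""
    return {r["case_id"]: r for r in records}.values()

def aggregate(records):
    """Aggregation step: derive per-case vector rows (lifecycle, counts)."""
    rows = []
    for r in records:
        rows.append({
            "case_id": r["case_id"],
            "lifecycle_days": (r["closed"] - r["opened"]).days,
            "back_and_forth": r["messages"],
            "owner_changes": r["owner_changes"],
            "documents": r["documents"],
        })
    return rows

vector_rows = aggregate(integrate(raw_cases))
print(vector_rows)
```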
Fig. 2C and 2D simultaneously show a specific illustrative and non-exhaustive list 250 of vector data 193 for active customer relationship analysis.
As shown in fig. 2C and 2D, the list 250 of vector data 193 includes a vector name 251, a data type 252, an exception type 253, an escalation impact 254, and a description 255.
The vector 261 has a vector name 251 of "case opinion-general", a data type 252 of "unstructured/NLP", an exception type 253 of "point exception", an escalation impact 254 of "both", and a description 255 of "measure negative-positive opinion (NLP)".
The vector 262 has a vector name 251 of "case opinion-urgent/escalation", a data type 252 of "unstructured/NLP", an exception type 253 of "point exception", an escalation impact 254 of "both", and a description 255 of "measure negative-positive opinion (NLP)".
Vector 263 has a vector name 251 of "case back-and-forth, number of updates from customer and agent", a data type 252 of "structured/number count", an exception type 253 of "contextual exception", an escalation impact 254 of "both", and a description 255 of "count of back-and-forth messages between agent and customer". Vector 263 represents a count of the number of communications to and from the customer and the agent assigned to the customer's case. For example, for a complex support case for an enterprise customer, a typical back-and-forth count may be 50. In this example, if the case's back-and-forth count is 175, the anomaly signal processor module (discussed below) may proactively determine that there is a noteworthy anomaly that an administrator may resolve.
The vector 264 has a vector name 251 of "expedite requests (repeated update requests)", a data type 252 of "unstructured/NLP", an exception type 253 of "contextual exception", an escalation impact 254 of "both", and a description 255 of "customer or engineer repeatedly pressing for updates".
Vector 265 has a vector name 251 of "case lifecycle", a data type 252 of "structured/days", an exception type 253 of "point exception", an escalation impact 254 of "both", and a description 255 of "number of days the case has been open".
The vector 266 has a vector name 251 of "case average update time", a data type 252 of "structured/days", an exception type 253 of "point exception or contextual exception", an escalation impact 254 of "both", and a description 255 of "average time between updates and meaningful responses".
The vector 267 has a vector name 251 of "number of documents loaded", a data type 252 of "structured/quantity", an exception type 253 of "point exception", an escalation impact 254 of "case", and a description 255 of "total number of documents loaded in the case". Vector 267 represents the number of documents that have been uploaded for a case. For example, a typical case for an enterprise customer involves uploading fewer than ten documents; if fifty documents are uploaded, the anomaly signal processor module (discussed below) may proactively determine that there is a noteworthy anomaly that an administrator may resolve.
Vector 268 has a vector name 251 of "number of owner changes", a data type 252 of "structured/count", an exception type 253 of "point exception", an escalation impact 254 of "case", and a description 255 of "total number of owner changes (adaptive, since an FTS (follow-the-sun, around-the-clock) approach is used)". Vector 268 represents a count of the number of changes to the agent responsible for the case. Typically, for an enterprise case, the count of case owner changes over the past three months may be three; if there is a count of ten changes, the anomaly signal processor module (discussed below) may proactively determine that there is a noteworthy anomaly that an administrator may resolve.
Vector 269 has a vector name 251 of "customer's escalation history", a data type 252 of "structured/quantity", an exception type 253 of "contextual exception", an escalation impact 254 of "both", and a description 255 of "whether this customer has a prior escalation history".
The vector 270 has a vector name 251 of "customer CSR low-high", a data type 252 of "structured/quantity", an exception type 253 of "point exception", an escalation impact 254 of "customer", and a description 255 of "anomaly detection on negative and positive (low and high) customer survey results".
Vector 271 has a vector name 251 of "customer contact CSR low-high", a data type 252 of "structured/quantity", an exception type 253 of "point exception", an escalation impact 254 of "customer", and a description 255 of "anomaly detection on negative and positive (low and high) customer survey results for a specific contact".
Vector 272 has a vector name 251 of "case owner and lead engineer CSR low-high", a data type 252 of "structured/quantity", an exception type 253 of "point exception", an escalation impact 254 of "case", and a description 255 of "is the case owner's/lead engineer's customer survey result low?".
Vector 273 has a vector name 251 of "case priority", a data type 252 of "structured/quantity", an exception type 253 of "consideration", an escalation impact 254 of "case", and a description 255 of "is the case priority high?".
Vector 274 has a vector name 251 of "renewal approaching", a data type 252 of "structured/days remaining", an exception type 253 of "consideration", an escalation impact 254 of "both", and a description 255 of "the customer tends to escalate cases as the renewal date approaches".
Vector 275 has a vector name 251 of "T&R, FSS, TSS utilized in case", a data type 252 of "structured/lead engineer", an exception type 253 of "consideration", an escalation impact 254 of "both", and a description 255 of "such cases typically involve higher time sensitivity (e.g., compliance dates)". By way of non-limiting example, such cases may relate to tax, legal, and regulatory updates that require timely action to account for changes in applicable laws or regulations so that the correct output documents can be generated from the EAS system. The updates may include tax updates, i.e., changes in a market's tax rate from a first year to a second year or over another relevant time period.
Vector 276 has a vector name 251 of "onboarding/new customer", a data type 252 of "structured/start of service", an exception type 253 of "consideration", an escalation impact 254 of "both", and a description 255 of "this is a new customer; also take the onboarding score into account".
Vector 277 has a vector name 251 of "increase in case volume from the customer", a data type 252 of "structured/quantity", an exception type 253 of "collective exception", an escalation impact 254 of "both", and a description 255 of "the customer goes live in a new country or deploys new products/features".
Vector 278 has a vector name 251 of "environment permissions issue", a data type 252 of "structured/quantity", an exception type 253 of "consideration", an escalation impact 254 of "both", and a description 255 of "an engineer encounters an issue while logged into the customer environment".
Other examples of vectors used by the anomaly signal processor module include vectors that characterize customer data (e.g., the strategic or contract value associated with the customer at a particular stage of the relationship, and the customer's history of providing positive external references for the service provider). A customer-related vector may have a data type 252 of "structured/strategic customer" or "structured/referenceable customer". A customer-related vector may have a "consideration" exception type 253 and an escalation impact 254 of "customer", although case-related and contact-related vectors may also impact cases.
Those of ordinary skill in the art will readily recognize that fig. 2C and 2D are merely specific illustrative examples of vector data 193 and that many other types and arrangements of such data are possible and contemplated by the inventors. Therefore, specific illustrative examples of the type and arrangement of the vector data 193 of fig. 2C and 2D should not be construed as limiting the embodiments set forth in the claims.
FIG. 3 shows the application environment 100 for active customer relationship analysis, including a more detailed block diagram of the signal processor module 130. It should be understood that the diagram of fig. 3 is for exemplary purposes and is not intended to be limiting. Referring to fig. 1, 2A, and 3 concurrently, the application environment 100 includes a service provider computing environment 110 that includes the detection management system 111. The detection management system 111 includes the vector collector module 120, the signal processor module 130, and the verification and integration module 140. The signal processor module 130 receives vector data 193 from the vector collector module 120. In various embodiments, the signal processor module 130 may include only the anomaly signal processor module 310. In various embodiments, the signal processor module 130 may include only the opinion signal processor module 320. In various embodiments, the signal processor module 130 may include both the anomaly signal processor module 310 and the opinion signal processor module 320. As shown in fig. 3, the signal processor module 130 also includes a normalization module 330 and a priority module 340.
The exception signal processor module 310 processes the vector data 193 to detect exception data 395 of the signal data 194. The opinion signal processor module 320 processes the vector data 193 to detect opinion data 396. It should be understood that the signal data 194 includes anomaly data 395 detected by the anomaly signal processor module 310 and/or opinion data 396 detected by the opinion signal processor module 320.
The anomaly signal processor module 310 detects anomalies within the structured data of the vector data 193 using the anomaly detector model 311, which is a machine learning model. In one embodiment, the anomaly detector model 311 utilizes the TensorFlow platform or another machine learning platform as discussed herein or known in the art at the time of submission or developed or made available after the time of submission. The machine learning platform provides various statistical methods for anomaly detection. In one embodiment, the machine learning model is trained by executing anomaly detection and integration process scripts written in the Python programming language, or other scripts as discussed herein, developed in other programming languages, or known in the art at the time of submission or developed or made available after the time of submission.
In one embodiment, the anomaly detector model 311 is trained using training data 191 with supervised learning. As described above, the training data 191 may include data related to any of the vectors listed in fig. 2C and 2D, or data related to any desired vector discussed herein, known in the art, or otherwise made known. Returning to FIG. 3, the anomaly detector model 311 utilizes a machine learning algorithm trained with trend-based determinations. For example, the case lifecycle vector is based on structured vector data 193, where the vector represents the number of days a case has been open. In this example, a case lifecycle is determined to be anomalous if it exceeds sixty-three days, and is also determined to be anomalous if it is less than half a day. In this example, the anomalies are determined by a minimum threshold and a maximum threshold. In this example, the anomaly detector model 311 is trained by data scientists overlaying such thresholds on the machine learning algorithm.
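The minimum/maximum threshold overlay in this example can be sketched as follows; the function name and return values are assumptions, while the sixty-three-day and half-day thresholds come from the example above:

```python
def lifecycle_anomaly(open_days, min_days=0.5, max_days=63):
    """Overlay min/max thresholds on the case-lifecycle vector.

    A case open longer than max_days (possibly stalled) or shorter than
    min_days (possibly closed without real work) is flagged as anomalous.
    """
    if open_days > max_days:
        return "anomaly: lifecycle too long"
    if open_days < min_days:
        return "anomaly: lifecycle too short"
    return "normal"

print(lifecycle_anomaly(70))   # beyond the 63-day maximum
print(lifecycle_anomaly(0.2))  # under the half-day minimum
print(lifecycle_anomaly(30))
```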
In one embodiment, the anomaly signal processor module 310 utilizes the IQR to determine anomalies that are outliers, as is known in the art. For example, under IQR, an outlier is a data value much smaller or much larger than the other values in the data set. In one embodiment, the anomaly signal processor module 310 utilizes a Gaussian distribution algorithm, as is known in the art. For example, using a Gaussian distribution algorithm, anomalies are detected by comparing data values against a distribution fitted with mean μ and variance σ². It should be understood that other data anomaly detection methods may be used, such as the K-nearest neighbors algorithm (KNN), K-means clustering, Support Vector Machines (SVM), and other anomaly detection methods and machine learning algorithms as discussed herein or known in the art at the time of submission or developed or made available after submission, to find the most positive and most negative anomalies and potentially integrated signals.
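As a sketch of the Gaussian distribution approach, assuming a simple density threshold ε (a common formulation, not necessarily the one used in the embodiments), anomalies are values whose fitted Gaussian density falls below ε:

```python
import math

def gaussian_anomalies(values, epsilon=0.005):
    """Fit mean/variance, flag values whose probability density < epsilon."""
    n = len(values)
    mu = sum(values) / n
    var = sum((v - mu) ** 2 for v in values) / n
    def density(v):
        return math.exp(-((v - mu) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)
    return [v for v in values if density(v) < epsilon]

# Hypothetical case-open days; 90 is far from the rest of the data.
lifecycles = [10, 12, 11, 9, 13, 10, 90]
print(gaussian_anomalies(lifecycles))
```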
The opinion signal processor module 320 detects opinions within unstructured data of the vector data 193 using the opinion detector model 321, which is a machine learning model. In one embodiment, the opinion detector model 321 utilizes the Natural Language Toolkit (NLTK) platform or other natural language platforms as discussed herein or known in the art at the time of submission or developed or made available after the time of submission. In one embodiment, the natural language processing model is trained by executing opinion detection and integration process Python scripts and other scripts as discussed herein or known in the art at the time of submission or developed or made available after the time of submission.
The opinion detector model 321 is trained using training data 191 with supervised learning. The training data 191 is generated based on information collected from managers about previously determined opinions. In one embodiment, a manager reviews the determined opinions in the form of a corpus, which is a word or phrase. For example, the corpus may be "unpleasant", which the opinion detector model 321 has determined to be a negative signal. However, when the manager reviews the comments in which the "unpleasant" corpus was used, the manager may find that the word "unpleasant" referred to something other than the service provided by the service provider, such as the contact's displeasure about what was for lunch. In this example, the manager associates this corpus with a false positive indication. The user interface module 150 enables the service provider's manager to provide this type of feedback to refine and retrain the machine learning model in a feedback loop to obtain more accurate results in anomaly and/or opinion detection. After the corpus is associated with a false positive indication, a data scientist or agent adds the following to the training data 191: the co-occurrence of "unpleasant" and "lunch" in the corpus is not a negative signal and should be ignored. After training the opinion detector model 321 with this new training data 191, the opinion detector model 321 will no longer determine such "unpleasant" and "lunch" corpora to be negative signals. It should be appreciated that the agent may be an industry expert, a programmed process, or a machine learning model, as previously described, that takes action on false positives and continues to improve the training data and the subsequent results of anomaly and opinion signal detection.
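The false-positive feedback described above can be sketched as a co-occurrence override; the data structures and term lists here are illustrative assumptions, with only the "unpleasant"/"lunch" example taken from the text:

```python
NEGATIVE_TERMS = {"unpleasant", "disaster", "frustrated"}
# Manager feedback: a negative term co-occurring with any of these
# context terms was marked a false positive and should be ignored.
FALSE_POSITIVE_CONTEXT = {"unpleasant": {"lunch"}}

def negative_signal(comment):
    """Return True if the comment carries a negative opinion signal."""
    words = set(comment.lower().split())
    for term in NEGATIVE_TERMS & words:
        if not (FALSE_POSITIVE_CONTEXT.get(term, set()) & words):
            return True  # negative term with no overriding context
    return False

print(negative_signal("this fix was a disaster"))     # negative signal
print(negative_signal("lunch today was unpleasant"))  # suppressed by feedback
print(negative_signal("the delay was unpleasant"))    # still negative
```

In practice the override would be folded into the retraining data rather than hard-coded, but the effect on the detector's output is the same.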
The user interface module 150 enables the service provider's manager to provide this type of feedback, improving and retraining the machine learning model in a feedback loop to obtain more accurate anomaly and/or opinion detection results. After the corpus is associated with a false positive indication, false positive data is generated indicating that the corpus is not a negative signal when "unpleasant" and "lunch" occur together and should be ignored. This false positive data is then added to the training data 191 by human and/or non-human agents, such as, but not limited to, data scientists, programmers, robots, runtime and/or offline machine learning training modules and systems, and/or any other agent capable of providing updates and modifications to machine-learning-based systems and/or databases. After the opinion detector model 321 is retrained with this new training data 191, the opinion detector model 321 will no longer determine such "unpleasant" and "lunch" corpora to be negative signals.
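The false-positive feedback loop described above can be sketched in a few lines. This is an illustrative sketch only; the rule structure and function names below are hypothetical and are not part of the disclosed implementation.

```python
# Illustrative sketch: suppressing a negative-signal corpus when a known
# false-positive co-occurrence rule applies. Each rule says: if the flagged
# corpus co-occurs with a context word (e.g. "unpleasant" with "lunch"),
# the detection is a known false positive and should be ignored.
FALSE_POSITIVE_RULES = [
    {"corpus": "unpleasant", "context": "lunch"},
]

def is_false_positive(corpus: str, comment: str) -> bool:
    """Return True if a detected corpus matches a false-positive rule."""
    words = comment.lower().split()
    for rule in FALSE_POSITIVE_RULES:
        if rule["corpus"] == corpus and rule["context"] in words:
            return True
    return False

def detect_negative_signal(corpus: str, comment: str) -> bool:
    # The corpus counts as a negative signal only if no rule suppresses it.
    return not is_false_positive(corpus, comment)
```

In a retraining-based embodiment the same suppression would instead be learned from the updated training data 191 rather than applied as an explicit rule; the rule table here simply makes the intended effect concrete.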
In one embodiment, the opinion signal processor module 320 utilizes opinion analysis, in which opinions are ranked as negative or positive on a polarity scale from negative one to positive one, as is known in the art. For example, the word "disaster" may be assigned a value of -1.0, while the word "unpleasant" may be assigned a value of -0.7. The opinion signal processor module 320 utilizes tokenization, in which sentences and words are tokenized with a dictionary-based parser, as is known in the art. The opinion signal processor module 320 utilizes named entity recognition, in which named entities such as places, people, and organizations are identified, as is known in the art. For example, the word "Joy" is both an emotion word and a person's name. The opinion signal processor module 320 utilizes stemming and lemmatization, in which different forms of a word, e.g., "frustrated" and "frustrating", are consolidated, as is known in the art. The opinion signal processor module 320 utilizes part-of-speech tagging, in which the grammatical role of each word is determined, as is known in the art. The opinion signal processor module 320 utilizes stop-word removal, in which unimportant words are deleted, as is known in the art.
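The polarity scale and preprocessing steps just described can be illustrated with a minimal sketch. The lexicon, stop-word list, and function names below are hypothetical stand-ins for the natural language platform (e.g., NLTK) that an embodiment would actually use.

```python
import re

# Hypothetical polarity lexicon on the [-1, +1] scale described above.
POLARITY = {"disaster": -1.0, "unpleasant": -0.7, "good": 0.6, "great": 0.9}
# Hypothetical stop-word list; unimportant words are removed before scoring.
STOP_WORDS = {"the", "a", "an", "is", "was", "with", "this"}

def tokenize(text: str) -> list[str]:
    # Simple word tokenization; a dictionary-based tokenizer would go here.
    return re.findall(r"[a-z']+", text.lower())

def polarity_score(text: str) -> float:
    """Average polarity of the non-stop-word tokens, in [-1, +1]."""
    tokens = [t for t in tokenize(text) if t not in STOP_WORDS]
    scored = [POLARITY[t] for t in tokens if t in POLARITY]
    return sum(scored) / len(scored) if scored else 0.0
```

Averaging token polarities is only one aggregation choice; a production sentiment engine would also handle negation, intensifiers, and context.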
Opinions may be classified as general opinions or friction opinions. A general opinion indicates that the customer expressed a negative or positive opinion in the dialog text. A friction opinion indicates that there is a problem with the resolution of the case and that a supervisor should be made aware of it, because a friction opinion indicates that the case is about to be escalated.
The normalization module 330 is used when multiple anomalies are detected for a case, contact, or customer object. For example, if five different anomalies are detected, the normalization module 330 performs a normalization calculation over the multiple anomalies. The control data 192 includes normalization rules that provide instructions for the normalization calculation. For example, a normalization rule may be to sum the number of detected anomalies using the weights determined by the priority module 340. As another example, the normalization may involve a range having a minimum threshold and a maximum threshold (e.g., a minimum of one day and a maximum of ninety days to resolve a case), and the normalization algorithm may determine a normalized value based on the minimum and maximum of that scale. Such normalized values may be determined based on Euclidean distance algorithms and other normalization algorithms as discussed herein or known in the art at the time of submission or developed or made available after submission.
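The threshold-range example above amounts to min-max normalization. The following is an illustrative sketch under that assumption, not the claimed normalization algorithm.

```python
def min_max_normalize(value: float, lo: float, hi: float) -> float:
    """Scale a raw vector value onto [0, 1] relative to a threshold range.

    Values outside the range are clamped, so a case resolved in more than
    `hi` days still normalizes to 1.0.
    """
    value = max(lo, min(hi, value))  # clamp to [lo, hi]
    return (value - lo) / (hi - lo)

# Days-to-resolve vector with a one-day minimum and ninety-day maximum.
one_week = min_max_normalize(7, 1, 90)
```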
The priority module 340 is used to determine the priority of a particular vector determined to be anomalous. The priority module 340 uses weighting to determine the priority of a particular vector. The control data 192 includes weighting rules that provide instructions for priority weighting by the priority module 340. For example, each vector is assigned a weight from zero to one, where zero is the smallest weight and one is the largest weight. A weight between zero and one may also be assigned, for example a weight of 0.65. In this example, if eight anomalies are detected for eight different vectors, and the weight of each vector is one, then eight anomalies are counted. However, if four of the eight vectors have a weight of 0.75 and the other four have a weight of 0.25, then the priority weighting calculation determines that four anomalies are counted. It should be understood that other priority calculations may be used as discussed herein or known in the art at the time of submission or developed or made available after submission.
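The weighting arithmetic in this example can be reproduced with a short sketch; the dictionaries and function name are illustrative, not the disclosed implementation.

```python
def weighted_anomaly_count(detections: dict[str, bool],
                           weights: dict[str, float]) -> float:
    """Sum the 0-to-1 weights of the vectors whose anomaly was detected."""
    return sum(weights[name] for name, detected in detections.items() if detected)

# Eight vectors, all anomalous. With unit weights the count is 8; with four
# vectors weighted 0.75 and four weighted 0.25 the weighted count is 4.
vectors = [f"vector_{i}" for i in range(8)]
detections = {name: True for name in vectors}
unit_weights = {name: 1.0 for name in vectors}
mixed_weights = {name: (0.75 if i < 4 else 0.25)
                 for i, name in enumerate(vectors)}
```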
FIG. 4 illustrates an application environment 100 for active customer relationship analysis, which includes a more detailed block diagram of a verification and integration module 140. It should be understood that the schematic diagram of fig. 4 is for exemplary purposes and is not intended to be limiting. Referring to fig. 1, 2A, 3, and 4 concurrently, the application environment 100 includes a service provider computing environment 110 that includes a detection management system 111. The detection management system 111 includes a signal processor module 130, a verification and integration module 140, and a user interface module 150. The verification and integration module 140 receives signal data 194 from the signal processor module 130, the signal data 194 including anomaly data 395 and/or opinion data 396. The verification and integration module 140 includes a signal verifier module 410 and a signal integrator module 420.
Signal data 194 is generated from the detected anomalies and/or detected opinions. The signal verifier module 410 verifies, based on the control data 192, that a generated signal is valid. When a detected signal is affected by factors unrelated to an anomaly (e.g., anomalous noise), the signal verifier module 410 prevents the detected signal from being added to the signal data 194. For example, if the vector is "case owner change" and an anomaly of too many case owner changes is detected, a rule in the control data 192 may indicate that frequent case owner changes are not an anomaly when support is provided around the clock, because different agents are assigned responsibility for the case across shifts. In this example, although the case owner change count may be high, the signal verifier module 410 determines that this is not an anomaly for a case handled around the clock.
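This validation rule can be sketched as a simple predicate; the function signature and threshold parameter below are hypothetical.

```python
def keep_owner_change_signal(change_count: int,
                             threshold: int,
                             around_the_clock: bool) -> bool:
    """Return True if the case-owner-change anomaly should be kept.

    A high change count is expected for cases supported around the clock,
    where ownership rotates across shifts, so the signal is suppressed
    for those cases regardless of the count.
    """
    if around_the_clock:
        return False
    return change_count > threshold
```

In practice such rules would live in the control data 192 rather than in code, so that a manager can adjust them without redeploying the system.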
In some embodiments, the signal integrator module 420 integrates multiple detected signals associated with objects such as cases, contacts, and customers. For example, in the case where the signal handler module includes the anomaly signal handler module 310 and the opinion signal handler module 320 such that anomaly and opinion signals are detected, if five anomalies and/or five opinions are detected, the five anomalies and/or five opinions are integrated together before being added to the signal data 194.
After the signal data 194 is validated by the signal validator module 410 and the signal data 194 is integrated by the signal integrator module 420, the validation and integration module 140 generates signal report data 494. It should be understood that the signal reporting data 494 includes exception reporting data 495 and opinion reporting data 496. The signal report data 494 is then sent to the user interface module 150.
FIG. 5A illustrates an application environment 100 for active customer relationship analysis, which includes a more detailed block diagram of a user interface module 150. It should be understood that the schematic diagram of fig. 5A is for exemplary purposes and is not intended to be limiting. Referring concurrently to fig. 1, 2A, 3, 4, and 5A, the application environment 100 includes a service provider computing environment 110 that includes a detection management system 111. The detection management system 111 includes a verification and integration module 140 and a user interface module 150. The user interface module 150 receives the signal reporting data 494 from the verification and integration module 140. The user interface module 150 includes a dashboard module 510, a notification module 520, and an indicator module 530.
The dashboard module 510 displays the signal report data 494 and includes an exception signal list and an opinion signal list. The exception signal list provides, for each number of detected exceptions, a count of the cases having that many exceptions. The opinion signal list likewise provides a count of cases by number of detected opinions. In one embodiment, the dashboard module 510 displays the exception signal list separately from the opinion signal list.
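Per-count tallies of this kind can be derived by grouping cases by their number of detected signals, as in this illustrative sketch (the function name and input shape are assumptions).

```python
from collections import Counter

def signal_count_summary(case_signal_counts: dict[str, int]) -> list[tuple[int, int]]:
    """Return (signals_detected, number_of_cases) pairs, most signals first.

    Input maps a case identifier to its number of detected signals
    (anomalies or opinions); output matches the dashboard's list rows.
    """
    tally = Counter(case_signal_counts.values())
    return sorted(tally.items(), reverse=True)
```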
FIG. 5B shows an illustrative example of a signal report 560 generated by the user interface module of FIG. 5A.
As shown in fig. 5B, signal report 560 depicts a negative signal including exception information 561 and opinion information 562. Anomaly information 561 depicts 1 case with 7 anomalies, 3 cases with 6 anomalies, 8 cases with 5 anomalies, 10 cases with 4 anomalies, 6 cases with 3 anomalies, 15 cases with 2 anomalies, and 123 cases with 1 anomaly. Opinion information 562 depicts 2 cases with 2 general opinions, 41 cases with 1 general opinion, 108 cases with 5 or more urgent opinions, and 252 cases with 2 to 4 urgent opinions.
One of ordinary skill in the art will readily recognize that fig. 5B is merely a specific illustrative example of a signal report 560, and that many other types and permutations of such reports are possible and contemplated by the inventors. For example, a particular illustrative example of a signal report 560 includes exception information 561 and opinion information 562, which indicates that in this particular illustrative example, the signal handler module includes an exception signal handler module 310 and an opinion signal handler module 320. However, as described above, some embodiments include only the exception signal handler module 310 or the opinion signal handler module 320. Thus, in these embodiments, only exception information 561 or opinion information 562 would be displayed in signal report 560. Therefore, specific illustrative examples of the type and arrangement of signal reports 560 of fig. 5B should not be construed as limiting the embodiments set forth in the claims.
Fig. 5C shows an illustrative example of a signal report 570 generated by the user interface module of fig. 5A.
As shown in fig. 5C, the signal report 570 depicts a positive signal that includes exception information 571 and opinion information 572. Anomaly information 571 depicts 147 cases with 3 anomalies, 735 cases with 2 anomalies, and 4722 cases with 1 anomaly. Opinion information 572 depicts 1 case with 13 opinions, 3 cases with 12 opinions, and 1 case with 11 opinions, and so on.
Those of ordinary skill in the art will readily recognize that fig. 5C is merely a specific illustrative example of a signal report 570, and that many other types and arrangements of such reports are possible and contemplated by the inventors. For example, this specific illustrative example of the signal report 570 includes exception information 571 and opinion information 572, which indicates that in this specific illustrative example, the signal processor module includes the exception signal processor module 310 and the opinion signal processor module 320. However, as described above, some embodiments include only the exception signal processor module 310 or the opinion signal processor module 320. Thus, in these embodiments, only exception information 571 or opinion information 572 would be displayed in signal report 570. Therefore, the specific illustrative example of the type and arrangement of the signal report 570 of fig. 5C should not be construed as limiting the embodiments set forth in the claims.
Returning to FIG. 5A, the dashboard module 510 allows cases to be filtered by object: case, contact, and customer. The dashboard module 510 also allows filtering of cases by negative and positive signals. It should be understood that the dashboard module 510 may include other filtering criteria, such as the product line supported, the geographic region supported, and the name of the vector to be examined.
The dashboard module 510 allows the user to view details about anomalies and opinions. For example, if seven anomalies are detected for a case, the user may select the case and view the name of each anomaly along with information about it, such as the calculated values behind the anomaly. As another example, if five opinions are detected for a case, the user may select the case and view each detected corpus. In addition, the dashboard module 510 may display the comments in which the corpora were detected. For example, an "unpleasant" corpus may be detected as an opinion. The user may then view the comments containing the corpus to see the corpus in context. For example, the comment may be "My experience with your support has been unpleasant", reflecting a negative opinion. The dashboard module 510 also allows the user to mark as a false positive a corpus that has been incorrectly determined to be an opinion.
FIG. 5D shows an illustrative, non-exhaustive example user interface 580 for opinion report data 496 identified based on opinion signals and used for proactive customer relationship analysis. As shown in fig. 5D, the user interface 580 depicts positive opinion report data 496, including a first opinion display 581 and a second opinion display 582. The first opinion display 581 shows the positive corpus "exciting". In this context, the term appears in "very exciting", which resembles text that usually expresses a positive opinion. In contrast, if the text were "you are too slow; it would be exciting if you answered quickly", the user reviewing the display of this opinion would recognize that the word "exciting" relates to a negative opinion. Thus, in this alternative example, the user would designate the opinion so displayed as a false positive of a positive signal. As another example, the second opinion display 582 shows the corpus "very good", which generally indicates a positive opinion. In this context, the term appears in "you found the cause, very good". Thus, the user does not designate this as a false positive.
For example, the comment may be: "I am unhappy with myself for not contacting you earlier, because you solved my problem so quickly." In this case, the detected corpus is a false positive of a negative signal, because the customer's contact is expressing displeasure with himself, not with the agent. The dashboard module 510 allows the user to label the corpus as a false positive. After it is labeled, a data scientist may examine the false positive corpus and create training data 191 that may be used to retrain the opinion detector model 321.
It should be understood that a signal may be a negative signal indicating a problem or a positive signal indicating success. For example, fig. 5B shows negative signals and fig. 5C shows positive signals. In addition, there may be friction-related signals indicating that a case is developing in a way that requires escalation to a higher-level agent, or that requires escalation further up the service provider's contact hierarchy (e.g., to a management level), indicating the urgency of the signal. A friction-related signal may indicate that the customer's contact has provided case text or feedback that was previously determined to contribute to a negative signal, such as a low customer survey score provided in the past. In this case, the signal may indicate that the service provider needs to escalate a particular case recorded under the customer contact's name.
In one embodiment, the dashboard module 510 displays any signals related to both exceptions and opinions for the case. For example, the same case may have three exceptions detected by the exception signal handler module 310 and two opinions detected by the opinion signal handler module 320. It should be appreciated that providing exceptions and opinions simultaneously by the dashboard module 510 increases the understanding of the problems associated with the case that the customer is experiencing.
The notification module 520 sends the signal report data 494 as notifications of exceptions and opinions. Notifications are sent to the user via email, text, dialog boxes, and other notification mechanisms as discussed herein or known in the art at the time of submission or developed or made available after the time of submission. A personalized notification may be sent to the user so that the user receives notifications only for signals of interest. For example, a user may request notifications based on certain criteria (e.g., a product line supported by the service provider or a geographic region supported by the service provider).
The indicator module 530 interfaces with the customer service module 281 of the customer service system 180 to provide indications of the signal reporting data 494 of the three objects, customer, contact and case. In one embodiment, when an agent sees a signal indicated within the customer service module 281, the agent may select the signal indicator and view more information about the signal.
Fig. 6 shows an illustrative and non-exhaustive example user interface 600 of signal reporting data 494 for active customer relationship analysis. As shown in fig. 6, the user interface 600 depicts the signal reporting data 494 as a signal indicator 601. In the example shown by signal indicator 601, the case status is green, the customer status is yellow, and the contact status is red. In this example, the contact may need immediate attention, the customer may need a slightly lower level of attention than immediate attention, and the case object itself does not exhibit any signal indicator 601 indicating that immediate attention is needed. It should be understood that other meanings may be assigned to signal indicator 601 depending on the desired application.
Those of ordinary skill in the art will readily recognize that fig. 6 is merely a specific illustrative example of the signal report data 494, and that many other types and arrangements of such data are possible and contemplated by the inventors. Therefore, the specific illustrative example of the type and arrangement of the signal report data 494 of fig. 6 should not be construed as limiting the embodiments set forth in the claims.
FIG. 7 is an example table 700 of vector controls for active customer relationship analysis. Referring to fig. 1, 2A, 3, 4, 5A, 6, and 7 concurrently, table 700 includes a column 711 that represents the vector control field.
At row 721 of column 711, the vector control field is "vector name". In this context, the vector name may be any of the vector names shown in fig. 6 and any other vector names as discussed herein or known in the art at the time of submission or developed or made available after the submission.
At row 722 of column 711, the vector control field is "enabled". In this context, the vector control field indicates whether the vector is enabled for analysis or disabled.
At row 723 of column 711, the vector control field is "applicable to object - case". In this context, the vector control field indicates that the vector is to be analyzed for the case object type.
At row 724 of column 711, the vector control field is "applicable to object - contact". In this context, the vector control field indicates that the vector is to be analyzed for the contact object type.
At row 725 of column 711, the vector control field is "applicable to object - customer". In this context, the vector control field indicates that the vector is to be analyzed for the customer object type.
At row 726 of column 711, the vector control field is "polarity applicable". In this context, the vector control field indicates whether a vector is to be analyzed as a positive signal having a positive value between zero and one, as a negative signal having a negative value between negative one and zero, or as a positive signal and a negative signal, each of which has a value between negative one and positive one.
At row 727 of column 711, the vector control field is "vector type". In this context, the vector control field indicates an anomaly-Gaussian-based, anomaly-IQR-based, mean-based, median-based, opinion-based, standard-deviation-based, or threshold-based vector type, or other vector types as discussed herein or known in the art at the time of submission or developed or made available after submission.
At row 728 of column 711, the vector control field is "force low". Herein, if the vector type is threshold-based, the vector control field defines the minimum value of the threshold range.
At row 729 of column 711, the vector control field is "force high". Herein, if the vector type is threshold-based, the vector control field defines the maximum value of the threshold range.
At row 730 of column 711, the vector control field is "weight". Herein, the weights of the available vectors are set within the control data 192 for use by the priority module 340 of the signal processor module 130.
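The vector control fields of table 700 can be modeled as a record, as in the following sketch. The field names and example values are illustrative; the actual control data 192 schema is defined by the embodiment.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VectorControl:
    """One row of vector control data (field names mirror table 700)."""
    vector_name: str
    enabled: bool
    applies_to_case: bool
    applies_to_contact: bool
    applies_to_customer: bool
    polarity: str                       # "positive", "negative", or "both"
    vector_type: str                    # e.g. "threshold", "anomaly-gaussian"
    force_low: Optional[float] = None   # threshold minimum (threshold type only)
    force_high: Optional[float] = None  # threshold maximum (threshold type only)
    weight: float = 1.0                 # priority weight in [0, 1]

# Hypothetical control record for a case-owner-change vector.
owner_changes = VectorControl(
    vector_name="case owner change", enabled=True,
    applies_to_case=True, applies_to_contact=False, applies_to_customer=False,
    polarity="negative", vector_type="threshold",
    force_low=0, force_high=5, weight=0.65,
)
```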
FIG. 8 is a flow diagram of a process 800 for active customer anomaly detection. Referring concurrently to fig. 1, 2A, 3, 4, 5A, and 8, process 800 for active customer anomaly detection begins at operation 810 and the process flow proceeds to operation 811.
In operation 811, case data 282 is received from the customer service system 180. Case data 282 is associated with case information for the customer service system. Case data 282 represents structured data and unstructured data of customer service system 180. The data collection module 230 of the vector collector module 120 collects case data 282 from the customer service system 180.
Once the case data 282 is received at operation 811, process flow advances to operation 812.
At operation 812, case data 282 is collected into vector data 193 by the vector collector module 120. The vector data integrator module 210 integrates the case data 282 and the vector data aggregator module 220 aggregates the case data 282 to generate vector data 193. The vector data integrator module 210 receives integration instructions from the control data 192. The vector data aggregator module 220 receives aggregation instructions from the control data 192.
The control data 192 is modified by the user through the control configuration module 235. In one embodiment, the vector data 193 includes a weight assigned to each vector of the vector data 193 relative to the other vectors of the vector data 193. In this context, each vector of the vector data 193 is defined by a vector type, including: anomaly-Gaussian-based vector types, anomaly-IQR-based vector types, mean-based vector types, median-based vector types, standard-deviation-based vector types, threshold-based vector types, and other vector types as discussed herein or known in the art at the time of submission or developed or made available after submission. For each threshold-based vector type, a maximum threshold and a minimum threshold are assigned and stored as control data 192. In one embodiment, each vector of the vector data 193 has a customer object type, a contact object type, a case object type, or another object type as discussed herein or known in the art at the time of submission or developed or made available after the time of submission. The vector data 193 is transmitted by the vector collector module 120 to the signal processor module 130.
Once the case data 282 is collected into the vector data 193 at operation 812, process flow advances to operation 813.
At operation 813, the exception signal processor module 310 processes the vector data 193 to detect exception data 395. The anomaly signal processor module 310 includes an anomaly detector model 311, the anomaly detector model 311 performing a machine learning-based anomaly detection technique. The anomaly detector model 311 is trained with training data 191. In one embodiment, the machine learning based anomaly detection technique is a supervised machine learning based anomaly detection technique. In one embodiment, the anomaly detector model 311 is trained under a supervised model with training data 191 defined by a user of the detection management system 111.
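Two of the vector types referenced above, Gaussian-based and IQR-based anomaly detection, can be sketched with simple statistics. These rules are illustrative stand-ins for the trained anomaly detector model 311, not the disclosed machine-learning technique.

```python
import statistics

def gaussian_anomalies(values: list[float], k: float = 3.0) -> list[float]:
    """Flag values more than k standard deviations from the mean."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    return [v for v in values if abs(v - mu) > k * sigma]

def iqr_anomalies(values: list[float]) -> list[float]:
    """Flag values outside 1.5 * IQR of the quartiles (Tukey's rule)."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if v < lo or v > hi]
```

For example, in a list of per-case owner-change counts that are mostly 1 to 3, a case with 100 changes would be flagged by both rules.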
The exception signal processor module 310 generates exception data 395 from the vector data 193. In one embodiment, exception data 395 includes a point exception type, a context exception type, a collective exception type, and other exception types as discussed herein or known in the art at the time of submission or developed or made available after the time of submission. In one embodiment, the normalization module 330 normalizes the anomaly data 395 and the prioritization module 340 prioritizes the anomaly data 395. The signal processor module 130 transmits the exception data 395 to the verification and integration module 140.
Once the vector data 193 is processed to detect exception data 395 at operation 813, process flow proceeds to operation 814.
At operation 814, the verification and integration module 140 prepares the exception data 395 to generate exception report data 495. In one embodiment, the signal verifier module 410 verifies the anomaly data 395. In one embodiment, the signal integrator module 420 integrates the anomaly data 395. The verification and integration module 140 transmits exception reporting data 495 to the user interface module 150.
Once the verification and integration module 140 prepares the exception data 395 to generate exception report data 495 at operation 814, process flow proceeds to operation 815.
At operation 815, the exception report data 495 is provided to the user interface module 150 for analysis by the user. In one embodiment, dashboard module 510 provides a dashboard user interface to display exception report data 495 to a user. In one embodiment, the notification module 520 sends a notification of exception report data 495 to the user. In one embodiment, the indicator module 530 provides an indication of the exception reporting data 495 by customizing a user interface screen provided to the user by the customer service system based on the exception reporting data 495.
Once the exception report data 495 is provided to the user interface module 150 for analysis by the user at operation 815, process flow advances to operation 816.
At operation 816, the process 800 exits.
Fig. 9 is a flow diagram of a process 900 for proactive customer opinion detection. Referring concurrently to fig. 1, 2A, 3, 4, 5A, and 9, process 900 for proactive customer opinion detection begins at operation 910 and the process flow proceeds to operation 911.
At operation 911, case data 282 is received from the customer service system 180. Case data 282 is associated with case information for the customer service system. Case data 282 represents structured data and unstructured data of customer service system 180. In one embodiment, case data 282 includes textual information representing customer case dialogue information, agent case dialogue information, case survey result review information, and other textual information as discussed herein or known in the art at the time of submission or developed or made available after submission. The data collection module 230 of the vector collector module 120 collects case data 282 from the customer service system 180.
Once the case data 282 is received at operation 911, process flow advances to operation 912.
At operation 912, case data 282 is collected into vector data 193 by the vector collector module 120. The vector data integrator module 210 integrates the case data 282 and the vector data aggregator module 220 aggregates the case data 282 to generate vector data 193. The vector data integrator module 210 receives integration instructions from the control data 192. The vector data aggregator module 220 receives aggregation instructions from the control data 192.
The control data 192 is modified by the user through the control configuration module 235. In one embodiment, each vector of the vector data 193 has a customer object type, a contact object type, a case object type, or another object type as discussed herein or known in the art at the time of submission or developed or made available after the time of submission. The vector data 193 is transmitted by the vector collector module 120 to the signal processor module 130.
Once the case data 282 is collected into the vector data 193 at operation 912, process flow proceeds to operation 913.
In operation 913, the opinion signal processor module 320 processes the vector data 193 to detect opinion data 396. The opinion signal processor module 320 includes an opinion detector model 321, which opinion detector model 321 performs machine learning-based opinion detection techniques. In one embodiment, the machine learning-based opinion detection technique includes corpus data representing a plurality of opinion indications within vector data 193. The opinion detector model 321 is trained with training data 191. In one embodiment, the machine learning based opinion detection technique is a supervised machine learning based opinion detection technique. In one embodiment, the opinion detector model 321 is trained under a supervised model with training data 191 defined by an engineer of the detection management system 111.
The opinion signal processor module 320 generates opinion data 396 from the vector data 193. In one embodiment, the normalization module 330 normalizes the opinion data 396 and the prioritization module 340 prioritizes the opinion data 396. The signal processor module 130 transmits the opinion data 396 to the verification and integration module 140.
Once the vector data 193 is processed to detect opinion data 396 at operation 913, process flow proceeds to operation 914.
At operation 914, the validation and integration module 140 prepares the opinion data 396 to generate opinion report data 496. In one embodiment, the opinion types of the opinion data 396 include negative opinion types, positive opinion types, urgent opinion types, and other opinions as discussed herein or known in the art at the time of submission or developed or made available after submission. In one embodiment, the signal verifier module 410 verifies the opinion data 396. In one embodiment, the signal integrator module 420 integrates the opinion data 396. The validation and integration module 140 transmits opinion report data 496 to the user interface module 150.
Once the opinion data 396 is prepared by the validation and integration module 140 to generate opinion report data 496 at operation 914, process flow proceeds to operation 915.
At operation 915, the opinion report data 496 is provided to the user interface module 150 for analysis by the user. In one embodiment, the dashboard module 510 provides a dashboard user interface to display opinion report data 496 to a user. In one embodiment, the notification module 520 sends a notification of the opinion report data 496 to the user.
In one embodiment, the indicator module 530 provides an indication of the opinion reporting data 496 by customizing a user interface screen provided to a user by a customer service system based on the opinion reporting data 496.
In one embodiment, the user interface module 150 includes a user interface that allows the user to designate the opinion of the opinion reporting data 496 as a false positive. In one embodiment, training data is generated from the false positive designations in order to improve the predictive power of the opinion detector model 321. In one embodiment, one or more opinions associated with false positive designations are removed from the opinion reporting data 496.
Once the opinion report data 496 is provided to the user interface module 150 for analysis by the user at operation 915, process flow advances to operation 916.
At operation 916, the process 900 exits.
FIG. 10 is a flow diagram of a process 1000 for proactive customer relationship analysis. Referring also to FIGS. 1, 2A, 3, 4, 5A, and 10, process 1000 for proactive customer relationship analysis begins at operation 1010 and process flow advances to operation 1011.
At operation 1011, case data 282 is received from customer service system 180. Case data 282 is associated with case information for the customer service system. Case data 282 represents structured data and unstructured data of customer service system 180. The data collection module 230 of the vector collector module 120 collects case data 282 from the customer service system 180.
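As a sketch of what such case data might look like in code (all field names here are hypothetical illustrations, not taken from the disclosure), a single case combines structured fields with unstructured comment text:

```python
from dataclasses import dataclass, field

@dataclass
class CaseRecord:
    """Hypothetical shape for one case in case data 282."""
    case_id: str
    customer_id: str
    status: str                                   # structured field
    days_open: int                                # structured field
    comments: list = field(default_factory=list)  # unstructured text

case = CaseRecord("C-1001", "CUST-7", "open", 12,
                  comments=["Please escalate, this is urgent."])
```

The structured fields feed the anomaly-detection vectors, while the comment text feeds the opinion detection described earlier.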
Once the case data 282 is received at operation 1011, process flow advances to operation 1012.
At operation 1012, case data 282 is collected into the vector data 193 by the vector collector module 120. The vector data integrator module 210 integrates the case data 282 and the vector data aggregator module 220 aggregates the case data 282 to generate vector data 193. The vector data integrator module 210 receives integration instructions from the control data 192. The vector data aggregator module 220 receives aggregation instructions from the control data 192. The control data 192 is modified by the user through the control configuration module 235.
In one embodiment, each vector of the vector data 193 is defined by a vector type, including: an opinion-based vector type, an anomaly-Gaussian-based vector type, an anomaly-IQR-based vector type, a mean-median-based vector type, a standard-deviation-based vector type, a threshold-based vector type, and other vector types as discussed herein or known in the art at the time of submission or developed or made available after submission. In one embodiment, each vector of the vector data 193 includes an object type, such as a customer object type, a contact object type, a case object type, or other object types as discussed herein or known in the art at the time of submission or developed or made available after submission. The vector data 193 is transmitted by the vector collector module 120 to the signal processor module 130.
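As one concrete example of these vector types, an anomaly-IQR-based vector can flag a case metric (say, the number of days a case has been open) that falls outside the interquartile-range fences. The sketch below uses the common Tukey fences with k = 1.5; the exact scheme is an assumption, as the disclosure does not pin it down:

```python
import statistics

def iqr_bounds(values, k=1.5):
    """Return (low, high) Tukey fences: Q1 - k*IQR and Q3 + k*IQR."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

def iqr_anomalies(values):
    """Values outside the fences are candidate anomalies."""
    lo, hi = iqr_bounds(values)
    return [v for v in values if v < lo or v > hi]
```

An anomaly-Gaussian-based vector would follow the same pattern with mean and standard deviation in place of the quartile fences.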
Once the case data 282 is collected into the vector data 193 in operation 1012, the process flow proceeds to operation 1013.
In operation 1013, the signal processor module 130 processes the vector data 193 to detect the signal data 194. The signal processor module 130 generates signal data 194 from the vector data 193. Signal data 194 includes anomaly data 395, opinion data 396, and other signal data as discussed herein or known in the art at the time of submission or developed or made available after submission. In one embodiment, the normalization module 330 normalizes the signal data 194 and the priority module 340 prioritizes the signal data 194. The signal processor module 130 transmits the signal data 194 to the verification and integration module 140.
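The normalization and prioritization steps performed by the normalization module 330 and the priority module 340 might be sketched as below; min-max scaling and a descending sort are illustrative choices, since the disclosure does not specify the exact scheme:

```python
def normalize_scores(scores):
    """Min-max scale raw signal scores to [0, 1] so signals from
    different detectors are comparable."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0  # avoid division by zero for constant scores
    return {case: (s - lo) / span for case, s in scores.items()}

def prioritize_signals(scores):
    """Order signals strongest-first for the report."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Normalizing before prioritizing keeps an anomaly score and an opinion score on the same footing when both contribute to the same report.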
Once the vector data 193 is processed to detect the signal data 194 in operation 1013, the process flow proceeds to operation 1014.
At operation 1014, the verification and integration module 140 prepares the signal data 194 to generate signal report data 494. In one embodiment, the signal verifier module 410 verifies the signal data 194. In one embodiment, the signal integrator module 420 integrates the signal data 194. The verification and integration module 140 transmits the signal report data 494 to the user interface module 150.
Once the verification and integration module 140 prepares the signal data 194 to generate the signal report data 494 at operation 1014, process flow proceeds to operation 1015.
At operation 1015, the signal report data 494 is provided to the user interface module 150 for analysis by the user. In one embodiment, the dashboard module 510 provides a dashboard user interface to display the signal report data 494 to the user. In one embodiment, the notification module 520 sends a notification of the signal report data 494 to the user. In one embodiment, the indicator module 530 provides an indication of the signal report data 494 by customizing user interface screens provided to a user by the customer service system based on the signal report data 494.
Once the signal report data 494 is provided to the user interface module 150 for analysis by the user at operation 1015, process flow proceeds to operation 1016.
At operation 1016, the process 1000 exits.
Embodiments of the present disclosure provide efficient, effective, and versatile systems and methods for proactive customer relationship analysis. However, the disclosed embodiments do not encompass, embody, or preclude other forms of innovation in the field of anomaly detection systems and methods.
Moreover, the disclosed embodiments of the system and method for proactive customer relationship analysis are not abstract concepts for at least several reasons.
First, the disclosed systems and methods for proactive customer relationship analysis are not abstract ideas because they are not merely ideas in and of themselves (e.g., they cannot be performed mentally or with pen and paper). For example, in existing customer service systems, the amount of unstructured data in the comments of thousands of cases is enormous, because the agents of the service provider and the customer contacts regularly send text messages to each other. It is not feasible for a manager of the service provider to read all of the comments and search for particular words that indicate a customer opinion worth exploring. As another example, in existing customer service systems, the amount of structured data for thousands of cases is equally large, because the agents of the service provider update fields with information related to the resolution of each case. It is not feasible for a manager of the service provider to review all of the structured data and compare the structured data across cases. Instead, the disclosed embodiments utilize machine learning algorithms to detect opinions within the unstructured data and anomalies within the structured data. Due to the enormous volume of such unstructured and structured data, the human brain cannot perform such detection, even with pen and paper.
Second, the disclosed systems and methods for proactive customer relationship analysis are not abstract ideas because they are not methods of organizing human activity, such as fundamental economic principles or practices (including hedging, insurance, and risk mitigation); commercial or legal interactions (including contractual agreements; legal obligations; advertising, marketing, or sales activities or behaviors; and business relationships); and interactions between people (including social activities, teaching, and following rules or instructions). Rather, the disclosed embodiments perform machine learning model analysis to provide detection of signals in the interactions of customers with agents. Using the disclosed embodiments, the detection of signals provided to a service provider allows the service provider to resolve a problem before that problem causes the customer to seek service from a different service provider, thereby allowing the service provider to deliver better service to its customers. The disclosed embodiments improve the ability of the service provider's administrators to discover problems that would otherwise be opaque to them, which is not a method of organizing human activity.
Third, while mathematics may be used in the disclosed systems and methods for proactive customer relationship analysis, the disclosed and claimed systems and methods are not abstract ideas because they are not simply mathematical relationships or formulas. Rather, the disclosed embodiments produce a tangible result: signals generated by machine learning models operating on the vector data to determine anomalies or opinions. These signals are provided to users of the service provider to enhance the commercial viability of the service provider. This is not simply a mathematical relationship or formula.
Furthermore, the disclosed systems and methods describe a practical application that improves the field of signal detection by providing a technical solution to the technical problem of proactively detecting customer signals.
In the discussion above, certain aspects of some embodiments include process steps and/or operations and/or instructions described herein in a particular order and/or grouping for purposes of illustration. However, the particular order and/or grouping shown and discussed herein is illustrative only and not limiting. One skilled in the art will recognize that other orders and/or groupings of process steps and/or operations and/or instructions are possible, and in some embodiments one or more of the process steps and/or operations and/or instructions discussed above may be combined and/or omitted. Further, portions of one or more process steps and/or operations and/or instructions may be re-grouped into portions of one or more other process steps and/or operations and/or instructions discussed herein. Accordingly, the particular order and/or grouping of the process steps and/or operations and/or instructions discussed herein is not a limitation on the scope of the invention as claimed below. Thus, many variations may be implemented by one of ordinary skill in the art in light of this disclosure, whether or not explicitly stated in the specification or implied by the specification.

Claims (44)

1. A computing system implemented method for proactively detecting customer satisfaction, the method comprising:
collecting, using one or more computing systems, historical case vector data from one or more customer service systems;
training one or more machine learning anomaly detection models using the historical case vector data to detect anomalies in the case data indicative of potential customer dissatisfaction;
obtaining, using one or more computing systems, current case vector data representing current customer cases associated with one or more customers of a service provider;
providing the current case vector data to one or more machine learning anomaly detection models that are trained;
identifying one or more anomalies in the current case vector data for one or more particular current customer cases using the one or more machine-learned anomaly detection models;
generating, using one or more computing systems, a signal report including a list of each of the one or more particular current customer cases having the one or more identified anomalies and the particular one or more anomalies associated with the listed plurality of particular current customer cases having the one or more identified anomalies; and
providing, using one or more computing systems, the signal report to an agent of the service provider.
2. The computing system implemented method of claim 1, wherein, when collecting the historical case vector data using one or more computing systems, a weight is assigned to each vector in the vector data relative to other vectors in the vector data.
3. The computing system implemented method of claim 1, wherein, when collecting the historical case vector data using one or more computing systems, a vector type is assigned to each vector of the vector data, the vector type being selected from the group of vector types consisting of:
an anomaly-Gaussian based vector type;
an anomaly-IQR based vector type;
vector type based on mean;
vector type based on mean-median;
a vector type based on standard deviation; and
a threshold-based vector type.
4. The computing system implemented method of claim 3, wherein the threshold-based vector type comprises assigning one of a maximum threshold, a minimum threshold, and a combination thereof.
5. The computing system implemented method of claim 1, wherein, when collecting the historical case vector data using one or more computing systems, an object type is assigned to each vector of the vector data, wherein the object type comprises one of a customer object type, a contact object type, and a case object type.
6. The computing system implemented method of claim 1, wherein the one or more machine learning anomaly detection models comprise a supervised machine learning anomaly detection model.
7. The computing system implemented method of claim 1, wherein the one or more anomalies include at least one anomaly type selected from a group of anomaly types including:
a point anomaly type;
a contextual anomaly type; and
a collective anomaly type.
8. The computing system implemented method of claim 1, wherein generating, using one or more computing systems, a signal report comprises: verifying the one or more anomalies as valid anomalies.
9. The computing system implemented method of claim 1, wherein providing, using one or more computing systems, the signal report to an agent of the service provider comprises: providing a dashboard user interface that displays the signal report to an agent of the service provider.
10. The computing system implemented method of claim 1, wherein providing, using one or more computing systems, the signal report to an agent of the service provider comprises: sending a notification of the signal report to a user.
11. The computing system implemented method of claim 1, wherein providing, using one or more computing systems, the signal report to an agent of the service provider comprises: customizing a user interface screen provided by the customer service system to an agent of the service provider based on the signal report.
12. A computing system implemented method for proactively detecting customer satisfaction, the method comprising:
obtaining, using one or more computing systems, current case data representing cases associated with customers of a service provider;
providing the current case data to one or more machine learning-based language processing models;
identifying one or more customer opinions in current case data of one or more specific current customer cases using the one or more machine learning based language processing models;
generating, using one or more computing systems, a signal report including a list of each of the one or more particular current customer cases having the one or more identified customer opinions and the particular one or more customer opinions associated with the listed particular current customer cases having the one or more identified customer opinions; and
providing, using one or more computing systems, the signal report to an agent of the service provider.
13. The computing system implemented method of claim 12, wherein the current case data includes textual data representing one or more of customer case conversation data, agent case conversation data, and case survey result review data.
14. The computing system implemented method of claim 12, wherein the one or more machine learning-based language processing models comprise corpus data representing a plurality of opinion indications.
15. The computing system implemented method of claim 12, wherein the signal report provided to the agent of the service provider comprises: a false positive designation feedback feature for each of the particular one or more customer opinions associated with the listed particular current customer cases, the false positive designation feedback feature being generated when the agent indicates that one or more of the particular one or more customer opinions is a false positive.
16. The computing system implemented method of claim 15, wherein training data for the one or more machine learning-based language processing models is generated from the false positive designation feedback features generated by the agent.
17. The computing system implemented method of claim 12, wherein the particular one or more customer opinions associated with the particular current customer case listed are negative opinion types.
18. The computing system implemented method of claim 12 wherein the particular one or more customer opinions associated with the particular current customer case listed are positive opinion types.
19. The computing system implemented method of claim 12, wherein the particular one or more customer opinions associated with the particular current customer case listed are urgent opinion types.
20. The computing system implemented method of claim 12, wherein providing the signal report to an agent of the service provider comprises: providing a dashboard user interface that displays the signal report to the agent.
21. The computing system implemented method of claim 12, wherein providing the signal report to an agent of the service provider comprises: sending a notification of the signal report to the agent.
22. The computing system implemented method of claim 12, wherein providing the signal report to an agent of the service provider comprises: customizing a user interface screen provided by the customer service system to the agent based on the signal report.
23. The computing system implemented method of claim 15, wherein one or more customer opinions are removed from the signal report when they are associated with false positive designation feedback.
24. The computing system implemented method of claim 12, wherein in obtaining the current case vector data, each vector in the vector data is assigned a weight relative to other vectors in the vector data.
25. A computing system implemented method for proactively detecting customer satisfaction, the method comprising:
collecting, using one or more computing systems, historical case vector data from one or more customer service systems;
training one or more machine learning anomaly detection models using the historical case vector data to detect anomalies in the case data indicative of potential customer dissatisfaction;
obtaining, using one or more computing systems, current case vector data representing current customer cases associated with one or more customers of a service provider;
providing the current case vector data to one or more machine learning anomaly detection models that are trained;
identifying one or more anomalies in the current case vector data for one or more particular current customer cases using the one or more machine-learned anomaly detection models;
providing the current case vector data to one or more machine learning-based language processing models;
identifying one or more customer opinions in the current case vector data of one or more specific current customer cases using the one or more machine learning based language processing models;
generating, using one or more computing systems, a signal report, the signal report comprising:
a list of each of the one or more particular current customer cases having the one or more identified anomalies and the particular one or more anomalies associated with the plurality of particular current customer cases listed having the one or more identified anomalies; and
a list of each of the one or more particular cases having the one or more identified customer opinions and the particular one or more customer opinions associated with the listed particular customer cases having the one or more identified customer opinions; and
providing, using one or more computing systems, the signal report to an agent of the service provider.
26. The computing system implemented method of claim 25, wherein, when collecting the historical case vector data using one or more computing systems, a weight is assigned to each vector in the vector data relative to other vectors in the vector data.
27. The computing system implemented method of claim 25, wherein, when collecting the historical case vector data using one or more computing systems, a vector type is assigned to each vector of the vector data, the vector type being selected from the group of vector types consisting of:
an anomaly-Gaussian based vector type;
an anomaly-IQR based vector type;
vector type based on mean;
vector type based on mean-median;
a vector type based on standard deviation; and
a threshold-based vector type.
28. The computing system implemented method of claim 27, wherein the threshold-based vector type comprises assigning one of a maximum threshold, a minimum threshold, and a combination thereof.
29. The computing system implemented method of claim 25, wherein, when collecting the historical case vector data using one or more computing systems, an object type is assigned to each vector of the vector data, wherein the object type comprises one of a customer object type, a contact object type, and a case object type.
30. The computing system implemented method of claim 25, wherein the one or more machine learning anomaly detection models comprise a supervised machine learning anomaly detection model.
31. The computing system implemented method of claim 25, wherein the one or more anomalies include at least one anomaly type selected from a group of anomaly types including:
a point anomaly type;
a contextual anomaly type; and
a collective anomaly type.
32. The computing system implemented method of claim 25, wherein generating, using one or more computing systems, a signal report comprises: verifying the one or more anomalies as valid anomalies.
33. The computing system implemented method of claim 25, wherein the current case vector data includes textual data representing one or more of customer case conversation data, agent case conversation data, and case survey result review data.
34. The computing system implemented method of claim 25, wherein the one or more machine learning-based language processing models comprise corpus data representing a plurality of opinion indications.
35. The computing system implemented method of claim 25, wherein the signal report provided to the agent of the service provider comprises: a false positive designation feedback feature for each of the particular one or more customer opinions associated with the listed particular current customer cases, the false positive designation feedback feature being generated when the agent indicates that one or more of the particular one or more customer opinions is a false positive.
36. The computing system implemented method of claim 35, wherein training data for the one or more machine learning based language processing models is generated from the false positive designation feedback features generated by the agent.
37. The computing system implemented method of claim 25, wherein the particular one or more customer opinions associated with the particular current customer case listed are negative opinion types.
38. The computing system implemented method of claim 25 wherein the particular one or more customer opinions associated with the particular current customer case listed are positive opinion types.
39. The computing system implemented method of claim 25 wherein the particular one or more client opinions associated with the particular current client cases listed are of the urgent opinion type.
40. The computing system implemented method of claim 25, wherein providing, using one or more computing systems, the signal report to an agent of the service provider comprises: providing a dashboard user interface that displays the signal report to an agent of the service provider.
41. The computing system implemented method of claim 25, wherein providing, using one or more computing systems, the signal report to an agent of the service provider comprises sending a notification of the signal report to a user.
42. The computing system implemented method of claim 25, wherein providing, using one or more computing systems, the signal report to an agent of the service provider comprises: customizing a user interface screen provided by the customer service system to an agent of the service provider based on the signal report.
43. The computing system implemented method of claim 35, wherein the one or more customer opinions are removed from the signal report when associated with false positive designation feedback.
44. The computing system implemented method of claim 25, wherein in acquiring the current case vector data, each vector in the vector data is assigned a weight relative to other vectors in the vector data.
CN202080079645.2A 2019-09-13 2020-09-10 Method and system for active customer relationship analysis Pending CN115039116A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US16/570,432 US20210081293A1 (en) 2019-09-13 2019-09-13 Method and system for proactive client relationship analysis
US16/570,432 2019-09-13
PCT/US2020/050182 WO2021050716A1 (en) 2019-09-13 2020-09-10 Method and system for proactive client relationship analysis

Publications (1)

Publication Number Publication Date
CN115039116A true CN115039116A (en) 2022-09-09

Family

ID=74866461

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080079645.2A Pending CN115039116A (en) 2019-09-13 2020-09-10 Method and system for active customer relationship analysis

Country Status (9)

Country Link
US (1) US20210081293A1 (en)
EP (1) EP4028966A4 (en)
JP (1) JP7449371B2 (en)
CN (1) CN115039116A (en)
AU (1) AU2020347183A1 (en)
BR (1) BR112022004692A2 (en)
CA (1) CA3154383A1 (en)
MX (1) MX2022003105A (en)
WO (1) WO2021050716A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11264012B2 (en) * 2019-12-31 2022-03-01 Avaya Inc. Network topology determination and configuration from aggregated sentiment indicators
US11851096B2 (en) * 2020-04-01 2023-12-26 Siemens Mobility, Inc. Anomaly detection using machine learning
US11201966B1 (en) * 2020-08-25 2021-12-14 Bank Of America Corporation Interactive voice response system with a real time conversation scoring module

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120137367A1 (en) * 2009-11-06 2012-05-31 Cataphora, Inc. Continuous anomaly detection based on behavior modeling and heterogeneous information analysis
US20120254333A1 (en) * 2010-01-07 2012-10-04 Rajarathnam Chandramouli Automated detection of deception in short and multilingual electronic messages
JP2012008947A (en) 2010-06-28 2012-01-12 Hitachi Ltd Business activity analysis method and business support system
US8473624B2 (en) * 2010-07-21 2013-06-25 Nice Systems Ltd. Method and system for routing text based interactions
JP5496863B2 (en) 2010-11-25 2014-05-21 日本電信電話株式会社 Emotion estimation apparatus, method, program, and recording medium
US8774369B2 (en) * 2012-10-23 2014-07-08 Telefonaktiebolaget L M Ericsson (Publ) Method and system to provide priority indicating calls
WO2014126576A2 (en) 2013-02-14 2014-08-21 Adaptive Spectrum And Signal Alignment, Inc. Churn prediction in a broadband network
JP5994154B2 (en) 2013-03-27 2016-09-21 東日本電信電話株式会社 Contact support system and contact support method
US9965524B2 (en) * 2013-04-03 2018-05-08 Salesforce.Com, Inc. Systems and methods for identifying anomalous data in large structured data sets and querying the data sets
US20150242856A1 (en) * 2014-02-21 2015-08-27 International Business Machines Corporation System and Method for Identifying Procurement Fraud/Risk
US10262298B2 (en) * 2014-05-14 2019-04-16 Successfactors, Inc. Mobile dashboard for employee performance management tools
JP5905651B1 (en) 2014-07-30 2016-04-20 株式会社Ubic Performance evaluation apparatus, performance evaluation apparatus control method, and performance evaluation apparatus control program
US9824323B1 (en) * 2014-08-11 2017-11-21 Walgreen Co. Gathering in-store employee ratings using triggered feedback solicitations
US20170061344A1 (en) 2015-08-31 2017-03-02 Linkedin Corporation Identifying and mitigating customer churn risk
US10636047B2 (en) * 2015-09-09 2020-04-28 Hartford Fire Insurance Company System using automatically triggered analytics for feedback data
JP6035404B1 (en) 2015-11-17 2016-11-30 日本生命保険相互会社 Visit preparation system
US20170316438A1 (en) * 2016-04-29 2017-11-02 Genesys Telecommunications Laboratories, Inc. Customer experience analytics
US11232465B2 (en) 2016-07-13 2022-01-25 Airship Group, Inc. Churn prediction with machine learning
US10045218B1 (en) * 2016-07-27 2018-08-07 Argyle Data, Inc. Anomaly detection in streaming telephone network data
US10771313B2 (en) * 2018-01-29 2020-09-08 Cisco Technology, Inc. Using random forests to generate rules for causation analysis of network anomalies

Also Published As

Publication number Publication date
JP7449371B2 (en) 2024-03-13
BR112022004692A2 (en) 2022-06-14
EP4028966A4 (en) 2023-10-11
WO2021050716A1 (en) 2021-03-18
MX2022003105A (en) 2022-05-30
US20210081293A1 (en) 2021-03-18
CA3154383A1 (en) 2021-03-18
EP4028966A1 (en) 2022-07-20
JP2022548251A (en) 2022-11-17
AU2020347183A1 (en) 2022-04-28

Similar Documents

Publication Publication Date Title
CN113228077B (en) System, method and platform for automatic quality management and identification of errors, omissions and/or deviations in the coordination of services and/or payments in response to requests under policy underwriting
US11037080B2 (en) Operational process anomaly detection
US10572796B2 (en) Automated safety KPI enhancement
US11580475B2 (en) Utilizing artificial intelligence to predict risk and compliance actionable insights, predict remediation incidents, and accelerate a remediation process
US10334106B1 (en) Detecting events from customer support sessions
US8453027B2 (en) Similarity detection for error reports
CN115039116A (en) Method and system for active customer relationship analysis
US20210081972A1 (en) System and method for proactive client relationship analysis
US9304991B2 (en) Method and apparatus for using monitoring intent to match business processes or monitoring templates
US20110191128A1 (en) Method and Apparatus for Creating a Monitoring Template for a Business Process
US11455561B2 (en) Alerting to model degradation based on distribution analysis using risk tolerance ratings
US10613525B1 (en) Automated health assessment and outage prediction system
US11366798B2 (en) Intelligent record generation
Gupta et al. Reducing user input requests to improve IT support ticket resolution process
US20220188181A1 (en) Restricting use of selected input in recovery from system failures
CA3053894A1 (en) Defect prediction using historical inspection data
US20200219199A1 (en) Segmented actuarial modeling
US11768917B2 (en) Systems and methods for alerting to model degradation based on distribution analysis
US11256597B2 (en) Ensemble approach to alerting to model degradation
US20220358130A1 (en) Identify and explain life events that may impact outcome plans
US20240070130A1 (en) Methods And Systems For Identifying And Correcting Anomalies In A Data Environment
US20240037090A1 (en) Systems and methods for analyzing veracity of statements
US20210150394A1 (en) Systems and methods for alerting to model degradation based on survival analysis
US20210150397A1 (en) Ensemble approach to alerting to model degradation
US20210150396A1 (en) Systems and methods for alerting to model degradation based on survival analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40071334

Country of ref document: HK