CN114023355B - Agent outbound quality inspection method and system based on artificial intelligence - Google Patents
- Publication number
- CN114023355B (application number CN202111213528.3A)
- Authority
- CN
- China
- Prior art keywords
- user
- emotion
- data
- score
- label
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/50—Centralised arrangements for answering calls; Centralised arrangements for recording messages for absent or busy subscribers ; Centralised arrangements for recording messages
- H04M3/51—Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing
- H04M3/5175—Call or contact centers supervision arrangements
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention relates to the technical field of intelligent voice services, and in particular to an agent outbound quality inspection method based on artificial intelligence. The method comprises the following steps. S100: acquire voice call data for outbound calls rated by a plurality of users of different grades; the voice call data comprise the voice recording, call duration, calling number, user score, and user emotion score. S200: after the voice call data are analyzed by an artificial intelligence platform, a label system is constructed from the user call data, the label system comprising semantic labels and emotion labels. By taking user emotion as a quality inspection element, the invention prevents user experience and user feedback scores from being ignored during quality inspection; it can also extract data on the user's emotion fluctuation periods, intelligently analyze and improve the outbound flow, safeguard user experience, improve judgment accuracy during manual quality inspection, and be applied to a variety of quality inspection scenarios.
Description
Technical Field
The invention relates to the technical field of intelligent voice services, and in particular to an agent outbound quality inspection method based on artificial intelligence.
Background
An outbound call refers to a call that a computer automatically dials to a user, over which a recorded voice is played to the user; it is an integral part of a modern Computer Telephony Integration (CTI) customer service center system. Outbound calling is divided into two phases: obtaining the outbound data and initiating the outbound action. The outbound component is responsible for initiating outbound actions and is generally used for market analysis; for example, a large number of users can be dialed automatically from a list to survey business requirements or service satisfaction, or to conduct customer return visits and similar activities.
With rapid economic development, banks accumulate massive data while offering customers an ever richer range of products; the volume of customer information multiplies, and a wealth of usable information lies hidden in these huge customer data sets. Mining, managing, utilizing, and marketing with this hidden information is important content for the development of the financial industry. Traditional outbound services usually rely on purely manual dialing, or on automatic outbound dialing by a call center that then forwards the call to a suitable telephone operator.
Patent application publication number CN111949784A discloses an outbound method and device based on intention recognition, used in the technical field of big data. The method comprises: acquiring voice data and dialogue states of a user's historical dialogues, and converting the voice data into text data; inputting the text data into an intention recognition model to obtain the user intention corresponding to the historical dialogue; configuring the outbound script according to the user intention and the dialogue state, synthesizing outbound speech from that configuration, and initiating an outbound call to the user with the synthesized speech. That invention performs valuable outbound service based on accurate intention recognition, can better predict potential business, and can provide products and services to specific groups in a targeted way, so that customers receive more satisfactory service; at the same time, a large number of manual agents are eliminated, reducing the huge costs of hiring, training, and quality-inspecting manual agents.
Patent application publication number CN111597818A discloses a call quality inspection method, apparatus, computer device, and computer-readable storage medium. The method comprises: acquiring call voice data corresponding to each of a plurality of outbound numbers; performing semantic analysis on each piece of call voice data to obtain its semantic tags; and screening the call voice data according to those semantic tags, the screened data being determined as the call voice data to be inspected. This method improves the efficiency and accuracy of call quality inspection.
Neither patent document uses the user's emotion as a quality inspection element and reference item during inspection, so user experience and user feedback scores are completely ignored and quality inspection cannot achieve the expected effect.
Disclosure of Invention
In view of the problems in the background art, the invention provides an agent outbound quality inspection method based on artificial intelligence. By taking user emotion as a quality inspection element and reference item during inspection, the method avoids ignoring user experience and user feedback scores, extracts data on the user's emotion fluctuation periods, intelligently analyzes and improves the outbound flow, safeguards user experience, improves judgment accuracy during manual quality inspection, and is applicable to a variety of quality inspection scenarios.
The technical scheme of the invention is as follows: the agent outbound quality inspection method based on artificial intelligence comprises the following steps:
S100: acquire voice call data for outbound calls rated by a plurality of users of different grades;
the voice call data comprises voice data, call duration, calling number, user score and user emotion score;
S200: after the voice call data are analyzed by an artificial intelligence platform, a label system is constructed from the user call data, the label system comprising semantic labels and emotion labels; a semantic label marks the textual meaning, aim, and expectation expressed by the user during communication; an emotion label marks the user's emotional feedback during communication;
in step S200, constructing the emotion label for the user comprises the following steps:
S210: classify the user's emotion into five grade labels A, B, C, D, and E according to the analysis of the voice call data;
S220: assign the emotion labels "very good", "good", "average", "poor", and "very poor" to A, B, C, D, and E respectively;
S230: assign the five emotion labels different comprehensive-score calculation proportions of 100%, 80%, 60%, 40%, and 0% respectively;
S300: assign intelligent comprehensive scores to the extracted voice call data by combining the label system with the user scores, the comprehensive scores being divided into multiple levels;
S400: extract part of the voice call data from the multiple comprehensive-score levels for manual quality inspection;
the comprehensive-score calculation proportions of the emotion labels are 100%, 80%, 60%, 40%, and 0% respectively;
during comprehensive-score calculation, the emotion-label contribution is calculated according to these proportions;
if the user's emotion label is "very good", comprehensive score = emotion label × 100% + the remaining comprehensive-score judgment element reference items;
if the user's emotion label is "good", comprehensive score = emotion label × 80% + the remaining comprehensive-score judgment element reference items;
if the user's emotion label is "average", comprehensive score = emotion label × 60% + the remaining comprehensive-score judgment element reference items;
if the user's emotion label is "poor", comprehensive score = emotion label × 40% + the remaining comprehensive-score judgment element reference items;
if the user's emotion label is "very poor", comprehensive score = emotion label × 0% + the remaining comprehensive-score judgment element reference items;
the remaining comprehensive-score judgment element reference items include the user's age and the time period in which the call was dialed.
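As an illustration only, the weighting rule above can be sketched in Python. The grade-to-proportion table follows the text; the function name and the treatment of the emotion label and the remaining reference items as point values are assumptions for the sketch, not details given by the patent.

```python
# Illustrative sketch of the comprehensive-score rule described above.
# Treating the emotion label and the remaining judgment-element reference
# items (user age, dialing time period) as point values is an assumption.
EMOTION_PROPORTIONS = {
    "A": 1.00,  # "very good"
    "B": 0.80,  # "good"
    "C": 0.60,  # "average"
    "D": 0.40,  # "poor"
    "E": 0.00,  # "very poor"
}

def comprehensive_score(grade: str, emotion_points: float,
                        reference_points: float) -> float:
    """Weight the emotion-label points by the grade's proportion and add
    the remaining comprehensive-score judgment element reference items."""
    return EMOTION_PROPORTIONS[grade] * emotion_points + reference_points
```

For example, a "good" (grade B) call with 50 emotion points and 20 reference points would score 0.8 × 50 + 20 = 60.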
Preferably, the emotion labels "very good", "good", "average", "poor", and "very poor" are given comprehensive-score calculation proportions of 100%, 80%, 60%, 40%, and 0% respectively;
during comprehensive-score calculation, the emotion-label contribution is settled according to these proportions.
Preferably, in step S300, the specific steps of comprehensive-score demarcation are as follows:
S310: acquire the comprehensive-score distribution intervals of a plurality of voice call data items and demarcate them;
S320: define manual quality inspection sampling intervals according to the demarcated ranges of the comprehensive-score distribution;
S330: assign classification information to the manual quality inspection sampling intervals, including "excellent", "good", "qualified", and "unqualified".
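A minimal sketch of S310–S330 in Python, under stated assumptions: calls are already classified into the four named intervals, records are hypothetical `(call_id, class_name)` pairs, and the per-interval sample size is illustrative — the patent does not fix these details.

```python
import random

# Illustrative sketch of S310-S330: bucket scored calls by their
# demarcated classification, then draw a small sample from each bucket
# for manual quality inspection.
def draw_samples(scored_calls, per_bucket=2, seed=0):
    buckets = {}
    for call_id, cls in scored_calls:
        buckets.setdefault(cls, []).append(call_id)
    rng = random.Random(seed)  # fixed seed for reproducible sampling
    return {cls: rng.sample(ids, min(per_bucket, len(ids)))
            for cls, ids in buckets.items()}
```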
Preferably, in S200, the analyzed voice call data are further processed while the emotion labels are being derived; the processing includes scoring the user's emotion, recording the duration of each emotion fluctuation period, recording the time node of each emotion fluctuation period, using changes in the user's volume and frequency band as the judgment basis for emotion fluctuation, and classifying the problems raised in user communication;
according to these processing objects, emotion-label processing and deep-processing items are produced, the deep-processing items being the user emotion score, the emotion fluctuation period duration records, the emotion fluctuation period time nodes, and the classification of user communication problems.
Preferably, the system comprises an outbound database, an artificial intelligence processing platform, a manual quality inspection module, and a multi-level scoring database;
the outbound database is a data set of outbound voice call data together with the user feedback score for each individual voice call;
the artificial intelligence processing platform processes the voice call data to form the label system and, at the same time, deep-processes the voice call data;
the multi-level scoring database holds the comprehensive scores of the various data acquired by the artificial intelligence processing platform, the comprehensive scores being arranged as an ordered data set according to a certain logic;
the manual quality inspection module selectively extracts and re-inspects the ordered data in the multi-level scoring database.
Preferably, the artificial intelligence processing platform specifically comprises a voice data analysis module, a voice data recognition module, a voice-to-text module, and a voice data screening module;
the voice data analysis module records the dialing duration, scores the user's emotion, records the duration and time nodes of emotion fluctuation periods, obtains the volume and frequency band of the user's speech, and classifies the user's communication problems;
the voice data recognition module further processes the source of the voice call data and the rating information fed back by the user;
the voice-to-text module converts the user's voice call information from the outbound call into text information;
the voice data screening module screens the label data further processed by the voice data analysis module and the voice data recognition module according to the scoring logic, taking one or more items as samples for the different scoring intervals.
Preferably, the artificial intelligence processing platform further comprises an autonomous learning module, which learns autonomously from the manual extraction habits observed in the manual quality inspection module, so that manual quality inspection habits become a reference element in the processing of voice call data;
the autonomous learning module uses an AI algorithm and can autonomously optimize, or give optimization suggestions for, the way the system handles these manual habits.
Preferably, the scoring logic comprises label-optimal logic, label-score priority logic, user-feedback-score priority logic, emotion-fluctuation-duration priority logic, and user-emotion-score priority logic;
the label-score priority logic comprises label-score positive-order priority logic and label-score negative-order priority logic.
Preferably, while processing the user's call data, the artificial intelligence processing platform assigns different comprehensive-score calculation proportions according to the reference factors of the comprehensive score;
the upper limit of the comprehensive-score proportion is 100% and the lower limit is 0%; a proportion of 80%–100% is excellent, 70%–80% is good, 60%–70% is qualified, and below 60% is unqualified.
Compared with the prior art, the invention has the following beneficial technical effects:
(1): by taking the user's emotion as a quality inspection element and reference item during quality inspection, the method avoids ignoring user experience and user feedback scores; it can also extract data on the user's emotion fluctuation periods and analyze and improve the outbound flow, thereby safeguarding user experience.
(2): when the finally scored voice call data are manually extracted, the time nodes, ranges, and proportions of the emotion fluctuations in the voice call data, together with the original voice call recordings, can all be retrieved, which aids judgment accuracy during manual quality inspection.
(3): massive data can be processed using the autonomous learning module and AI algorithm to obtain the discrete distribution curve of manual processing habits, effectively combining manual quality inspection habits with intelligent inspection, so the method can be applied to a variety of quality inspection scenarios.
(4): the artificial intelligence processing platform automatically processes large amounts of voice call data, replacing inefficient manual sampling; only representative voice call cases require manual quality inspection, minimizing labor cost while making the quality inspection data more accurate.
Drawings
FIG. 1 is a flow chart of an agent outbound quality inspection method based on artificial intelligence in accordance with one embodiment of the present invention;
FIG. 2 is a flow chart of the user emotion tagging in the present invention;
FIG. 3 is a flow chart of the invention for comprehensive scoring interval demarcation;
FIG. 4 is a block diagram illustrating an agent outbound quality inspection system based on artificial intelligence in accordance with a further embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating a data processing of the voice data analysis module of FIG. 4;
reference numerals: 101 outbound database; 102 an artificial intelligence processing platform; 103, a manual quality inspection module; 104 a multi-level scoring database.
Detailed Description
The embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings; it is apparent that the described embodiments are only some, not all, of the embodiments of the present invention.
Embodiment 1
As shown in fig. 1, the agent outbound quality inspection method based on artificial intelligence provided by the invention comprises the following steps:
S100: acquire voice call data for outbound calls rated by a plurality of users of different grades;
the voice call data comprise the voice recording, call duration, calling number, user score, and user emotion score; some of these data are obtained from system statistics, some from the outbound call record database, and some from pop-up windows during the call and data entered by the user, which assist in judging the user's emotion label.
For example, during scoring, the user enters a score in response to the voice prompt broadcast during the outbound call; scores range from 1 to 10, where 10 means very satisfied, 1 means dissatisfied, and the remaining score values are distributed uniformly over the interval from 1 to 10. If the user enters 6, the call is classified as average.
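The uniform 1–10 mapping can be sketched as follows; the bucket edges are an assumption consistent with the statement that a score of 6 is classified as average, and the function name is hypothetical.

```python
# Hypothetical mapping of the user's 1-10 feedback score onto the five
# emotion grades, with uniform two-point buckets.
def feedback_grade(score: int) -> str:
    if not 1 <= score <= 10:
        raise ValueError("feedback score must be between 1 and 10")
    grades = ["very poor", "poor", "average", "good", "very good"]
    return grades[(score - 1) // 2]  # 1-2, 3-4, 5-6, 7-8, 9-10
```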
S200: after the voice call data are analyzed by the artificial intelligence platform, a label system is constructed from the user call data, the label system comprising semantic labels and emotion labels; a semantic label marks the textual meaning, aim, and expectation expressed by the user during communication; an emotion label marks the user's emotional feedback during communication;
the artificial intelligence platform analyzes and processes the voice call data; the analyzed objects specifically include dialing duration, user emotion, emotion fluctuation period duration, emotion fluctuation period time nodes, and the volume and frequency band of the user's speech;
the platform is built on a speech analysis algorithm model, speech preprocessing technology, emotion detection technology, speech acoustic parameter analysis technology, and parameter standardization;
first, the parameters of the acquired voice call data are normalized; a parameter sample of the voice call data is then obtained using speech acoustics and speech preprocessing techniques and fed into the speech analysis algorithm model, and the user's emotion label system is obtained in combination with the emotion detection technology.
The emotion detection technology automatically detects and judges the emotion of the user or the agent during the call, and provides high accuracy and timeliness.
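Since the text uses changes in volume and frequency band as the judgment basis for emotion fluctuation, a toy detector along those lines can be sketched; the per-frame volume input, jump threshold, and segment definition are all assumptions — a real system would use trained acoustic emotion models.

```python
# Minimal illustrative detector for candidate "emotion fluctuation"
# periods based on frame-to-frame volume jumps.
def fluctuation_segments(volumes, threshold=0.5):
    """volumes: per-frame volume values.  Returns (start, end) index
    pairs of runs where the volume jumps by more than `threshold`
    relative to the previous frame."""
    segments, start = [], None
    for i in range(1, len(volumes)):
        jump = abs(volumes[i] - volumes[i - 1]) > threshold
        if jump and start is None:
            start = i
        elif not jump and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(volumes)))
    return segments
```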
As shown in fig. 2, in step S200, constructing the emotion label for the user comprises the following steps:
S210: classify the user's emotion into five grade labels A, B, C, D, and E according to the analysis of the voice call data; the classification basis is the user's in-call score combined with the emotion index obtained by the emotion detection technology;
S220: assign the emotion labels "very good", "good", "average", "poor", and "very poor" to A, B, C, D, and E respectively; assigning emotion labels of different grades clearly divides the user's emotion into several levels for use as a calculation element of the subsequent comprehensive score;
S230: assign the five emotion labels "very good", "good", "average", "poor", and "very poor" different comprehensive-score calculation proportions; the purpose of assigning different proportions to different grades is that the comprehensive score is likewise divided into different levels according to these proportions;
the comprehensive-score calculation proportions of the "very good", "good", "average", "poor", and "very poor" emotion labels are 100%, 80%, 60%, 40%, and 0% respectively;
during comprehensive-score calculation, the emotion-label contribution is calculated according to these proportions; for example, if the user's emotion label value is "good", the comprehensive score is calculated as:
comprehensive score = emotion label value × 80% + the remaining comprehensive-score judgment element reference items, which include the user's age and the time period in which the call was dialed;
for example, if the call is dialed at 6:00 pm, while most users are still within their working hours, the user will feel annoyed and the emotion label value is decreased.
Conversely, if the call is dialed at 9:00 am, within the normal active hours of most users, the user will not object and the emotion label value remains in the normal range.
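The time-of-day examples above can be sketched as a simple multiplier on the emotion label value; the hour boundaries and the penalty factor are illustrative assumptions, not values given by the patent.

```python
# Illustrative adjustment of the emotion-label value by the outbound
# dialing time period, following the 6:00 pm vs 9:00 am discussion.
def time_period_factor(hour: int) -> float:
    """Multiplier applied to the emotion-label value for a call dialed
    at the given hour (24-hour clock)."""
    if 9 <= hour < 18:   # normal active time: value kept in normal range
        return 1.0
    return 0.8           # evening or off-hours call: value decreased
```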
S300: assign intelligent comprehensive scores to the extracted voice call data by combining the label system with the user scores, the comprehensive scores being divided into multiple levels;
As shown in fig. 3, in step S300, the specific steps of comprehensive-score demarcation are as follows:
S310: acquire the comprehensive-score distribution intervals of a plurality of voice call data items and demarcate them;
S320: define manual quality inspection sampling intervals according to the demarcated ranges of the comprehensive-score distribution;
S330: assign classification information to the manual quality inspection sampling intervals, including "excellent", "good", "qualified", and "unqualified".
S400: extract part of the voice call data from the multiple comprehensive-score levels for manual quality inspection.
The multi-level comprehensive scores mentioned above are the scores obtained by combining the emotion labels with the remaining comprehensive-score judgment element reference items (hereinafter, the final score); the final score serves as the sampling standard for manual quality inspection.
In S200, the analyzed voice call data are further processed while the emotion labels are being derived; the processing includes scoring the user's emotion, recording the duration of each emotion fluctuation period, recording the time node of each emotion fluctuation period, using changes in the user's volume and frequency band as the judgment basis for emotion fluctuation, and classifying the problems raised in user communication;
according to these processing objects, emotion-label processing and deep-processing items are produced, the deep-processing items being the user emotion score, the emotion fluctuation period duration records, the emotion fluctuation period time nodes, and the classification of user communication problems.
During manual quality inspection, the judgment elements of the comprehensive score can be traced back;
for example, when the voice call data whose final score is "poor" are manually extracted, the time nodes, ranges, and proportions of the emotion fluctuations in the voice call data, together with the original voice call recording, can all be retrieved;
with this method, taking the user's emotion as a quality inspection element and reference item prevents user experience and user feedback scores from being ignored; meanwhile, the user's emotion fluctuation period data can be extracted and the outbound flow analyzed and improved, thereby safeguarding user experience.
Embodiment 2
As shown in fig. 4, based on the agent outbound quality inspection method described in Embodiment 1, this embodiment proposes an agent outbound quality inspection system based on artificial intelligence, comprising an outbound database 101, an artificial intelligence processing platform 102, a manual quality inspection module 103, and a multi-level scoring database 104;
wherein the outbound database 101 is a data set of outbound voice call data and user feedback scores for individual voice call data;
the artificial intelligence processing platform 102 processes the voice call data to form the label system and, at the same time, deep-processes the voice call data;
the multi-level scoring database 104 comprehensively scores various data acquired by the artificial intelligence processing platform 102, and the comprehensive scores are listed as an ordered data set according to certain logic;
the manual quality inspection module 103 selectively extracts and re-inspects the orderly arranged data of the multi-level scoring database 104.
The artificial intelligence processing platform 102 specifically includes a voice data analysis module, a voice data recognition module, a voice-to-text module, and a voice data screening module;
as shown in fig. 5, the voice data analysis module records the dialing duration, scores the user's emotion, records the duration and time nodes of emotion fluctuation periods, obtains the volume and frequency band of the user's speech, and classifies the user's communication problems;
the voice data recognition module can further process the source of voice call data and the rating information fed back by the user;
the voice-to-text module converts the voice call information between the user and the outbound call into text information; the converted text can be displayed on an external terminal device, visualizing the quality inspection data.
The voice data screening module screens the label data further processed by the voice data analysis module and the voice data recognition module according to the scoring logic; one or more items can be taken as samples for the different scoring intervals;
the scoring logic comprises label optimal logic, label score priority logic, user feedback scoring priority logic, emotion fluctuation duration priority logic and user emotion scoring priority logic;
the label score priority logic comprises label score positive order priority logic and label score negative order priority logic;
the label-score positive-order priority logic sorts label scores from high to low; conversely, the label-score negative-order priority logic sorts label scores from low to high;
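The two ordering logics amount to a sort direction over label scores; a minimal sketch, assuming a hypothetical `(call_id, label_score)` record shape:

```python
# Sketch of the label-score positive-order and negative-order priority
# logics: sort records by label score descending or ascending.
def order_by_label_score(records, positive=True):
    return sorted(records, key=lambda r: r[1], reverse=positive)
```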
while processing the user's call data, the artificial intelligence processing platform 102 assigns different comprehensive-score calculation proportions according to the reference factors and reference bases of the comprehensive score;
the upper limit of the comprehensive-score proportion is 100% and the lower limit is 0%; a proportion of 80%–100% represents excellent, 70%–80% good, 60%–70% qualified, and below 60% unqualified.
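The stated thresholds can be written as a small lookup; treating each lower boundary as inclusive is an assumption, since the text does not say which side of a boundary a score falls on.

```python
# The proportion-to-grade thresholds stated above.
def proportion_grade(proportion: float) -> str:
    """proportion: comprehensive-score proportion in percent, 0-100."""
    if proportion >= 80:
        return "excellent"
    if proportion >= 70:
        return "good"
    if proportion >= 60:
        return "qualified"
    return "unqualified"
```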
In the following examples, the remaining comprehensive-score judgment element reference items are all taken as 100%; in practice these reference items are set autonomously according to requirements.
Example one: a piece of voice data is extracted from the outbound database, with a user feedback score of 10 points; the voice data analysis module records the dialing duration, scores the user's emotion, records the duration and time nodes of the emotion fluctuation periods, obtains the volume and frequency band of the user's speech, and classifies the user's communication problems;
the user's emotion fluctuation periods are searched automatically; since no abnormality is found, the user's label is marked as grade A, representing "very good", and the emotion label in the comprehensive-score calculation is counted at 100%;
after the remaining comprehensive-score judgment element reference items are calculated, the final score is obtained and graded as "excellent" among "excellent", "good", "qualified", and "unqualified".
Example two: a piece of voice data is extracted from the outbound database, with a user feedback score of 6 points; the voice data analysis module records the dialing duration, scores the user's emotion, records the duration and time nodes of the emotion fluctuation periods, obtains the volume and frequency band of the user's speech, and classifies the user's communication problems;
the user's emotion fluctuation periods are searched automatically; since the fluctuation time is short, the user's label is marked as grade C, representing "average", and the emotion label in the comprehensive-score calculation is counted at 60%;
after the remaining comprehensive-score judgment element reference items are calculated, the final score is obtained and graded as "qualified" among "excellent", "good", "qualified", and "unqualified".
Example three: a piece of voice data is extracted from the outbound database, with a user feedback score of 1 point; the voice data analysis module records the dialing duration, the user emotion score, the emotion fluctuation segment duration, and the emotion fluctuation time nodes, obtains the volume frequency band of the user's call, and classifies the user's communication problems;
the emotion fluctuation periods of the user are searched automatically; as the user's emotion fluctuation duration is long, the user's label is marked as an E-grade label, and the emotion label is counted at 0% in the comprehensive score calculation;
after the remaining comprehensive score judgment element reference items are calculated, the final score is obtained and graded as "unqualified" among "excellent", "good", "qualified", and "unqualified".
Combining the three examples, the voice data are classified into several pieces of unqualified, qualified, and excellent voice data for manual sampling inspection; when voice call data are manually sampled and re-inspected, the time nodes, range, and proportion of the emotion fluctuation periods in the voice call data, together with the original recording of the call, can all be retrieved, which improves the accuracy of judgment during manual quality inspection.
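The grading flow of the three examples can be sketched as follows. The A-E proportions and the four grade bands come from the description above; combining the emotion-label proportion with the remaining reference items by multiplication (each remaining item being 100% in these examples) is an assumption that reproduces the three example outcomes, not a formula stated verbatim in the patent.

```python
# Sketch of the grading flow in examples one to three. The label proportions
# and grade bands are from the patent text; combining them multiplicatively
# is an assumption consistent with the three example outcomes.
EMOTION_PROPORTION = {"A": 1.00, "B": 0.80, "C": 0.60, "D": 0.40, "E": 0.00}

def composite_grade(emotion_label: str, remaining_items: float = 1.00) -> str:
    """Map an A-E emotion label plus the remaining judgment element
    reference items (100% in the examples) onto the four final grades."""
    score = EMOTION_PROPORTION[emotion_label] * remaining_items
    if score >= 0.80:
        return "excellent"   # 80%-100%
    if score >= 0.70:
        return "good"        # 70%-80%
    if score >= 0.60:
        return "qualified"   # 60%-70%
    return "unqualified"     # below 60%

print(composite_grade("A"), composite_grade("C"), composite_grade("E"))
```

With the remaining items held at 100%, an A-grade label yields "excellent", a C-grade label "qualified", and an E-grade label "unqualified", matching examples one to three.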
Example III
As shown in fig. 4, on the basis of the artificial-intelligence-based agent outbound quality inspection method described in the first embodiment and the second embodiment, this embodiment proposes an artificial-intelligence-based agent outbound quality inspection system, comprising an outbound database 101, an artificial intelligence processing platform 102, a manual quality inspection module 103, and a multi-level scoring database 104;
wherein the outbound database 101 is a data set of outbound voice call data and the user feedback score for each piece of voice call data;
the artificial intelligence processing platform 102 further processes the voice call data, forming a label system from the processed data while also deep-processing the voice call data;
the multi-level scoring database 104 comprehensively scores the various data acquired by the artificial intelligence processing platform 102, and the comprehensive scores are arranged into an ordered data set according to a certain logic;
the manual quality inspection module 103 selectively extracts and re-inspects the ordered data of the multi-level scoring database 104.
The artificial intelligence processing platform 102 specifically comprises a voice data analysis module, a voice data recognition module, a voice data transcription module, and a voice data screening module;
the voice data analysis module records the dialing duration, scores the user's emotion, records the emotion fluctuation segment duration and the emotion fluctuation time nodes, acquires the volume frequency band of the user's call, and classifies the user's communication problems;
the voice data recognition module can further process the source of the voice call data and the rating information fed back by the user;
the voice data transcription module converts the voice call information between the user and the outbound call into text information;
the voice data screening module screens the label data further processed by the voice data analysis module and the voice data recognition module according to the scoring logic, to serve as samples of different scoring segments; the number of samples may be one or several;
the scoring logic comprises label-optimal logic, label score priority logic, user feedback score priority logic, emotion fluctuation duration priority logic, and user emotion score priority logic;
the label score priority logic comprises label score positive-order priority logic and label score negative-order priority logic;
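The patent names these priority logics but does not spell out their algorithms; a minimal reading, sketched below, is that the positive-order and negative-order variants simply sort candidate samples by label score, ascending and descending respectively. The record fields are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of label score positive-order / negative-order priority:
# sort candidate samples by label score, ascending or descending. The
# "call_id" and "label_score" field names are illustrative assumptions.
records = [
    {"call_id": 1, "label_score": 0.60},
    {"call_id": 2, "label_score": 1.00},
    {"call_id": 3, "label_score": 0.00},
]

positive_order = sorted(records, key=lambda r: r["label_score"])                # ascending
negative_order = sorted(records, key=lambda r: r["label_score"], reverse=True)  # descending

print([r["call_id"] for r in positive_order])  # lowest label score first
```

The other priority logics would follow the same pattern with a different sort key (user feedback score, emotion fluctuation duration, or user emotion score).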
in the process of processing the user's call data, the artificial intelligence processing platform 102 assigns different comprehensive score calculation proportions to the reference elements of the comprehensive score according to their respective reference bases;
the upper limit of the comprehensive score proportion is 100% and the lower limit is 0%; a comprehensive score proportion of 80%-100% represents "excellent", 70%-80% represents "good", 60%-70% represents "qualified", and below 60% represents "unqualified".
In this embodiment, the artificial intelligence processing platform 102 further comprises an autonomous learning module, which learns autonomously from the manual sampling habits of the manual quality inspection module 103, so as to incorporate the manual quality inspection habits into the reference elements for processing voice call data;
the autonomous learning module uses an AI algorithm and, based on the manual handling habits, can autonomously optimize the system's processing flow or offer optimization suggestions for it.
By processing massive data with the autonomous learning module and the AI algorithm, a discrete distribution curve of the manual processing habits is obtained, effectively combining manual quality inspection habits with intelligent inspection;
the artificial intelligence processing platform automatically processes large amounts of voice call data in place of inefficient manual sampling inspection, so that only representative voice call data cases require manual quality inspection, minimizing labor cost while making the quality inspection data more accurate.
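The patent leaves the autonomous learning algorithm unspecified; one plausible minimal reading of the "discrete distribution curve of manual processing habits" is a frequency model over the inspectors' past picks, normalized into per-band sampling weights. The sketch below assumes exactly that; every name in it is illustrative.

```python
# Hypothetical sketch: count how often inspectors re-inspected each grade
# band and normalize the counts into a discrete sampling distribution.
from collections import Counter

def sampling_weights(manual_picks):
    """Turn a history of manually re-inspected grade bands into per-band
    sampling weights (a discrete distribution over the bands)."""
    counts = Counter(manual_picks)
    total = sum(counts.values())
    return {band: n / total for band, n in counts.items()}

weights = sampling_weights(
    ["unqualified", "unqualified", "qualified", "excellent", "unqualified"]
)
print(weights["unqualified"])  # bands inspectors favor get more future samples
```

A real system would presumably smooth these frequencies and update them online, but the normalized-count form suffices to illustrate how manual habits become a machine-usable sampling curve.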
The technical features of the above embodiments may be combined arbitrarily; for brevity, not every possible combination is described, but any combination of these technical features that contains no contradiction should be considered within the scope of this description.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (8)
1. An artificial-intelligence-based agent outbound quality inspection method, characterized by comprising the following steps:
S100: acquiring voice call data of outbound data given by a plurality of different users of different grades;
the voice call data comprise voice data, call duration, calling number, user score, and user emotion score;
S200: after the voice call data are analyzed by the artificial intelligence platform, constructing a label system from the user call data information, the label system comprising semantic labels and emotion labels, wherein a semantic label is the textual meaning, aim, and expectation expressed during the user's communication, and an emotion label is a mark of the emotional feedback during the user's communication;
in step S200, deriving the emotion label for the user comprises the following steps:
S210: classifying the user's emotion into five grade labels A, B, C, D, E according to the analysis of the voice call data;
S220: assigning A, B, C, D, E the emotion labels "good", "fairly good", "general", "bad", "extremely bad" respectively;
S230: assigning the five emotion labels different comprehensive score calculation proportions of 100%, 80%, 60%, 40%, and 0% respectively;
S300: assigning intelligent comprehensive scores to the extracted voice call data by combining the label system with the user scores, the comprehensive scores being divided into a plurality of layers;
S400: extracting part of the voice call data from the multiple layers of comprehensive scores for manual quality inspection;
the comprehensive score calculation proportions of the emotion labels are 100%, 80%, 60%, 40%, and 0% respectively;
in the comprehensive score calculation process, the comprehensive score of the emotion label is calculated according to its proportion;
if the user's emotion label is "good", comprehensive score = emotion label × 100% + the remaining comprehensive score judgment element reference items;
if the user's emotion label is "fairly good", comprehensive score = emotion label × 80% + the remaining comprehensive score judgment element reference items;
if the user's emotion label is "general", comprehensive score = emotion label × 60% + the remaining comprehensive score judgment element reference items;
if the user's emotion label is "bad", comprehensive score = emotion label × 40% + the remaining comprehensive score judgment element reference items;
if the user's emotion label is "extremely bad", comprehensive score = emotion label × 0% + the remaining comprehensive score judgment element reference items;
the remaining comprehensive score judgment element reference items include the user's age and the time period of the outbound dialing.
2. The artificial-intelligence-based agent outbound quality inspection method according to claim 1, wherein in step S300 the comprehensive score demarcation comprises the following steps:
S310: acquiring the comprehensive score distribution intervals of a plurality of pieces of voice call data and demarcating them;
S320: defining manual quality inspection sampling intervals according to the demarcation ranges of the comprehensive score distribution intervals;
S330: assigning classification information to the manual quality inspection sampling intervals, including "excellent", "good", "qualified", and "unqualified".
3. The artificial-intelligence-based agent outbound quality inspection method according to claim 1, wherein in S200 the analyzed voice call data are processed while the emotion labels are derived from them, the processing objects including scoring the user's emotion, recording the duration of emotion fluctuation segments, recording the time nodes of emotion fluctuation segments, using the change of the user's volume frequency band as the basis for judging the user's emotion fluctuation, and classifying the user's communication problems;
according to the processing objects, emotion label processing and deep processing items are given, the deep processing items being the user emotion score, the emotion fluctuation segment duration record, the emotion fluctuation segment time nodes, and the classification of the user's communication problems.
4. An artificial-intelligence-based agent outbound quality inspection system capable of implementing the agent outbound quality inspection method according to any one of claims 1-3, characterized by comprising an outbound database, an artificial intelligence processing platform, a manual quality inspection module, and a multi-level scoring database;
the outbound database is a data set of outbound voice call data and the user feedback score for each piece of voice call data;
the artificial intelligence processing platform further processes the voice call data, forming a label system from the processed data while also deep-processing the voice call data;
the multi-level scoring database comprehensively scores the various data acquired by the artificial intelligence processing platform, and the comprehensive scores are arranged into an ordered data set according to a certain logic;
the manual quality inspection module selectively extracts and re-inspects the ordered data of the multi-level scoring database.
5. The artificial-intelligence-based agent outbound quality inspection system according to claim 4, wherein the artificial intelligence processing platform specifically comprises a voice data analysis module, a voice data recognition module, a voice data transcription module, and a voice data screening module;
the voice data analysis module records the dialing duration, scores the user's emotion, records the emotion fluctuation segment duration and the emotion fluctuation segment time nodes, acquires the volume frequency band of the user's call, and classifies the user's communication problems;
the voice data recognition module can further process the source of the voice call data and the rating information fed back by the user;
the voice data transcription module converts the voice call information of the user in the outbound call into text information;
the voice data screening module screens the label data further processed by the voice data analysis module and the voice data recognition module according to the scoring logic, to serve as samples of different scoring segments; the number of samples may be one or several.
6. The system according to claim 5, wherein the artificial intelligence processing platform further comprises an autonomous learning module capable of learning autonomously from the manual sampling habits of the manual quality inspection module and incorporating the manual quality inspection habits into the reference elements for processing voice call data;
the autonomous learning module uses an AI algorithm and can autonomously optimize the system's processing flow or offer optimization suggestions for it.
7. The artificial-intelligence-based agent outbound quality inspection system according to claim 6, wherein the scoring logic comprises label-optimal logic, label score priority logic, user feedback score priority logic, emotion fluctuation duration priority logic, and user emotion score priority logic;
the label score priority logic comprises label score positive-order priority logic and label score negative-order priority logic.
8. The artificial-intelligence-based agent outbound quality inspection system according to claim 6, wherein, in the process of processing the user's call data, the artificial intelligence processing platform assigns different comprehensive score calculation proportions to the reference elements of the comprehensive score according to their respective reference bases;
the upper limit of the comprehensive score proportion is 100% and the lower limit is 0%; a comprehensive score proportion of 80%-100% is "excellent", 70%-80% is "good", 60%-70% is "qualified", and below 60% is "unqualified".
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111213528.3A CN114023355B (en) | 2021-10-19 | 2021-10-19 | Agent outbound quality inspection method and system based on artificial intelligence |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN114023355A (en) | 2022-02-08 |
| CN114023355B (en) | 2023-07-25 |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |