CN107818798B - Customer service quality evaluation method, device, equipment and storage medium - Google Patents


Info

Publication number
CN107818798B
CN107818798B (application CN201710984748.3A)
Authority
CN
China
Prior art keywords
time period
fundamental frequency
user
specified time
emotion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710984748.3A
Other languages
Chinese (zh)
Other versions
CN107818798A (en)
Inventor
于静磊
张仕梁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201710984748.3A priority Critical patent/CN107818798B/en
Publication of CN107818798A publication Critical patent/CN107818798A/en
Application granted granted Critical
Publication of CN107818798B publication Critical patent/CN107818798B/en

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 - Operations research, analysis or management
    • G06Q10/0639 - Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06398 - Performance of employee with respect to a job function


Abstract

An embodiment of the invention discloses a method, device, equipment and storage medium for evaluating customer service quality. The method comprises: during a call between a user and a customer service agent, acquiring fundamental frequency information and speech rate information of the user's speech in each specified time period in real time; determining the user's emotional state in each specified time period according to that period's fundamental frequency and speech rate information; and generating a service quality evaluation result according to the user's emotional states in all specified time periods of the call and the accuracy of the agent's answers. Because the user's emotional state in each time period is determined in real time from each segment of the user's speech, the resulting emotion assessment is more reliable and objective. Since the evaluation is based on speech features, it can be completed as soon as the call ends, yielding a comprehensive and objective service quality evaluation that the agent receives in time and that provides an effective reference for adjusting customer service work.

Description

Customer service quality evaluation method, device, equipment and storage medium
Technical Field
Embodiments of the invention relate to customer service evaluation technology, and in particular to a method, device, equipment and storage medium for evaluating customer service quality.
Background
Customer service is essential to any industry or company: it bears responsibilities such as recommending new products, maintaining relationships with existing customers, and answering questions. Whether customer service personnel meet users' needs, and whether users are satisfied during their interactions with them, therefore bears directly on a company's performance.
At present, most industries, such as banking and telecommunications, ask the user to evaluate the service quality of customer service staff by voice or short message after a telephone call ends. In the voice mode, a synchronized voice prompt is played when the conversation ends but before the call is hung up, and the user may express an evaluation by following the prompt. In the short message mode, the system sends a short message to the user after the call ends, and the user replies to the message to feed back an evaluation.
Fig. 1 is a schematic diagram of the structure and flow of customer service evaluation in the prior art. As shown in Fig. 1, a user places a call, a customer service agent answers it, and the two communicate. After the conversation ends, the system initiates an evaluation flow (for example, playing a synchronized voice prompt or sending a short message). If the system receives an evaluation from the user, evaluation statistics are compiled; otherwise the flow simply ends.
The above evaluation method has the following disadvantages:
(1) Delayed evaluation: the evaluation flow is initiated only after the conversation between the agent and the user ends, so the agent cannot obtain the user's psychological feedback at the earliest moment and cannot adjust the communication style in time;
(2) Subjective evaluation: when feeding back evaluation information, users often give an arbitrary "good" or "bad", which provides no effective reference for adjusting customer service work;
(3) Incomplete evaluation: users may choose whether to evaluate at all, so user feedback cannot be received truly and comprehensively.
In addition, other evaluation methods exist that judge emotion from the call recording after the call ends; but the agent still cannot obtain the user's psychological feedback at the earliest moment or adjust the communication style in time, and the resulting evaluation is not accurate enough.
Disclosure of Invention
Embodiments of the invention provide a method, device, equipment and storage medium for evaluating customer service quality that derive a service quality evaluation from speech features objectively and in a timely manner, yielding a more accurate and comprehensive evaluation and an effective reference for adjusting customer service work.
In a first aspect, an embodiment of the present invention provides a method for evaluating quality of customer service, including:
during a call between a user and a customer service agent, acquiring fundamental frequency information and speech rate information of the user's speech in each specified time period in real time;
determining the user's emotional state in each specified time period according to that period's fundamental frequency information and speech rate information; and
generating a service quality evaluation result according to the user's emotional states in all specified time periods of the call and the accuracy of the agent's answers.
In a second aspect, an embodiment of the present invention further provides a device for evaluating quality of customer service, including:
an information acquisition module, configured to acquire fundamental frequency information and speech rate information of the user's speech in each specified time period in real time during a call between a user and a customer service agent;
an emotion determining module, configured to determine the user's emotional state in each specified time period according to that period's fundamental frequency information and speech rate information; and
an evaluation generation module, configured to generate a service quality evaluation result according to the user's emotional states in all specified time periods of the call and the accuracy of the agent's answers.
In a third aspect, an embodiment of the present invention further provides an apparatus, including:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the customer service quality evaluation method described in any embodiment of the invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a method for evaluating quality of customer service according to any embodiment of the present invention.
According to the technical solution of the embodiments, during the call between the user and the customer service agent, the user's emotional state in each time period is determined in real time from the fundamental frequency and speech rate information of each segment of the user's speech, making the emotion result more reliable and objective. A service quality evaluation result is then generated from the user's emotional states in all specified time periods of the call and the accuracy of the agent's answers; the evaluation, being based on speech features, is completed as soon as the call ends, yielding a comprehensive, objective and accurate service quality evaluation that the agent receives in time as an effective reference for adjusting customer service work.
Drawings
Fig. 1 is a schematic diagram of a prior art architecture and flow of customer service evaluation;
Fig. 2 is a flowchart of a method for evaluating customer service quality according to the first embodiment of the present invention;
Fig. 3 is a schematic diagram illustrating a structure and flow of evaluating customer service quality according to the first embodiment of the present invention;
Fig. 4 is a flowchart of a method for evaluating customer service quality provided by the second embodiment of the present invention;
Fig. 5 is a schematic flowchart of determining the user's emotional state from fundamental frequency information and speech rate information according to the second embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a customer service quality evaluation device according to the third embodiment of the present invention;
Fig. 7 is another schematic structural diagram of a customer service quality evaluation device according to the third embodiment of the present invention;
Fig. 8 is a schematic structural diagram of an apparatus provided in the fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Embodiment One
Fig. 2 is a flowchart of a method for evaluating customer service quality according to the first embodiment of the present invention. The embodiment is applicable to objective, real-time evaluation of customer service quality. The method may be executed by a customer service quality evaluation device, which may be implemented in software and/or hardware and is typically integrated in a server. As shown in Fig. 2, the method specifically includes:
s210, in the process of the user and the customer service communication, the fundamental frequency information and the speech speed information of the user speech in each appointed time period are acquired in real time.
During the call, the user's speech can be separated from the agent's speech, the user's speech analyzed, and the fundamental frequency and speech rate information obtained from the analysis result. The speech analysis may be performed inside the customer service quality evaluation device, or by a separate speech recognition device: for example, the agent terminal streams the call audio to the speech recognition device for real-time analysis, and the evaluation device retrieves the analysis result from it.
The length of a specified time period can be determined by the current call content category, which in turn can be determined from keywords in the speech analysis result; call content categories include new product introduction, question answering, and so on. In practice, each call content category and its corresponding time period length can be stored in advance; once the category is determined from the call audio, the speech is segmented by the corresponding length. For example, during a new product introduction the agent speaks more and the user speaks less, so fewer emotion changes are detectable and the period can be set longer, such as 3 to 5 minutes; during question answering the user speaks more, more emotion changes are detectable, and the period can be set shorter, such as 1 minute. A single call may involve several content categories, so the length of the specified time period may vary within one call.
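The category-dependent segmentation above can be sketched as follows. The category names, keyword lists, and period lengths are illustrative assumptions, not values specified in the patent.

```python
# Hypothetical sketch of keyword-based call-content classification and
# per-category segmentation. All names and numbers here are assumptions.

PERIOD_LENGTH_S = {
    "new_product_introduction": 240,  # 3-5 minutes; 4 min chosen here
    "question_answering": 60,         # 1 minute
}

CATEGORY_KEYWORDS = {
    "new_product_introduction": {"new product", "feature", "launch"},
    "question_answering": {"how do i", "why", "problem"},
}

def classify_call_content(transcript: str) -> str:
    """Pick the category whose keywords appear most often in the transcript."""
    scores = {
        cat: sum(transcript.lower().count(kw) for kw in kws)
        for cat, kws in CATEGORY_KEYWORDS.items()
    }
    return max(scores, key=scores.get)

def segment_boundaries(total_s: float, period_s: float) -> list[tuple[float, float]]:
    """Split a call of total_s seconds into consecutive periods of period_s."""
    bounds, t = [], 0.0
    while t < total_s:
        bounds.append((t, min(t + period_s, total_s)))
        t += period_s
    return bounds
```

In a real system the classification would be rerun as the call progresses, so the period length can change when the conversation shifts, for example, from product introduction to question answering.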
The fundamental frequency information for a specified time period includes the fundamental frequency mean, maximum, minimum, and so on. The speech rate information for the period includes the speech rate mean, starting speech rate, ending speech rate, and so on.
S220: determine the user's emotional state in each specified time period according to that period's fundamental frequency information and speech rate information.
The user's emotional states in this embodiment include pleasure, displeasure, and anger, though they may be subdivided further according to actual needs; the embodiments of the invention are not limited in this respect. Because emotional state is expressed through speech features, speech emotion rules can be preset and the user's emotional state determined from those rules together with the currently acquired fundamental frequency and speech rate information. Specifically, the PAD three-dimensional emotion model is adopted: Pleasure represents the positive or negative character of the emotional state (happy vs. unhappy), Arousal represents the neurophysiological activation level of the emotion (activated vs. deactivated), and Dominance represents the individual's control over the situation and other people (controlling vs. controlled); during communication between agent and user, mainly the Pleasure and Arousal indexes are used. Pleasure correlates positively with fundamental frequency (F0): the higher the F0 mean over a period, the more the user's emotion tends toward pleasure, and the lower the mean, the less so. Arousal also correlates positively with F0; the mean F0 of pleasure and anger is higher than that of other emotions, short-term speech rate is faster under pleasure, and F0 rises most sharply under anger. For each specified time period, the user's emotional state in that period is determined from the fundamental frequency and speech rate information of the user's speech in that period, making the emotion result more accurate and reliable.
S230: generate a service quality evaluation result according to the user's emotional states in all specified time periods of the call and the accuracy of the agent's answers.
The agent's speech analysis result can be compared with a standard answer to determine the accuracy of the agent's answers, yielding accuracy parameters for the specified time periods. When the call ends, the user's emotional states and the answer accuracies over all specified time periods can be combined into the service quality evaluation result.
According to the technical solution of this embodiment, during the call between the user and the customer service agent, the user's emotional state in each time period is determined in real time from the fundamental frequency and speech rate information of each segment of the user's speech, making the emotion result more reliable and objective. A service quality evaluation result is generated from the user's emotional states in all specified time periods of the call and the accuracy of the agent's answers; the evaluation, being based on speech features, is completed as soon as the call ends, yielding a comprehensive, objective and accurate service quality evaluation that the agent receives in time as an effective reference for adjusting customer service work.
After the user's emotional state in the current specified time period is obtained in S220, it may be fed back to the agent terminal in real time. The agent then sees the user's current emotional state promptly, can adjust the communication style in the remainder of the call, and thereby improves user satisfaction.
Fig. 3 is a schematic diagram of the framework and flow of customer service quality evaluation according to an embodiment of the invention. As shown in Fig. 3, a user places a call, a customer service agent answers it, and the two communicate; during the call, the user's emotional state and the agent's answer accuracy are obtained in real time for each specified time period based on speech features, and the emotional state is fed back to the agent terminal in real time for the agent's reference. When the call ends, the user's emotional states and the answer accuracies over all specified time periods are combined into the service quality evaluation result.
Embodiment Two
Fig. 4 is a flowchart of a method for evaluating customer service quality according to the second embodiment of the present invention. On the basis of the first embodiment, this embodiment provides a preferred implementation for determining the user's emotional state in a specified time period and a preferred implementation for generating the service quality evaluation result from the user's emotional state and the accuracy of the agent's answers. As shown in Fig. 4, the method includes:
and S410, acquiring fundamental frequency information and speech rate information of the user speech in each specified time period in real time in the process of the user and the customer service communication.
S420: for each specified time period, acquire the preset fundamental frequency threshold and preset speech rate threshold corresponding to that period's call content category.
The preset fundamental frequency threshold comprises a fundamental frequency change threshold (the first fundamental frequency threshold) and a fundamental frequency mean threshold (the second fundamental frequency threshold); the preset speech rate threshold is a speech rate change threshold. Both may be empirical values, and the thresholds may differ across industries and call content categories. Specifically, the fundamental frequency and speech rate thresholds for each call content category in each industry can be stored in advance, and during an actual call the applicable thresholds are retrieved according to the speech content.
S430: obtain a first emotion result from the period's fundamental frequency information and the preset fundamental frequency threshold, and a second emotion result from the period's speech rate information and the preset speech rate threshold.
According to the preset speech emotion rules, emotion results are obtained separately from the fundamental frequency information and from the speech rate information; the two results can then be checked against each other, improving the accuracy of speech-based emotion analysis.
S440: determine the user's emotional state in the specified time period from the first emotion result and the second emotion result.
If the first and second emotion results do not contradict each other, the result obtained from the fundamental frequency information and the result obtained from the speech rate information corroborate each other, and the resulting emotional state is taken as correct. If they are inconsistent, one of them can be selected as the user's final emotional state; preferably, weights are set for the fundamental frequency parameter and the speech rate parameter according to the actual situation, and the emotion result corresponding to the parameter with the larger weight is selected as the user's emotional state for the specified time period.
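The weight-based conflict resolution above admits a very small sketch. The default weights here are assumed for illustration; the patent says only that the weights are set according to the actual situation.

```python
# Illustrative sketch of resolving conflicting emotion results by parameter
# weight. The 0.6/0.4 defaults are assumed values, not taken from the patent.

def resolve_emotion(f0_result: str, rate_result: str,
                    f0_weight: float = 0.6, rate_weight: float = 0.4) -> str:
    """Return the period's emotional state from the two partial results."""
    if f0_result == rate_result:
        return f0_result  # the two results corroborate each other
    # On conflict, trust the result from the parameter with the larger weight.
    return f0_result if f0_weight >= rate_weight else rate_result
```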
S450: feed back the user's emotional state in the specified time period to the agent terminal in real time.
S460: generate a service quality evaluation result according to the user's emotional states in all specified time periods of the call and the accuracy of the agent's answers.
Further, in S430, obtaining a first emotion result according to the fundamental frequency information of the specified time period and the preset fundamental frequency threshold, including: calculating a fundamental frequency change value of the appointed time period according to the fundamental frequency information of the appointed time period; comparing the fundamental frequency variation value with a first fundamental frequency threshold value; determining that the user is in a first emotional state for the specified time period if the fundamental frequency change value is greater than the first fundamental frequency threshold.
The fundamental frequency change value is calculated as (F0_max - F0_mean) / T, where F0_max denotes the maximum fundamental frequency in the specified time period, F0_mean denotes the mean fundamental frequency in the period, and T denotes the period's duration. A change value greater than the first fundamental frequency threshold means the user's fundamental frequency varied substantially within the period, and according to the speech emotion rules the user's emotion in that period is determined to be anger; that is, the first emotional state is anger. If the change value is less than or equal to the first fundamental frequency threshold, the user's emotional state is left pending and is determined from the other parameters.
Further, in S430, obtaining a first emotion result according to the fundamental frequency information of the specified time period and the preset fundamental frequency threshold, including: comparing the fundamental frequency mean value in the fundamental frequency information of the specified time period with a second fundamental frequency threshold value; determining that the user is in a second emotional state for the specified time period if the fundamental frequency mean is less than the second fundamental frequency threshold; determining that the user is in a third emotional state for the specified time period if the fundamental frequency mean is greater than the second fundamental frequency threshold.
The fundamental frequency mean is higher in a pleasant state and lower in an unpleasant one; the second emotional state is displeasure and the third is pleasure. The mean avg(F0) can usually be obtained directly from the analysis of the user's speech; if the analysis result does not include it, it can be computed from the fundamental frequency information. If the mean equals the second fundamental frequency threshold, the user can be assigned either the second or the third emotional state according to the actual situation.
Further, in S430, obtaining a second emotion result according to the speech rate information of the specified time period and the preset speech rate threshold, including: calculating the speech rate change value of the specified time period according to the speech rate information of the specified time period; comparing the speech rate change value with the preset speech rate threshold value; and if the speech rate change value is larger than the preset speech rate threshold value, determining that the user is in a third emotional state in the specified time period.
The speech rate change value is calculated as (S_end - S_start) / T, where S_end denotes the ending speech rate of the specified time period, S_start denotes its starting speech rate, and T denotes its duration. A change value greater than the preset speech rate threshold indicates the user's speech rate rose quickly within the period, and according to the speech emotion rules the user's emotion in that period is determined to be pleasure. If the change value is less than or equal to the preset speech rate threshold, the emotional state is left pending and is determined from the other parameters.
Note that the three emotion determination manners in S430 need not be executed in any particular order and may run simultaneously, with the user's final emotional state obtained by combining the three results. For example, Fig. 5 shows one execution order of S430: determine an emotional state from the speech rate change (S431), from the fundamental frequency change (S432), and from the fundamental frequency mean (S433), then combine the three results into the user's final emotional state.
Further, S460 includes: obtaining the emotion score corresponding to the emotional state of each specified time period of the call, and computing the weighted sum of the emotion scores of all specified time periods, with weights given by each period's call content category, as a first evaluation score; obtaining the accuracy score corresponding to each period's answer accuracy, and computing the weighted sum of the accuracy scores of all periods with the same category weights as a second evaluation score; and computing the service quality evaluation result from a preset emotion parameter weight, a preset accuracy parameter weight, the first evaluation score, and the second evaluation score.
Each emotional state has a corresponding score, e.g., a score of 90 for the joyful state, 60 for the disinterested state, and 20 for the angry state. Different accuracies also have corresponding scores, e.g., an accuracy of 90% corresponds to a score of 90, and an accuracy of 70% corresponds to a score of 70. In practical applications, the emotion scores and the accuracy scores can each be normalized before the evaluation scores are calculated. The weighted sum of the user's emotion scores over all specified time periods in the whole call is calculated according to the weight corresponding to the call content category of each specified time period and taken as the first evaluation score; the weighted sum of the customer service answer accuracy scores over all specified time periods in the whole call is calculated according to the weight corresponding to the call content category of each specified time period and taken as the second evaluation score; and then a total evaluation score is calculated according to the preset emotion parameter weight and the preset accuracy parameter weight to obtain the final service quality evaluation result of the call.
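The two-level weighting described above can be sketched as follows. This is an illustration with hypothetical field names, assuming the emotion and accuracy scores are already normalized to a common 0-100 scale:

```python
def quality_evaluation(periods, emotion_weight, accuracy_weight):
    """periods: one dict per specified time period, holding that period's
    emotion score, accuracy score, and the weight of its call content
    category. Returns (first_score, second_score, total_score)."""
    first = sum(p["category_weight"] * p["emotion_score"] for p in periods)
    second = sum(p["category_weight"] * p["accuracy_score"] for p in periods)
    total = emotion_weight * first + accuracy_weight * second
    return first, second, total

# Two specified time periods: a heavily weighted category with a joyful
# user (score 90) and a lighter one with a disinterested user (score 60).
periods = [
    {"category_weight": 0.6, "emotion_score": 90, "accuracy_score": 90},
    {"category_weight": 0.4, "emotion_score": 60, "accuracy_score": 70},
]
first, second, total = quality_evaluation(periods, 0.5, 0.5)
# first = 78, second = 82, total = 80
```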
Obtaining the accuracy of the customer service answer for each specified time period includes: taking each specified time period as the current specified time period, extracting a question attribute from the user voice analysis result of the current specified time period, and acquiring the standard answer corresponding to the question attribute; acquiring the customer service voice analysis result corresponding to the user voice analysis result of the current specified time period; and calculating the matching degree between the answer in the customer service voice analysis result and the standard answer to obtain the accuracy of the customer service answer in the current specified time period.
The standard answer corresponding to the user's question attribute can be obtained with a machine learning model: the extracted question attribute is input into the model, and the standard answer corresponding to that question attribute is output. When the machine learning model is built, it is trained with question attribute samples and standard answer samples, and the parameters of the classifier are adjusted so that the classifier can accurately output the corresponding standard answer for a given question attribute. Specifically, the answer matching degree can be calculated with a text similarity algorithm, such as the Jaccard similarity coefficient or the Euclidean distance, and the matching degree is taken as the accuracy of the customer service answer.
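As one illustration of such a text similarity calculation, the Jaccard similarity coefficient between the customer service answer and the standard answer can be computed over word sets as below. This is a sketch, not the patent's implementation; whitespace tokenization is an assumption (real Chinese text would need word segmentation first):

```python
def jaccard_similarity(answer, standard_answer):
    """Jaccard coefficient |A intersect B| / |A union B| over word sets."""
    a, b = set(answer.split()), set(standard_answer.split())
    union = a | b
    return len(a & b) / len(union) if union else 1.0

# Matching degree taken as the customer service answer accuracy:
# 3 shared words out of 8 distinct words -> 0.375.
accuracy = jaccard_similarity("please reset your password online",
                              "you can reset the password online")
```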
According to the technical scheme of this embodiment, the emotional state of the user is determined relatively objectively from the preset voice emotion rules, the preset thresholds, and the currently acquired fundamental frequency and speech rate information, and the customer service agent can see the user's emotion in time, adjust the communication manner accordingly, and thereby satisfy the customer. By comparing the customer service voice with the standard answers, the accuracy of the customer service answers can be obtained objectively; combining the user's emotional state with the answer accuracy yields a comprehensive and objective service quality evaluation result and provides an effective reference for managing customer service work.
Embodiment Three
Fig. 6 is a schematic structural diagram of a customer service quality evaluation device according to a third embodiment of the present invention, and as shown in fig. 6, the device includes: an information acquisition module 610, an emotion determination module 620, and an evaluation generation module 630.
The information acquisition module 610 is used for acquiring fundamental frequency information and speech rate information of user speech in each specified time period in real time in the process of communication between a user and a customer service;
an emotion determining module 620, configured to determine, according to the fundamental frequency information and the speech rate information of each specified time period, an emotion state of the user in each specified time period;
and the evaluation generation module 630 is configured to generate a service quality evaluation result according to the emotional states of the user in all the specified time periods during the call and the accuracy of the customer service answer.
Optionally, the emotion determining module 620 includes:
the threshold acquisition unit is used for acquiring a preset fundamental frequency threshold and a preset speech rate threshold corresponding to the conversation content category of each specified time period;
the result generating unit is used for obtaining a first emotion result according to the fundamental frequency information of the specified time period and the preset fundamental frequency threshold value and obtaining a second emotion result according to the speech rate information of the specified time period and the preset speech rate threshold value;
and the emotion determining unit is used for determining the emotional state of the user in the specified time period according to the first emotion result and the second emotion result.
Further, the result generating unit is specifically configured to:
calculating a fundamental frequency change value of the specified time period according to the fundamental frequency information of the specified time period;
comparing the fundamental frequency change value with a first fundamental frequency threshold;
determining that the user is in a first emotional state in the specified time period if the fundamental frequency change value is greater than the first fundamental frequency threshold.
Further, the result generation unit is further configured to:
comparing the fundamental frequency mean value in the fundamental frequency information of the specified time period with a second fundamental frequency threshold value;
determining that the user is in a second emotional state for the specified time period if the fundamental frequency mean is less than the second fundamental frequency threshold;
determining that the user is in a third emotional state for the specified time period if the fundamental frequency mean is greater than the second fundamental frequency threshold.
Further, the result generation unit is further configured to:
calculating the speech rate change value of the specified time period according to the speech rate information of the specified time period;
comparing the speech rate change value with the preset speech rate threshold value;
and if the speech rate change value is larger than the preset speech rate threshold value, determining that the user is in a third emotional state in the specified time period.
Further, the emotion determining unit is specifically configured to: when the first emotion result contradicts the second emotion result, select, according to the preset fundamental frequency parameter weight and the preset speech rate parameter weight, the emotion result corresponding to the parameter with the larger weight as the emotional state of the user in the specified time period.
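This tie-breaking by parameter weight can be sketched minimally as below; the function and label names are illustrative assumptions, not part of the patent:

```python
def resolve_emotion(f0_result, rate_result, f0_weight, rate_weight):
    """When the fundamental frequency result and the speech rate result
    disagree, keep the result whose parameter carries the larger preset
    weight; when they agree, either one is the emotional state."""
    if f0_result == rate_result:
        return f0_result
    return f0_result if f0_weight >= rate_weight else rate_result
```

For example, with a fundamental frequency parameter weight of 0.7 against a speech rate parameter weight of 0.3, a contradictory pair of results resolves to the fundamental frequency result.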
Optionally, the evaluation generation module 630 includes:
the first acquisition unit is used for acquiring the emotion score corresponding to the emotional state of each specified time period in the call process, and calculating the weighted sum of the emotion scores of all specified time periods according to the weight corresponding to the call content category of each specified time period to obtain a first evaluation score;
the second acquisition unit is used for acquiring the accuracy score corresponding to the accuracy of each specified time period in the call process, and calculating the weighted sum of the accuracy scores of all specified time periods according to the weight corresponding to the call content category of each specified time period to obtain a second evaluation score;
and the evaluation calculation unit is used for calculating the service quality evaluation result according to a preset emotion parameter weight, a preset accuracy parameter weight, the first evaluation score and the second evaluation score.
As shown in fig. 7, the apparatus may further include:
the emotion feedback module 640 is used for feeding back the emotion state of the user in the current specified time period to the customer service terminal in real time;
the standard answer obtaining module 650 is configured to take each specified time period as the current specified time period, extract a question attribute from the user voice analysis result of the current specified time period, and obtain the standard answer corresponding to the question attribute;
the analysis result acquisition module 660 is used for acquiring a customer service voice analysis result corresponding to the user voice analysis result of the current specified time period;
and the accuracy calculation module 670 is configured to calculate a matching degree between the answer in the customer service voice analysis result and the standard answer, so as to obtain the accuracy of the customer service answer in the current specified time period.
The customer service quality evaluation device provided by this embodiment of the present invention can execute the customer service quality evaluation method provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects of the executed method. For technical details not described in this embodiment, reference may be made to the customer service quality evaluation method provided in any embodiment of the present invention.
Embodiment Four
Fig. 8 is a schematic structural diagram of an apparatus provided in the fourth embodiment of the present invention. FIG. 8 illustrates a block diagram of an exemplary device 12 suitable for use in implementing embodiments of the present invention. The device 12 shown in fig. 8 is only an example and should not bring any limitation to the function and scope of use of the embodiments of the present invention.
As shown in FIG. 8, device 12 is in the form of a general purpose computing device. The components of device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. Device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 8, and commonly referred to as a "hard drive"). Although not shown in FIG. 8, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
Device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with device 12, and/or with any devices (e.g., network card, modem, etc.) that enable device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 20. As shown in FIG. 8, the network adapter 20 communicates with the other modules of the device 12 via the bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing by running a program stored in the system memory 28, for example, to implement the customer service quality evaluation method provided by the embodiment of the present invention.
Embodiment Five
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the customer service quality evaluation method provided by any embodiment of the present invention.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (15)

1. A method for evaluating customer service quality is characterized by comprising the following steps:
in the process of communication between a user and a customer service, acquiring fundamental frequency information and speech rate information of the user speech in each specified time period in real time, wherein the length of the specified time period is determined according to the type of the current communication content;
determining the emotional state of the user in each specified time period according to the fundamental frequency information and the speech rate information of each specified time period;
and generating a service quality evaluation result according to the emotional states of the user in all the specified time periods in the call process and the accuracy of the customer service answer.
2. The method according to claim 1, wherein determining the emotional state of the user in each of the designated time periods according to the fundamental frequency information and the speech rate information of each of the designated time periods comprises:
acquiring a preset fundamental frequency threshold and a preset speech rate threshold corresponding to the conversation content category of each specified time period;
obtaining a first emotion result according to the fundamental frequency information of the specified time period and the preset fundamental frequency threshold, and obtaining a second emotion result according to the speech rate information of the specified time period and the preset speech rate threshold;
and determining the emotional state of the user in the specified time period according to the first emotional result and the second emotional result.
3. The method of claim 2, wherein obtaining a first emotion result according to the fundamental frequency information of the specified time period and the preset fundamental frequency threshold comprises:
calculating a fundamental frequency change value of the specified time period according to the fundamental frequency information of the specified time period;
comparing the fundamental frequency change value with a first fundamental frequency threshold;
determining that the user is in a first emotional state in the specified time period if the fundamental frequency change value is greater than the first fundamental frequency threshold.
4. The method of claim 2, wherein obtaining a first emotion result according to the fundamental frequency information of the specified time period and the preset fundamental frequency threshold comprises:
comparing the fundamental frequency mean value in the fundamental frequency information of the specified time period with a second fundamental frequency threshold value;
determining that the user is in a second emotional state for the specified time period if the fundamental frequency mean is less than the second fundamental frequency threshold;
determining that the user is in a third emotional state for the specified time period if the fundamental frequency mean is greater than the second fundamental frequency threshold.
5. The method according to claim 2, wherein obtaining a second emotion result according to the speech rate information of the specified time period and the preset speech rate threshold comprises:
calculating the speech rate change value of the specified time period according to the speech rate information of the specified time period;
comparing the speech rate change value with the preset speech rate threshold value;
and if the speech rate change value is larger than the preset speech rate threshold value, determining that the user is in a third emotional state in the specified time period.
6. The method of claim 2, wherein determining the emotional state of the user for the specified time period based on the first emotional result and the second emotional result comprises:
and if the first emotion result is inconsistent with the second emotion result, selecting, according to the preset fundamental frequency parameter weight and the preset speech rate parameter weight, the emotion result corresponding to the parameter with the larger weight as the emotional state of the user in the specified time period.
7. The method of claim 1, wherein generating a quality of service assessment result based on the emotional state of the user and the accuracy of the customer service response for all specified time periods during the call comprises:
acquiring the emotion score corresponding to the emotional state of each specified time period in the call process, and calculating the weighted sum of the emotion scores of all specified time periods according to the weight corresponding to the call content category of each specified time period to obtain a first evaluation score;
acquiring the accuracy score corresponding to the accuracy of each specified time period in the call process, and calculating the weighted sum of the accuracy scores of all specified time periods according to the weight corresponding to the call content category of each specified time period to obtain a second evaluation score;
and calculating to obtain the service quality evaluation result according to a preset emotion parameter weight, a preset accuracy parameter weight, the first evaluation score and the second evaluation score.
8. The method of claim 1, wherein prior to generating a quality of service assessment result based on the emotional state of the user for all specified time periods during the call and the accuracy of the customer service response, the method further comprises:
taking each specified time period as the current specified time period, extracting a question attribute from the user voice analysis result of the current specified time period, and acquiring a standard answer corresponding to the question attribute;
acquiring a customer service voice analysis result corresponding to the user voice analysis result of the current specified time period;
and calculating the matching degree of the answer in the customer service voice analysis result and the standard answer to obtain the accuracy of the customer service answer in the current specified time period.
9. The method according to any one of claims 1 to 8, wherein after determining the emotional state of the user in each specified time period according to the fundamental frequency information and the speech rate information in each specified time period, the method further comprises:
and feeding back the emotional state of the user in the current specified time period to the customer service terminal in real time.
10. A customer service quality evaluation device is characterized by comprising:
the information acquisition module is used for acquiring fundamental frequency information and speech rate information of user speech in each specified time period in real time in the process of communication between a user and a customer service, wherein the length of the specified time period is determined according to the type of communication content;
the emotion determining module is used for determining the emotion state of the user in each specified time period according to the fundamental frequency information and the speech rate information of each specified time period;
and the evaluation generation module is used for generating a service quality evaluation result according to the emotional states of the user in all the specified time periods in the call process and the accuracy of the customer service answers.
11. The apparatus of claim 10, wherein the emotion determination module comprises:
the threshold acquisition unit is used for acquiring a preset fundamental frequency threshold and a preset speech rate threshold corresponding to the conversation content category of each specified time period;
the result generating unit is used for obtaining a first emotion result according to the fundamental frequency information of the specified time period and the preset fundamental frequency threshold value and obtaining a second emotion result according to the speech rate information of the specified time period and the preset speech rate threshold value;
and the emotion determining unit is used for determining the emotional state of the user in the specified time period according to the first emotion result and the second emotion result.
12. The apparatus according to claim 11, wherein the result generation unit is specifically configured to:
calculating a fundamental frequency change value of the specified time period according to the fundamental frequency information of the specified time period; comparing the fundamental frequency change value with a first fundamental frequency threshold; determining that the user is in a first emotional state in the specified time period if the fundamental frequency change value is greater than the first fundamental frequency threshold;
comparing the fundamental frequency mean value in the fundamental frequency information of the specified time period with a second fundamental frequency threshold value; determining that the user is in a second emotional state for the specified time period if the fundamental frequency mean is less than the second fundamental frequency threshold; determining that the user is in a third emotional state for the specified time period if the fundamental frequency mean is greater than the second fundamental frequency threshold;
calculating the speech rate change value of the specified time period according to the speech rate information of the specified time period; comparing the speech rate change value with the preset speech rate threshold value; and if the speech rate change value is larger than the preset speech rate threshold value, determining that the user is in a third emotional state in the specified time period.
13. The apparatus of claim 10, wherein the evaluation generation module comprises:
the first acquisition unit is used for acquiring the emotion score corresponding to the emotional state of each specified time period in the call process, and calculating the weighted sum of the emotion scores of all specified time periods according to the weight corresponding to the call content category of each specified time period to obtain a first evaluation score;
the second acquisition unit is used for acquiring the accuracy score corresponding to the accuracy of each specified time period in the call process, and calculating the weighted sum of the accuracy scores of all specified time periods according to the weight corresponding to the call content category of each specified time period to obtain a second evaluation score;
and the evaluation calculation unit is used for calculating the service quality evaluation result according to a preset emotion parameter weight, a preset accuracy parameter weight, the first evaluation score and the second evaluation score.
14. An apparatus, characterized in that the apparatus comprises:
one or more processors;
storage means for storing one or more programs;
when executed by the one or more processors, cause the one or more processors to implement a method for quality of customer service evaluation as recited in any of claims 1-9.
15. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out a method for quality of service evaluation as set forth in any one of claims 1 to 9.
CN201710984748.3A 2017-10-20 2017-10-20 Customer service quality evaluation method, device, equipment and storage medium Active CN107818798B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710984748.3A CN107818798B (en) 2017-10-20 2017-10-20 Customer service quality evaluation method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710984748.3A CN107818798B (en) 2017-10-20 2017-10-20 Customer service quality evaluation method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN107818798A CN107818798A (en) 2018-03-20
CN107818798B true CN107818798B (en) 2020-08-18

Family

ID=61608520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710984748.3A Active CN107818798B (en) 2017-10-20 2017-10-20 Customer service quality evaluation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN107818798B (en)

Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108197963A (en) * 2018-03-28 2018-06-22 广州市菲玛尔咨询服务有限公司 A kind of intelligent customer service manages system
CN108962282B (en) * 2018-06-19 2021-07-13 京北方信息技术股份有限公司 Voice detection analysis method and device, computer equipment and storage medium
CN109033257A (en) * 2018-07-06 2018-12-18 中国平安人寿保险股份有限公司 Talk about art recommended method, device, computer equipment and storage medium
CN109242529A (en) * 2018-07-31 2019-01-18 上海博泰悦臻电子设备制造有限公司 Vehicle, vehicle device equipment and the online method of investigation and study of user experience based on scene analysis
CN108962281B (en) * 2018-08-15 2021-05-07 三星电子(中国)研发中心 Language expression evaluation and auxiliary method and device
CN109408756A (en) * 2018-09-21 2019-03-01 广州神马移动信息科技有限公司 The monitoring method and its device of user behavior in Ask-Answer Community
CN109087669B (en) * 2018-10-23 2021-03-02 腾讯科技(深圳)有限公司 Audio similarity detection method and device, storage medium and computer equipment
CN109327632A (en) * 2018-11-23 2019-02-12 深圳前海微众银行股份有限公司 Intelligent quality inspection system, method and the computer readable storage medium of customer service recording
CN109726655A (en) * 2018-12-19 2019-05-07 平安普惠企业管理有限公司 Customer service evaluation method, device, medium and equipment based on Emotion identification
CN111353804A (en) * 2018-12-24 2020-06-30 中移(杭州)信息技术有限公司 Service evaluation method, device, terminal equipment and medium
CN109816220A (en) * 2019-01-07 2019-05-28 平安科技(深圳)有限公司 Quality of service monitoring and treating method and apparatus based on intelligent decision
CN109785862A (en) * 2019-01-21 2019-05-21 深圳壹账通智能科技有限公司 Customer service quality evaluating method, device, electronic equipment and storage medium
CN109902938A (en) * 2019-01-31 2019-06-18 平安科技(深圳)有限公司 Obtain method, apparatus, computer equipment and the storage medium of learning materials
CN111832851B (en) * 2019-04-15 2024-03-29 北京嘀嘀无限科技发展有限公司 Detection method and device
CN110147936A (en) * 2019-04-19 2019-08-20 深圳壹账通智能科技有限公司 Service evaluation method, apparatus based on Emotion identification, storage medium
CN110288214A (en) * 2019-06-14 2019-09-27 秒针信息技术有限公司 The method and device of partition of the level
CN112232101A (en) * 2019-07-15 2021-01-15 北京正和思齐数据科技有限公司 User communication state evaluation method, device and system
CN110363154A (en) * 2019-07-17 2019-10-22 安徽航天信息有限公司 A kind of service quality examining method and system based on Emotion identification
CN110827796B (en) * 2019-09-23 2024-05-24 平安科技(深圳)有限公司 Interviewer judging method and device based on voice, terminal and storage medium
CN110718228B (en) * 2019-10-22 2022-04-12 中信银行股份有限公司 Voice separation method and device, electronic equipment and computer readable storage medium
CN111026793A (en) * 2019-11-25 2020-04-17 珠海格力电器股份有限公司 Data processing method, device, medium and equipment
CN111080109B (en) * 2019-12-06 2023-05-05 中信银行股份有限公司 Customer service quality evaluation method and device and electronic equipment
CN111199158A (en) * 2019-12-30 2020-05-26 沈阳民航东北凯亚有限公司 Method and device for scoring civil aviation customer service
CN111242508A (en) * 2020-02-14 2020-06-05 厦门快商通科技股份有限公司 Method, device and equipment for evaluating customer service quality based on natural language processing
CN111311327A (en) * 2020-02-19 2020-06-19 平安科技(深圳)有限公司 Service evaluation method, device, equipment and storage medium based on artificial intelligence
CN112040074B (en) * 2020-08-24 2022-07-26 华院计算技术(上海)股份有限公司 Professional burnout detection method for telephone customer service personnel based on voice acoustic information
CN112132477A (en) * 2020-09-28 2020-12-25 中国银行股份有限公司 Service performance determination method and device
CN112131369B (en) * 2020-09-29 2024-02-02 中国银行股份有限公司 Service class determining method and device
CN112364661B (en) * 2020-11-11 2024-03-19 北京大米科技有限公司 Data detection method and device, readable storage medium and electronic equipment
CN112671984B (en) * 2020-12-01 2022-09-23 长沙市到家悠享网络科技有限公司 Service mode switching method and device, robot customer service and storage medium
CN112885376A (en) * 2021-01-23 2021-06-01 深圳通联金融网络科技服务有限公司 Method and device for improving voice call quality inspection effect
CN113011158A (en) * 2021-03-23 2021-06-22 北京百度网讯科技有限公司 Information anomaly detection method and device, electronic equipment and storage medium
CN113472958A (en) * 2021-07-13 2021-10-01 上海华客信息科技有限公司 Method, system, electronic device and storage medium for receiving branch telephone in centralized mode
CN113674765A (en) * 2021-08-18 2021-11-19 中国联合网络通信集团有限公司 Voice customer service quality inspection method, device, equipment and storage medium
TWI815400B (en) * 2022-04-14 2023-09-11 合作金庫商業銀行股份有限公司 Emotion analysis system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015174628A1 (en) * 2014-05-12 2015-11-19 주식회사 네이블커뮤니케이션즈 Volte quality measuring system and volte quality measuring method utilizing terminal agent

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103811009A (en) * 2014-03-13 2014-05-21 华东理工大学 Smart phone customer service system based on speech analysis
CN107093431B (en) * 2016-02-18 2020-07-07 中国移动通信集团辽宁有限公司 Method and device for quality inspection of service quality
CN107154257B (en) * 2017-04-18 2021-04-06 苏州工业职业技术学院 Customer service quality evaluation method and system based on customer voice emotion

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015174628A1 (en) * 2014-05-12 2015-11-19 주식회사 네이블커뮤니케이션즈 Volte quality measuring system and volte quality measuring method utilizing terminal agent

Also Published As

Publication number Publication date
CN107818798A (en) 2018-03-20

Similar Documents

Publication Publication Date Title
CN107818798B (en) Customer service quality evaluation method, device, equipment and storage medium
US11455985B2 (en) Information processing apparatus
US10706873B2 (en) Real-time speaker state analytics platform
US11417343B2 (en) Automatic speaker identification in calls using multiple speaker-identification parameters
US10623573B2 (en) Personalized support routing based on paralinguistic information
CN107481720B (en) Explicit voiceprint recognition method and device
CN104598644B (en) Favorite label mining method and device
US11355099B2 (en) Word extraction device, related conference extraction system, and word extraction method
US11354754B2 (en) Generating self-support metrics based on paralinguistic information
JP2017016566A (en) Information processing device, information processing method and program
US10692516B2 (en) Dialogue analysis
CN115083434B (en) Emotion recognition method and device, computer equipment and storage medium
JP6915637B2 (en) Information processing equipment, information processing methods, and programs
CN113837594A (en) Quality evaluation method, system, device and medium for customer service in multiple scenarios
KR20210123545A (en) Method and apparatus for conversation service based on user feedback
CN112837688B (en) Voice transcription method, device, related system and equipment
JP7044156B2 (en) Generation program, generation method and information processing device
US11798015B1 (en) Adjusting product surveys based on paralinguistic information
US20240194200A1 (en) System and method for change point detection in multi-media multi-person interactions
CN116844555A (en) Method and device for vehicle voice interaction, vehicle, electronic equipment and storage medium
CN117876047A (en) Control method and system of evaluation terminal, computer equipment and readable storage medium
CN116189682A (en) Text information display method and device, electronic equipment and storage medium
CN114298515A (en) Method, device and storage medium for generating student quality portrait
CN116092481A (en) Scoring method and device based on voice data, electronic equipment and storage medium
CN116187292A (en) Dialogue template generation method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant