CN112309431B - Method and system for evaluating voice infectivity of customer service personnel


Info

Publication number
CN112309431B
CN112309431B (granted publication of application CN202010992959.3A)
Authority
CN
China
Prior art keywords
voice
sample
infectivity
evaluation
customer service
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010992959.3A
Other languages
Chinese (zh)
Other versions
CN112309431A (en)
Inventor
肖龙源 (Xiao Longyuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Kuaishangtong Technology Co Ltd
Original Assignee
Xiamen Kuaishangtong Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Kuaishangtong Technology Co Ltd
Priority to CN202010992959.3A
Publication of CN112309431A
Application granted
Publication of CN112309431B
Legal status: Active

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/30 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00, characterised by the analysis technique, using neural networks
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00, specially adapted for particular use, for comparison or discrimination
    • G10L25/60 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00, specially adapted for particular use, for comparison or discrimination, for measuring the quality of voice signals
    • G10L25/63 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00, specially adapted for particular use, for comparison or discrimination, for estimating an emotional state
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/30 Computing systems specially adapted for manufacturing

Abstract

The invention discloses a method for evaluating the voice infectivity of customer service personnel, realized through the steps of sample selection, sample identification, feature extraction, model training, training-result evaluation, and target voice recognition. Based on a comprehensive infectivity grading scheme and a deep learning model, the method performs deep self-learning on manually rated sample voices so that a machine can finally recognize the infectivity grade of a target voice. This realizes machine evaluation of customer service voice infectivity, yields evaluation results that are more authoritative and standardized, and allows voice infectivity to be evaluated at an early stage, greatly reducing the cost of customer service training and development.

Description

Method and system for evaluating voice infectivity of customer service personnel
Technical Field
The invention relates to the field of voice recognition and evaluation, and in particular to a method and system for evaluating the voice infectivity of customer service personnel.
Background
Neither traditional industries nor the many industries that now operate through Internet platforms can do without customer service personnel. In emerging and high-tech industries in particular, technical strength and product quality often differ little between competitors, and what is then compared is quality of service. Most companies therefore have customer service departments and corresponding staff, and these are increasingly important: their service capability has a great impact on an enterprise's reputation and sales. The service capability of customer service personnel comprises two main aspects, speaking skill and infectivity. Speaking skill covers business knowledge, communication skill, psychology, and intelligence quotient, and is developed mainly through later training. Infectivity is an innate personal ability that is difficult to change substantially through subsequent study and training. Accurately identifying early on whether a person is suited to the post is therefore of great help to enterprises in recruiting. The prior art uses voice recognition and emotion recognition to recognize the emotion in customer service speech, and judges service quality by recognizing customer emotion, but such methods cannot fundamentally evaluate the voice infectivity of customer service personnel. The only way infectivity can be evaluated in the prior art is by manual assessment; an authoritative machine-based way of evaluating voice infectivity is lacking.
Disclosure of Invention
The technical problem the invention aims to solve is to realize machine evaluation of the infectivity of customer service voices; to this end, it provides a method and a system for evaluating the voice infectivity of customer service personnel.
In order to achieve the above purpose, the present invention provides the following technical solution. The method for evaluating the voice infectivity of customer service personnel comprises the following steps:
sample selection: extracting customer service voice samples, scoring and grading the infectivity of each sample according to a preset evaluation scheme, and attaching the grade to the sample as a label according to the grading standard;
sample identification: performing spectral processing on each voice sample to obtain its spectral data and associating the grade marked on the sample with that spectral data, the resulting data being divided into training samples and evaluation samples;
feature extraction: extracting feature values from the spectral data obtained in the sample identification step, the feature values comprising frequency, volume, and pitch;
model training: feeding the extracted feature values of the training samples and the corresponding grade labels into a deep learning network for model training;
training-result evaluation: feeding the feature values of the evaluation samples into the training model generated in the model training step to recognize their voice infectivity grades, and comparing the recognition results with the grades of the labels attached to the evaluation samples; if the recognition accuracy reaches a preset accuracy, the current training model is saved and the next step is performed, and if it does not, the process returns to the model training step to continue training;
target voice recognition: performing spectral processing on the target voice to obtain its spectral data, extracting feature values from that spectral data, and inputting the extracted feature values into the training model for recognition, yielding the infectivity grade of the target voice.
Preferably, the infectivity is divided into five grades, from grade one to grade five, where grade one denotes the weakest infectivity and grade five the strongest.
Further, the target voice recognition step may also include an emotion recognition process based on a speech emotion recognition algorithm; this process outputs the emotional state of the target voice.
Further, the deep learning network is a deep learning network based on a convolutional neural network.
Further, the deep learning network is a deep learning network based on a recurrent neural network.
Further, the deep learning network is a deep learning network based on a deep belief network.
Furthermore, the deep learning network performs a preprocessing step before recognizing the feature values, the preprocessing step comprising weighting of the feature values.
A further aim of the invention is to provide a system for evaluating the voice infectivity of customer service personnel, comprising a voice input module, a rating module, a voice recognition module, a feature extraction module, a deep learning module, and an output module. The voice input module is used for inputting sample voices and the target voice; the rating module is used for inputting the rating labels of the sample voices; the voice recognition module is used for performing spectral processing on the sample voices and the target voice; the feature extraction module is used for extracting feature values from the spectral data of the sample voices and the target voice; the deep learning module is used for performing deep learning on the feature values and label data of the sample voices and for recognizing the infectivity grade of the target voice according to its feature values; and the output module is used for outputting the infectivity recognition result of the target voice.
Compared with the prior art, the invention has the beneficial effects that:
according to the invention, through adopting a deep learning model, deep self-learning is carried out on the sample sound which is evaluated manually, the infectivity level of the target sound is finally realized, the machine evaluation of the infectivity of the customer service sound is realized, the evaluation result is more authoritative and standard, and the infectivity of the sound can be evaluated in the early stage, so that the cost of customer service training and cultivation is greatly reduced.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are plainly only some, not all, of the embodiments of the invention; all other embodiments obtained by those skilled in the art from these embodiments without inventive effort fall within the scope of the invention.
As shown in Fig. 1, the present embodiment discloses a method for evaluating the voice infectivity of customer service personnel, implemented through the following steps.
Sample selection: extract customer service voice samples, score and grade the infectivity of each sample according to a preset evaluation scheme, and attach the grade to the sample as a label according to the grading standard. Optionally, the collection and grading of sample voices can use a multi-dimensional evaluation scheme; for example, samples can be evaluated along dimensions such as customer satisfaction, company performance attainment, peer impression, and supervisor appraisal, and randomized cross-evaluation can be used to avoid the subjective bias that individual raters introduce. Specifically, the rating scheme can be configured to each company's requirements: after each dimension is scored, a weighted total score is computed, and grades are assigned according to bands of the total score. For example, in this embodiment the infectivity is divided into five grades, from grade one to grade five, with grade one the weakest and grade five the strongest. The grade of each sample is stored together with its audio data as a label, and a mapping table allows the label of each audio record to be read conveniently in later steps. To make the subsequent machine learning more accurate, the samples should be as diverse, broad, and rich as possible. For example, in this embodiment male and female voice samples are collected in equal proportion, the age range of the sample voices matches the age distribution of actual customer service personnel, the number of collected samples is made as large as practicable, and collection is likewise spread as widely as possible over other attributes. Preferably, after the evaluation of the sample voices is complete, the scores of all samples are tallied so that the collection covers all score intervals as fully as possible; to better reflect real conditions, some voice samples from people who are not customer service staff can be mixed into the collection. A minimal sketch of this weighted grading appears after this paragraph.
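As an illustration of the weighted grading just described, the following Python sketch computes a total score from per-dimension ratings and maps it to the five grades. The dimension weights and score bands are assumptions, since the patent leaves both to each company's requirements.

    # Assumed dimension weights (summing to 1.0) and score bands; the patent
    # leaves both to each company's requirements.
    DIMENSION_WEIGHTS = {
        "customer_satisfaction": 0.4,
        "performance_attainment": 0.3,
        "peer_impression": 0.15,
        "supervisor_appraisal": 0.15,
    }

    GRADE_BANDS = [(90, 5), (75, 4), (60, 3), (40, 2), (0, 1)]  # assumed cutoffs

    def grade_sample(scores: dict[str, float]) -> int:
        """Weight per-dimension scores (0-100) into a total and map it to one
        of the five infectivity grades (1 = weakest, 5 = strongest)."""
        total = sum(DIMENSION_WEIGHTS[d] * scores[d] for d in DIMENSION_WEIGHTS)
        for cutoff, grade in GRADE_BANDS:
            if total >= cutoff:
                return grade
        return 1

    # Example: label one sample; the grade is stored with the audio record
    # via a mapping table, as described above.
    label = grade_sample({
        "customer_satisfaction": 88,
        "performance_attainment": 79,
        "peer_impression": 85,
        "supervisor_appraisal": 90,
    })  # -> 4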
Sample identification: perform spectral processing on each voice sample to obtain its spectral data, and associate the grade marked on the sample with that data; the associated data are divided into training samples and evaluation samples. Preferably, the samples within each grade are split evenly between training and evaluation samples, which makes the subsequent evaluation more accurate. The spectral processing can be realized by first converting the audio data into a spectrogram with a spectral conversion routine and then reading information from the spectrogram, for example by the Mel-spectrum-coefficient method; the audio data of the target voice must likewise be converted into spectrogram form before feature values are acquired. Optionally, other embodiments of the invention may use other spectral methods to obtain the spectral information used for recognition. A sketch of the Mel-spectrogram conversion appears after this paragraph.
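The following sketch illustrates the spectral-processing step under the assumption that a library such as librosa performs the Mel-spectrum conversion; the patent names only the Mel-spectrum-coefficient method, not a specific tool, and the file path is hypothetical.

    import librosa
    import numpy as np

    def to_mel_spectrogram(path: str, sr: int = 16000) -> np.ndarray:
        """Load a voice sample and convert it to a log-scaled Mel spectrogram."""
        audio, sr = librosa.load(path, sr=sr)
        mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=64)
        return librosa.power_to_db(mel, ref=np.max)  # shape: (64, n_frames)

    spec = to_mel_spectrogram("sample_0001.wav")  # hypothetical sample file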
Feature extraction: extract feature values from the spectral data obtained in the sample identification step, the feature values comprising frequency, volume, and pitch; other embodiments of the invention can add further feature parameters as the user requires. Volume can be obtained from the amplitude of the audio signal, frequency by scanning and recognizing the audio spectrum with audio analysis software, and pitch by scanning the peaks of the spectrogram. Feature extraction can be carried out automatically by machine, for example by reading the data off the spectrogram with an existing machine-reading method; in other embodiments the reading can also be based on deep learning recognition. A sketch of this extraction appears after this paragraph.
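A hedged sketch of extracting the three named feature values follows. The patent does not prescribe particular estimators, so RMS energy for volume, the spectral centroid for frequency, and the pYIN pitch tracker are assumptions standing in for the amplitude-, spectrum-, and peak-based readings described above.

    import librosa
    import numpy as np

    def extract_features(path: str, sr: int = 16000) -> np.ndarray:
        """Return the [frequency, volume, pitch] feature vector of a sample."""
        audio, sr = librosa.load(path, sr=sr)
        volume = librosa.feature.rms(y=audio).mean()                     # amplitude-based volume
        freq = librosa.feature.spectral_centroid(y=audio, sr=sr).mean()  # dominant-frequency proxy
        f0, _, _ = librosa.pyin(audio, fmin=65.0, fmax=500.0, sr=sr)     # pitch track from spectral peaks
        pitch = np.nanmean(f0)                                           # mean voiced pitch (Hz)
        return np.array([freq, volume, pitch])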
Model training: feed the extracted feature values of the training samples and the corresponding grade labels into a deep learning network for model training. Specifically, the deep learning process can include multiple convolution passes, using a deep learning network based on a convolutional neural network: through convolution and pooling the network learns to recognize the features and establishes the correspondence with the labels. Networks such as LeNet-5, VGG, or AlexNet can be used, for example. The deep learning network can also be based on a recurrent neural network, in which case an emotion recognition step can be added to the recognition process. Further, the deep learning network is preferably based on a deep belief network. When a deep belief network is used, a preprocessing step is performed before the feature values are recognized; this preprocessing includes weighting the feature values. Weighting coefficients can be assigned to the feature values according to specific requirements, giving each feature the desired weight. For example, if the user adds a feature parameter beyond those of this embodiment, such as timbre, it can be weighted more heavily: frequency, pitch, and volume might each be weighted 0.2 and timbre 0.4. (Although the prior art lacks a method for extracting timbre features, timbre could be added if such a method becomes available.) Timbre is weighted more heavily because it is innate and cannot be changed by later training; if a candidate's timbre does not meet the requirements of the customer service post, the candidate is unsuitable, so its proportion is increased. Optionally, the preprocessing can be implemented in software. A sketch of the training setup with this weighting appears after this paragraph.
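The PyTorch sketch below illustrates the training step with the weighted-feature preprocessing. A small fully connected network is used as a runnable stand-in for the CNN, RNN, or DBN variants named above (a three-value feature vector has no spatial structure for convolution), and all hyperparameters are assumptions; a comment notes where a hypothetical timbre feature would receive the 0.4 weight from the example.

    import torch
    import torch.nn as nn

    # Weighting preprocessing from the text: frequency, volume, and pitch each
    # weighted 0.2; a hypothetical fourth timbre feature would be weighted 0.4.
    FEATURE_WEIGHTS = torch.tensor([0.2, 0.2, 0.2])

    class InfectivityNet(nn.Module):
        def __init__(self, n_features: int = 3, n_grades: int = 5):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_features, 32), nn.ReLU(),
                nn.Linear(32, n_grades),  # logits for grades one to five
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x * FEATURE_WEIGHTS)  # weighted preprocessing

    model = InfectivityNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    def train_step(features: torch.Tensor, grades: torch.Tensor) -> float:
        """One training pass: features shape (batch, 3), grades in 0..4."""
        optimizer.zero_grad()
        loss = loss_fn(model(features), grades)
        loss.backward()
        optimizer.step()
        return loss.item()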
Training-result evaluation: feed the feature values of the evaluation samples into the training model generated in the model training step to recognize their voice infectivity grades, and compare the recognition results with the grades of the labels attached to the evaluation samples. If the recognition accuracy reaches the preset accuracy, save the current training model and proceed to the next step; if not, return to the model training step and continue training. For example, in this embodiment the requirement can be a recognition accuracy above 90%. After an evaluation sample is input, it is recognized by the training model obtained in the previous step and the result is compared with the label the sample carries: if the recognized grade matches the grade of the label, the recognition is marked accurate, otherwise inaccurate. Optionally, the comparison can be implemented in software. In addition, to improve model accuracy, the training-result evaluation can be run after training on more samples, for example once after every 50 groups of results. A sketch of the accuracy check appears after this paragraph.
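A sketch of the accuracy check follows, reusing the model and train_step from the previous sketch; the 90% threshold is the one named in this embodiment, and the batching iterable is hypothetical.

    import torch

    PRESET_ACCURACY = 0.90  # the threshold named in this embodiment

    def evaluate(model: torch.nn.Module,
                 eval_features: torch.Tensor,
                 eval_grades: torch.Tensor) -> float:
        """Fraction of evaluation samples whose recognized grade matches the label."""
        with torch.no_grad():
            predicted = model(eval_features).argmax(dim=1)
        return (predicted == eval_grades).float().mean().item()

    # Keep training until the preset accuracy is reached, re-evaluating after
    # every batch of groups as suggested above (training_batches is hypothetical):
    # while evaluate(model, eval_features, eval_grades) < PRESET_ACCURACY:
    #     for features, grades in training_batches:
    #         train_step(features, grades)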
Target voice recognition: perform spectral processing on the target voice to obtain its spectral data, extract feature values from that data, and input the extracted feature values into the training model for recognition, yielding the infectivity grade of the target voice. Optionally, in another embodiment of the invention, the target voice recognition step also includes an emotion recognition process based on a speech emotion recognition algorithm, which outputs the emotional state of the target voice. Since emotion judgments are subject to subjective bias, the emotion recognition result can be output for reference. An end-to-end sketch of this recognition appears after this paragraph.
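The following end-to-end sketch ties the previous pieces together for a target voice; the helper names and the file path are the hypothetical ones introduced above.

    import torch

    def recognize_infectivity(path: str) -> int:
        """Return the recognized infectivity grade (1-5) of a target voice."""
        features = torch.tensor(extract_features(path), dtype=torch.float32)
        with torch.no_grad():
            logits = model(features.unsqueeze(0))    # batch of one
        return int(logits.argmax(dim=1).item()) + 1  # class 0..4 -> grade 1..5

    print(recognize_infectivity("candidate.wav"))    # hypothetical target voice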
In addition, the method can be realized by a system comprising a voice input module, a rating module, a voice recognition module, a feature extraction module, a deep learning module, and an output module. The voice input module records the sample voices and the target voice; the device can be a digital recorder, or voice data can be received over a data connection or read from a storage device. The rating module inputs the rating labels of the sample voices; the labels are scored manually, and optionally a rating form can be set up beforehand and imported into a database to form a mapping, whose entries are retrieved after feature extraction to form the data set. Input can be performed through data-entry hardware such as a keyboard. The voice recognition module performs spectral processing on the sample voices and the target voice, and can be spectral-processing software installed on a computer. The feature extraction module extracts feature values from the spectral data of the sample voices and the target voice; it can be a software program that reads the spectrogram by machine, runs automatically, and outputs its results to the deep learning module. The deep learning module performs deep learning on the feature values and label data of the sample voices and recognizes the infectivity grade of the target voice from its feature values; it hosts the deep learning network on a computer, runs automatically to produce results, and its training and recognition processes can be started manually. The output module outputs the infectivity recognition result for the target voice; it can be built into the software hosting the deep learning network and deliver the result through an internal interface to the computer's display. An illustrative sketch of the module pipeline appears after this paragraph.
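As a rough illustration of how the modules chain together, the sketch below wraps the earlier helpers in a system object; the class and method names are assumptions, since the patent describes the modules functionally rather than as code.

    class InfectivityEvaluationSystem:
        """Illustrative wiring of the six modules described above."""

        def __init__(self, model):
            self.model = model                  # deep learning module
            self.ratings: dict[str, int] = {}   # rating module's mapping table

        def input_rating(self, sample_path: str, grade: int) -> None:
            """Rating module: store a manually scored label for a sample voice."""
            self.ratings[sample_path] = grade

        def recognize(self, target_path: str) -> int:
            """Voice recognition, feature extraction, and deep learning modules."""
            return recognize_infectivity(target_path)  # sketched above

        def output(self, target_path: str) -> None:
            """Output module: display the infectivity recognition result."""
            print(f"{target_path}: infectivity grade {self.recognize(target_path)}")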
By adopting a deep learning model that performs deep self-learning on manually rated sample voices, the invention enables a machine to recognize the infectivity grade of a target voice, realizing machine evaluation of customer service voice infectivity. The evaluation results are more authoritative and standardized, and voice infectivity can be evaluated at an early stage, greatly reducing the cost of customer service training and development.
The embodiments of the present invention have been described in detail above with reference to the accompanying drawings, but the invention is not limited to the described embodiments. Various changes, modifications, substitutions, and alterations that those skilled in the art may make to these embodiments without departing from the principles and spirit of the invention still fall within the scope of the invention.

Claims (8)

1. A method for evaluating the voice infectivity of customer service personnel, characterized by comprising the following steps:
sample selection: extracting customer service voice samples, scoring and grading the infectivity of each sample according to a preset evaluation scheme, and attaching the grade to the sample as a label according to the grading standard;
sample identification: performing spectral processing on each voice sample to obtain its spectral data and associating the grade marked on the sample with that spectral data, the resulting data being divided into training samples and evaluation samples;
feature extraction: extracting feature values from the spectral data obtained in the sample identification step, the feature values comprising frequency, volume, and pitch;
model training: feeding the extracted feature values of the training samples and the corresponding grade labels into a deep learning network for model training;
training-result evaluation: feeding the feature values of the evaluation samples into the training model generated in the model training step to recognize their voice infectivity grades, and comparing the recognition results with the grades of the labels attached to the evaluation samples; if the recognition accuracy reaches a preset accuracy, saving the current training model and proceeding to the next step, and if it does not, returning to the model training step to continue training;
target voice recognition: performing spectral processing on the target voice to obtain its spectral data, extracting feature values from that spectral data, and inputting the extracted feature values into the training model for recognition, yielding the infectivity grade of the target voice;
wherein the collection and grading of the sample voices is carried out by multi-dimensional evaluation comprising at least the dimensions of customer satisfaction, company performance attainment, peer impression, and supervisor appraisal; after each dimension is scored a weighted total score is computed, grades are assigned according to bands of the total score, and the grade of each sample is stored together with its audio data as a label.
2. The method for evaluating the voice infectivity of customer service personnel according to claim 1, wherein the infectivity is divided into five grades, from grade one to grade five, grade one denoting the weakest infectivity and grade five the strongest.
3. The method for evaluating the voice infectivity of customer service personnel according to claim 1, wherein the target voice recognition step further comprises an emotion recognition process based on a speech emotion recognition algorithm, the emotion recognition process outputting the emotional state of the target voice.
4. The method for evaluating the voice infectivity of customer service personnel according to claim 1, wherein the deep learning network is a deep learning network based on a convolutional neural network.
5. The method for evaluating the voice infectivity of customer service personnel according to claim 1, wherein the deep learning network is a deep learning network based on a recurrent neural network.
6. The method for evaluating the voice infectivity of customer service personnel according to claim 1, wherein the deep learning network is a deep learning network based on a deep belief network.
7. The method for evaluating the voice infectivity of customer service personnel according to claim 6, wherein a preprocessing step is performed before the deep learning network recognizes the feature values, the preprocessing step comprising weighting of the feature values.
8. A system for evaluating the voice infectivity of customer service personnel, characterized by comprising a voice input module, a rating module, a voice recognition module, a feature extraction module, a deep learning module, and an output module, wherein the voice input module is used for inputting sample voices and the target voice; the rating module is used for inputting the rating labels of the sample voices; the voice recognition module is used for performing spectral processing on the sample voices and the target voice; the feature extraction module is used for extracting feature values from the spectral data of the sample voices and the target voice; the deep learning module is used for performing deep learning on the feature values and label data of the sample voices and for recognizing the infectivity grade of the target voice according to its feature values; and the output module is used for outputting the infectivity recognition result of the target voice;
wherein the collection and grading of the sample voices is carried out by multi-dimensional evaluation comprising at least the dimensions of customer satisfaction, company performance attainment, peer impression, and supervisor appraisal; after each dimension is scored a weighted total score is computed, grades are assigned according to bands of the total score, and the grade of each sample is stored together with its audio data as a label.
CN202010992959.3A (priority 2020-09-21, filed 2020-09-21): Method and system for evaluating voice infectivity of customer service personnel. Granted as CN112309431B; status: Active.

Priority Applications (1)

CN202010992959.3A (priority 2020-09-21, filed 2020-09-21): Method and system for evaluating voice infectivity of customer service personnel

Applications Claiming Priority (1)

CN202010992959.3A (priority 2020-09-21, filed 2020-09-21): Method and system for evaluating voice infectivity of customer service personnel

Publications (2)

CN112309431A (en): published 2021-02-02
CN112309431B (granted): published 2024-02-23

Family

ID=74483325

Family Applications (1)

CN202010992959.3A: CN112309431B (Active)

Country Status (1)

CN: CN112309431B (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
JP5855290B2 * (priority 2014-06-16, published 2016-02-09, Panasonic IP Management Co., Ltd.): Service evaluation device, service evaluation system, and service evaluation method
US20190253558A1 * (priority 2018-02-13, published 2019-08-15, Risto Haukioja): System and method to automatically monitor service level agreement compliance in call centers

Patent Citations (3)

* Cited by examiner, † Cited by third party
JPH11119791A * (priority 1997-10-20, published 1999-04-30, Hitachi Ltd): System and method for voice emotion recognition
JP2014178835A * (priority 2013-03-14, published 2014-09-25, Nissha Printing Co Ltd): Evaluation system and evaluation method
CN106952656A * (priority 2017-03-13, published 2017-07-14, Central South University): Remote assessment method and system for language appeal

Also Published As

CN112309431A (en): published 2021-02-02

Similar Documents

Publication Title
CN112000791A (en) Motor fault knowledge extraction system and method
CN113360616A (en) Automatic question-answering processing method, device, equipment and storage medium
CN110826320A (en) Sensitive data discovery method and system based on text recognition
CN110858269B (en) Fact description text prediction method and device
CN111916108B (en) Voice evaluation method and device
CN108322317A (en) A kind of account identification correlating method and server
CN113807103B (en) Recruitment method, device, equipment and storage medium based on artificial intelligence
TW201935370A (en) System and method for evaluating customer service quality from text content
CN111681021A (en) GCA-RFR model-based digital content resource value evaluation method
CN115457980A (en) Automatic voice quality evaluation method and system without reference voice
CN112052686B (en) Voice learning resource pushing method for user interactive education
CN112309431B (en) Method and system for evaluating voice infectivity of customer service personnel
CN112434862B (en) Method and device for predicting financial dilemma of marketing enterprises
CN113140228A (en) Vocal music scoring method based on graph neural network
CN117196402A (en) Target object determination method and device, storage medium and electronic equipment
CN115984956A (en) Man-machine cooperation student classroom attendance multi-mode visual analysis system
CN114822557A (en) Method, device, equipment and storage medium for distinguishing different sounds in classroom
CN112131354B (en) Answer screening method and device, terminal equipment and computer readable storage medium
CN114678039A (en) Singing evaluation method based on deep learning
CN113889274A (en) Method and device for constructing risk prediction model of autism spectrum disorder
CN112733011A (en) Self-recommendation system for information consultation
CN111061852A (en) Data processing method, device and system
CN116389644B (en) Outbound system based on big data analysis
CN116564351B (en) Voice dialogue quality evaluation method and system and portable electronic equipment
CN113066327B (en) Online intelligent education method for college students

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant