CN107452385A - Voice-based data evaluation method and device - Google Patents
Voice-based data evaluation method and device
- Publication number
- CN107452385A CN107452385A CN201710703860.5A CN201710703860A CN107452385A CN 107452385 A CN107452385 A CN 107452385A CN 201710703860 A CN201710703860 A CN 201710703860A CN 107452385 A CN107452385 A CN 107452385A
- Authority
- CN
- China
- Prior art keywords
- voice content
- result
- voice
- emotion recognition
- sentiment analysis
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
Abstract
Embodiments of the present invention provide a voice-based data evaluation method and device, belonging to the technical field of data processing. The voice-based data evaluation method includes: performing voiceprint recognition on voice content according to voiceprint features of the voice content, and separating the voice content according to the voiceprint recognition result; performing emotion recognition on the separated voice content to generate an emotion recognition result; generating a speech text from the separated voice content, and performing sentiment analysis on the speech text to generate a sentiment analysis result; and evaluating the voice content before separation according to the emotion recognition result and the sentiment analysis result, generating an evaluation result. With the embodiments of the present invention, the labour cost of data evaluation is reduced, evaluation data are less likely to be lost, and the evaluation result is more objective and comprehensive.
Description
Technical field
The embodiments of the present invention relate to the technical field of data processing, and in particular to a voice-based data evaluation method and device.
Background technology
Enterprises typically maintain a large telephone-based human customer-service team, mainly responsible for collecting user feedback, answering user enquiries, and responding to customer complaints. The service quality of the customer-service team directly affects how users perceive the enterprise, and the user feedback it records is also important for product improvement and for formulating enterprise development plans. The human customer-service team is therefore an indispensable part of an enterprise at this stage, and assessing the team's service quality has become an important problem, bearing on team performance appraisal, staffing changes, and so on.
Traditional customer-service quality evaluation is based mainly on manual scoring after the service ends. This evaluation pattern has several defects. First, customer evaluations involve many subjective factors, and personal habits strongly influence the scores. Second, customers cannot be compelled to evaluate, so scores may be missing. Third, a score cannot be associated with the message it concerns. Traditional customer-service quality evaluation is therefore labour-intensive, its evaluation data are easily lost, and its results cannot objectively reflect the quality of the service.
Summary of the invention
In view of this, one technical problem solved by the embodiments of the present invention is to provide a voice-based data evaluation method and device, to overcome the defects of the prior art: high labour cost, easily missing evaluation data, and evaluation results that cannot objectively reflect the speech quality of the voice content.
For the above purpose, an embodiment of the present invention provides a voice-based data evaluation method, including:
performing voiceprint recognition on voice content according to voiceprint features of the voice content, and separating the voice content according to the voiceprint recognition result;
performing emotion recognition on the separated voice content to generate an emotion recognition result;
generating a speech text from the separated voice content, and performing sentiment analysis on the speech text to generate a sentiment analysis result;
evaluating the voice content before separation according to the emotion recognition result and the sentiment analysis result, generating an evaluation result.
For the above purpose, an embodiment of the present invention also provides a voice-based data evaluation device, including:
a voice content separation module, configured to perform voiceprint recognition on voice content according to voiceprint features of the voice content and to separate the voice content according to the voiceprint recognition result;
an emotion recognition module, configured to perform emotion recognition on the separated voice content and to generate an emotion recognition result;
a sentiment analysis module, configured to generate a speech text from the separated voice content, to perform sentiment analysis on the speech text, and to generate a sentiment analysis result;
a voice content evaluation module, configured to evaluate the voice content before separation according to the emotion recognition result and the sentiment analysis result, generating an evaluation result.
As the above technical solutions show, the voice-based data evaluation method of the embodiments of the present invention separates the voice content, performs emotion recognition on the separated voice content to generate an emotion recognition result, generates a speech text from the separated voice content and performs sentiment analysis on it to generate a sentiment analysis result, and finally evaluates the voice content before separation according to the emotion recognition result and the sentiment analysis result to obtain an evaluation result. With the solutions provided by the embodiments of the present invention, on the one hand, the voice data are processed by machine, which reduces manual intervention and enables automatic evaluation of the data, such as automatic evaluation of customer-service quality, lowering labour cost. On the other hand, compared with manual evaluation of voice content, evaluation by analysis of the voice content is more objective; it no longer needs to depend entirely on manual evaluation, and this reduced dependence means evaluation data are less likely to be lost. In a further aspect, based both on analysis of the voice content itself and on analysis of the corresponding speech text, the voice content is evaluated from multiple dimensions, making the evaluation result more comprehensive.
Brief description of the drawings
To explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below cover only some embodiments of the present invention; those of ordinary skill in the art may also derive other drawings from them.
Fig. 1 is a flowchart of the voice-based data evaluation method of Example 1 of the present invention;
Fig. 2 is a flowchart of the voice-based data evaluation method of Example 2 of the present invention;
Fig. 3 is a schematic diagram of the voice-based data evaluation device of Example 3 of the present invention;
Fig. 4 is a schematic diagram of the voice-based data evaluation device of Example 4 of the present invention.
Detailed description of the embodiments
To help those skilled in the art better understand the technical solutions in the embodiments of the present invention, these solutions are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention; all other embodiments obtained on their basis by those of ordinary skill in the art shall fall within the protection scope of the embodiments of the present invention.
It should be noted that an implementation of any technical solution of the embodiments of the present invention need not achieve all of the above advantages at the same time. Specific implementations of the embodiments of the present invention are further described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of the voice-based data evaluation method of Example 1 of the present invention. In this embodiment, the voice-based data evaluation method includes the following steps:
S101: Perform voiceprint recognition on voice content according to voiceprint features of the voice content, and separate the voice content according to the voiceprint recognition result.
In this embodiment, the voice content may be the actual content of a voice file, where the voice file may be any suitable recording of a person's speech. When the solution of this embodiment is used to assess the service quality of a customer-service agent, the voice file may be a call recording between a customer and the agent. By extracting the voiceprint of the voice content, the role corresponding to the voice content is distinguished as agent or customer, that is, whether a piece of voice content belongs to the agent or to the customer. Voiceprint recognition may be implemented by those skilled in the art in any suitable way according to actual requirements; the embodiments of the present invention place no specific limit on the voiceprint recognition method.
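Since the embodiment leaves the voiceprint method open, the following is only an illustrative sketch of role separation, not the patent's implementation: each segment is reduced to a crude spectral embedding (band-averaged magnitude spectrum) and assigned to the nearest reference voiceprint by cosine similarity. The band count, the synthetic tones, and the embedding itself are all assumptions for the demonstration; a real system would use dedicated voiceprint features.

```python
import numpy as np

def embedding(segment: np.ndarray, bands: int = 8) -> np.ndarray:
    """Crude spectral embedding: mean magnitude spectrum in a few bands."""
    spec = np.abs(np.fft.rfft(segment))
    return np.array([b.mean() for b in np.array_split(spec, bands)])

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def separate(segments, ref_a, ref_b):
    """Assign each segment to speaker 'A' or 'B' by nearest reference voiceprint."""
    labels = []
    for seg in segments:
        e = embedding(seg)
        labels.append('A' if cosine(e, ref_a) >= cosine(e, ref_b) else 'B')
    return labels

# Synthetic demo: speaker A as a ~200 Hz tone, speaker B as a ~800 Hz tone.
sr = 8000
t = np.arange(sr) / sr
tone = lambda f: np.sin(2 * np.pi * f * t)
ref_a, ref_b = embedding(tone(200)), embedding(tone(800))
segments = [tone(210), tone(790), tone(205)]
print(separate(segments, ref_a, ref_b))  # ['A', 'B', 'A']
```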
S102: Perform emotion recognition on the separated voice content to generate an emotion recognition result.
After the voice content has been separated, emotion recognition is performed on it, for example on the agent's voice content and the customer's voice content respectively. To some extent, the agent's service quality is reflected in the mood of the agent and/or the customer. For instance, during the conversation between agent and customer, the customer's mood can be reflected in the frequencies of the speech: different people's speech can have entirely different frequency-domain characteristics, and, as reflected on the spectrum, the same person's different moods also have entirely different spectral characteristics. A corresponding emotion recognition result can therefore be generated from the voice content. That is, emotion recognition can be performed by analysing the spectrum of the separated voice content, generating an emotion recognition result.
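As an illustration only of the spectrum-based idea above, the sketch below maps the spectral centroid of a signal to a 0-to-1 "agitation" score. The calibration points (`calm_hz`, `agitated_hz`) and the pure-tone stand-ins are hypothetical; the patent does not specify a particular spectral feature or mapping.

```python
import numpy as np

def spectral_centroid(signal: np.ndarray, sr: int) -> float:
    """Frequency-weighted average of the magnitude spectrum, in Hz."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float((freqs * spec).sum() / (spec.sum() + 1e-12))

def emotion_score(signal, sr, calm_hz=300.0, agitated_hz=900.0):
    """Map the centroid linearly to a 0..1 score (hypothetical calibration)."""
    c = spectral_centroid(signal, sr)
    return float(np.clip((c - calm_hz) / (agitated_hz - calm_hz), 0.0, 1.0))

sr = 8000
t = np.arange(sr) / sr
calm = np.sin(2 * np.pi * 220 * t)      # low-pitched stand-in for calm speech
agitated = np.sin(2 * np.pi * 950 * t)  # high-pitched stand-in for agitation
print(emotion_score(calm, sr), emotion_score(agitated, sr))  # 0.0 1.0
```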
S103: Generate a speech text from the separated voice content, and perform sentiment analysis on the speech text to generate a sentiment analysis result.
In one feasible mode of this embodiment, word vectors of the speech text are obtained, and a convolutional neural network performs sentiment analysis on the word vectors, generating a sentiment analysis result. The voice content is converted into text data, word vectors are used as the text features, and a machine-learning method using a recurrent neural network (RNN) is trained on a large manually labelled training set to obtain a training model. When this training model is used for sentiment analysis of a speech text, the word vectors in the text are extracted to analyse the sentiment of the agent and the customer, and a corresponding sentiment analysis result is generated; specifically, the word vectors of the speech text are obtained and the convolutional neural network performs sentiment analysis on them, generating the sentiment analysis result. This is not limiting: in actual use, those skilled in the art may also adopt other suitable ways, or use other neural networks or machine-learning methods, to obtain a sentiment analysis result from the text data.
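As an illustration only of the word-vector-plus-convolution idea described above (not the patent's trained model), the sketch below runs a one-dimensional convolution over toy word vectors, max-pools over time, and squashes the result to a 0..1 score. The vocabulary, embedding size, and random weights are all invented; an untrained network only demonstrates the shape of the computation, so only the output range is meaningful.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = {"thanks": 0, "helpful": 1, "slow": 2, "angry": 3, "service": 4}
EMB = rng.normal(size=(len(VOCAB), 8))   # toy 8-dim word vectors
FILTERS = rng.normal(size=(4, 3, 8))     # 4 filters over 3-word windows
W_OUT = rng.normal(size=4)

def sentiment_score(tokens):
    """1-D convolution over word vectors, max-pool over time, logistic output."""
    x = EMB[[VOCAB[t] for t in tokens]]               # (num_words, 8)
    windows = [x[i:i + 3] for i in range(len(x) - 2)]
    conv = np.array([[np.tanh((w * f).sum()) for f in FILTERS] for w in windows])
    pooled = conv.max(axis=0)                         # max over time
    return float(1.0 / (1.0 + np.exp(-pooled @ W_OUT)))

s = sentiment_score(["thanks", "helpful", "service", "slow"])
print(0.0 <= s <= 1.0)  # True
```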
When data evaluation is actually carried out with the method of this embodiment, there is no necessary order between this step and step S102 above; they may also be performed in parallel.
S104: Evaluate the voice content before separation according to the emotion recognition result and the sentiment analysis result, generating an evaluation result.
After the emotion recognition result and the sentiment analysis result have been generated, they are used to evaluate the voice content before separation, and an evaluation result is generated automatically; this evaluation result serves as a score of the agent's service quality. Indicators of other dimensions may also be incorporated into the evaluation of the voice content before separation; the embodiments of the present invention place no further limit here. Both the emotion recognition of the voice content and the sentiment analysis of the corresponding speech text are based on the actual voice content, for example the actual spoken dialogue between agent and customer. They involve no manual judgement and are unaffected by manual scoring, such as the customer's rating of the agent, reducing the degree of manual intervention.
In the method of this embodiment, the voice data are processed by machine, which reduces manual intervention and enables automatic evaluation of the data. Compared with manual evaluation of voice content, evaluation by analysis of the voice content is more objective and need not depend on manual evaluation; this reduced dependence means evaluation data are less likely to be lost. Based on analysis of the voice content and of the corresponding speech text, the voice content is evaluated from multiple dimensions, making the evaluation result more comprehensive.
In an optional embodiment of the present invention, separating the voice content according to the voiceprint recognition result includes: using the voiceprint recognition result to divide the voice content of the voice file into first voice content and second voice content. When the above method is applied to assess an agent's service quality during a voice call, the first voice content and the second voice content correspond to the customer's voice content and the agent's voice content respectively. Emotion recognition can then be performed separately on the first voice content and the second voice content, generating corresponding emotion recognition results. That is, voiceprint recognition is performed on the voice content according to its voiceprint features, the voice content is divided into first voice content and second voice content according to the recognition result, and when emotion recognition is performed on the separated voice content, it is based on the first and second voice content, generating a corresponding first emotion recognition result and second emotion recognition result. In ordinary circumstances, however, the agent's mood during a voice call is relatively stable, so only the customer's voice content may be evaluated: the customer's voice content not only reflects the conversation between customer and agent but also indirectly reflects the customer's satisfaction with the agent. The evaluation result can therefore also reflect the service quality of the agent serving that customer.
Further, when the above method is applied to assess an agent's service quality during a voice call, the emotion recognition result includes a first emotion score and a second emotion score, and the sentiment analysis result includes a first sentiment score and a second sentiment score. For example, the first emotion score and the first sentiment score may correspond to the customer's emotion score and sentiment score respectively, and the second emotion score and the second sentiment score to the agent's. That is, based on the first voice content and the second voice content, emotion recognition can be performed on the separated voice content, generating a corresponding first emotion recognition result and second emotion recognition result; and a speech text is generated from the separated voice content and sentiment analysis performed on it, generating a corresponding first sentiment analysis result and second sentiment analysis result.
To make the generated evaluation result more comprehensive, the above embodiments may further include: extracting pre-stored subjective evaluation data of the voice content before separation, and evaluating the voice content before separation according to the emotion recognition result, the sentiment analysis result, and the subjective evaluation data, generating an evaluation result. The subjective evaluation data may, for example, be the customer's voluntary rating of the agent. Taking the subjective evaluation data into account adds a further evaluation dimension, so that the voice content can be evaluated more comprehensively and objectively. When assessing an agent's service quality during a voice call, the customer may give a score for the agent's service quality after the call ends; this score also reflects the agent's service quality to some extent, so the customer's subjective evaluation data can serve as one of the multiple dimensions of the evaluation.
In an optional embodiment of the present invention, when the voice content is separated according to the voiceprint recognition result, the extracted voiceprint can be matched against pre-stored voiceprints. If the match succeeds, the voice content is determined to be second voice content (for example the agent's voice content); if it fails, the voice content is determined to be first voice content (for example the customer's voice content). Thus, when the above method is applied to assess agents' service quality during voice calls, the voiceprints of all agents need to be extracted and stored in advance, so that during separation the extracted voiceprint can be matched directly against the stored voiceprints. Since the stored voiceprints are all agents' voiceprints, a successful voiceprint match identifies agent voice content and, correspondingly, a failed match identifies customer voice content.
In some embodiments of the present invention, before voiceprint recognition is performed on the voice content of a voice file according to its voiceprint features and the voice content is separated according to the recognition result, the method further includes: judging whether the voice content of the voice file already has a corresponding evaluation result, and if so, ignoring that voice file. If the voice content has a corresponding evaluation result, it has already been evaluated and an evaluation result obtained. In this case, ignoring voice files that already have evaluation results avoids repeated evaluation of the same voice file: on the one hand, this reduces workload; on the other hand, it also helps ensure the accuracy of the evaluation results, preventing excessively high or low evaluation data from appearing repeatedly.
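The de-duplication check above can be sketched in a few lines. The dictionary mapping file names to results is a hypothetical stand-in for however evaluation results are actually persisted.

```python
def files_to_evaluate(voice_files, evaluation_store):
    """Skip any voice file that already has an evaluation result,
    so the same call is never scored twice."""
    return [f for f in voice_files if f not in evaluation_store]

store = {"call_001.wav": 0.87}  # call_001 was evaluated previously
pending = files_to_evaluate(["call_001.wav", "call_002.wav"], store)
print(pending)  # ['call_002.wav']
```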
In some embodiments of the present invention, before voiceprint recognition is performed on the voice content according to its voiceprint features, the method further includes: performing principal component analysis (PCA) dimensionality reduction on the emotion recognition results, sentiment analysis results, and subjective evaluation data corresponding to historical voice content, generating corresponding weights. Then, in the above embodiment, evaluating the voice content before separation according to the emotion recognition result, the sentiment analysis result, and the subjective evaluation data to generate an evaluation result includes: according to the emotion recognition result and its corresponding weight, the sentiment analysis result and its corresponding weight, and the subjective evaluation data and their corresponding weight, computing the weighted sum of the emotion recognition result, the sentiment analysis result, and the subjective evaluation data, and taking the weighted sum as the evaluation result.
The weights of the emotion recognition result, the sentiment analysis result, and the subjective evaluation data can be generated by PCA dimensionality reduction on the emotion recognition results, sentiment analysis results, and subjective evaluation data corresponding to historical voice content: the number of dimensions of the data set is reduced by retaining the low-order principal components and discarding the high-order ones, preserving the features that contribute most to the variance of the data set.
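The weighted summation described above can be sketched as follows, assuming the weights have already been produced (for example by the PCA procedure discussed in the text). The three scores and the weight values here are invented for the demonstration.

```python
def evaluate(emotion, sentiment, subjective, weights):
    """Weighted sum of the three evaluation dimensions; the weights are
    assumed to come from PCA over historical data (hypothetical values)."""
    scores = [emotion, sentiment, subjective]
    return sum(s * w for s, w in zip(scores, weights))

# Hypothetical weights favouring the emotion dimension.
result = evaluate(0.8, 0.6, 1.0, [0.5, 0.3, 0.2])
print(round(result, 2))  # 0.78
```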
Fig. 2 is a flowchart of the voice-based data evaluation method of Example 2 of the present invention. As a specific embodiment of the present invention, the voice-based data evaluation method includes:
S201: Judge whether the voice content has a corresponding evaluation result; if there is no evaluation result, proceed to step S101 of Embodiment 1, and if there is one, proceed to step S202.
S202: Ignore the voice file, that is, skip this voice file and obtain the next one.
In addition, the following step is included on the basis of Embodiment 1:
S203: Perform PCA dimensionality reduction on the emotion recognition result and the sentiment analysis result, generating corresponding weights.
When the above method is applied to assess an agent's service quality during a voice call, and the customer's voluntary rating is used in evaluating the agent's service quality, PCA dimensionality reduction also needs to be performed on the customer's voluntary rating and subjective evaluation data, generating corresponding weights.
An example follows of how the weights are generated using PCA dimensionality reduction. First, the eigenvector corresponding to the principal component, i.e. the weights, is obtained from all the historical voice data based on PCA. The system then enters the service stage, during which this weighting strategy need not be changed.
Consider a training set that uses the scores of 100 historical voice records as training data, with 10 feature dimensions. A 100x10 matrix can then be built as the sample. The covariance matrix of this sample is computed, yielding a 10x10 covariance matrix, and its eigenvalues and eigenvectors are obtained; there should be 10 eigenvalues and eigenvectors. Ordered by eigenvalue size, the eigenvectors corresponding to the four largest eigenvalues form a 10x4 matrix, which is the required eigenmatrix; multiplying the 100x10 sample matrix by this 10x4 eigenmatrix yields a new 100x4 reduced sample matrix, in which the dimension of each feature has been lowered. In the present method, what is ultimately wanted is a single score representing the weight of the emotion recognition result, the sentiment analysis result, and the subjective evaluation data of the preceding embodiments in assessing the agent's service quality during a voice call; the eigenvector corresponding to the largest eigenvalue is therefore taken, forming a 10x1 matrix, i.e. the weights. In the example below the data dimension is 5, so the weight matrix is 5x1.
In one example, the scores of the following historical voice records serve as training samples:
{"service_voice_emotion": [0.8, 0.7, 0.6, ...],
"customer_voice_emotion": [0.3, 0.4, 0.7, ...],
"service_text_feeling": [0.2, 0.5, 0.7, ...],
"customer_text_feeling": [0.2, 0.5, 0.7, ...],
"customer_rating": [0.8, 0.6, 1.0, ...]}
The scores of these historical voice records are all normalised to the range 0 to 1; the higher the score, the better. Here service_voice_emotion is the agent's voice emotion score, customer_voice_emotion is the user's voice emotion score, service_text_feeling is the agent's text sentiment analysis score, customer_text_feeling is the user's text sentiment analysis score, and customer_rating is the user's voluntary feedback score. After PCA processing of the above training sample data, the corresponding data matrix X (M x N) is:
[[0.8,0.7,0.6,...],
[0.3,0.4,0.7,...],
[0.2,0.5,0.7,...],
[0.2,0.5,0.7,...],
[0.8,0.6,1.0,...]]
Here M is the data dimension, equal to 5 in this example, and N is the number of training samples, equal to 100 in this example.
The process for generating weights is as follows:
Compute the mean A of the training samples, i.e., each dimension m has a mean Am.
Compute the difference D = X - A between each sample and the mean.
Construct the feature covariance matrix C = (1/N) * sum_i(d_i * d_i^T). If the data were 3-dimensional, C would take the form of a 3x3 matrix whose (j, k) entry is the covariance between dimensions j and k.
Here, as stated earlier, N denotes the number of training samples, A denotes the mean of the training samples, T denotes matrix transposition,
and d_i denotes the i-th column of D.
Then the eigenvalues and eigenvectors of the covariance matrix C are computed, yielding a 5x1 eigenvalue vector and a 5x5
eigenvector matrix.
The eigenvalue vector is sorted. Because the final score is one-dimensional, the largest eigenvalue can be taken, and the
eigenvector corresponding to that largest eigenvalue is obtained; this eigenvector is the final weights.
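The weight-generation steps above can be sketched as follows; this is a minimal NumPy illustration in which a random matrix stands in for the real M=5 by N=100 score matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((5, 100))               # 5 evaluation dimensions x 100 samples

A = X.mean(axis=1, keepdims=True)      # per-dimension mean A_m
D = X - A                              # difference of each sample from the mean
C = D @ D.T / X.shape[1]               # 5x5 feature covariance matrix

eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues and eigenvectors of C
weights = eigvecs[:, np.argmax(eigvals)]  # eigenvector of the largest
                                          # eigenvalue -> the final 5x1 weights
```

Since `np.linalg.eigh` returns orthonormal eigenvectors, the resulting weight vector has unit length.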
The following illustrates how the final score is obtained using the weights.
For example, suppose the current data is:
{“service_voice_emotion”:[0.8],
“customer_voice_emotion”:[0.3],
“service_text_feeling”:[0.2],
“customer_text_feeling”:[0.2],
“customer_rating”:[0.8]}
Based on the voice history data, a set of weights exists for customer-service agent A; then, whatever the current data is, each
dimension is multiplied by its weight:
FinalData (1x1)=EigenVector [1x5] x Data [5x1]
Here FinalData (1x1) represents the evaluation result obtained by evaluating the voice content before differentiation;
EigenVector [1x5] is the vector of the five evaluation dimensions; taking the customer-service voice as an example, its entries are respectively the agent emotion recognition
result, the customer emotion recognition result, the agent sentiment analysis result, the customer sentiment analysis result, and the customer's subjective evaluation data;
Data [5x1] represents the vector of weights corresponding to the five evaluation dimensions.
For example, if EigenVector [1x5] is as above, namely [0.8, 0.3, 0.2, 0.2, 0.8], and Data [5x1] is
[0.0481, 1.2456, 0.563, 0.443, 0.235], then the final score FinalData (1x1) is 0.80136.
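This worked example can be reproduced with a simple dot product:

```python
import numpy as np

# The five dimension scores weighted by the PCA-derived weights give the
# final evaluation score, exactly as in the worked example above.
scores = np.array([0.8, 0.3, 0.2, 0.2, 0.8])               # EigenVector [1x5]
weights = np.array([0.0481, 1.2456, 0.563, 0.443, 0.235])  # Data [5x1]

final_score = float(scores @ weights)
print(final_score)  # 0.80136
```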
In the method of this embodiment, voice data is processed by machine, reducing the degree of manual intervention and enabling
automatic evaluation of the data. Compared with manual evaluation of voice content, evaluation through analysis of the voice content is more
objective, and the dependence on manual evaluation is reduced, so that the evaluation data is not easily lost. Based on
the analysis of the voice content and of the speech text corresponding to the voice content, the voice content is
evaluated from multiple dimensions, making the evaluation result more comprehensive.
Furthermore, custom evaluation dimensions, such as the agent's daily performance and service type, can be incorporated into the
foregoing data evaluation process.
Fig. 3 is a schematic diagram of the voice-based data evaluation device of Embodiment 3 of the present invention. The data
evaluation device of this embodiment includes:
a voice content discrimination module 301, configured to perform voiceprint recognition on voice content according to voiceprint information
of the voice content and to differentiate the voice content according to the result of the voiceprint recognition;
an emotion recognition module 302, configured to perform emotion recognition on the voice content obtained after differentiation and to generate an emotion recognition
result;
a sentiment analysis module 303, configured to generate speech text according to the voice content obtained after differentiation, to perform
sentiment analysis on the speech text, and to generate a sentiment analysis result;
a voice content evaluation module 304, configured to evaluate the voice content before differentiation according to the emotion recognition
result and the sentiment analysis result, and to generate an evaluation result.
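As a rough illustration only, the four modules could be wired together as in the following sketch; the class and method names, and the stub scores, are hypothetical placeholders rather than the actual algorithms of this disclosure:

```python
class VoiceContentDiscriminator:                 # module 301
    def discriminate(self, audio):
        # Stub: pretend voiceprint matching split the call by speaker.
        return {"service": audio, "customer": audio}

class EmotionRecognizer:                         # module 302
    def recognize(self, segments):
        return {who: 0.8 for who in segments}    # stub emotion scores

class SentimentAnalyzer:                         # module 303
    def analyze(self, segments):
        return {who: 0.6 for who in segments}    # stub sentiment scores

class VoiceContentEvaluator:                     # module 304
    def evaluate(self, emotion, sentiment):
        vals = list(emotion.values()) + list(sentiment.values())
        return sum(vals) / len(vals)             # stub: simple average

def evaluate_call(audio):
    segments = VoiceContentDiscriminator().discriminate(audio)
    emotion = EmotionRecognizer().recognize(segments)
    sentiment = SentimentAnalyzer().analyze(segments)
    return VoiceContentEvaluator().evaluate(emotion, sentiment)
```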
With the voice-based data evaluation device of this embodiment, voice data is processed by machine, reducing the degree of
manual intervention and enabling automatic evaluation of the data. Compared with manual evaluation of voice content, evaluation through
analysis of the voice content is more objective, and the dependence on manual evaluation is reduced, so that the evaluation
data is not easily lost. Based on the analysis of the voice content and of the speech text corresponding to the voice content, the voice
content is evaluated from multiple dimensions, making the evaluation result more comprehensive.
In one embodiment of the voice-based data evaluation device of the present invention, the voice content discrimination module 301
is configured to perform voiceprint recognition on the voice content according to its voiceprint information and, according to the result of the voiceprint recognition,
to divide the voice content into first voice content and second voice content. The emotion recognition module 302 is configured to perform
emotion recognition on the voice content obtained after differentiation and to generate a corresponding first emotion recognition result and second
emotion recognition result. The sentiment analysis module 303 is configured to generate speech text according to the voice content obtained after
differentiation, to perform sentiment analysis on the speech text, and to generate a corresponding first sentiment analysis result and second
sentiment analysis result. The specific principle is the same as in the foregoing method embodiment and is not repeated here.
Optionally, the voice content discrimination module 301 is configured to extract the voiceprint of the voice content and to match the
extracted voiceprint against a prestored voiceprint; if the match fails, the voice content is determined to be the first voice content,
and if the match succeeds, the voice content is determined to be the second voice content.
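A minimal sketch of this matching rule follows, assuming a cosine-similarity voiceprint comparison; the similarity measure and threshold are illustrative assumptions, not specified by this disclosure:

```python
import numpy as np

def classify_segment(segment_print, stored_print, threshold=0.8):
    # Cosine similarity as an assumed voiceprint-matching measure.
    sim = float(np.dot(segment_print, stored_print) /
                (np.linalg.norm(segment_print) * np.linalg.norm(stored_print)))
    # Match failure -> first voice content; match success -> second.
    return "second" if sim >= threshold else "first"
```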
Fig. 4 is a schematic diagram of the voice-based data evaluation device of Embodiment 4 of the present invention. In a specific
embodiment of the voice-based data evaluation device of the present invention, the device further includes a subjective evaluation
data extraction module 402, configured to extract subjective evaluation data of the voice content before differentiation; the voice content
evaluation module 304 is configured to evaluate the voice content before differentiation according to the emotion recognition result, the
sentiment analysis result, and the subjective evaluation data, and to generate an evaluation result.
In a specific embodiment of the voice-based data evaluation device of the present invention, the device further includes an
evaluation result judgment module 401 and a weight generation module 403. The evaluation result judgment module 401 is configured, before
the voice content discrimination module 301 performs voiceprint recognition on the voice content according to its voiceprint information,
to judge whether the voice content already has a corresponding evaluation result; if an evaluation result exists, the voice content is
ignored; if no evaluation result exists, processing proceeds to the voice content discrimination module 301. The weight generation module 403 is
configured, before the voice content discrimination module 301 performs voiceprint recognition on the voice content, to perform principal
component analysis dimensionality reduction on the emotion recognition results, sentiment analysis results, and subjective evaluation data
corresponding to historical voice content and to generate corresponding weights; these weights may change within a certain period of time during subsequent use.
In addition, the emotion recognition module 302 is configured to perform emotion recognition by analyzing the frequency spectrum of the
voice content obtained after differentiation and to generate an emotion recognition result. The sentiment analysis module 303 is configured to obtain
word vectors of the speech text, to perform sentiment analysis on the word vectors using a convolutional neural network, and to generate a sentiment analysis result.
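As an illustration of the kind of operation involved, the following sketch applies a single 1-D convolution filter over word vectors; a real convolutional neural network would use many trained filters, whereas the weights and word vectors here are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)
sentence = rng.random((7, 50)) - 0.5  # 7 words, 50-dim word vectors (toy data)
kernel = rng.random((3, 50)) - 0.5    # one filter spanning 3 consecutive words

# Slide the filter over consecutive word windows to build a feature map.
feature_map = np.array([
    np.sum(sentence[i:i + 3] * kernel)
    for i in range(sentence.shape[0] - 2)
])

# Max-pool the feature map and squash it to a (0, 1) sentiment score.
sentiment_score = 1.0 / (1.0 + np.exp(-feature_map.max()))
```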
With the voice-based data evaluation device of this embodiment, voice data is processed by machine, reducing the degree of manual
intervention and enabling automatic evaluation of the data. Compared with manual evaluation of voice content, evaluation through analysis of the voice
content is more objective, and the dependence on manual evaluation is reduced, so that the evaluation data
is not easily lost. Based on the analysis of the voice content and of the speech text corresponding to the voice content, the voice content
is evaluated from multiple dimensions, making the evaluation result more comprehensive.
Of course, the voice-based data evaluation method of the embodiments of the present invention is not limited to evaluating the service
quality of a customer-service agent through the content of a voice call between the agent and a client. The method
may also incorporate other dimensions, such as the agent's service type, to comprehensively assess the agent's service quality. In addition,
the voice-based data evaluation method of the embodiments of the present invention can also be used to assess the voice content of a single role to analyze
that role's mood; for example, the emotional state of a client can be judged by analyzing only the client's voice content, so that a
targeted strategy can be proposed in advance.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the embodiments of the present application, not to limit
them. Although the application has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that
the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently
replaced; such modifications or replacements do not depart the essence of the corresponding technical solutions from the spirit
and scope of the technical solutions of the embodiments of the present application.
Those skilled in the art will understand that the embodiments of the present invention may be provided as a method, an apparatus (device), or a
computer program product. Accordingly, the embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an
embodiment combining software and hardware aspects. Moreover, the embodiments of the present invention may take the form of a computer program product
implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical memory,
etc.) containing computer-usable program code.
The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of methods, apparatuses (devices), and computer program
products according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block
diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These
computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another
programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable
data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more
blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable
data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of
manufacture including an instruction apparatus that realizes the functions specified in one or more flows of the flowcharts and/or one or
more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series
of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, whereby the
instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows
of the flowcharts and/or one or more blocks of the block diagrams.
Claims (16)
- 1. A voice-based data evaluation method, characterized by comprising: performing voiceprint recognition on voice content according to voiceprint information of the voice content, and differentiating the voice content according to the result of the voiceprint recognition; performing emotion recognition on the voice content obtained after differentiation, and generating an emotion recognition result; generating speech text according to the voice content obtained after differentiation, performing sentiment analysis on the speech text, and generating a sentiment analysis result; and evaluating the voice content before differentiation according to the emotion recognition result and the sentiment analysis result, and generating an evaluation result.
- 2. The method according to claim 1, characterized in that the method further comprises: extracting prestored subjective evaluation data of the voice content; and said evaluating the voice content before differentiation according to the emotion recognition result and the sentiment analysis result and generating an evaluation result comprises: evaluating the voice content before differentiation according to the emotion recognition result, the sentiment analysis result, and the subjective evaluation data, and generating an evaluation result.
- 3. The method according to claim 1, characterized in that said performing voiceprint recognition on voice content according to voiceprint information of the voice content and differentiating the voice content according to the result of the voiceprint recognition comprises: performing voiceprint recognition on the voice content according to the voiceprint information of the voice content, and dividing the voice content into first voice content and second voice content according to the result of the voiceprint recognition; said performing emotion recognition on the voice content obtained after differentiation and generating an emotion recognition result comprises: performing emotion recognition on the first voice content and the second voice content obtained after differentiation, and generating a corresponding first emotion recognition result and second emotion recognition result; and said generating speech text according to the voice content obtained after differentiation, performing sentiment analysis on the speech text, and generating a sentiment analysis result comprises: generating corresponding speech text according to the first voice content and the second voice content obtained after differentiation, performing sentiment analysis on the corresponding speech text, and generating a corresponding first sentiment analysis result and second sentiment analysis result.
- 4. The method according to claim 3, characterized in that said performing voiceprint recognition on the voice content according to the voiceprint information of the voice content and dividing the voice content into first voice content and second voice content according to the result of the voiceprint recognition comprises: extracting the voiceprint of the voice content, and matching the extracted voiceprint against a prestored voiceprint; if the match fails, determining the voice content to be the first voice content; and if the match succeeds, determining the voice content to be the second voice content.
- 5. The method according to claim 1, characterized in that before said performing voiceprint recognition on voice content according to voiceprint information of the voice content, the method further comprises: judging whether the voice content already has a corresponding evaluation result; if an evaluation result exists, ignoring the voice content; and if no evaluation result exists, performing said voiceprint recognition on the voice content according to the voiceprint information of the voice content.
- 6. The method according to claim 2, characterized in that before said performing voiceprint recognition on voice content according to voiceprint information of the voice content, the method further comprises: performing principal component analysis dimensionality reduction on emotion recognition results, sentiment analysis results, and subjective evaluation data corresponding to historical voice content, and generating corresponding weights; and said evaluating the voice content before differentiation according to the emotion recognition result, the sentiment analysis result, and the subjective evaluation data and generating an evaluation result comprises: performing a weighted summation of the corresponding emotion recognition result, sentiment analysis result, and subjective evaluation data according to the weight corresponding to the emotion recognition result, the weight corresponding to the sentiment analysis result, and the weight corresponding to the subjective evaluation data, and taking the result of the weighted summation as the evaluation result.
- 7. The method according to claim 1, characterized in that said performing emotion recognition on the voice content obtained after differentiation and generating an emotion recognition result comprises: performing emotion recognition by analyzing the frequency spectrum of the voice content obtained after differentiation, and generating an emotion recognition result.
- 8. The method according to claim 1, characterized in that said performing sentiment analysis on the speech text and generating a sentiment analysis result comprises: obtaining word vectors of the speech text, performing sentiment analysis on the word vectors using a convolutional neural network, and generating a sentiment analysis result.
- 9. A voice-based data evaluation device, characterized by comprising: a voice content discrimination module, configured to perform voiceprint recognition on voice content according to voiceprint information of the voice content and to differentiate the voice content according to the result of the voiceprint recognition; an emotion recognition module, configured to perform emotion recognition on the voice content obtained after differentiation and to generate an emotion recognition result; a sentiment analysis module, configured to generate speech text according to the voice content obtained after differentiation, to perform sentiment analysis on the speech text, and to generate a sentiment analysis result; and a voice content evaluation module, configured to evaluate the voice content before differentiation according to the emotion recognition result and the sentiment analysis result and to generate an evaluation result.
- 10. The device according to claim 9, characterized by further comprising: a subjective evaluation data extraction module, configured to extract prestored subjective evaluation data of the voice content before differentiation; wherein the voice content evaluation module is configured to evaluate the voice content before differentiation according to the emotion recognition result, the sentiment analysis result, and the subjective evaluation data, and to generate an evaluation result.
- 11. The device according to claim 9, characterized in that the voice content discrimination module is configured to perform voiceprint recognition on the voice content according to the voiceprint information of the voice content and to divide the voice content into first voice content and second voice content according to the result of the voiceprint recognition; the emotion recognition module is configured to perform emotion recognition on the voice content obtained after differentiation and to generate a corresponding first emotion recognition result and second emotion recognition result; and the sentiment analysis module is configured to generate speech text according to the voice content obtained after differentiation, to perform sentiment analysis on the speech text, and to generate a corresponding first sentiment analysis result and second sentiment analysis result.
- 12. The device according to claim 11, characterized in that the voice content discrimination module is configured to extract the voiceprint of the voice content and to match the extracted voiceprint against a prestored voiceprint; if the match fails, the voice content is determined to be the first voice content, and if the match succeeds, the voice content is determined to be the second voice content.
- 13. The device according to claim 9, characterized by further comprising an evaluation result judgment module, configured, before voiceprint recognition is performed on the voice content according to the voiceprint information of the voice content, to judge whether the voice content already has a corresponding evaluation result; if an evaluation result exists, the voice content is ignored; and if no evaluation result exists, said voiceprint recognition is performed on the voice content according to the voiceprint information of the voice content.
- 14. The device according to claim 10, characterized by further comprising a weight generation module, configured, before the voice content discrimination module performs voiceprint recognition on the voice content according to the voiceprint information of the voice content, to perform principal component analysis dimensionality reduction on emotion recognition results, sentiment analysis results, and subjective evaluation data corresponding to historical voice content and to generate corresponding weights; wherein the voice content evaluation module is configured to perform a weighted summation of the corresponding emotion recognition result, sentiment analysis result, and subjective evaluation data according to the weight corresponding to the emotion recognition result, the weight corresponding to the sentiment analysis result, and the weight corresponding to the subjective evaluation data, and to take the result of the weighted summation as the evaluation result.
- 15. The device according to claim 9, characterized in that the emotion recognition module is configured to perform emotion recognition by analyzing the frequency spectrum of the voice content obtained after differentiation and to generate an emotion recognition result.
- 16. The device according to claim 9, characterized in that the sentiment analysis module is configured to obtain word vectors of the speech text, to perform sentiment analysis on the word vectors using a convolutional neural network, and to generate a sentiment analysis result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710703860.5A CN107452385A (en) | 2017-08-16 | 2017-08-16 | A kind of voice-based data evaluation method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710703860.5A CN107452385A (en) | 2017-08-16 | 2017-08-16 | A kind of voice-based data evaluation method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107452385A true CN107452385A (en) | 2017-12-08 |
Family
ID=60492595
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710703860.5A Pending CN107452385A (en) | 2017-08-16 | 2017-08-16 | A kind of voice-based data evaluation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107452385A (en) |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108186033A (en) * | 2018-01-08 | 2018-06-22 | 杭州草莽科技有限公司 | A kind of child's mood monitoring method and its system based on artificial intelligence |
CN108416036A (en) * | 2018-03-13 | 2018-08-17 | 杭州声讯网络科技有限公司 | It is a kind of to apply the machine learning method in phone scene |
CN108764753A (en) * | 2018-06-06 | 2018-11-06 | 平安科技(深圳)有限公司 | Test method, apparatus, computer equipment and the storage medium of business personnel's ability |
CN108848416A (en) * | 2018-06-21 | 2018-11-20 | 北京密境和风科技有限公司 | The evaluation method and device of audio-video frequency content |
CN108962281A (en) * | 2018-08-15 | 2018-12-07 | 三星电子(中国)研发中心 | A kind of evaluation of language expression and householder method and device |
CN109618065A (en) * | 2018-12-28 | 2019-04-12 | 合肥凯捷技术有限公司 | A kind of voice quality inspection rating system |
CN109726655A (en) * | 2018-12-19 | 2019-05-07 | 平安普惠企业管理有限公司 | Customer service evaluation method, device, medium and equipment based on Emotion identification |
CN110062117A (en) * | 2019-04-08 | 2019-07-26 | 商客通尚景科技(上海)股份有限公司 | A kind of sonic detection and method for early warning |
CN110349312A (en) * | 2019-07-09 | 2019-10-18 | 江苏万贝科技有限公司 | A kind of intelligent peephole voice reminder identifying system and its method based on household |
CN110364185A (en) * | 2019-07-05 | 2019-10-22 | 平安科技(深圳)有限公司 | A kind of Emotion identification method, terminal device and medium based on voice data |
CN110689890A (en) * | 2019-10-16 | 2020-01-14 | 声耕智能科技(西安)研究院有限公司 | Voice interaction service processing system |
CN110888997A (en) * | 2018-09-10 | 2020-03-17 | 北京京东尚科信息技术有限公司 | Content evaluation method and system and electronic equipment |
CN111199158A (en) * | 2019-12-30 | 2020-05-26 | 沈阳民航东北凯亚有限公司 | Method and device for scoring civil aviation customer service |
CN111341349A (en) * | 2018-12-03 | 2020-06-26 | 本田技研工业株式会社 | Emotion estimation device, emotion estimation method, and storage medium |
WO2020187300A1 (en) * | 2019-03-21 | 2020-09-24 | 杭州海康威视数字技术股份有限公司 | Monitoring system, method and apparatus, server and storage medium |
CN111767736A (en) * | 2019-03-27 | 2020-10-13 | 阿里巴巴集团控股有限公司 | Event processing and data processing method, device, system and storage medium |
CN112052994A (en) * | 2020-08-28 | 2020-12-08 | 中信银行股份有限公司 | Customer complaint upgrade prediction method and device and electronic equipment |
CN112837693A (en) * | 2021-01-29 | 2021-05-25 | 上海钧正网络科技有限公司 | User experience tendency identification method, device, equipment and readable storage medium |
CN112992187A (en) * | 2021-02-26 | 2021-06-18 | 平安科技(深圳)有限公司 | Context-based voice emotion detection method, device, equipment and storage medium |
WO2021164147A1 (en) * | 2020-02-19 | 2021-08-26 | 平安科技(深圳)有限公司 | Artificial intelligence-based service evaluation method and apparatus, device and storage medium |
WO2021232594A1 (en) * | 2020-05-22 | 2021-11-25 | 深圳壹账通智能科技有限公司 | Speech emotion recognition method and apparatus, electronic device, and storage medium |
CN114299921A (en) * | 2021-12-07 | 2022-04-08 | 浙江大学 | Voiceprint security scoring method and system for voice command |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102625005A (en) * | 2012-03-05 | 2012-08-01 | 广东天波信息技术股份有限公司 | Call center system with function of real-timely monitoring service quality and implement method of call center system |
CN103811009A (en) * | 2014-03-13 | 2014-05-21 | 华东理工大学 | Smart phone customer service system based on speech analysis |
US20150172465A1 (en) * | 2013-12-18 | 2015-06-18 | Telefonica Digital España, S.L.U. | Method and system for extracting out characteristics of a communication between at least one client and at least one support agent and computer program thereof |
CN105895077A (en) * | 2015-11-15 | 2016-08-24 | 乐视移动智能信息技术(北京)有限公司 | Recording editing method and recording device |
CN106228286A (en) * | 2016-07-15 | 2016-12-14 | 西安美林数据技术股份有限公司 | A kind of data analysing method for the assessment of artificial customer service work quality |
- 2017-08-16: Application CN201710703860.5A filed in China; publication CN107452385A, status: active, Pending
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108186033A (en) * | 2018-01-08 | 2018-06-22 | 杭州草莽科技有限公司 | A kind of child's mood monitoring method and its system based on artificial intelligence |
CN108416036A (en) * | 2018-03-13 | 2018-08-17 | 杭州声讯网络科技有限公司 | It is a kind of to apply the machine learning method in phone scene |
CN108764753A (en) * | 2018-06-06 | 2018-11-06 | 平安科技(深圳)有限公司 | Test method, apparatus, computer equipment and the storage medium of business personnel's ability |
CN108848416A (en) * | 2018-06-21 | 2018-11-20 | 北京密境和风科技有限公司 | The evaluation method and device of audio-video frequency content |
CN108962281A (en) * | 2018-08-15 | 2018-12-07 | 三星电子(中国)研发中心 | A kind of evaluation of language expression and householder method and device |
CN108962281B (en) * | 2018-08-15 | 2021-05-07 | 三星电子(中国)研发中心 | Language expression evaluation and auxiliary method and device |
CN110888997A (en) * | 2018-09-10 | 2020-03-17 | 北京京东尚科信息技术有限公司 | Content evaluation method and system and electronic equipment |
CN111341349B (en) * | 2018-12-03 | 2023-07-25 | 本田技研工业株式会社 | Emotion estimation device, emotion estimation method, and storage medium |
CN111341349A (en) * | 2018-12-03 | 2020-06-26 | 本田技研工业株式会社 | Emotion estimation device, emotion estimation method, and storage medium |
CN109726655A (en) * | 2018-12-19 | 2019-05-07 | 平安普惠企业管理有限公司 | Customer service evaluation method, device, medium and equipment based on emotion recognition |
CN109618065A (en) * | 2018-12-28 | 2019-04-12 | 合肥凯捷技术有限公司 | Voice quality-inspection rating system |
WO2020187300A1 (en) * | 2019-03-21 | 2020-09-24 | 杭州海康威视数字技术股份有限公司 | Monitoring system, method and apparatus, server and storage medium |
CN111767736A (en) * | 2019-03-27 | 2020-10-13 | 阿里巴巴集团控股有限公司 | Event processing and data processing method, device, system and storage medium |
CN110062117A (en) * | 2019-04-08 | 2019-07-26 | 商客通尚景科技(上海)股份有限公司 | Sound detection and early-warning method |
CN110364185A (en) * | 2019-07-05 | 2019-10-22 | 平安科技(深圳)有限公司 | Emotion recognition method based on voice data, terminal equipment and medium |
CN110364185B (en) * | 2019-07-05 | 2023-09-29 | 平安科技(深圳)有限公司 | Emotion recognition method based on voice data, terminal equipment and medium |
CN110349312B (en) * | 2019-07-09 | 2021-09-17 | 江苏万贝科技有限公司 | Household-based intelligent cat eye voice reminding and recognition system and method |
CN110349312A (en) * | 2019-07-09 | 2019-10-18 | 江苏万贝科技有限公司 | Household-based intelligent cat eye voice reminding and recognition system and method |
CN110689890A (en) * | 2019-10-16 | 2020-01-14 | 声耕智能科技(西安)研究院有限公司 | Voice interaction service processing system |
CN111199158A (en) * | 2019-12-30 | 2020-05-26 | 沈阳民航东北凯亚有限公司 | Method and device for scoring civil aviation customer service |
WO2021164147A1 (en) * | 2020-02-19 | 2021-08-26 | 平安科技(深圳)有限公司 | Artificial intelligence-based service evaluation method and apparatus, device and storage medium |
WO2021232594A1 (en) * | 2020-05-22 | 2021-11-25 | 深圳壹账通智能科技有限公司 | Speech emotion recognition method and apparatus, electronic device, and storage medium |
CN112052994A (en) * | 2020-08-28 | 2020-12-08 | 中信银行股份有限公司 | Customer complaint escalation prediction method and device, and electronic equipment |
CN112837693A (en) * | 2021-01-29 | 2021-05-25 | 上海钧正网络科技有限公司 | User experience tendency identification method, device, equipment and readable storage medium |
CN112992187A (en) * | 2021-02-26 | 2021-06-18 | 平安科技(深圳)有限公司 | Context-based voice emotion detection method, device, equipment and storage medium |
CN114299921A (en) * | 2021-12-07 | 2022-04-08 | 浙江大学 | Voiceprint security scoring method and system for voice command |
CN114299921B (en) * | 2021-12-07 | 2022-11-18 | 浙江大学 | Voiceprint security scoring method and system for voice command |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107452385A (en) | A kind of voice-based data evaluation method and device | |
Sadjadi et al. | The 2019 NIST Audio-Visual Speaker Recognition Evaluation. | |
CN110175229B (en) | Method and system for on-line training based on natural language | |
CN107818798A (en) | Customer service quality evaluation method, device, equipment and storage medium | |
CN109784414A (en) | Customer anger detection method, device and electronic equipment for telephone customer service | |
Sadjadi et al. | The 2019 NIST Speaker Recognition Evaluation CTS Challenge. | |
CN104750674B (en) | Human-machine dialogue satisfaction evaluation method and system | |
CN112346567A (en) | AI-based virtual interaction model generation method and device, and computer equipment | |
CN113468296B (en) | Self-iterating intelligent customer-service quality-inspection system and method with configurable business logic | |
CN107547527A (en) | Voice quality-inspection financial security control system and control method | |
WO2021042842A1 (en) | Interview method and apparatus based on ai interview system, and computer device | |
CN109857846A (en) | Method and device for matching user questions with knowledge points | |
CN107886231A (en) | Service quality evaluation method and system for customer service | |
CN106776832A (en) | Processing method, apparatus and system for question-and-answer interaction logs | |
CN107767881A (en) | Method and device for obtaining the satisfaction level of voice information | |
CN108632137A (en) | Answer model training method, intelligent chat method, device, equipment and medium | |
CN107766560B (en) | Method and system for evaluating customer service flow | |
Shah et al. | First workshop on speech processing for code-switching in multilingual communities: Shared task on code-switched spoken language identification | |
Sawhney et al. | An empirical investigation of bias in the multimodal analysis of financial earnings calls | |
CN107578183A (en) | Resource management method and device based on capability evaluation | |
Paprzycki et al. | Data mining approach for analyzing call center performance | |
CN109947651A (en) | Artificial intelligence engine optimization method and device | |
JP2020160551A (en) | Analysis support device for personnel matters, analysis support method, program, and recording medium | |
CN116151840B (en) | User service data intelligent management system and method based on big data | |
CN107886233A (en) | Service quality evaluation method and system for customer service |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20171208 |