CN107452405A - Method and device for performing data evaluation according to voice content - Google Patents

Method and device for performing data evaluation according to voice content

Info

Publication number
CN107452405A
Authority
CN
China
Prior art keywords
voice content
carried out
emotion identification
voice
identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710703850.1A
Other languages
Chinese (zh)
Other versions
CN107452405B (en)
Inventor
Xue Gang (薛刚)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Easy Thinking Learning Technology Co Ltd
Beijing Yizhen Xuesi Education Technology Co Ltd
Original Assignee
Beijing Easy Thinking Learning Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Easy Thinking Learning Technology Co Ltd filed Critical Beijing Easy Thinking Learning Technology Co Ltd
Priority to CN201710703850.1A priority Critical patent/CN107452405B/en
Publication of CN107452405A publication Critical patent/CN107452405A/en
Application granted granted Critical
Publication of CN107452405B publication Critical patent/CN107452405B/en
Legal status: Active


Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L 25/63 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/27 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G10L 25/30 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M 3/00 - Automatic or semi-automatic exchanges
    • H04M 3/42 - Systems providing special services or facilities to subscribers
    • H04M 3/50 - Centralised arrangements for answering calls; Centralised arrangements for recording messages for absent or busy subscribers; Centralised arrangements for recording messages
    • H04M 3/51 - Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing
    • H04M 3/5175 - Call or contact centers supervision arrangements


Abstract

An embodiment of the present invention provides a method and device for performing data evaluation according to voice content, belonging to the technical field of data processing. The method for performing data evaluation according to voice content includes: obtaining voice content and analyzing and processing the voice content; performing emotion recognition on the analyzed and processed voice content to generate an emotion recognition result; and generating a corresponding data evaluation result according to the emotion recognition result. Through the embodiment of the present invention, customer-service quality is evaluated from the perspective of emotion, so that the agent's actual service quality is reflected objectively and truthfully.

Description

Method and device for performing data evaluation according to voice content
Technical field
The embodiments of the present invention relate to the technical field of data processing, and in particular to a method and device for performing data evaluation according to voice content.
Background technology
An enterprise typically maintains a large telephone customer-service team, mainly responsible for collecting customer feedback, answering customer inquiries, and responding to customer complaints. The service quality of the customer-service team directly affects the enterprise's reputation among its customers, and the customer feedback the team records is also important for product improvement and for formulating enterprise development plans. The human customer-service team is therefore an indispensable part of an enterprise at the present stage, and the assessment of the team's service quality becomes an important problem, bearing on team performance appraisal, staffing changes, and so on.
In the prior art, however, the evaluation of an agent's service quality is limited to customer feedback, and no overall evaluation is made from the agent's own perspective. In particular, an agent's mood during a service session can to some extent affect the customer's mood, and in turn the service quality of the whole customer-service team. The prior-art evaluation methods therefore mainly have the following defect: since the evaluation of service quality is limited to customer feedback, it cannot objectively and truthfully reflect the agent's actual service quality.
Summary of the invention
In view of this, one of the technical problems solved by the embodiments of the present invention is to provide a method and device for performing data evaluation according to voice content, so as to overcome the above defects in the prior art and achieve an objective evaluation of customer-service quality.
Based on the above purpose, an embodiment of the present invention provides a method for performing data evaluation according to voice content, including:
obtaining voice content, and analyzing and processing the voice content;
performing emotion recognition on the analyzed and processed voice content to generate an emotion recognition result;
generating a corresponding data evaluation result according to the emotion recognition result.
Based on the above purpose, an embodiment of the present invention also provides a device for performing data evaluation according to voice content, including:
a voice content acquisition module, configured to obtain voice content;
a voice content analysis and processing module, configured to analyze and process the voice content;
an emotion recognition module, configured to perform emotion recognition on the analyzed and processed voice content and generate an emotion recognition result;
a data evaluation module, configured to generate a corresponding data evaluation result according to the emotion recognition result.
It can be seen from the above technical scheme that the method for performing data evaluation according to voice content provided by the embodiment of the present invention performs emotion recognition on voice content and evaluates the voice content according to the recognition result. Compared with manual evaluation, the scheme provided by the embodiment makes the data evaluation of voice content more objective. Taking the voice content of an agent and a customer as an example, the evaluation of the agent's service quality is no longer limited to customer feedback: the service quality is evaluated from the perspective of emotion, so that the agent's actual service quality is reflected objectively and truthfully. By processing the voice content and performing emotion recognition, the emotional changes of the agent or the customer during the conversation can be determined, so that an early warning can be issued and the agent's mood regulated.
Brief description of the drawings
To explain the technical schemes in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those of ordinary skill in the art can also obtain other drawings from these drawings.
Fig. 1 is a flowchart of the method for performing data evaluation according to voice content of Embodiment 1 of the present invention;
Fig. 2 is a flowchart of the method for performing data evaluation according to voice content of Embodiment 2 of the present invention;
Fig. 3 is a structural diagram of the device for performing data evaluation according to voice content of Embodiment 3 of the present invention.
Detailed description of the embodiments
To help those skilled in the art better understand the technical schemes in the embodiments of the present invention, the technical schemes in the embodiments are described below clearly and completely with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention shall fall within the protection scope of the embodiments of the present invention.
Specific implementations of the embodiments of the present invention are further described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of the method for performing data evaluation according to voice content of Embodiment 1 of the present invention. The method for performing data evaluation according to voice content in this embodiment includes the following steps:
S101: Obtain voice content, and analyze and process the voice content.
In this embodiment, the voice content may be any content containing speech, stored in any appropriate format, for example a file in any format containing a voice call. When the method of this embodiment is applied to evaluating the service quality of customer service, the voice content may be a real-time voice recording of a conversation between a customer and an agent, or historical voice of their communication, obtained by recording the conversation between the customer and the agent. However, the voice content in the embodiment of the present invention is not limited to this: voice in other forms, such as the voice of a single person, or a dialogue involving multiple (two or more) people, is equally applicable to the scheme provided by the embodiment of the present invention.
After the voice content is obtained, it is analyzed and processed; for example, the voice content is subjected to processing including but not limited to noise reduction, cutting, and role identification, for use in the subsequent data evaluation of the voice content. This processing makes the subsequent emotion recognition more convenient; for example, noise reduction filters out a large amount of noise and improves the efficiency of processing and evaluating the voice content.
For example, the voice content may be denoised with a frequency-domain Wiener filtering method, which removes most of the noise and improves the accuracy of role identification and voice emotion recognition. Of course, where the noise has little influence on role identification, role identification may be performed first and noise reduction applied afterwards to the role-identified voice content; alternatively, noise reduction may be omitted altogether.
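As a concrete illustration of the frequency-domain Wiener filtering idea, the sketch below subtracts an estimated noise power spectrum from the noisy spectrum and rescales each frequency bin. The whole-utterance processing and the way the noise spectrum is estimated are simplifying assumptions, not details taken from this patent; real systems typically work frame by frame.

```python
import numpy as np

def wiener_denoise(noisy, noise_est, eps=1e-12):
    """Frequency-domain Wiener filtering: attenuate the bins dominated by
    noise. `noise_est` is a noise-only stretch used to estimate the noise
    power spectrum (whole-utterance sketch under stated assumptions)."""
    n = len(noisy)
    spec = np.fft.rfft(noisy)
    noise_psd = np.abs(np.fft.rfft(noise_est, n)) ** 2
    noisy_psd = np.abs(spec) ** 2
    # Wiener gain: estimated clean power over noisy power, clipped at zero
    gain = np.maximum(noisy_psd - noise_psd, 0.0) / (noisy_psd + eps)
    return np.fft.irfft(gain * spec, n)

# toy check: a 440 Hz tone buried in white noise
rng = np.random.default_rng(0)
t = np.arange(8000) / 8000.0
clean = np.sin(2 * np.pi * 440.0 * t)
noise = 0.5 * rng.standard_normal(t.size)
denoised = wiener_denoise(clean + noise, noise)
```

On this toy signal the residual error of `denoised` is far below the original noise power, which is the property the patent relies on before role identification and emotion recognition.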
Taking noise reduction first as an example: after noise reduction is performed on the voice content, the denoised voice content is cut in the time domain, i.e. on the time axis, dividing the voice content into multiple natural sentences in chronological order. Again taking the dialogue voice of an agent and a customer as an example: since the two parties in a conversation usually speak in turn, a time-domain cutting strategy can be used, and the cutting yields a sequence of individual utterances for the later processing. When the noise is small, the noise-reduction step may be skipped and the voice content cut in the time domain directly.
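The time-domain cutting step can be sketched as an energy-based rule that closes a segment whenever a long enough silent gap appears; the energy threshold and gap length below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def split_by_silence(wave, sr, frame_ms=30, thresh=0.02, min_gap_frames=10):
    """Cut a waveform on the time axis at silent gaps, returning (start, end)
    sample indices for each natural sentence."""
    hop = int(sr * frame_ms / 1000)
    energy = [np.sqrt(np.mean(wave[i:i + hop] ** 2))
              for i in range(0, len(wave) - hop + 1, hop)]
    segments, start, silent_run = [], None, 0
    for idx, e in enumerate(energy):
        if e > thresh:                        # voiced frame
            if start is None:
                start = idx
            silent_run = 0
        elif start is not None:               # silence inside an utterance
            silent_run += 1
            if silent_run >= min_gap_frames:  # gap long enough: close segment
                segments.append((start * hop, (idx - silent_run + 1) * hop))
                start, silent_run = None, 0
    if start is not None:                     # trailing utterance
        segments.append((start * hop, len(wave)))
    return segments

# two utterances separated by half a second of silence
sr = 8000
tone = 0.5 * np.sin(2 * np.pi * 300 * np.arange(sr // 2) / sr)
gap = np.zeros(sr // 2)
pieces = split_by_silence(np.concatenate([tone, gap, tone]), sr)
# pieces holds two (start, end) pairs in chronological order
```

Because the pairs come out in time order, they already carry the chronological relationship that the later mood-sequence step depends on.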
S102: Perform emotion recognition on the analyzed and processed voice content, and generate an emotion recognition result.
After the voice content has been analyzed and processed, emotion recognition is further performed to identify the mood of the customer or agent corresponding to the voice content and generate an emotion recognition result. This result can, on the one hand, serve as a reference for evaluating customer-service quality; on the other hand, when the agent's mood is about to get out of control, an early warning can be issued and the agent's mood regulated by a prompt or the like, or the call with the customer can be interrupted, so as to avoid leaving a poor impression on the customer.
In one feasible mode, emotion recognition is performed on the analyzed and processed voice content according to a deep neural network model, generating a mood sequence corresponding to the voice content. The deep neural network model may be an already trained network model for emotion recognition, or it may be trained in advance for use in emotion recognition.
Optionally, before emotion recognition is performed on the analyzed and processed voice content according to the deep neural network model, the model is trained as follows: obtaining voice content samples for training, and annotating the samples with moods; and training the deep neural network model with the annotated voice content samples to obtain a deep neural network model for emotion recognition.
Still optionally, the voice content samples may be optimized to obtain a better training result. For example, the annotated voice content samples are preprocessed, the preprocessing including Fourier transform processing and noise reduction; the deep neural network model is then trained with the annotated and preprocessed samples to obtain the deep neural network model for emotion recognition.
Different people's speech can have entirely different characteristics in the frequency domain, and the different moods of the same person are also reflected entirely differently in the spectrum. Based on this, the spectra of different moods of different people are manually annotated, and a model is trained with a deep neural network method (e.g. a recurrent neural network, RNN). The machine can then recognize the mood of every sentence and form a mood sequence for the dialogue, from which the customer's emotional change over the whole dialogue can be assessed accurately.
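The Fourier-transform preprocessing mentioned above might look like the following sketch, which frames a waveform and reduces it to log-power spectra of the kind a mood-annotated sample could be represented by before training; the frame and hop sizes are assumptions for illustration.

```python
import numpy as np

def log_spectrogram(wave, frame=256, hop=128):
    """Frame the waveform, apply a Hann window and an FFT, and return
    log-power spectra (one row per frame, one column per frequency bin)."""
    window = np.hanning(frame)
    frames = [wave[i:i + frame] * window
              for i in range(0, len(wave) - frame + 1, hop)]
    # small constant keeps the log finite on silent frames
    return np.log(np.abs(np.fft.rfft(frames, axis=1)) ** 2 + 1e-10)

features = log_spectrogram(np.sin(np.arange(2048) / 5.0))
```

A matrix like `features`, paired with a manually annotated mood label, is the sort of (input, target) example a spectrum-based emotion network could be trained on.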
Based on this, in one feasible scheme, when emotion recognition is performed on the analyzed and processed voice content according to the deep neural network model to generate the mood sequence corresponding to the voice content, multiple voice segments with a chronological relationship are obtained from the cutting of the voice content; the segments are input into the deep neural network model in chronological order for emotion recognition, and a corresponding recognition result is obtained for each; the multiple recognition results are then assembled into the mood sequence in chronological order. Optionally, each segment may also correspond to a different role (such as the agent or the customer).
For example, voice content A is cut in the time domain into (A1, A2, A3, A4, A5), with A1 before A2, A2 before A3, A3 before A4, and A4 before A5. The segments are input into the deep neural network model in the order (A1, A2, A3, A4, A5), yielding S1 for A1, S2 for A2, S3 for A3, S4 for A4, and S5 for A5, and the results S1 to S5 are organized into the mood sequence (S1, S2, S3, S4, S5) in the same order. The mood sequence is thus a string of characters, each character corresponding one-to-one with a mood.
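The segment-by-segment mapping from (A1, ..., A5) to (S1, ..., S5) can be sketched as below. The classifier here is a trivial stand-in for the trained deep neural network model (an assumption; any per-segment classifier fits), so only the ordering logic mirrors the description.

```python
def build_mood_sequence(segments, classify):
    """Feed the time-ordered segments through an emotion classifier and
    keep the results in the same chronological order."""
    return [classify(segment) for segment in segments]

# stand-in classifier: pretend peak amplitude alone decides the label
def toy_classify(segment):
    return "angry" if max(segment) > 0.8 else "calm"

mood_sequence = build_mood_sequence(
    [[0.1, 0.2], [0.3, 0.1], [0.9, 0.7]], toy_classify)
# mood_sequence == ["calm", "calm", "angry"]
```

Swapping `toy_classify` for a real model call leaves the sequence-building logic unchanged, which is the point of keeping recognition per segment.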
S103: Generate a corresponding data evaluation result according to the emotion recognition result.
For example, after the mood sequences of the agent and the customer are obtained, data evaluation can be carried out according to the mood sequences: on the one hand, the quality of the current agent's service can be judged; on the other hand, the current service quality can be scored according to the mood sequence.
In one feasible mode, the corresponding data evaluation result is generated according to the proportions of the different moods in the emotion recognition result; in another mode, the corresponding data evaluation result is generated according to the trend of mood change in the emotion recognition result.
The method for performing data evaluation according to voice content provided by this embodiment performs emotion recognition on voice content and evaluates the voice content according to the recognition result. Compared with manual evaluation, the scheme provided by the embodiment of the present invention makes the data evaluation of voice content more objective. Taking the voice content of an agent and a customer as an example, the evaluation of the agent's service quality is no longer limited to customer feedback: the service quality is evaluated from the perspective of emotion, so that the agent's actual service quality is reflected objectively and truthfully. By processing the voice content and performing emotion recognition, the emotional changes of the agent or the customer during the conversation can be determined, so that an early warning can be issued and the agent's mood regulated.
As shown in Fig. 2 be the embodiment of the present invention two according to voice content carry out data evaluation method flow diagram, this reality The method for applying example first carries out cutting to the voice content after noise reduction process by time domain, then carries out angle to the voice content after cutting Color identifies, and carries out identity to the voice content after role's identification, comprises the following steps:
S201: Obtain voice content.
When by the method for the present embodiment be applied to evaluation customer service service quality in when, by constantly upload obtain client and The voice call content of customer service, come obtain it is described take voice content, the voice content can also be transferred from voice content storehouse.
S202: Perform noise reduction on the voice content.
For the specific noise-reduction method, refer to the embodiment shown in Fig. 1 above; it is not repeated here.
S203: Cut the denoised voice content in the time domain.
For the specific cutting method, refer to the embodiment shown in Fig. 1 above; it is not repeated here.
S204: Perform role identification on the cut voice content, and label the role-identified voice content with identities.
After the voice content has been cut, role identification can further be performed on the cut voice content. That is, noise reduction is performed first, the denoised voice content is cut in the time domain, role identification is then performed on the cut voice content, and identity labels are attached to the role-identified voice content. For example, voiceprint recognition is applied to the cut voice content to identify the role of each segment, e.g. to distinguish whether a segment is the agent's voice content or the customer's. In one feasible mode, role identification is first performed on the cut voice content according to its voiceprint, and identity numbers are then assigned to the role-identified voice content; the identity numbers make it convenient to process the voice content differently for each role later on.
In another way of analyzing and processing the voice content, noise reduction may be performed first, role identification then performed on the denoised voice content with identity labels attached, and the identity-labelled voice content finally cut in the time domain. As mentioned above, if the influence of noise is small, role identification and identity labelling may also be performed first, and the identity-labelled voice content then cut in the time domain.
It should be noted that in the above modes noise reduction is performed before the other processing, such as cutting, but those skilled in the art will understand that in practice this order is not required: noise reduction can be performed at any appropriate point, e.g. the cut segments can be denoised after the voice content has been cut.
During role identification, the agent's voiceprint can be collected and stored in advance. When role identification is performed on the current voice content, the voiceprint of the current voice content is extracted and matched against the pre-stored agent voiceprint: if the match succeeds, the segment is the agent's voice content; if it fails, it is the customer's. Role identification thus distinguishes which parts of the voice content were spoken by the customer and which by the agent, which is convenient for the further data evaluation.
Role identification is not limited to voiceprints, however; other methods, such as machine learning, are equally applicable. Role identification and identity labelling make it easier to distinguish the roles in the voice content and improve the efficiency of the subsequent emotion recognition and data evaluation.
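One way to sketch the voiceprint matching against a pre-stored agent voiceprint is cosine similarity between fixed-length voiceprint vectors. The vector representation and the 0.8 threshold are illustrative assumptions, since the patent does not fix a particular matching algorithm.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two voiceprint vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def label_role(segment_print, agent_print, threshold=0.8):
    """A successful match against the stored agent voiceprint means the
    agent is speaking; a failed match means the customer is."""
    if cosine_similarity(segment_print, agent_print) >= threshold:
        return "agent"
    return "customer"

agent = [0.9, 0.1, 0.4]
roles = [label_role(p, agent)
         for p in ([0.88, 0.12, 0.41], [0.1, 0.9, 0.2])]
# roles == ["agent", "customer"]
```

Enrolling one vector per agent keeps the match a single comparison per segment, which fits the two-party call setting described above.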
It should be noted that, during role identification of the cut voice content, when the voice content corresponds to only one role, identity labelling of the voice content may be omitted.
S205: Perform emotion recognition on the identity-labelled voice content, and generate an emotion recognition result.
Still taking as an example performing emotion recognition on the analyzed and processed voice content with a deep neural network model to generate the mood sequence corresponding to the voice content: in one example, each mood is numbered, e.g. calm is 0, happy is 1, laughing is 2, excited is 3, and angry is 4, so a generated mood sequence may be [0, 0, 0, 0, 0, 3, 3, 4, 4, 4]. It can be seen that the mood sequence is a string of characters, each character corresponding one-to-one with a mood. Further, for the convenience of machine recognition, the roles corresponding to the voice content can also be numbered, for example the agent is 0 and the customer is 1; these numbers can come from the identity numbers assigned to the role-identified voice content above.
The mood sequences of the agent and the customer can then be expressed as 0: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0] and 1: [0, 0, 0, 0, 0, 3, 3, 4, 4, 4]. Again, each mood sequence is a string of characters, the characters corresponding one-to-one with moods. It can be seen from these sequences that the agent stayed calm throughout the voice call while the customer's mood changed from calm to excited and then to angry. Of course, other characters, such as letters or digits, may equally be used for the roles and for the mood numbering; they are not enumerated here. In this embodiment, when the customer's mood changes from calm to excited, the agent can be reminded through the terminal, or the agent can be replaced, to prevent the customer's mood from deteriorating further.
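Grouping per-turn recognition results into one numeric mood sequence per identity number, as in the 0:/1: example above, can be sketched as follows (the emotion-to-number coding is the one from the example; the turn representation is an assumption).

```python
EMOTION_CODE = {"calm": 0, "happy": 1, "laugh": 2, "excited": 3, "angry": 4}

def mood_sequences_by_role(labelled_turns):
    """Group per-turn (role, emotion) pairs into one numeric mood sequence
    per role, preserving each role's chronological order."""
    sequences = {}
    for role, emotion in labelled_turns:
        sequences.setdefault(role, []).append(EMOTION_CODE[emotion])
    return sequences

# 0 = agent's identity number, 1 = customer's
dialogue = [(0, "calm"), (1, "calm"), (0, "calm"),
            (1, "excited"), (0, "calm"), (1, "angry")]
per_role = mood_sequences_by_role(dialogue)
# per_role == {0: [0, 0, 0], 1: [0, 3, 4]}
```

Keeping one sequence per identity number is what lets the later scoring rules treat the agent's and the customer's moods separately.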
S206: Generate a corresponding data evaluation result according to the emotion recognition result.
As described in the embodiment shown in Fig. 1 above, the corresponding data evaluation result can be generated according to the proportions of the different moods in the emotion recognition result, or according to the trend of mood change in the emotion recognition result.
For example, assuming the whole dialogue has only 10 rounds: in the former mode, bonus points can be given according to the proportion of happy and laughing moods over the customer's whole dialogue, e.g. more than 40% adds 1 point, 20%-40% adds 0.5, 10%-20% adds 0.3, more than 0 adds 0.2, and none adds nothing. In the latter mode, if in the final 20% of the dialogue the proportion of excited and angry moods exceeds 70%, 1 point is deducted; 50%-70% deducts 0.6; 20%-50% deducts 0.3; otherwise nothing is deducted.
The two modes are not mutually exclusive; in practice they may also be combined, e.g. data evaluation may use the following five dimensions. 1. If the customer shows no angry mood throughout, 1 point is awarded; if anger occurs, points are deducted according to its proportion, down to at most 0 points. 2. If in the final 20% of the dialogue the proportion of excited and angry moods exceeds 70%, 1 point is deducted; 50%-70% deducts 0.6; 20%-50% deducts 0.3; otherwise nothing is deducted. 3. If an angry mood appears in the agent's voice, 1 point is deducted directly; if the customer shows no anger or excitement in the first 20% of the dialogue but does so after that, 0.8 is deducted; otherwise nothing is deducted. 4. If the agent becomes angry, points are deducted according to the proportion of the time, from that moment until the end, during which the customer remains angry; if the customer's anger never stops before the end, 1 point is deducted, and so on. 5. Bonus points are given according to the proportion of happy and laughing moods over the customer's whole dialogue: more than 40% adds 1 point, 20%-40% adds 0.5, 10%-20% adds 0.3, more than 0 adds 0.2, and none adds nothing.
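Two of the dimensions above, the positive-mood bonus and the final-20% deduction, can be sketched as below; how the band boundaries (exactly 20%, 40%, etc.) are treated is an assumption, since the patent leaves them open.

```python
def positive_bonus(seq):
    """Bonus by the share of happy (1) / laugh (2) moods over the
    customer's whole sequence: >40% -> +1, 20-40% -> +0.5,
    10-20% -> +0.3, >0 -> +0.2, else 0."""
    share = sum(m in (1, 2) for m in seq) / len(seq)
    if share > 0.4:
        return 1.0
    if share >= 0.2:
        return 0.5
    if share >= 0.1:
        return 0.3
    return 0.2 if share > 0 else 0.0

def negative_tail_deduction(seq):
    """Deduction by the share of excited (3) / angry (4) moods in the
    final 20% of the dialogue: >70% -> -1, 50-70% -> -0.6,
    20-50% -> -0.3, else 0."""
    tail = seq[-max(1, len(seq) // 5):]
    share = sum(m in (3, 4) for m in tail) / len(tail)
    if share > 0.7:
        return -1.0
    if share >= 0.5:
        return -0.6
    return -0.3 if share >= 0.2 else 0.0

customer = [0, 0, 0, 0, 0, 3, 3, 4, 4, 4]   # the 10-round example above
score = positive_bonus(customer) + negative_tail_deduction(customer)
# no happy/laugh moods and an all-negative tail [4, 4] -> score == -1.0
```

The remaining dimensions follow the same pattern of proportions over the whole sequence or over a windowed slice of it.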
Of course, the above specific evaluation rules, proportion settings, and score settings are merely illustrative; in practice, those skilled in the art can set them appropriately according to specific needs, and the embodiment of the present invention is not restricted in this respect.
In addition, the method for performing data evaluation according to voice content of this embodiment can also be applied to other fields, achieving different purposes by adding moods of different dimensions; for example, by adding moods such as fear, unease, or agitation, lie detection, mental-state stability tests, and the like can be realized. These are not illustrated one by one here.
In the method for evaluating data according to voice content provided by the present embodiment, emotion recognition is performed on the voice content, and the voice content is evaluated according to the result of the emotion recognition. Compared with manual evaluation, the scheme provided by the embodiment of the present invention makes the evaluation of the voice content more objective. Taking the voice content of a customer-service agent and a customer as an example, with the scheme provided by the embodiment of the present invention the evaluation of the agent's service quality is no longer limited to customer feedback; the service quality is evaluated from the perspective of mood, so that the agent's actual service quality is reflected objectively and truthfully. By processing the voice content and performing emotion recognition, the emotional changes of the agent or the customer during the conversation can be determined, so that early warning can be given in advance and the agent's mood can be managed.
As shown in Fig. 3, which is a structural diagram of the device for evaluating data according to voice content of the third embodiment of the present invention, the device of this embodiment includes a voice content acquisition module 301, a voice content analysis and processing module 302, an emotion recognition module 303 and a data evaluation module 304. The voice content acquisition module 301 is used to acquire voice content; the voice content analysis and processing module 302 is used to analyze and process the voice content; the emotion recognition module 303 is used to perform emotion recognition on the analyzed and processed voice content and generate an emotion recognition result; and the data evaluation module 304 is used to generate a corresponding data evaluation result according to the emotion recognition result.
In some specific embodiments of the present invention, the voice content analysis and processing module is specifically used to segment the voice content in the time domain, perform role identification on the segmented voice content, and attach identity labels to the role-identified voice content. Further, it may also be used to perform role identification on the voice content according to the voiceprint of the voice content, and then attach identification numbers to the role-identified voice content.
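A minimal sketch of voiceprint-based role identification, assuming each segment has already been reduced to a fixed-length voiceprint embedding (the embedding extraction itself, and all names used here, are illustrative assumptions rather than details of the embodiment):

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length voiceprint vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def assign_roles(segment_embeddings, reference_prints):
    """Label each time-domain segment with the role (e.g. 'agent' or
    'customer') whose reference voiceprint it most resembles."""
    return [
        max(reference_prints,
            key=lambda role: cosine_similarity(emb, reference_prints[role]))
        for emb in segment_embeddings
    ]
```

In practice the reference voiceprints could be enrolled from known agent recordings, with the remaining speaker treated as the customer.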
In some specific embodiments of the present invention, the voice content analysis and processing module may also be used to perform role identification on the voice content, attach identity labels to the role-identified voice content, and segment the identity-labelled voice content in the time domain.
In some specific embodiments of the present invention, the voice content analysis and processing module may also be used to perform noise reduction on the voice content.
In some specific embodiments of the present invention, the emotion recognition module is used to perform emotion recognition on the analyzed and processed voice content according to a deep neural network model, generating a mood sequence corresponding to the voice content, where the mood sequence is a character string comprising multiple characters, and the characters in the character string correspond one-to-one with moods.
Optionally, the emotion recognition module 303 is specifically used to: obtain, from the segmentation of the voice content, multiple voice segments having a temporal order; input the multiple voice segments into the deep neural network model in that temporal order to perform emotion recognition, respectively obtaining multiple corresponding emotion recognition results; and generate a mood sequence from the multiple emotion recognition results in temporal order.
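The segment-by-segment recognition described above might be sketched as follows, with a stand-in `classify` callable in place of the deep neural network model and an assumed one-character-per-mood encoding (the embodiment does not fix a particular encoding):

```python
# Hypothetical one-to-one mapping between moods and characters.
EMOTION_CODES = {"neutral": "N", "happy": "H", "angry": "A", "excited": "E"}


def mood_sequence(segments, classify):
    """Run the per-segment classifier over the voice segments in time
    order and encode each result as one character of the mood sequence."""
    return "".join(EMOTION_CODES[classify(segment)] for segment in segments)
```

With the segments kept in temporal order, the resulting string preserves the emotional trajectory of the call, which the evaluation module can then score by proportion or by trend.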
Optionally, the device of this embodiment may also include: a sample processing module 305, used to obtain voice content samples for training and attach mood labels to the voice content samples before the emotion recognition module 303 performs emotion recognition on the analyzed and processed voice content according to the deep neural network model; and a training module 306, used to train the deep neural network model with the labelled voice content samples, obtaining the deep neural network model for emotion recognition.
Optionally, the training module 306 is used to preprocess the labelled voice content samples, the preprocessing including Fourier transform processing and noise reduction, and to train the deep neural network model with the labelled and preprocessed voice content samples, obtaining the deep neural network model for emotion recognition.
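The Fourier-transform and noise-reduction preprocessing could be sketched as below; the naive DFT and flat-noise-floor spectral subtraction are simplifications chosen for illustration, not the method the embodiment prescribes:

```python
import cmath


def dft_magnitudes(samples):
    """Naive discrete Fourier transform returning the magnitude spectrum
    of one frame of audio samples (O(n^2); real systems use an FFT)."""
    n = len(samples)
    return [
        abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)))
        for k in range(n)
    ]


def spectral_subtract(magnitudes, noise_floor):
    """Crude noise reduction: subtract an estimated noise floor from
    each frequency bin, clamping negative results to zero."""
    return [max(m - noise_floor, 0.0) for m in magnitudes]
```

The cleaned spectra (or features derived from them) would then serve as the input representation for training the deep neural network model.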
Optionally, the data evaluation module is specifically used to: generate a corresponding data evaluation result according to the proportions of the different moods in the emotion recognition result, or generate a corresponding data evaluation result according to the variation trend of the moods in the emotion recognition result.
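For the trend-based variant, one simple sketch is to compare the share of negative moods between the two halves of the mood sequence (the character codes and the half-split heuristic are assumptions of this sketch, not part of the embodiment):

```python
def trend_score(sequence, negative="AE"):
    """Compare the share of negative-mood characters ('A' angry,
    'E' excited, hypothetically) between the first and second halves of
    a mood sequence; a positive score means negative moods declined."""
    half = len(sequence) // 2
    first = sum(c in negative for c in sequence[:half]) / max(half, 1)
    second = sum(c in negative for c in sequence[half:]) / max(len(sequence) - half, 1)
    return first - second
```

A falling share of anger over the call would then count in the agent's favour, matching dimension 4 of the example rules.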
The device for evaluating data according to voice content of this embodiment can achieve the same technical effects as the above method embodiments, which are not repeated here.
The device embodiments described above are merely schematic. The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the scheme of this embodiment, which those of ordinary skill in the art can understand and implement without creative work.
Through the above description of the embodiments, those skilled in the art can clearly understand that each embodiment can be realized by software plus a necessary general hardware platform, and naturally also by hardware. Based on this understanding, the part of the above technical scheme that in essence contributes over the prior art can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, which includes any mechanism that stores or transmits information in a form readable by a machine (such as a computer). For example, machine-readable media include read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash media, and electrical, optical, acoustic or other forms of propagated signals (for example, carrier waves, infrared signals, digital signals, etc.). The computer software product includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method described in each embodiment or in certain parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate, not to limit, the technical schemes of the embodiments of the present application. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical schemes described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or replacements do not make the essence of the corresponding technical schemes depart from the spirit and scope of the technical schemes of the embodiments of the present application.
Those skilled in the art will understand that the embodiments of the present invention may be provided as a method, a device (apparatus), or a computer program product. Therefore, the embodiments of the present invention may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk storage, CD-ROM, optical memory, etc.) containing computer-usable program code.
The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of the method, device (apparatus) and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that realizes the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps is performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.

Claims (26)

  1. A method for evaluating data according to voice content, characterized in that it comprises:
    acquiring voice content, and analyzing and processing the voice content;
    performing emotion recognition on the analyzed and processed voice content, and generating an emotion recognition result;
    generating a corresponding data evaluation result according to the emotion recognition result.
  2. The method according to claim 1, characterized in that analyzing and processing the voice content comprises:
    segmenting the voice content in the time domain.
  3. The method according to claim 2, characterized in that it further comprises: performing role identification on the segmented voice content, and attaching identity labels to the role-identified voice content.
  4. The method according to claim 3, characterized in that performing role identification on the segmented voice content and attaching identity labels to the role-identified voice content comprises:
    performing role identification on the segmented voice content according to the voiceprint of the voice content, and attaching identification numbers to the role-identified voice content.
  5. The method according to claim 2, characterized in that segmenting the voice content in the time domain comprises:
    performing role identification on the voice content, attaching identity labels to the role-identified voice content, and segmenting the identity-labelled voice content in the time domain.
  6. The method according to any one of claims 2-5, characterized in that analyzing and processing the voice content further comprises: performing noise reduction on the voice content.
  7. The method according to any one of claims 2-5, characterized in that performing emotion recognition on the analyzed and processed voice content and generating an emotion recognition result comprises:
    performing emotion recognition on the analyzed and processed voice content according to a deep neural network model, and generating a mood sequence corresponding to the voice content.
  8. The method according to claim 7, characterized in that performing emotion recognition on the analyzed and processed voice content according to a deep neural network model and generating a mood sequence corresponding to the voice content comprises:
    obtaining, from the segmentation of the voice content, multiple voice segments having a temporal order; inputting the multiple voice segments into the deep neural network model in the temporal order to perform emotion recognition, and respectively obtaining multiple corresponding emotion recognition results; and generating a mood sequence from the multiple emotion recognition results in the temporal order.
  9. The method according to claim 7, characterized in that before performing emotion recognition on the analyzed and processed voice content according to the deep neural network model, the method further comprises:
    obtaining voice content samples for training, and attaching mood labels to the voice content samples;
    training the deep neural network model with the labelled voice content samples, and obtaining the deep neural network model for emotion recognition.
  10. The method according to claim 9, characterized in that training the deep neural network model with the labelled voice content samples and obtaining the deep neural network model for emotion recognition comprises: preprocessing the labelled voice content samples, the preprocessing comprising Fourier transform processing and noise reduction;
    training the deep neural network model with the labelled and preprocessed voice content samples, and obtaining the deep neural network model for emotion recognition.
  11. The method according to claim 7, characterized in that the mood sequence is a character string comprising multiple characters, the characters in the character string corresponding one-to-one with moods.
  12. The method according to claim 1, characterized in that generating a corresponding data evaluation result according to the emotion recognition result comprises:
    generating a corresponding data evaluation result according to the proportions of the different moods in the emotion recognition result.
  13. The method according to claim 1, characterized in that generating a corresponding data evaluation result according to the emotion recognition result comprises:
    generating a corresponding data evaluation result according to the variation trend of the moods in the emotion recognition result.
  14. A device for evaluating data according to voice content, characterized in that it comprises:
    a voice content acquisition module, for acquiring voice content;
    a voice content analysis and processing module, for analyzing and processing the voice content;
    an emotion recognition module, for performing emotion recognition on the analyzed and processed voice content and generating an emotion recognition result;
    a data evaluation module, for generating a corresponding data evaluation result according to the emotion recognition result.
  15. The device according to claim 14, characterized in that the voice content analysis and processing module is specifically used for:
    segmenting the voice content in the time domain.
  16. The device according to claim 15, characterized in that the voice content analysis and processing module is further used for:
    performing role identification on the segmented voice content, and attaching identity labels to the role-identified voice content.
  17. The device according to claim 16, characterized in that the voice content analysis and processing module is further specifically used for:
    performing role identification on the voice content according to the voiceprint of the voice content, and attaching identification numbers to the role-identified voice content.
  18. The device according to claim 15, characterized in that the voice content analysis and processing module is used for:
    performing role identification on the voice content, attaching identity labels to the role-identified voice content, and segmenting the identity-labelled voice content in the time domain.
  19. The device according to any one of claims 15-18, characterized in that the voice content analysis and processing module is further used for: performing noise reduction on the voice content.
  20. The device according to any one of claims 15-18, characterized in that the emotion recognition module is specifically used for:
    performing emotion recognition on the analyzed and processed voice content according to a deep neural network model, and generating a mood sequence corresponding to the voice content.
  21. The device according to claim 20, characterized in that the emotion recognition module is specifically used for:
    obtaining, from the segmentation of the voice content, multiple voice segments having a temporal order; inputting the multiple voice segments into the deep neural network model in the temporal order to perform emotion recognition, and respectively obtaining multiple corresponding emotion recognition results; and generating a mood sequence from the multiple emotion recognition results in the temporal order.
  22. The device according to claim 20, characterized in that the device further comprises:
    a sample processing module, for obtaining voice content samples for training and attaching mood labels to the voice content samples before the emotion recognition module performs emotion recognition on the analyzed and processed voice content according to the deep neural network model;
    a training module, for training the deep neural network model with the labelled voice content samples and obtaining the deep neural network model for emotion recognition.
  23. The device according to claim 22, characterized in that the training module is used for preprocessing the labelled voice content samples, the preprocessing comprising Fourier transform processing and noise reduction, and for training the deep neural network model with the labelled and preprocessed voice content samples, obtaining the deep neural network model for emotion recognition.
  24. The device according to claim 20, characterized in that the mood sequence is a character string comprising multiple characters, the characters in the character string corresponding one-to-one with moods.
  25. The device according to claim 14, characterized in that the data evaluation module is specifically used for:
    generating a corresponding data evaluation result according to the proportions of the different moods in the emotion recognition result.
  26. The device according to claim 14, characterized in that the data evaluation module is specifically used for:
    generating a corresponding data evaluation result according to the variation trend of the moods in the emotion recognition result.
CN201710703850.1A 2017-08-16 2017-08-16 Method and device for evaluating data according to voice content Active CN107452405B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710703850.1A CN107452405B (en) 2017-08-16 2017-08-16 Method and device for evaluating data according to voice content


Publications (2)

Publication Number Publication Date
CN107452405A true CN107452405A (en) 2017-12-08
CN107452405B CN107452405B (en) 2021-04-09

Family

ID=60492623

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710703850.1A Active CN107452405B (en) 2017-08-16 2017-08-16 Method and device for evaluating data according to voice content

Country Status (1)

Country Link
CN (1) CN107452405B (en)

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108186033A (en) * 2018-01-08 2018-06-22 杭州草莽科技有限公司 A kind of child's mood monitoring method and its system based on artificial intelligence
CN108259686A (en) * 2017-12-28 2018-07-06 合肥凯捷技术有限公司 A kind of customer service system based on speech analysis
CN108806708A (en) * 2018-06-13 2018-11-13 中国电子科技集团公司第三研究所 Voice de-noising method based on Computational auditory scene analysis and generation confrontation network model
CN108962282A (en) * 2018-06-19 2018-12-07 京北方信息技术股份有限公司 Speech detection analysis method, apparatus, computer equipment and storage medium
CN109065025A (en) * 2018-07-30 2018-12-21 珠海格力电器股份有限公司 A kind of computer storage medium and a kind of processing method and processing device of audio
CN109241519A (en) * 2018-06-28 2019-01-18 平安科技(深圳)有限公司 Environmental Evaluation Model acquisition methods and device, computer equipment and storage medium
CN109327631A (en) * 2018-10-24 2019-02-12 深圳市万屏时代科技有限公司 A kind of artificial customer service system of intelligence
CN109410921A (en) * 2018-09-30 2019-03-01 秒针信息技术有限公司 A kind of method and device carrying out quality evaluation by sound
CN109618065A (en) * 2018-12-28 2019-04-12 合肥凯捷技术有限公司 A kind of voice quality inspection rating system
CN109726655A (en) * 2018-12-19 2019-05-07 平安普惠企业管理有限公司 Customer service evaluation method, device, medium and equipment based on Emotion identification
CN109753663A (en) * 2019-01-16 2019-05-14 中民乡邻投资控股有限公司 A kind of customer anger stage division and device
CN109766770A (en) * 2018-12-18 2019-05-17 深圳壹账通智能科技有限公司 QoS evaluating method, device, computer equipment and storage medium
CN109758141A (en) * 2019-03-06 2019-05-17 清华大学 A kind of psychological pressure monitoring method, apparatus and system
CN109785862A (en) * 2019-01-21 2019-05-21 深圳壹账通智能科技有限公司 Customer service quality evaluating method, device, electronic equipment and storage medium
CN110033778A (en) * 2019-05-07 2019-07-19 苏州市职业大学 One kind state of lying identifies update the system in real time
CN110062117A (en) * 2019-04-08 2019-07-26 商客通尚景科技(上海)股份有限公司 A kind of sonic detection and method for early warning
CN110147936A (en) * 2019-04-19 2019-08-20 深圳壹账通智能科技有限公司 Service evaluation method, apparatus based on Emotion identification, storage medium
CN110288974A (en) * 2018-03-19 2019-09-27 北京京东尚科信息技术有限公司 Voice-based Emotion identification method and device
CN110472224A (en) * 2019-06-24 2019-11-19 深圳追一科技有限公司 Detection method, device, computer equipment and the storage medium of service quality
CN110728996A (en) * 2019-10-24 2020-01-24 北京九狐时代智能科技有限公司 Real-time voice quality inspection method, device, equipment and computer storage medium
CN111009244A (en) * 2019-12-06 2020-04-14 贵州电网有限责任公司 Voice recognition method and system
CN111049998A (en) * 2018-10-11 2020-04-21 上海智臻智能网络科技股份有限公司 Voice customer service quality inspection method, customer service quality inspection equipment and storage medium
CN111049999A (en) * 2018-10-11 2020-04-21 上海智臻智能网络科技股份有限公司 Voice customer service quality inspection system and customer service quality inspection equipment
CN111080109A (en) * 2019-12-06 2020-04-28 中信银行股份有限公司 Customer service quality evaluation method and device and electronic equipment
CN111179929A (en) * 2019-12-31 2020-05-19 中国银行股份有限公司 Voice processing method and device
CN111554304A (en) * 2020-04-25 2020-08-18 中信银行股份有限公司 User tag obtaining method, device and equipment
WO2020187300A1 (en) * 2019-03-21 2020-09-24 杭州海康威视数字技术股份有限公司 Monitoring system, method and apparatus, server and storage medium
CN112509561A (en) * 2020-12-03 2021-03-16 中国联合网络通信集团有限公司 Emotion recognition method, device, equipment and computer readable storage medium
CN112885379A (en) * 2021-01-28 2021-06-01 携程旅游网络技术(上海)有限公司 Customer service voice evaluation method, system, device and storage medium
CN113571096A (en) * 2021-07-23 2021-10-29 平安科技(深圳)有限公司 Speech emotion classification model training method and device, computer equipment and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101930735A (en) * 2009-06-23 2010-12-29 富士通株式会社 Speech emotion recognition equipment and speech emotion recognition method
US20110282662A1 (en) * 2010-05-11 2011-11-17 Seiko Epson Corporation Customer Service Data Recording Device, Customer Service Data Recording Method, and Recording Medium
CN103811009A (en) * 2014-03-13 2014-05-21 华东理工大学 Smart phone customer service system based on speech analysis
CN105427869A (en) * 2015-11-02 2016-03-23 北京大学 Session emotion autoanalysis method based on depth learning
CN105808721A (en) * 2016-03-07 2016-07-27 中国科学院声学研究所 Data mining based customer service content analysis method and system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Song Minghu, "Speech Emotion Recognition for Telephone Customer Service in the Electric Power Industry", China Master's Theses Full-text Database, Information Science and Technology Series *


Also Published As

Publication number Publication date
CN107452405B (en) 2021-04-09

Similar Documents

Publication Publication Date Title
CN107452405A (en) Method and device for data evaluation based on voice content
CN107705807B (en) Voice quality detection method, device, equipment and storage medium based on emotion recognition
CN106782602B (en) Speech emotion recognition method based on deep neural network
CN109522556B (en) Intention recognition method and device
Bertero et al. A first look into a convolutional neural network for speech emotion detection
CN107452385A (en) Voice-based data evaluation method and device
DE602006000090T2 (en) Confidence measure for a speech dialogue system
CN106095834A (en) Topic-based intelligent dialogue method and system
DE212020000731U1 (en) Contrastive pre-training for language tasks
CN110120224A (en) Construction method, device, computer equipment and storage medium for a bird sound recognition model
CN110890088B (en) Voice information feedback method and device, computer equipment and storage medium
Rahman et al. A personalized emotion recognition system using an unsupervised feature adaptation scheme
CN107468260A (en) Electroencephalogram (EEG) analysis device and method for judging animal psychological state
EP1926081A1 (en) Method for dialogue adaptation and dialogue system for this purpose
Lebedev How to read neuron-dropping curves?
CN108091323A (en) Method and apparatus for recognizing emotion from speech
Wagner et al. Applying cooperative machine learning to speed up the annotation of social signals in large multi-modal corpora
CN114692621A (en) Method for explaining influence function from sequence to sequence task based on sample in NLP
DE69333762T2 (en) Voice recognition system
CN113516097A (en) Plant leaf disease identification method based on improved EfficentNet-V2
KR102309829B1 (en) Apparatus and method for analyzing call emotions
Huang et al. Speech emotion recognition based on coiflet wavelet packet cepstral coefficients
Li et al. Research on speech emotion recognition based on deep neural network
Nasir et al. Still together?: The role of acoustic features in predicting marital outcome
CN107256455A (en) Career planning testing method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant