CN116434787B - Voice emotion recognition method and device, storage medium and electronic equipment - Google Patents
- Publication number
- CN116434787B (application CN202310705248.7A)
- Authority
- CN
- China
- Prior art keywords
- emotion
- prediction result
- local
- probability
- global
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/27—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
- G10L25/30—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The specification discloses a voice emotion recognition method and apparatus, a storage medium, and an electronic device. A target voice is acquired, and a plurality of voice segments of a preset length are selected from the target voice. Each voice segment and the target voice are respectively input into a pre-trained emotion prediction model to obtain a local emotion prediction result corresponding to each voice segment and a global emotion prediction result of the target voice. The global emotion prediction result is fused with at least one local emotion prediction result to obtain an optimized global emotion prediction result, and a final emotion prediction result of the target voice is determined according to the optimized global emotion prediction result. Because the model can output local emotion prediction results, fusing the global emotion prediction result with the local emotion prediction results optimizes the global emotion prediction result and improves the accuracy of the final emotion prediction result.
Description
Technical Field
The present disclosure relates to the field of artificial intelligence, and in particular, to a method and apparatus for speech emotion recognition, a storage medium, and an electronic device.
Background
With the development of artificial intelligence, it has been applied in many fields. When tasks related to user demands are performed with artificial intelligence technology, recognizing the user's emotion is often involved so as to better meet those demands. When the user's emotion is recognized from the user's voice, features are generally extracted from the voice by a neural network and then fed to a classifier to obtain an emotion result; however, the accuracy of the result obtained in this way is low, and the local emotion expressed within the voice cannot be obtained.
Based on this, the present specification provides a method of speech emotion recognition.
Disclosure of Invention
The present disclosure provides a method, an apparatus, a storage medium, and an electronic device for speech emotion recognition, so as to partially solve the foregoing problems in the prior art.
The technical solution adopted in the specification is as follows:
The specification provides a method for speech emotion recognition, comprising the following steps:
acquiring target voice;
selecting a plurality of voice fragments with preset lengths from the target voice;
inputting each voice segment into a pre-trained emotion prediction model to obtain a local emotion prediction result corresponding to the voice segment according to the emotion prediction model; inputting the target voice into the emotion prediction model to obtain a global emotion prediction result of the target voice according to the emotion prediction model;
Fusing the global emotion prediction result with at least one local emotion prediction result to obtain an optimized global emotion prediction result;
and determining a final emotion prediction result of the target voice according to the optimized global emotion prediction result.
Optionally, the local emotion prediction result includes probabilities that the voice fragments belong to each emotion type respectively;
fusing the global emotion prediction result with at least one local emotion prediction result, wherein the method specifically comprises the following steps:
taking the probability that the target voice belongs to each emotion type as global probability and taking the probability that the voice fragment belongs to each emotion type as local probability;
for each emotion type, determining the maximum value of the local probability of the emotion type in the local probability of at least one local emotion prediction result as the local fusion probability of the emotion type;
and for each emotion type, weighting the global probability of the emotion type and the local fusion probability of the emotion type according to a preset fusion weight.
Optionally, determining a final emotion prediction result of the target voice according to the optimized global emotion prediction result specifically includes: for each local emotion prediction result, optimizing the local emotion prediction result according to the optimized global emotion prediction result to obtain an optimized local emotion prediction result;
And determining a final emotion prediction result of the target voice according to the optimized global emotion prediction result and the optimized local emotion prediction result.
Optionally, the global emotion prediction result includes probabilities that the target voice belongs to each emotion type respectively;
the local emotion prediction result comprises probabilities that the voice fragments belong to each emotion type respectively;
optimizing the local emotion prediction result according to the optimized global emotion prediction result, specifically including:
aiming at the optimized global emotion prediction result, taking the probability that the target voice belongs to each emotion type as the optimized global probability, and taking the probability that the voice fragment belongs to each emotion type as the local probability;
weighting the optimized global probability of the emotion type and the local probability of the emotion type in the local emotion prediction result according to preset weights for each emotion type to obtain the optimized local probability of the emotion type in the local emotion prediction result;
and obtaining the optimized local emotion prediction result according to the optimized local probability of each emotion type in the local emotion prediction result.
Optionally, determining the final emotion prediction result of the target voice according to the optimized global emotion prediction result and the optimized local emotion prediction result specifically includes: taking the probability that the target voice belongs to each emotion type as a global probability, and taking the probability that the voice segment belongs to each emotion type as a local probability;
selecting an emotion type corresponding to the maximum value of the global probability from the optimized global emotion prediction results, and taking the emotion type as a final emotion first prediction result of the target voice;
selecting an emotion type corresponding to the maximum value of the local probability as a final emotion second prediction result of the target voice aiming at each optimized local prediction result;
and determining the final emotion prediction result of the target voice according to the final emotion first prediction result of the target voice and the final emotion second prediction result of the target voice.
Optionally, training the emotion prediction model specifically includes:
acquiring sample voice and emotion marking of the sample voice;
inputting the sample voice into an emotion prediction model to determine an emotion prediction result of the sample according to the emotion prediction model;
determining the difference between the emotion prediction result and the emotion mark corresponding to the sample voice;
and training the emotion prediction model according to the difference.
The specification provides a device for speech emotion recognition, comprising:
the target voice acquisition module is used for acquiring target voice;
the voice segment acquisition module is used for selecting a plurality of voice segments with preset lengths from the target voice;
the prediction result acquisition module is used for inputting each voice fragment into a pre-trained emotion prediction model so as to acquire a local emotion prediction result corresponding to the voice fragment according to the emotion prediction model; inputting the target voice into the emotion prediction model to obtain a global emotion prediction result of the target voice according to the emotion prediction model;
the optimization module is used for fusing the global emotion prediction result with at least one local emotion prediction result to obtain an optimized global emotion prediction result;
and the final result determining module is used for determining a final emotion prediction result of the target voice according to the optimized global emotion prediction result.
Optionally, the global emotion prediction result includes probabilities that the target voice belongs to each emotion type respectively, and the local emotion prediction result includes probabilities that the voice segment belongs to each emotion type respectively; the optimization module is specifically configured to take the probability that the target voice belongs to each emotion type as a global probability and the probability that the voice segment belongs to each emotion type as a local probability; for each emotion type, determine the maximum value of the local probability of the emotion type among the local probabilities of the at least one local emotion prediction result as the local fusion probability of the emotion type; and, for each emotion type, weight the global probability of the emotion type and the local fusion probability of the emotion type according to a preset fusion weight.
Optionally, the final result determining module is specifically configured to optimize, for each local emotion prediction result, the local emotion prediction result according to the optimized global emotion prediction result, so as to obtain an optimized local emotion prediction result; and determining a final emotion prediction result of the target voice according to the optimized global emotion prediction result and the optimized local emotion prediction result.
Optionally, the global emotion prediction result includes probabilities that the target voice belongs to each emotion type respectively; the local emotion prediction result comprises probabilities that the voice fragments belong to each emotion type respectively; the final result determining module is specifically configured to optimize the local emotion prediction result according to the optimized global emotion prediction result, and specifically includes: aiming at the optimized global emotion prediction result, taking the probability that the target voice belongs to each emotion type as the optimized global probability, and taking the probability that the voice fragment belongs to each emotion type as the local probability; weighting the optimized global probability of the emotion type and the local probability of the emotion type in the local emotion prediction result according to preset weights for each emotion type to obtain the optimized local probability of the emotion type in the local emotion prediction result; and obtaining the optimized local emotion prediction result according to the optimized local probability of each emotion type in the local emotion prediction result.
Optionally, the final result determining module is specifically configured to take a probability that the target voice belongs to each emotion type as a global probability, and a probability that the voice segment belongs to each emotion type as a local probability; selecting an emotion type corresponding to the maximum value of the global probability from the optimized global emotion prediction results, and taking the emotion type as a final emotion first prediction result of the target voice; selecting an emotion type corresponding to the maximum value of the local probability as a final emotion second prediction result of the target voice aiming at each optimized local prediction result; and determining the final emotion prediction result of the target voice according to the final emotion first prediction result of the target voice and the final emotion second prediction result of the target voice.
Optionally, the apparatus further comprises:
the model training module is used for acquiring sample voice and an emotion mark of the sample voice; inputting the sample voice into an emotion prediction model to determine an emotion prediction result of the sample voice according to the emotion prediction model; determining the difference between the emotion prediction result and the emotion mark corresponding to the sample voice; and training the emotion prediction model according to the difference.
The present specification provides a computer readable storage medium storing a computer program which when executed by a processor implements the method of speech emotion recognition described above.
The present specification provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of speech emotion recognition described above when executing the program.
The above-mentioned at least one technical scheme that this specification adopted can reach following beneficial effect:
according to the voice emotion recognition method provided by the specification, the local emotion prediction results can be output through the emotion prediction model, and because a plurality of local emotion prediction results of the target voice can generate certain influence on the global emotion prediction result of the target voice, the global emotion prediction result and the local emotion prediction result are fused to optimize the global emotion prediction result, so that the accuracy of the final emotion prediction result is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification, illustrate and explain the exemplary embodiments of the present specification and their description, are not intended to limit the specification unduly. In the drawings:
FIG. 1 is a schematic flow chart of a method for speech emotion recognition provided in the present specification;
FIG. 2 is a schematic diagram showing the local prediction results provided in the present specification;
FIG. 3 is a schematic diagram of the internal structure of the emotion prediction model provided in the present specification;
FIG. 4 is a schematic diagram of a speech emotion recognition apparatus provided in the present specification;
fig. 5 is a schematic structural diagram of an electronic device corresponding to fig. 1 provided in the present specification.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present specification more apparent, the technical solutions of the present specification will be clearly and completely described below with reference to specific embodiments of the present specification and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present specification. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are intended to be within the scope of the present disclosure.
The following describes in detail the technical solutions provided by the embodiments of the present specification with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a method for identifying speech emotion provided in the present specification, which includes the following steps:
S100: and acquiring target voice.
With the development of artificial intelligence, emotion of a user, such as text, voice, image, etc., can be predicted according to various kinds of information related to the user. However, when predicting the emotion of a user using speech, the accuracy of the prediction result obtained is low. In order to improve the accuracy of predicted emotion of a user, the present specification provides a method of speech emotion recognition. The execution subject of the present specification may be a server for model training, or may be other electronic devices that can predict the emotion of a user. For convenience of explanation, a method of speech emotion recognition provided in the present specification will be explained below with only a server as an execution subject.
In one or more embodiments of the present description, recognizing a user emotion requires first acquiring a user's voice, i.e., acquiring a target voice. In order to more accurately predict the emotion of the user, the server may process the target voice before predicting the emotion of the user according to the target voice, for example, perform format conversion, denoising, removing audio segments of non-users, and the like, which is not limited in this specification.
S102: and selecting a plurality of voice fragments with preset lengths from the target voice.
In order to obtain the emotion prediction result of the user at a certain moment from the target voice, the server may divide the target voice into a plurality of voice segments, that is, select a plurality of voice segments with a preset length from the target voice. For example, if the time length of the target voice is 10 seconds and the preset length is 1 second, the target voice is divided into 10 voice segments each with a time length of 1 second. The preset length may be a fixed length or a length that varies with the time length of the target voice, which is not limited in this specification.
It should be noted that if the target voice cannot be divided evenly, the remaining voice length may be taken as the preset length of the last voice segment. For example, if the target voice has a time length of 9.5 seconds and the preset length is 1 second, the target voice is first divided into 9 voice segments with a time length of 1 second, 0.5 seconds of voice remains, and the preset length of the last segment is modified to 0.5 seconds; the target voice is therefore finally divided into 9 voice segments with a time length of 1 second and 1 voice segment with a time length of 0.5 seconds.
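As a non-authoritative illustration of this segmentation step, the following sketch splits a waveform into fixed-length segments and keeps the shorter remainder as the last segment; the function name, the raw-sample representation, and the default 1-second length are assumptions made only for the example.

```python
# Illustrative sketch only: split a waveform into fixed-length segments,
# keeping a shorter final segment when the audio does not divide evenly
# (e.g. 9.5 s at a 1 s preset length -> nine 1 s segments plus one 0.5 s segment).

def split_into_segments(waveform, sample_rate, segment_seconds=1.0):
    segment_len = int(segment_seconds * sample_rate)
    segments = []
    for start in range(0, len(waveform), segment_len):
        segment = waveform[start:start + segment_len]
        if len(segment) > 0:  # the last, possibly shorter, remainder is kept
            segments.append(segment)
    return segments
```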
S104: and inputting each voice segment into a pre-trained emotion prediction model to obtain a local emotion prediction result corresponding to the voice segment according to the emotion prediction model.
It should be noted that, the emotion types of the user include neutral (no emotion), anger, happiness, aversion, fear, surprise, sadness, and the like, and the emotion types can be further divided according to the business requirement, which is not limited in this specification. The local emotion refers to the emotion of the user corresponding to a certain speech segment in the target speech, for example, the time length of the target speech is 1 minute, the emotion of the user is aversive in the first half minute, and the emotion of the user is anger in the second half minute. Of course, during the same time period, the user may exhibit multiple emotion types.
S106: inputting the target voice into the emotion prediction model to obtain a global emotion prediction result of the target voice according to the emotion prediction model.
In one or more embodiments of the present disclosure, the global emotion refers to an overall emotion of the target voice, for example, the time length of the target voice is 1 minute, and during the 1 minute, the emotion of the user is expressed as happiness as a whole.
S108: and fusing the global emotion prediction result with at least one local emotion prediction result to obtain an optimized global emotion prediction result.
Because a plurality of local emotion prediction results of the target voice can generate a certain influence on the global emotion prediction result of the target voice, in order to improve the accuracy of the global emotion prediction result, the server can fuse the global emotion prediction result with at least one local emotion prediction result to obtain an optimized global emotion prediction result.
Specifically, if the global emotion prediction result is fused with only one local emotion prediction result, the global probability of each emotion type in the global emotion prediction result is fused with the corresponding local probability in that local emotion prediction result. Of course, the fusion may also be performed for only one emotion type, that is, only the global probability and the local probability of that emotion type are fused, which is not limited in this specification. For example, if the global emotion prediction result and the local emotion prediction result both contain emotion type A, emotion type B and emotion type C, the server may fuse only the global probability and the local probability of emotion type A and leave the prediction results of the other two emotion types unoptimized, or it may optimize the prediction results of all three emotion types.
If the global emotion prediction result is fused with a plurality of local emotion prediction results, then for each emotion type, the maximum value of the local probability of the emotion type among the local probabilities of the plurality of local emotion prediction results is determined as the local fusion probability of the emotion type. For example, for emotion type A, if the local probabilities in three local emotion prediction results are 75%, 40%, and 85%, then 85% is the local fusion probability of emotion type A.
After determining the local fusion probabilities, the server weights, for each emotion type, the global probability of the emotion type and the local fusion probability of the emotion type according to a preset fusion weight. The fusion weight can be set as required. For example, if the preset fusion weight is 0.6 and, for emotion type A, the global probability in the global emotion prediction result is 80% and the local fusion probability is 75%, then weighting the two gives an optimized global probability of 80% × 0.6 + 75% × 0.4 = 78% for emotion type A. Similarly, the server may fuse the global probability and the local fusion probability of only one emotion type, which is not limited in this specification.
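A minimal sketch of the fusion described above, assuming each prediction result is represented as a dictionary mapping emotion types to probabilities; the function name and the default fusion weight of 0.6 are taken from the numeric example only and are not prescribed by the specification.

```python
# Illustrative sketch: fuse the global prediction with several local predictions.
# For each emotion type, the largest local probability is taken as the local
# fusion probability and then weighted against the global probability.

def fuse_global_with_locals(global_probs, local_probs_list, fusion_weight=0.6):
    optimized_global = {}
    for emotion, global_p in global_probs.items():
        # local fusion probability: maximum over all local predictions
        local_fusion_p = max(local[emotion] for local in local_probs_list)
        optimized_global[emotion] = fusion_weight * global_p + (1 - fusion_weight) * local_fusion_p
    return optimized_global

# Example from the text: global 80%, local fusion 75%, weight 0.6
# -> optimized global probability 0.6 * 0.80 + 0.4 * 0.75 = 0.78
```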
S110: and determining a final emotion prediction result of the target voice according to the optimized global emotion prediction result.
Because the global emotion prediction result of the target voice can in turn have a certain influence on the plurality of local emotion prediction results of the target voice, the server can optimize the local emotion prediction results through the global emotion prediction result to improve their accuracy. That is, for each local emotion prediction result, the server optimizes the local emotion prediction result according to the optimized global emotion prediction result to obtain an optimized local emotion prediction result, and then determines the final emotion prediction result of the target voice according to the optimized global emotion prediction result and the optimized local emotion prediction results. The global emotion prediction result includes the probability, that is, the confidence, that the target voice belongs to each emotion type, and each local emotion prediction result includes the probability, that is, the confidence, that the corresponding voice segment belongs to each emotion type. For example, a global emotion prediction result may indicate that the emotion of the user is anger 70%, aversion 15%, surprise 5%, sadness 5%, fear 3%, happiness 2%.
When the server optimizes a local emotion prediction result according to the optimized global emotion prediction result, the probability that the target voice belongs to each emotion type in the optimized global emotion prediction result is taken as the optimized global probability, and the probability that the voice segment belongs to each emotion type is taken as the local probability. For each emotion type, the optimized global probability of the emotion type and the local probability of the emotion type in the local emotion prediction result are weighted according to a preset weight to obtain the optimized local probability of the emotion type in the local emotion prediction result. The weight can be set as required. For example, if the preset weight is 0.9 and, for emotion type A, the optimized global probability is 80% and the local probability is 75%, then the optimized local probability of emotion type A is 75% × 0.9 + 80% × 0.1 = 75.5%.
After obtaining the optimized local probability of the emotion type in the local emotion prediction result, the server can obtain the optimized local emotion prediction result according to the optimized local probability of each emotion type in the local emotion prediction result. It should be noted that, for each local emotion prediction result, at least one emotion type prediction result in the local emotion prediction results is optimized, that is, at least the optimized global probability of one emotion type is fused with the local probability of the emotion type.
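The reverse refinement, in which the optimized global result adjusts each local result, might look like the following sketch under the same dictionary representation; the weight of 0.9 simply mirrors the numeric example above and is an assumption.

```python
# Illustrative sketch: refine one local prediction with the optimized global prediction.
# Each local probability is weighted against the optimized global probability
# (e.g. 0.9 * 0.75 + 0.1 * 0.80 = 0.755 for the example in the text).

def optimize_local(local_probs, optimized_global_probs, local_weight=0.9):
    return {
        emotion: local_weight * local_p + (1 - local_weight) * optimized_global_probs[emotion]
        for emotion, local_p in local_probs.items()
    }
```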
After the optimized local emotion prediction result is obtained, the server can determine a final emotion prediction result of the target voice according to the optimized global emotion prediction result and the optimized local emotion prediction result.
Specifically, the final emotion prediction result includes a final emotion first prediction result and a final emotion second prediction result. The emotion type corresponding to the maximum value of the global probability is selected from the optimized global emotion prediction result as the final emotion first prediction result of the target voice. For example, if the global probabilities in the optimized global emotion prediction result are anger 70%, aversion 15%, surprise 5%, sadness 5%, fear 3% and happiness 2%, the maximum global probability is 70% and the corresponding emotion type is anger, so anger is the final emotion first prediction result of the target voice.
For each optimized local emotion prediction result, the emotion type corresponding to the maximum value of the local probability is selected as a final emotion second prediction result of the target voice. For example, if in an optimized local emotion prediction result the local probabilities are happiness 80%, surprise 6%, aversion 5%, anger 5% and sadness 4%, then happiness is the final emotion second prediction result for that segment. The server then determines the final emotion prediction result of the target voice according to the final emotion first prediction result of the target voice and the final emotion second prediction results of the target voice.
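Selecting the final first and second prediction results can be sketched as below; how the two are combined into a single final result is left open by the text, so the sketch only returns both.

```python
# Illustrative sketch: pick the final first prediction from the optimized global
# result and a final second prediction from each optimized local result.

def final_predictions(optimized_global_probs, optimized_local_probs_list):
    first = max(optimized_global_probs, key=optimized_global_probs.get)
    seconds = [max(local, key=local.get) for local in optimized_local_probs_list]
    # combining the first and second results into one final result is not
    # specified here and is left to the calling service
    return first, seconds
```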
Based on the voice emotion recognition method shown in fig. 1, the method can output local emotion prediction results through the emotion prediction model, and because a plurality of local emotion prediction results of the target voice can generate a certain influence on the global emotion prediction result of the target voice, the global emotion prediction result and the local emotion prediction result are fused to optimize the global emotion prediction result, so that the accuracy of the final emotion prediction result is improved.
After performing step S110, the server may present the final emotion prediction result to the user. In addition, the server may also present the global and local prediction results to the user, for example, using a histogram or pie chart to show the confidence of each emotion type in the global prediction result. Fig. 2 is a schematic diagram showing a local prediction result provided in the present specification. As shown in fig. 2, for the local prediction results of the target voice, the server may label each preset-length segment and the confidence of each emotion type within it, so as to display the change of the user's emotion over time.
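Purely as an illustration of such a display, the sketch below plots the confidence of every emotion type across the segments; a line chart is used here instead of the histogram or pie chart mentioned above, and matplotlib is an assumed choice of plotting library.

```python
# Illustrative sketch: plot the confidence of each emotion type for every
# fixed-length segment, so the change of the user's emotion over time is visible.
import matplotlib.pyplot as plt

def plot_local_results(optimized_local_probs_list, segment_seconds=1.0):
    emotions = sorted(optimized_local_probs_list[0])
    times = [i * segment_seconds for i in range(len(optimized_local_probs_list))]
    for emotion in emotions:
        plt.plot(times, [p[emotion] for p in optimized_local_probs_list], label=emotion)
    plt.xlabel("segment start time (s)")
    plt.ylabel("confidence")
    plt.legend()
    plt.show()
```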
The present disclosure also provides a training method for the emotion prediction model, and fig. 3 is a schematic diagram of the internal structure of the emotion prediction model provided in the present disclosure, as shown in fig. 3.
When training the emotion prediction model, the server can acquire sample voice and the emotion mark of the sample voice, and then input the sample voice into the emotion prediction model to determine an emotion prediction result of the sample voice according to the emotion prediction model. Then, the difference between the emotion prediction result and the emotion mark corresponding to the sample voice is determined. Finally, the emotion prediction model is trained according to the difference.
As shown in FIG. 3, the emotion prediction model includes a backbone network, a feature mapping module, and a classifier. When the emotion prediction model is trained, the sample voice is input into the emotion prediction model, and the backbone network in the emotion prediction model obtains the frame-level emotion features of the sample voice. The backbone network may be a convolutional neural network (Convolutional Neural Network, CNN), a recurrent neural network (Recurrent Neural Network, RNN), a convolutional recurrent neural network (Convolutional Recurrent Neural Network, CRNN), or the like, and may be either a non-pre-trained or a pre-trained model. After the frame-level emotion features of the sample voice are obtained, the feature mapping module maps the frame-level emotion features to a segment-level feature of the whole voice by means of average pooling, max pooling, or the like. Finally, the classifier maps the segment-level feature to the predicted probability of each emotion type, which can be realized through a fully connected layer and a Softmax layer.
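The three-part structure and a single training step might be sketched in PyTorch as follows; the 1-D CNN backbone, the layer sizes, the optimizer, and the feature dimensions are illustrative assumptions rather than the structure fixed by the specification.

```python
# Illustrative PyTorch sketch of the structure described above: a backbone
# producing frame-level features, a pooling-based feature mapping module, and a
# fully connected classifier (softmax is implicit in CrossEntropyLoss during training).
import torch
import torch.nn as nn

class EmotionPredictionModel(nn.Module):
    def __init__(self, n_mels=80, hidden=256, num_emotions=7):
        super().__init__()
        # backbone: frame-level emotion features (a CNN here; an RNN, CRNN or
        # pre-trained encoder would also fit the description)
        self.backbone = nn.Sequential(
            nn.Conv1d(n_mels, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # feature mapping: average pooling over frames -> segment-level feature
        self.pool = nn.AdaptiveAvgPool1d(1)
        # classifier: fully connected layer mapping to emotion-type scores
        self.classifier = nn.Linear(hidden, num_emotions)

    def forward(self, features):                # features: (batch, n_mels, frames)
        frame_feats = self.backbone(features)
        segment_feat = self.pool(frame_feats).squeeze(-1)
        return self.classifier(segment_feat)

# One illustrative training step on a labelled sample
model = EmotionPredictionModel()
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

sample = torch.randn(1, 80, 200)   # stands in for the sample-voice features
label = torch.tensor([2])          # stands in for the emotion mark
loss = loss_fn(model(sample), label)   # the "difference" between prediction and mark
optimizer.zero_grad()
loss.backward()
optimizer.step()
```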
In one or more embodiments of the present disclosure, the above training process is performed only to obtain the emotion prediction model. When the server uses the emotion prediction model to obtain the global emotion prediction result and the local emotion prediction results of the target voice, it only needs to input the plurality of voice segments and the target voice into the emotion prediction model. In addition, the subsequent process in which the global emotion prediction result and the local emotion prediction results optimize each other through their mutual influence does not involve the emotion prediction model, so the emotion prediction model only needs to be trained for the purpose of producing emotion prediction results.
In addition, an audio segmentation module can be added to the emotion prediction model to segment the target voice into voice segments of the preset length before the subsequent steps are carried out. For example, the target voice may be segmented with a sliding window, that is, the length of the sliding window is set and the target voice is segmented once per sliding-window length, so as to obtain a plurality of voice segments, which is not limited in this specification.
The foregoing is a method implemented by one or more embodiments of the present disclosure, and based on the same concept, the present disclosure further provides a corresponding apparatus for speech emotion recognition, as shown in fig. 4.
Fig. 4 is a schematic diagram of a device for speech emotion recognition provided in the present specification, including:
a target voice acquisition module 400, configured to acquire target voice;
a voice segment obtaining module 402, configured to select a plurality of voice segments with preset lengths from the target voice;
a prediction result obtaining module 404, configured to input, for each speech segment, the speech segment into a pre-trained emotion prediction model, so as to obtain, according to the emotion prediction model, a local emotion prediction result corresponding to the speech segment; inputting the target voice into the emotion prediction model to obtain a global emotion prediction result of the target voice according to the emotion prediction model;
the optimizing module 406 is configured to fuse the global emotion prediction result with at least one local emotion prediction result, so as to obtain an optimized global emotion prediction result;
and the final result determining module 408 is configured to determine a final emotion prediction result of the target speech according to the optimized global emotion prediction result.
Optionally, the global emotion prediction result includes probabilities that the target voice belongs to each emotion type respectively, and the local emotion prediction result includes probabilities that the voice segment belongs to each emotion type respectively; the optimization module 406 is specifically configured to take the probability that the target voice belongs to each emotion type as a global probability and the probability that the voice segment belongs to each emotion type as a local probability; for each emotion type, determine the maximum value of the local probability of the emotion type among the local probabilities of the at least one local emotion prediction result as the local fusion probability of the emotion type; and, for each emotion type, weight the global probability of the emotion type and the local fusion probability of the emotion type according to a preset fusion weight.
Optionally, the final result determining module 408 is specifically configured to optimize, for each local emotion prediction result, the local emotion prediction result according to the optimized global emotion prediction result, so as to obtain an optimized local emotion prediction result; and determining a final emotion prediction result of the target voice according to the optimized global emotion prediction result and the optimized local emotion prediction result.
Optionally, the global emotion prediction result includes probabilities that the target voice belongs to each emotion type respectively; the local emotion prediction result comprises probabilities that the voice fragments belong to each emotion type respectively; the final result determining module 408 is specifically configured to optimize the local emotion prediction result according to the optimized global emotion prediction result, and specifically includes: aiming at the optimized global emotion prediction result, taking the probability that the target voice belongs to each emotion type as the optimized global probability, and taking the probability that the voice fragment belongs to each emotion type as the local probability; weighting the optimized global probability of the emotion type and the local probability of the emotion type in the local emotion prediction result according to preset weights for each emotion type to obtain the optimized local probability of the emotion type in the local emotion prediction result; and obtaining the optimized local emotion prediction result according to the optimized local probability of each emotion type in the local emotion prediction result.
Optionally, the final result determining module 408 is specifically configured to take, as a global probability, a probability that the target speech belongs to each emotion type, and take, as a local probability, a probability that the speech segment belongs to each emotion type; selecting an emotion type corresponding to the maximum value of the global probability from the optimized global emotion prediction results, and taking the emotion type as a final emotion first prediction result of the target voice; selecting an emotion type corresponding to the maximum value of the local probability as a final emotion second prediction result of the target voice aiming at each optimized local prediction result; and determining the final emotion prediction result of the target voice according to the final emotion first prediction result of the target voice and the final emotion second prediction result of the target voice.
Optionally, the apparatus further comprises:
the model training module 410 is configured to acquire sample voice and an emotion mark of the sample voice; input the sample voice into an emotion prediction model to determine an emotion prediction result of the sample voice according to the emotion prediction model; determine the difference between the emotion prediction result and the emotion mark corresponding to the sample voice; and train the emotion prediction model according to the difference.
The present specification also provides a computer readable storage medium storing a computer program operable to perform a method of speech emotion recognition as provided in fig. 1 above.
The present specification also provides a schematic structural diagram of the electronic device shown in fig. 5, which corresponds to fig. 1. At the hardware level, as shown in fig. 5, the electronic device includes a processor, an internal bus, a network interface, a memory, and a nonvolatile storage, and may of course include hardware required by other services. The processor reads the corresponding computer program from the non-volatile memory into the memory and then runs the computer program to implement a method for speech emotion recognition as described above with respect to fig. 1.
Of course, other implementations, such as logic devices or combinations of hardware and software, are not excluded from the present description, that is, the execution subject of the following processing flows is not limited to each logic unit, but may be hardware or logic devices.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, a transistor or a switch) or an improvement in software (an improvement to a method flow). However, with the development of technology, many improvements of method flows today can be regarded as direct improvements of hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (for example, a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a single PLD, without requiring a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually manufacturing integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code to be compiled must likewise be written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing a logic method flow can be readily obtained merely by slightly logically programming the method flow into an integrated circuit using one of the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller; examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing a controller purely in computer readable program code, it is entirely possible to implement the same functionality by logically programming the method steps so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may therefore be regarded as a kind of hardware component, and the means included in it for performing various functions may also be regarded as structures within the hardware component. Or even the means for performing various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present specification.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present description is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media, including both non-transitory and non-transitory, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprise", "include" or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing is merely exemplary of the present disclosure and is not intended to limit the disclosure. Various modifications and alterations to this specification will become apparent to those skilled in the art. Any modifications, equivalent substitutions, improvements, or the like, which are within the spirit and principles of the present description, are intended to be included within the scope of the claims of the present description.
Claims (7)
1. A method of speech emotion recognition, the method comprising:
acquiring target voice;
selecting a plurality of voice fragments with preset lengths from the target voice;
inputting each voice segment into a pre-trained emotion prediction model to obtain a local emotion prediction result corresponding to the voice segment according to the emotion prediction model; inputting the target voice into the emotion prediction model to obtain a global emotion prediction result of the target voice according to the emotion prediction model;
fusing the global emotion prediction result with at least one local emotion prediction result to obtain an optimized global emotion prediction result;
determining a final emotion prediction result of the target voice according to the optimized global emotion prediction result;
determining a final emotion prediction result of the target voice according to the optimized global emotion prediction result specifically comprises:
optimizing the local emotion prediction results according to the optimized global emotion prediction results aiming at each local emotion prediction result to obtain optimized local emotion prediction results;
Determining a final emotion prediction result of the target voice according to the optimized global emotion prediction result and the optimized local emotion prediction result;
the global emotion prediction result comprises probabilities that the target voice belongs to each emotion type respectively;
the local emotion prediction result comprises probabilities that the voice fragments belong to each emotion type respectively;
optimizing the local emotion prediction result according to the optimized global emotion prediction result, specifically including:
aiming at the optimized global emotion prediction result, taking the probability that the target voice belongs to each emotion type as the optimized global probability, and taking the probability that the voice fragment belongs to each emotion type as the local probability;
for each emotion type, weighting the optimized global probability of the emotion type and the local probability of the emotion type in the local emotion prediction result according to preset weights, to obtain an optimized local probability of the emotion type in the local emotion prediction result;
obtaining an optimized local emotion prediction result according to the optimized local probability of each emotion type in the local emotion prediction result;
wherein determining the final emotion prediction result of the target voice according to the optimized global emotion prediction result and the optimized local emotion prediction result specifically comprises:
taking the probability that the target voice belongs to each emotion type as global probability and taking the probability that the voice fragment belongs to each emotion type as local probability;
selecting, from the optimized global emotion prediction result, the emotion type corresponding to the maximum global probability as a first final emotion prediction result of the target voice;
for each optimized local emotion prediction result, selecting the emotion type corresponding to the maximum local probability as a second final emotion prediction result of the target voice;
and determining the final emotion prediction result of the target voice according to the first final emotion prediction result of the target voice and the second final emotion prediction result of the target voice.
2. The method of claim 1, wherein the global emotion prediction result comprises probabilities that the target voice belongs to each emotion type, respectively;
the local emotion prediction result comprises probabilities that the voice fragments belong to each emotion type respectively;
wherein fusing the global emotion prediction result with at least one local emotion prediction result specifically comprises:
taking the probability that the target voice belongs to each emotion type as global probability and taking the probability that the voice fragment belongs to each emotion type as local probability;
for each emotion type, determining the maximum value among the local probabilities of the emotion type in the at least one local emotion prediction result as the local fusion probability of the emotion type;
and, for each emotion type, weighting the global probability of the emotion type and the local fusion probability of the emotion type according to preset fusion weights.
3. The method of claim 1, wherein training the emotion prediction model comprises:
acquiring a sample voice and an emotion label of the sample voice;
inputting the sample voice into the emotion prediction model to determine an emotion prediction result of the sample voice according to the emotion prediction model;
determining a difference between the emotion prediction result and the emotion label corresponding to the sample voice;
and training the emotion prediction model according to the difference.
4. An apparatus for speech emotion recognition, said apparatus comprising:
the target voice acquisition module is used for acquiring target voice;
the voice fragment acquisition module is used for selecting a plurality of voice fragments of a preset length from the target voice;
the prediction result acquisition module is used for inputting each voice fragment into a pre-trained emotion prediction model so as to acquire a local emotion prediction result corresponding to the voice fragment according to the emotion prediction model, and for inputting the target voice into the emotion prediction model so as to obtain a global emotion prediction result of the target voice according to the emotion prediction model;
the optimization module is used for fusing the global emotion prediction result with at least one local emotion prediction result to obtain an optimized global emotion prediction result;
the final result determining module is used for determining a final emotion prediction result of the target voice according to the optimized global emotion prediction result;
the final result determining module is specifically configured to: for each local emotion prediction result, optimize the local emotion prediction result according to the optimized global emotion prediction result to obtain an optimized local emotion prediction result; and determine the final emotion prediction result of the target voice according to the optimized global emotion prediction result and the optimized local emotion prediction result;
The global emotion prediction result comprises probabilities that the target voice belongs to each emotion type respectively;
the local emotion prediction result comprises probabilities that the voice fragments belong to each emotion type respectively;
the final result determining module is specifically configured to optimize the local emotion prediction result according to the optimized global emotion prediction result, which specifically comprises: for the optimized global emotion prediction result, taking the probability that the target voice belongs to each emotion type as an optimized global probability, and taking the probability that the voice fragment belongs to each emotion type as a local probability; for each emotion type, weighting the optimized global probability of the emotion type and the local probability of the emotion type in the local emotion prediction result according to preset weights, to obtain an optimized local probability of the emotion type in the local emotion prediction result; and obtaining an optimized local emotion prediction result according to the optimized local probability of each emotion type in the local emotion prediction result;
the final result determining module is specifically configured to take the probability that the target voice belongs to each emotion type as a global probability and the probability that the voice fragment belongs to each emotion type as a local probability; to select, from the optimized global emotion prediction result, the emotion type corresponding to the maximum global probability as a first final emotion prediction result of the target voice; to select, for each optimized local emotion prediction result, the emotion type corresponding to the maximum local probability as a second final emotion prediction result of the target voice; and to determine the final emotion prediction result of the target voice according to the first final emotion prediction result of the target voice and the second final emotion prediction result of the target voice.
5. The apparatus of claim 4, wherein the global emotion prediction result comprises probabilities that the target voice belongs to each emotion type, respectively; the local emotion prediction result comprises probabilities that the voice fragment belongs to each emotion type, respectively; and the optimization module is specifically configured to: take the probability that the target voice belongs to each emotion type as a global probability, and the probability that the voice fragment belongs to each emotion type as a local probability; for each emotion type, determine the maximum value among the local probabilities of the emotion type in the at least one local emotion prediction result as the local fusion probability of the emotion type; and, for each emotion type, weight the global probability of the emotion type and the local fusion probability of the emotion type according to preset fusion weights.
6. A computer readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of any of the preceding claims 1-3.
7. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of the preceding claims 1-3 when executing the program.
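The segment-selection step of claim 1 can be pictured with a minimal Python sketch: cut a fixed number of equal-length windows out of the target waveform. The 2-second window, the 16 kHz sampling rate, and the evenly spaced start positions are assumptions for illustration only; the claim requires nothing beyond "a plurality of voice fragments of a preset length".

```python
# Minimal sketch of fragment selection, assuming 16 kHz audio and 2-second windows.
import numpy as np

SAMPLE_RATE = 16000       # assumed sampling rate
SEGMENT_SECONDS = 2.0     # assumed preset fragment length

def select_fragments(waveform: np.ndarray, num_fragments: int) -> list[np.ndarray]:
    seg_len = int(SEGMENT_SECONDS * SAMPLE_RATE)
    if len(waveform) < seg_len:
        # pad short utterances so at least one full-length fragment exists
        waveform = np.pad(waveform, (0, seg_len - len(waveform)))
    # evenly spaced start positions across the utterance (one possible policy)
    starts = np.linspace(0, len(waveform) - seg_len, num_fragments).astype(int)
    return [waveform[s:s + seg_len] for s in starts]

# Example: a 5-second utterance cut into four 2-second fragments.
fragments = select_fragments(np.random.randn(5 * SAMPLE_RATE), num_fragments=4)
print([len(f) for f in fragments])  # four fragments of 32000 samples each
```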
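The fusion and decision steps of claims 1 and 2 can be sketched with NumPy: per emotion type, the maximum local probability over all fragments becomes the local fusion probability, which is weighted with the global probability; each local result is then re-weighted with the optimized global result; finally the maximum-probability emotion types are picked. The weight values ALPHA and BETA and the majority-vote combination of the first and second final prediction results are assumptions, since the claims only require "preset" weights and leave the final combination rule open.

```python
# Minimal sketch of the probability fusion and final decision, under assumed weights.
import numpy as np

ALPHA = 0.6  # assumed preset fusion weight for the global probabilities (claim 2)
BETA = 0.5   # assumed preset weight for optimizing each local result (claim 1)

def fuse_global(global_probs: np.ndarray, local_probs: np.ndarray) -> np.ndarray:
    """Per emotion type, take the max local probability over all fragments as the
    local fusion probability, then weight it with the global probability."""
    local_fusion = local_probs.max(axis=0)                  # shape: (num_emotions,)
    return ALPHA * global_probs + (1 - ALPHA) * local_fusion

def optimize_locals(optimized_global: np.ndarray, local_probs: np.ndarray) -> np.ndarray:
    """Weight the optimized global probability of each emotion type with the local
    probability of that type, for every local emotion prediction result."""
    return BETA * optimized_global[None, :] + (1 - BETA) * local_probs

def final_prediction(optimized_global: np.ndarray, optimized_locals: np.ndarray) -> int:
    """First prediction from the optimized global result, second predictions from the
    optimized local results, combined here by majority vote (an assumed rule)."""
    first = int(optimized_global.argmax())
    seconds = optimized_locals.argmax(axis=1).tolist()
    votes = [first] + seconds
    return max(set(votes), key=votes.count)

# Example: 3 fragments, 4 emotion types.
global_probs = np.array([0.40, 0.30, 0.20, 0.10])
local_probs = np.array([[0.35, 0.40, 0.15, 0.10],
                        [0.25, 0.45, 0.20, 0.10],
                        [0.50, 0.20, 0.20, 0.10]])
opt_global = fuse_global(global_probs, local_probs)
opt_locals = optimize_locals(opt_global, local_probs)
print(final_prediction(opt_global, opt_locals))  # index of the predicted emotion type
```

With these example numbers the optimized global probabilities become [0.44, 0.36, 0.20, 0.10], so both the global vote and two of the three local votes select emotion index 0.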
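The training procedure of claim 3 reads as a standard supervised loop: predict emotions for sample voice, measure the difference from the emotion labels, and update the model on that difference. A minimal PyTorch-style sketch is given below; the feature dimension, the two-layer network, and the cross-entropy loss are assumptions not specified by the claim.

```python
# Minimal sketch of the training step in claim 3, with an assumed stand-in model.
import torch
import torch.nn as nn

NUM_EMOTIONS = 4
FEATURE_DIM = 40  # assumed per-utterance acoustic feature size

model = nn.Sequential(                 # stand-in emotion prediction model
    nn.Linear(FEATURE_DIM, 64),
    nn.ReLU(),
    nn.Linear(64, NUM_EMOTIONS),
)
criterion = nn.CrossEntropyLoss()      # difference between prediction and emotion label
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(sample_features: torch.Tensor, emotion_labels: torch.Tensor) -> float:
    """One update: forward pass, loss on the prediction/label difference, backward pass."""
    logits = model(sample_features)            # emotion prediction result for the samples
    loss = criterion(logits, emotion_labels)   # difference from the emotion labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example with random stand-in data: 8 sample utterances.
features = torch.randn(8, FEATURE_DIM)
labels = torch.randint(0, NUM_EMOTIONS, (8,))
print(train_step(features, labels))
```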
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310705248.7A CN116434787B (en) | 2023-06-14 | 2023-06-14 | Voice emotion recognition method and device, storage medium and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310705248.7A CN116434787B (en) | 2023-06-14 | 2023-06-14 | Voice emotion recognition method and device, storage medium and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116434787A (en) | 2023-07-14 |
CN116434787B (en) | 2023-09-08 |
Family
ID=87092949
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310705248.7A Active CN116434787B (en) | 2023-06-14 | 2023-06-14 | Voice emotion recognition method and device, storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116434787B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118645124A (en) * | 2024-08-09 | 2024-09-13 | 中国科学技术大学 | Voice emotion recognition method, device, equipment and storage medium |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103531206A (en) * | 2013-09-30 | 2014-01-22 | 华南理工大学 | Voice affective characteristic extraction method capable of combining local information and global information |
CN110992988A (en) * | 2019-12-24 | 2020-04-10 | 东南大学 | Speech emotion recognition method and device based on domain confrontation |
CN111564164A (en) * | 2020-04-01 | 2020-08-21 | 中国电力科学研究院有限公司 | Multi-mode emotion recognition method and device |
KR20200109958A (en) * | 2019-03-15 | 2020-09-23 | 숭실대학교산학협력단 | Method of emotion recognition using audio signal, computer readable medium and apparatus for performing the method |
CN112487824A (en) * | 2020-11-19 | 2021-03-12 | 平安科技(深圳)有限公司 | Customer service speech emotion recognition method, device, equipment and storage medium |
CN112489687A (en) * | 2020-10-28 | 2021-03-12 | 深兰人工智能芯片研究院(江苏)有限公司 | Speech emotion recognition method and device based on sequence convolution |
CN113255755A (en) * | 2021-05-18 | 2021-08-13 | 北京理工大学 | Multi-modal emotion classification method based on heterogeneous fusion network |
CN114387996A (en) * | 2022-01-14 | 2022-04-22 | 普强时代(珠海横琴)信息技术有限公司 | Emotion recognition method, device, equipment and storage medium |
CN114387997A (en) * | 2022-01-21 | 2022-04-22 | 合肥工业大学 | Speech emotion recognition method based on deep learning |
KR20220098991A (en) * | 2021-01-05 | 2022-07-12 | 세종대학교산학협력단 | Method and apparatus for recognizing emtions based on speech signal |
CN114821740A (en) * | 2022-05-17 | 2022-07-29 | 中国科学技术大学 | Multi-mode information fusion-based emotion recognition method and device and electronic equipment |
CN115312080A (en) * | 2022-08-09 | 2022-11-08 | 南京工业大学 | Voice emotion recognition model and method based on complementary acoustic characterization |
WO2023065619A1 (en) * | 2021-10-21 | 2023-04-27 | 北京邮电大学 | Multi-dimensional fine-grained dynamic sentiment analysis method and system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016195474A1 (en) * | 2015-05-29 | 2016-12-08 | Charles Vincent Albert | Method for analysing comprehensive state of a subject |
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103531206A (en) * | 2013-09-30 | 2014-01-22 | 华南理工大学 | Voice affective characteristic extraction method capable of combining local information and global information |
KR20200109958A (en) * | 2019-03-15 | 2020-09-23 | 숭실대학교산학협력단 | Method of emotion recognition using audio signal, computer readable medium and apparatus for performing the method |
CN110992988A (en) * | 2019-12-24 | 2020-04-10 | 东南大学 | Speech emotion recognition method and device based on domain confrontation |
CN111564164A (en) * | 2020-04-01 | 2020-08-21 | 中国电力科学研究院有限公司 | Multi-mode emotion recognition method and device |
CN112489687A (en) * | 2020-10-28 | 2021-03-12 | 深兰人工智能芯片研究院(江苏)有限公司 | Speech emotion recognition method and device based on sequence convolution |
CN112487824A (en) * | 2020-11-19 | 2021-03-12 | 平安科技(深圳)有限公司 | Customer service speech emotion recognition method, device, equipment and storage medium |
KR20220098991A (en) * | 2021-01-05 | 2022-07-12 | 세종대학교산학협력단 | Method and apparatus for recognizing emtions based on speech signal |
CN113255755A (en) * | 2021-05-18 | 2021-08-13 | 北京理工大学 | Multi-modal emotion classification method based on heterogeneous fusion network |
WO2023065619A1 (en) * | 2021-10-21 | 2023-04-27 | 北京邮电大学 | Multi-dimensional fine-grained dynamic sentiment analysis method and system |
CN114387996A (en) * | 2022-01-14 | 2022-04-22 | 普强时代(珠海横琴)信息技术有限公司 | Emotion recognition method, device, equipment and storage medium |
CN114387997A (en) * | 2022-01-21 | 2022-04-22 | 合肥工业大学 | Speech emotion recognition method based on deep learning |
CN114821740A (en) * | 2022-05-17 | 2022-07-29 | 中国科学技术大学 | Multi-mode information fusion-based emotion recognition method and device and electronic equipment |
CN115312080A (en) * | 2022-08-09 | 2022-11-08 | 南京工业大学 | Voice emotion recognition model and method based on complementary acoustic characterization |
Non-Patent Citations (1)
Title |
---|
Speech Emotion Recognition with Local-Global Aware Deep Representation Learning; Jiaxing Liu et al.; ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); full text *
Also Published As
Publication number | Publication date |
---|---|
CN116434787A (en) | 2023-07-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112990375B (en) | Model training method and device, storage medium and electronic equipment | |
CN112735407B (en) | Dialogue processing method and device | |
CN116663618B (en) | Operator optimization method and device, storage medium and electronic equipment | |
CN116434787B (en) | Voice emotion recognition method and device, storage medium and electronic equipment | |
CN112417093B (en) | Model training method and device | |
CN115563366A (en) | Model training and data analysis method, device, storage medium and equipment | |
CN116312480A (en) | Voice recognition method, device, equipment and readable storage medium | |
CN117409466B (en) | Three-dimensional dynamic expression generation method and device based on multi-label control | |
CN113887206B (en) | Model training and keyword extraction method and device | |
CN116578877B (en) | Method and device for model training and risk identification of secondary optimization marking | |
CN116186330B (en) | Video deduplication method and device based on multi-mode learning | |
CN116308738B (en) | Model training method, business wind control method and device | |
CN115017915B (en) | Model training and task execution method and device | |
CN114120273A (en) | Model training method and device | |
CN113344590A (en) | Method and device for model training and complaint rate estimation | |
CN118098266B (en) | Voice data processing method and device based on multi-model selection | |
CN115862675B (en) | Emotion recognition method, device, equipment and storage medium | |
CN117576522B (en) | Model training method and device based on mimicry structure dynamic defense | |
CN116384515B (en) | Model training method and device, storage medium and electronic equipment | |
CN116186272B (en) | Combined training method and device, storage medium and electronic equipment | |
CN117351946B (en) | Voice recognition method and device, storage medium and electronic equipment | |
CN114972909B (en) | Model training method, map construction method and map construction device | |
CN116340852B (en) | Model training and business wind control method and device | |
CN117520850A (en) | Model training method and device, storage medium and electronic equipment | |
CN116543759A (en) | Speech recognition processing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |