CN108899050A - Speech signal analysis subsystem based on a multi-modal emotion recognition system - Google Patents

Speech signal analysis subsystem based on a multi-modal emotion recognition system

Info

Publication number
CN108899050A
CN108899050A (application No. CN201810612829.5A)
Authority
CN
China
Prior art keywords
mood
emotion identification
subsystem
analysis
emotion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810612829.5A
Other languages
Chinese (zh)
Other versions
CN108899050B (en)
Inventor
俞旸
凌志辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Yun Si Powerise Mdt Infotech Ltd
Original Assignee
Nanjing Yun Si Powerise Mdt Infotech Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Yun Si Powerise Mdt Infotech Ltd filed Critical Nanjing Yun Si Powerise Mdt Infotech Ltd
Priority to CN201810612829.5A priority Critical patent/CN108899050B/en
Publication of CN108899050A publication Critical patent/CN108899050A/en
Application granted granted Critical
Publication of CN108899050B publication Critical patent/CN108899050B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
      • G06 - COMPUTING; CALCULATING OR COUNTING
        • G06F - ELECTRIC DIGITAL DATA PROCESSING
          • G06F18/00 - Pattern recognition
            • G06F18/20 - Analysing
              • G06F18/25 - Fusion techniques
                • G06F18/253 - Fusion techniques of extracted features
        • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N3/00 - Computing arrangements based on biological models
            • G06N3/02 - Neural networks
              • G06N3/04 - Architecture, e.g. interconnection topology
                • G06N3/045 - Combinations of networks
        • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V20/00 - Scenes; Scene-specific elements
            • G06V20/40 - Scenes; Scene-specific elements in video content
              • G06V20/49 - Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
          • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
            • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
              • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
                • G06V40/174 - Facial expression recognition
            • G06V40/20 - Movements or behaviour, e.g. gesture recognition
      • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
        • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
          • G10L15/00 - Speech recognition
            • G10L15/08 - Speech classification or search
              • G10L15/18 - Speech classification or search using natural language modelling
                • G10L15/1822 - Parsing for meaning understanding
            • G10L15/24 - Speech recognition using non-acoustical features
              • G10L15/25 - Speech recognition using non-acoustical features using position of the lips, movement of the lips or face analysis
          • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
            • G10L25/27 - Speech or voice analysis techniques characterised by the analysis technique
              • G10L25/30 - Speech or voice analysis techniques characterised by the analysis technique using neural networks
            • G10L25/48 - Speech or voice analysis techniques specially adapted for particular use
              • G10L25/51 - Speech or voice analysis techniques specially adapted for particular use for comparison or discrimination
                • G10L25/63 - Speech or voice analysis techniques specially adapted for particular use for comparison or discrimination for estimating an emotional state

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Signal Processing (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computing Systems (AREA)
  • Psychiatry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Child & Adolescent Psychology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Hospice & Palliative Care (AREA)
  • Social Psychology (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a speech signal analysis subsystem based on a multi-modal emotion recognition system. The subsystem comprises data acquisition equipment and output equipment, and is characterized in that it further comprises an emotion analysis software system, which performs comprehensive analysis and judgment on the data obtained by the data acquisition equipment and finally outputs the result to the output equipment; the emotion analysis software system includes an emotion recognition subsystem based on speech signals. The invention breaks through and links up emotion recognition across five single modalities: it innovatively uses deep neural networks to encode the information of the individual modalities, deeply associates and interprets it, and then makes a comprehensive judgment, which greatly improves accuracy and makes the system suitable for most general interview and interaction application scenarios.

Description

Speech signal analysis subsystem based on a multi-modal emotion recognition system
Technical field
The present invention relates to the technical field of emotion recognition, and in particular to a speech signal analysis subsystem of a multi-modal emotion recognition system that involves machine learning, deep learning, computer vision, natural language processing, speech recognition, human action recognition, contactless physiological detection and the like.
Background art
Emotion recognition is a technology for judging changes in a person's emotion, mainly by collecting changes in the person's external expression and behavior and inferring the person's psychological state. In modern society, emotion recognition technology is widely used in smart device development, sales-guide robots, health management, advertising and marketing, and so on. Emotion is a state that combines a person's feelings, thoughts and behavior; it includes the psychological reaction to external or self-generated stimuli, as well as the physiological reaction accompanying that psychological reaction. In various human-computer interaction systems (such as robots or interrogation systems), if the system can recognize the emotional state of a person, the interaction between human and machine becomes more friendly and natural. The analysis and recognition of emotion is therefore an important interdisciplinary research topic in fields such as neuroscience, psychology, cognitive science, computer science and artificial intelligence.
Research on emotion has a long history and has employed many different methods. In recent years, with the application and popularization of EEG acquisition equipment, the rapid development of signal processing and machine learning techniques, and the substantial improvement of computing power, emotion recognition based on EEG has become a hot topic in neural engineering and biomedical engineering.
Corresponding to the different ways of inducing emotion, emotion recognition methods also differ. Common methods fall into two major classes: recognition based on non-physiological signals and recognition based on physiological signals. Emotion recognition based on non-physiological signals mainly includes the recognition of facial expression and of speech intonation. Facial expression recognition identifies different emotions according to the correspondence between expression and emotion: under a specific emotional state a person produces specific facial muscle movements and expression patterns; for example, the corners of the mouth turn up and ring-shaped folds appear around the eyes when one is cheerful, while one frowns and opens the eyes wide when angry. At present, facial expression recognition is mostly implemented with image recognition methods. Speech intonation recognition relies on the differences in how people express language under different emotional states; for example, the intonation of speech is brighter when one is cheerful and duller when one is irritated. The advantage of methods based on non-physiological signals is that they are easy to operate and require no special equipment. The disadvantage is that the reliability of the recognition cannot be guaranteed, because a person can mask the true emotion by disguising facial expression and intonation, and such disguise is often hard to detect. In addition, for disabled persons with certain special conditions, methods based on non-physiological signals are often difficult to apply.
Because EEG signals are very weak, a high-gain amplifier must be used to amplify them during acquisition. Current commercial EEG amplifiers are generally bulky, which is unfavorable for portable use. Chip-scale EEG amplifiers have appeared recently and can effectively solve the size problem, but their cost is still high and they remain some distance from practical use.
It is therefore apparent that emotion recognition methods based on physiological signals require complex and expensive measurement and acquisition systems to obtain accurate biological signals; they cannot be applied in large-scale scenarios, and in special scenarios such as criminal investigation and interrogation, where covert measurement is required, these methods are unavailable.
Because emotion is an individual's subjective conscious experience of and feeling about environmental stimuli, with both psychological and physiological characteristics, the inner feeling cannot be observed directly; it can, however, be inferred from externally visible behavior or physiological changes, which is the approach most favored today. Among such methods, most emotion recognition focuses on the recognition of facial expression, mainly by means of the movements of the large muscle groups of the face, without integrating a person's expression, spoken text, posture, speech intonation, physiological characteristics and so on.
In the prior art, for example, "Multi-modal intelligent emotion sensing system", publication No. CN 107220591 A, refers to a multi-modal intelligent emotion sensing system comprising an acquisition module, a recognition module and a fusion module. The recognition module includes an expression-based emotion recognition unit, a speech-based emotion recognition unit, a behavior-based emotion recognition unit and a physiological-signal-based emotion recognition unit; each emotion recognition unit in the recognition module recognizes the multi-modal information to obtain an emotion component, the emotion component including emotion type and emotion intensity, and the fusion module fuses the emotion components of the recognition module to realize accurate perception of human emotion.
Summary of the invention
Aiming at the problems existing in the prior art, the present invention innovatively proposes an emotion recognition method and system that integrates the five modalities of facial expression, text, speech, posture and physiological signal. Compared with previous similar invention patents (for example, publication No. CN 107220591 A), the present invention makes fundamental breakthroughs in the following aspects.
1. Wearable devices are not required in the present invention; we innovatively propose that only a video recording and the speech signal need to be obtained.
2. For the extraction of physiological-signal features, the present invention uses an innovative non-contact amplification of weak features, which greatly reduces cost and improves the ease of use of the product.
3. On the basis of basic text emotion analysis, the present invention also proposes comprehensive emotion analysis over multi-turn dialogue. This innovation not only improves the emotion analysis of each local dialogue unit, but also provides a comprehensive grasp of the emotion of the whole dialogue process.
4. On the basis of action recognition, the present invention also innovatively introduces emotion recognition based on human posture, in which the changes of a person's main postures are identified as key nodes.
5. When combining the individual modalities into an overall emotion recognition, the present invention innovatively proposes emotion correspondence, association and time-series reasoning based on the basic RNN (recurrent neural network).
In order to achieve the above objects, the technical solution adopted by the present invention is: a speech signal analysis subsystem based on a multi-modal emotion recognition system, comprising data acquisition equipment and output equipment, characterized in that it further comprises an emotion analysis software system, which performs comprehensive analysis and reasoning on the data obtained by the data acquisition equipment and finally outputs the result to the output equipment; the emotion analysis software system includes an emotion recognition subsystem based on speech signals.
The above speech signal analysis subsystem based on the multi-modal emotion recognition system is further characterized in that: in the emotion recognition subsystem based on speech signals, acoustic parameters such as fundamental frequency, duration, voice quality and clarity serve as the speech feature quantities of emotion, an emotional speech database is established, and new speech feature quantities are continuously extracted, which is the basic method of speech emotion recognition.
The above speech signal analysis subsystem based on the multi-modal emotion recognition system is further characterized in that: the emotion recognition subsystem based on speech signals uses a model based on an MLP (multilayer perceptron) neural network to perform emotion recognition on the speech signal. First, the continuous speech signal is segmented to obtain small discrete sound units; these small units overlap, which allows the model to better analyze the current unit and to understand the preceding and following context units. The model then extracts the speech energy contour information. In the next step, the subsystem extracts the fundamental frequency (pitch) contour information; the tonal features are characterized and constructed from the fundamental-frequency features, and the pitch contour is extracted using an autocorrelation method.
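As an illustration of the segmentation and feature-extraction steps just described, the following Python sketch (an assumption for illustration, not the patented implementation; the frame length, overlap and sampling rate are hypothetical choices) splits a speech signal into overlapping units and computes a short-term energy contour and an autocorrelation-based pitch estimate per unit:

```python
import numpy as np

def frame_signal(x, sr, frame_ms=40, hop_ms=10):
    """Cut a continuous signal into overlapping short units (frames)."""
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    n = 1 + max(0, (len(x) - frame) // hop)
    return np.stack([x[i * hop: i * hop + frame] for i in range(n)])

def short_term_energy(frames):
    """Energy contour: sum of squared samples per frame."""
    return (frames ** 2).sum(axis=1)

def pitch_autocorr(frames, sr, fmin=60.0, fmax=400.0):
    """Fundamental-frequency contour via the autocorrelation method."""
    lo, hi = int(sr / fmax), int(sr / fmin)
    f0 = []
    for f in frames:
        f = f - f.mean()
        ac = np.correlate(f, f, mode="full")[len(f) - 1:]   # positive lags only
        lag = lo + int(np.argmax(ac[lo:hi]))
        f0.append(sr / lag if ac[lag] > 0 else 0.0)         # 0.0 marks unvoiced frames
    return np.array(f0)

# Usage with a synthetic 200 Hz tone (hypothetical input)
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 200 * t)
frames = frame_signal(x, sr)
print(short_term_energy(frames)[:3], pitch_autocorr(frames, sr)[:3])
```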
The above speech signal analysis subsystem based on the multi-modal emotion recognition system is further characterized in that: the emotion analysis software system further comprises an emotion recognition subsystem based on facial expression images, a sentiment analysis subsystem based on text semantics, an emotion recognition subsystem based on human posture, an emotion recognition subsystem based on physiological signals, a multi-turn dialogue semantic understanding subsystem, and a time-series-based multi-modal emotion semantic fusion, association and judgment subsystem.
The above speech signal analysis subsystem based on the multi-modal emotion recognition system is further characterized in that: the emotion recognition subsystem based on facial expression images relies on the fact that a person produces specific expression patterns under specific emotional states; based on the motion information of dynamic image sequences and facial expression images, region-based optical flow estimation and reference optical-flow algorithms effectively obtain motion-field information from complex backgrounds and multi-pose expression sequences;
In the sentiment analysis subsystem based on text semantics, text emotion analysis is divided into the word, sentence and discourse levels. Word-based methods analyze emotional feature words, judging word polarity against a threshold or computing lexical semantic similarity; sentence-based methods attach an emotion label to each sentence and extract evaluation words or evaluation phrases for analysis; discourse-based methods analyze the overall emotional tendency of the discourse on the basis of sentence-level emotion tendency analysis;
The emotion recognition subsystem based on human posture extracts the typical postures under various emotional states of the body, performs discriminant analysis on each posture to distinguish the subtle differences between close emotions, and establishes a feature library; kinetic properties such as the duration and frequency of human actions are used as the basis of judgment, and physical motion information is extracted from them and recognized;
The sentiment analysis subsystem based on text semantics uses an emotion recognition method improved on a deep convolutional neural network (CNN). The subsystem uses the lexical semantic vectors generated in the target domain to classify the emotion of the text in the problem domain. Its input is a sentence or document represented as a matrix; each row of the matrix corresponds to one token, i.e. each row is the vector representing one word. These vectors are all a form of word embeddings (high-dimensional vector representations), obtained from the previous module or from the index of the word in the vocabulary;
The second layer of the subsystem is the convolutional neural network layer;
The third layer of the subsystem is a time-based pooling layer: among the feature information extracted by the previous convolutional layer, it finds their relations on the time axis, summarizing the corresponding changes along the time dimension of each feature matrix in the previous layer so as to form more concentrated feature information;
The fourth layer of the subsystem is the final fully connected prediction layer. It first takes the concentrated feature information obtained by the previous layer and performs full interconnection and combination, searching all possible weight combinations so as to find the coefficient patterns between them. The next inner layer is a dropout layer, which means that during model training the weights of some hidden-layer nodes of the network are randomly disabled; the idle nodes are temporarily not regarded as part of the network structure, but their weights must be retained (only temporarily not updated), because they may work again when the next sample is input. The next inner layer is tanh (the hyperbolic tangent), a nonlinear logistic transformation. The last inner layer is softmax, the activation function commonly used in multi-class classification, which is based on logistic regression; it sharpens the probability of each candidate class to be predicted, so that the predicted class stands out;
In the emotion recognition subsystem based on human posture, emotion extraction based on action recognition means that, according to the input data source, the motion data is first characterized and modeled, and then the emotion is modeled, obtaining two sets of characterization data, one for motion and one for emotion; an existing action recognition method based on motion data is then used to accurately recognize the continuous actions and obtain the action information of the data; the previously obtained emotion model is then matched against the emotion database, aided in the process by the action information, and finally the emotion of the input data is extracted; specifically:
● Human body modeling
First, the joints of the human body are modeled: the human body is regarded as a rigid system with internal links, comprising bones and joints, and the relative motion of bones and joints constitutes the change of human posture, i.e. what is usually called action. Among the numerous joints of the human body, according to their weight in emotion expression, fingers and toes are ignored and the spine is abstracted into the three joints of neck, chest and abdomen, summarizing a human body model in which the upper body includes the head, neck, chest, abdomen, two upper arms and two forearms, and the lower body includes two thighs and two shanks;
● Emotional state extraction
For each of the selected emotional states, its expression under normal human conditions is chosen and the limb reaction is analyzed in detail. Since the human body is abstracted into a rigid model, the first parameter is the movement of the body's center of gravity, divided into forward, backward and neutral; besides the movement of the center of gravity, the next consideration is the rotation of the joints as the body moves, and the joints relevant to emotion include the head, thorax, shoulders and elbows; the corresponding movements are the bending of the head, the rotation of the thorax, the swing and extension direction of the upper arms, and the bending of the elbows. Combined with the movement of the center of gravity, these parameters comprise 7 degrees of freedom in total and express the movement of the upper body.
The above speech signal analysis subsystem based on the multi-modal emotion recognition system is further characterized in that: the emotion recognition subsystem based on facial expression images is based on an ensemble model built on VGG16 and RESNET50.
The above speech signal analysis subsystem based on the multi-modal emotion recognition system is further characterized in that: the emotion recognition subsystem based on physiological signals performs contactless physiological-signal emotion recognition. The physiological mechanisms of emotion include the brain mechanism of emotion (EEG) and the bodily physiological reactions of emotion (ECG, heart rate, EMG, galvanic skin response, respiration, vascular pressure, etc.). The brain mechanism is the main generation mechanism of emotion, and the different physiological reactions of the brain are reflected in the EEG signals; owing to the particularity of these signals, they are recognized through the three kinds of features of the time domain, the frequency domain and the time-frequency domain, and time-frequency spectral entropy, fractal dimension and the like all serve as feature quantities for measuring brain activity.
The above speech signal analysis subsystem based on the multi-modal emotion recognition system is further characterized in that: the emotion recognition subsystem based on physiological signals makes use, in the emotion recognition of physiological signals, of the change of light caused by blood flowing in the human body: when the heart beats, blood passes through the vessels, and the larger the blood volume passing through the vessels, the more light is absorbed by the blood and the less light is reflected by the surface of the skin, so the heart rate is estimated by time-frequency analysis of the images;
The first step is to apply spatial filtering to the video sequence to obtain base bands of different spatial frequencies;
The second step is to apply band-pass filtering in the time domain to each base band to extract the part of the variation signal of interest;
The third step is amplification and synthesis: the number of peaks of the signal variation is counted, which approximates the person's physiological heart rate (a sketch of these three steps follows).
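A minimal Python sketch of these three steps is given below, under assumed parameters (Gaussian-pyramid spatial filtering, a 0.75-3 Hz temporal band-pass corresponding to 45-180 beats per minute, and simple peak counting); it is a rough approximation for illustration, not the amplification algorithm of the invention:

```python
import numpy as np
import cv2
from scipy.signal import butter, filtfilt, find_peaks

def estimate_heart_rate(frames, fps, levels=3, low=0.75, high=3.0):
    """frames: list of BGR face images sampled at `fps` frames per second
    (at least a few seconds of video are assumed)."""
    # Step 1: spatial filtering - downsample each frame with a Gaussian pyramid
    # and keep the mean green-channel intensity as a blood-volume proxy.
    trace = []
    for f in frames:
        small = f.astype(np.float32)
        for _ in range(levels):
            small = cv2.pyrDown(small)
        trace.append(small[..., 1].mean())   # green channel reflects absorption best
    trace = np.asarray(trace)

    # Step 2: temporal band-pass filtering around plausible heart-rate frequencies
    b, a = butter(3, [low / (fps / 2), high / (fps / 2)], btype="band")
    pulse = filtfilt(b, a, trace - trace.mean())

    # Step 3: count peaks of the filtered variation and convert to beats per minute
    peaks, _ = find_peaks(pulse, distance=fps / high)
    return len(peaks) * 60.0 * fps / len(frames)
```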
The above speech signal analysis subsystem based on the multi-modal emotion recognition system is further characterized in that: the multi-turn dialogue semantic understanding subsystem adds, on the basis of the conventional seq2seq language generation model, an emotion-recognition attention mechanism over the input utterance of the current turn, and adds to the dialogue management a time-series tracking of the emotions of the preceding turns of the dialogue. Each current user utterance is fed into a bidirectional LSTM encoder; the different emotional-state inputs screened at the current moment are then fused with the encoder output of the user utterance just generated and fed jointly into the decoder, so that the decoder has both the user's utterance and the current emotion, and the system dialogue response generated afterwards is personalized and specific to the emotional state of the current user. The emotion-aware information state update (Sentiment Aware Information State Update, ISU) policy updates the dialogue state at any moment when new information arrives; every update of the dialogue state is deterministic: for the same previous system state, the same system action and the same current user emotional state, the same current system state necessarily results.
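A minimal PyTorch sketch of this idea follows: a bidirectional LSTM encoder whose output is fused with an embedding of the current emotional state before being handed to the decoder. The layer sizes, the emotion inventory and the fusion-by-concatenation choice are assumptions for illustration, not the patented architecture:

```python
import torch
import torch.nn as nn

class EmotionAwareSeq2Seq(nn.Module):
    def __init__(self, vocab_size, n_emotions=5, emb=128, hid=256):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, emb)
        self.emo_emb = nn.Embedding(n_emotions, emb)            # current user emotion
        self.encoder = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.fuse = nn.Linear(2 * hid + emb, hid)                # mix utterance + emotion
        self.decoder = nn.LSTMCell(emb, hid)
        self.out = nn.Linear(hid, vocab_size)

    def forward(self, utterance, emotion, reply_in):
        enc, _ = self.encoder(self.tok_emb(utterance))           # (B, T, 2*hid)
        summary = enc.mean(dim=1)                                 # crude utterance summary
        state = torch.tanh(self.fuse(torch.cat([summary, self.emo_emb(emotion)], dim=-1)))
        h, c = state, torch.zeros_like(state)
        logits = []
        for t in range(reply_in.size(1)):                         # teacher-forced decoding
            h, c = self.decoder(self.tok_emb(reply_in[:, t]), (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)                         # (B, T_out, vocab)
```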
The above speech signal analysis subsystem based on the multi-modal emotion recognition system is further characterized in that: in the time-series-based multi-modal emotion semantic fusion, association and judgment subsystem, each RNN (recurrent neural network) organizes in time order the intermediate neural-network representation of emotion understanding for one single modality, where the network unit at each time point comes from the output, at the corresponding time point, of the middle layer of that single-modality subsystem's neural network. The output of each single-modality RNN at each single time point is fed to the multi-modal fusion and association RNN; at each time point, the multi-modal RNN aggregates the outputs of the single-modality RNNs at the current time point, and after the modalities are combined, the output at each time point is the final emotion judgment result at that time point.
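The following sketch (hypothetical feature sizes, with GRUs standing in for the generic recurrent units) illustrates the structure: one recurrent network per modality consumes that subsystem's intermediate features, and a fusion recurrent network aggregates the per-time-step outputs of all modalities into an emotion judgment at every time point:

```python
import torch
import torch.nn as nn

class MultiModalFusionRNN(nn.Module):
    def __init__(self, modality_dims, hid=128, n_emotions=5):
        super().__init__()
        # one single-modality RNN per input stream (face, voice, text, posture, physiology)
        self.uni = nn.ModuleList([nn.GRU(d, hid, batch_first=True) for d in modality_dims])
        self.fusion = nn.GRU(hid * len(modality_dims), hid, batch_first=True)
        self.head = nn.Linear(hid, n_emotions)

    def forward(self, streams):
        """streams: list of tensors (B, T, d_m), already aligned on the time axis."""
        per_mod = [rnn(x)[0] for rnn, x in zip(self.uni, streams)]   # each (B, T, hid)
        fused, _ = self.fusion(torch.cat(per_mod, dim=-1))           # (B, T, hid)
        return self.head(fused)                                      # emotion logits per time step

# Usage with five hypothetical modality feature streams of 40 aligned time steps
model = MultiModalFusionRNN([64, 32, 300, 16, 8])
x = [torch.randn(2, 40, d) for d in [64, 32, 300, 16, 8]]
print(model(x).shape)   # torch.Size([2, 40, 5])
```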
Beneficial effects: the present invention breaks through and links up the emotion recognition of five single modalities; it innovatively uses deep neural networks to encode the information of the multiple single modalities, deeply associates and interprets it, and then makes a comprehensive judgment, greatly improving accuracy while reducing the requirements on environment and hardware, and finally widening the applicable scenarios to the vast majority of general application scenarios, especially special scenarios such as criminal investigation and hearings.
Detailed description of the invention
Fig. 1 is a schematic diagram of the multi-modal emotion recognition system of the embodiment of the present invention.
Fig. 2 is a flow chart of the multi-modal emotion recognition system of the embodiment of the present invention.
Fig. 3 is the VGG16 model architecture diagram of the embodiment of the present invention.
Fig. 4 is the core residual architecture diagram in the RESNET50 model of the embodiment of the present invention.
Fig. 5 is the architecture diagram of the combined ensemble model of the embodiment of the present invention.
Fig. 6 is a schematic diagram of segmenting the continuous speech signal into small discrete sound units according to the present invention.
Fig. 7 is a schematic diagram of the variation of short-term energy (STE) in the sound wave according to the present invention.
Fig. 8 is a schematic diagram of the fundamental-frequency information when a person is angry according to the present invention.
Fig. 9 is the architecture diagram of the deep learning model built with the MLP (multi-layer perceptron) neural network used by the present invention.
Fig. 10 is a diagram of the deep-convolutional-neural-network core module used by the present invention for text emotion analysis.
Fig. 11 is a diagram of the application to emotion analysis of the convolutional neural network combined with a syntax tree proposed by the present invention.
Fig. 12 is the overall flow chart of the human posture detection proposed by the present invention.
Fig. 13 is a diagram of the 13-segment human body model identified by the present invention in human posture detection.
Fig. 14 is an image diagram of the physiological phenomenon on which the present invention is based: the larger the blood volume in the vessels, the more light is absorbed by the blood and the less light is reflected by the skin surface.
Fig. 15 shows the process and result of amplifying a cosine wave by a factor of α with the method used by the present invention in the detection of human biological characteristics.
Fig. 16 is the overall flow chart of the present invention in multi-turn interactive emotion recognition (one cycle of the multi-turn interaction understanding process).
Fig. 17 is the architecture diagram of the emotion-recognition attention mechanism that the present invention adds, on the basis of the conventional seq2seq language generation model, over the input utterance of the current turn.
Fig. 18 is a schematic diagram of the emotion-aware update of the dialogue state based on the preceding turns in multi-turn dialogue according to the present invention.
Fig. 19 is the main architecture diagram of the present invention using deep neural networks to encode the information of multiple single modalities and make a comprehensive judgment after deep association and understanding.
Fig. 20 is the overall product system diagram of the invention.
Specific embodiment
The invention is further elaborated below in conjunction with the drawings and specific embodiments.
The generation of any emotion is accompanied by certain changes of the body, such as facial expression, muscular tension and visceral activity. Directly using the changes of these signals for emotion recognition is the so-called basic recognition method, also called single-modality emotion recognition; the main current methods include facial image, speech, text, posture, physiological signal and so on. The present invention proposes to fuse, correspond and reason over the computer's understanding of emotion in each of the single modalities below, so as to obtain a more complete and accurate emotion recognition method and system.
The multi-modal emotion recognition system proposed in this embodiment consists of the following components (Fig. 1 is a schematic diagram of the multi-modal emotion recognition system of the embodiment of the present invention):
Hardware: the data acquisition equipment includes a camera, a microphone, a heartbeat-detecting wristband, multi-point human-posture detection sensors, a robot sensor acquisition system, etc.; the output equipment includes a display, speakers, earphones, a printer, a robot interaction system, etc.
Software: comprehensive analysis and judgment is performed on the data obtained by the data acquisition equipment. The system consists of seven subsystems (the seven modules shown in Fig. 1): emotion recognition based on facial expression images, based on speech signals, based on text semantics, based on human posture, and based on physiological signals, together with multi-turn dialogue semantic understanding and time-series-based multi-modal emotion semantic fusion, association and judgment.
1. Emotion recognition based on facial expression images.
Facial expression recognition rests on the fact that people produce specific expression patterns under specific emotional states. Template-based methods and neural-network methods are the most common approaches to expression recognition from still images, but because only a single picture is recognized, the recognition rate is not necessarily high. The present invention proposes a completely new neural network based on dynamic image sequences; the method takes into account the motion information of the facial expression images, and region-based optical flow estimation and reference optical-flow algorithms can effectively obtain motion-field information from complex backgrounds and multi-pose expression sequences.
2. Emotion recognition based on speech signals
Speech is an important means specific to humans for expressing emotion; acoustic parameters such as fundamental frequency, duration, voice quality and clarity are the main feature quantities of emotional speech. Establishing an emotional speech database and continuously extracting new speech feature quantities is the basic method of speech emotion recognition. Support vector machines and Dempster-Shafer evidence theory can also be used as speech emotion feature extraction methods. The individual differences of speech signals are obvious, and traditional speech analysis methods need to build huge speech corpora, which brings certain difficulties to recognition. The present invention proposes speech-signal emotion recognition reinforced on the basis of a conventional speech-recognition-type neural network.
3. Text-based emotion recognition
In the course of research, text emotion analysis can be divided into the word, sentence and discourse levels. Word-based methods mainly analyze emotional feature words, judging word polarity against a threshold or computing lexical semantic similarity; sentence-based methods attach an emotion label to each sentence and extract evaluation words or evaluation phrases for analysis; discourse-based methods analyze the overall emotional tendency of the discourse on the basis of sentence-level emotion tendency analysis. Text-based emotion recognition depends heavily on the choice of emotional feature words; although a corpus can be built to attach an affective label to each word, many words have multiple senses, and these problems must be considered when building the corpus. The emergence of many new words can also significantly interfere with the accuracy of text emotion tendency recognition. These traditional corpus-based methods are therefore relatively simple and accurate, but they require a great deal of manual effort to build the corpus in advance and so are not suitable for cross-domain transfer. With the deep-learning-based method proposed by the present invention, one model can learn automatically and deeply from different data in different fields and scenarios, and thus perform automatic emotion recognition.
4. Emotion recognition based on human posture
A person's limb-motion features contain rich emotional information. Emotion recognition based on human posture mainly extracts the typical postures under the various emotional states of the body, performs discriminant analysis on each posture to distinguish the subtle differences between close emotions, and establishes a feature library. Emotion recognition based on human motion characteristics mainly uses kinetic properties such as the duration and frequency of human actions as the basis of judgment, and extracts and recognizes physical motion information from them. Many postures or actions have no obvious emotional features and often cannot be fully discriminated during recognition, so this method on its own has considerable limitations. The present invention therefore proposes a deeper emotion recognition in which human posture is fused with other signals.
5. Emotion recognition based on physiological signals
Physiological changes are seldom controlled by a person's will, so emotion recognition using physiological signals yields more objective results. The physiological mechanisms of emotion include the brain mechanism of emotion (EEG) and the bodily physiological reactions of emotion (ECG, heart rate, EMG, galvanic skin response, respiration, vascular pressure, etc.). The brain mechanism is the main generation mechanism of emotion, and the different physiological reactions of the brain can be reflected by the EEG signals; owing to the particularity of the signal, it can be recognized through the three kinds of features of the time domain, the frequency domain and the time-frequency domain, and in addition time-frequency spectral entropy, fractal dimension and the like can all serve as feature quantities for measuring brain activity. Although physiological signals carry accurate emotional information, the signal strength is very weak; for example, when acquiring ECG signals there is considerable EMG interference, so the extraction process is demanding, and in practice there are many interference sources, making it difficult to effectively remove the artifacts in physiological signals. The present invention proposes to detect some physiological reactions, such as heartbeat and respiration, automatically from the changes of the blood and the skin color of the face.
On the basis of the above five single-modality emotion recognition subsystems, the present invention proposes to align the emotion semantics of the single modalities on the time axis and then train them, so as to realize automatic cross-modal association and correspondence in time and finally a fused comprehensive emotion recognition, understanding, reasoning and judgment. Fig. 2 is the flow chart of the multi-modal emotion recognition system of the embodiment of the present invention.
The modules are described in detail one by one below.
1. Emotion recognition based on facial expression images:
The conventional computer-vision approach to facial expression image recognition can be roughly summarized as the following pipeline.
First, image preprocessing mainly performs face detection, face gray-scaling and the like to eliminate interfering factors. Second, expression feature extraction is mainly based on feature extraction from still images and from dynamic image sequences, and feature dimensionality reduction is carried out before expression recognition. Finally, expression recognition mainly selects a suitable classification algorithm to classify the dimension-reduced expression features.
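For concreteness, this classical pipeline can be sketched as below, under the assumption of OpenCV face detection, PCA dimensionality reduction and an SVM classifier; this is only an illustration of the conventional approach, not the neural model that the embodiment actually proposes:

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess(img_bgr, size=(48, 48)):
    """Face detection + gray-scaling, returning a flattened face crop (or None)."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return cv2.resize(gray[y:y + h, x:x + w], size).flatten() / 255.0

# Feature dimensionality reduction + classification over pre-extracted face crops
clf = make_pipeline(PCA(n_components=50), SVC(kernel="rbf"))
# clf.fit(X_train, y_train)                       # hypothetical labeled expression data
# clf.predict(preprocess(frame)[None, :])          # hypothetical inference call
```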
Traditional classification algorithms include:
● Detection methods based on skin color
Based on a Gaussian model, a mixture-of-Gaussians model or a histogram model; experiments show that the mixture-of-Gaussians model performs better than the single Gaussian model.
● Methods based on statistical models
Artificial neural networks: multiple neural networks are used to perform face detection from different angles.
Probabilistic models: faces are detected by estimating the conditional probabilities of face images and non-face images. Support vector machines: the hyperplane of a support vector machine is used to make the face/non-face judgment.
● Detection methods based on heuristic models
Deformable templates: matching is performed with a deformable template using the crown contour line and the left and right face contour lines.
Mosaic maps: the face region is divided into multiple mosaic blocks, which are verified with a set of rules and edge features.
Recently, since large-scale data have become easier to acquire and large-scale GPU computation has accelerated, deep learning methods using artificial neural networks have improved greatly and have been proven better than most of the conventional methods above. This embodiment proposes the following ensemble model based on VGG16 and RESNET50.
The VGG16 model architecture of this embodiment is shown in Fig. 3.
The core residual architecture in the RESNET50 model of this embodiment is shown in Fig. 4.
The combined ensemble model architecture based on the above two architectures proposed by this embodiment is shown in Fig. 5.
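A minimal PyTorch/torchvision sketch of such an ensemble, averaging the class probabilities of a VGG16 branch and a RESNET50 branch with an assumed seven expression classes, might look like the following; the actual architecture of Fig. 5 may combine the branches differently:

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16, resnet50

class ExpressionEnsemble(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        self.vgg = vgg16(num_classes=num_classes)       # branch 1: VGG16
        self.res = resnet50(num_classes=num_classes)    # branch 2: ResNet50 (residual blocks)

    def forward(self, x):
        # average the softmax outputs of the two branches
        p = torch.softmax(self.vgg(x), dim=1) + torch.softmax(self.res(x), dim=1)
        return p / 2

model = ExpressionEnsemble()
print(model(torch.randn(1, 3, 224, 224)).shape)   # torch.Size([1, 7])
```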
According to statistics on the results over public experimental data (shown in the table below), the model proposed by this embodiment reaches the current state-of-the-art level and has high computational efficiency.
| System | Accuracy | Precision | Recall |
|---|---|---|---|
| Baseline system based on SVM | 31.8% | 43.7% | 54.2% |
| Mainstream industry system based on VGG16 | 59.2% | 70.1% | 69.5% |
| Mainstream industry system based on RESNET50 | 65.1% | 76.5% | 74.8% |
| Algorithm proposed by the present invention | 67.2% | 79.4% | 78.2% |
2. Emotion recognition based on speech signals:
The development of traditional speech emotion recognition research is inseparable from the support of emotional speech databases. The quality of the emotional speech corpus directly determines the performance of the emotion recognition system trained on it. At present the emotional speech corpora in the field are of many types and there is no unified standard for building them; according to the type of elicited emotion they can be divided into the three categories of acted, induced and natural; according to the purpose of application they can be divided into the two categories of recognition-oriented and synthesis-oriented; and according to language they can be divided into English, German, Chinese and so on.
Among these methods, the acoustic features for speech emotion recognition can be broadly summarized into three types: prosodic features, spectrum-based correlation features and voice-quality features. These features are usually extracted frame by frame, but they participate in emotion recognition in the form of global statistics. The unit of global statistics is usually an acoustically independent sentence or word, and common statistical indices include extrema, extremum range, variance and so on. Common features are:
● Prosodic features refer to the variations of pitch, duration, speed and stress in speech that go beyond the semantic symbols; they are a structural arrangement of the way the speech stream is expressed. Their presence or absence does not affect our hearing and distinguishing of words and sentences, but it determines whether an utterance sounds natural and melodious. Prosodic features are also called "supra-segmental features" or "paralinguistic features"; their ability to distinguish emotion has been widely recognized by researchers in speech emotion recognition and they are used very commonly, the most commonly used being duration, fundamental frequency (pitch) and energy.
● Spectrum-based correlation features are regarded as the embodiment of the correlation between the change of vocal tract shape and the movement of the articulators, and have been successfully used in speech-signal-processing fields including speech recognition and speaker identification. By studying the spectral intensity of emotional speech, Nwe et al. found that the emotional content of speech has a significant effect on the distribution of spectral energy across the frequency bands; for example, speech expressing happiness shows high energy in the high-frequency band, whereas speech expressing sadness shows markedly lower energy in the same band. In recent years more and more researchers have applied spectrum-based features to speech emotion recognition, where they improve the recognition performance of the system; the emotion-discriminating capability of spectral intensity is very important. Linear spectral features are used in speech emotion recognition tasks.
● Voice-quality features are a kind of subjective evaluation index that people assign to speech, used to measure whether the speech is pure, clear, recognizable and so on. The acoustic manifestations that affect voice quality include breathiness, tremor and choking, and they occur frequently when the speaker is agitated and finds it hard to restrain himself. In listening experiments on speech emotion, the listeners consistently regarded the changes of voice quality as closely related to the expression of speech emotion. In speech emotion recognition research, the acoustic features used to measure voice quality generally include formant frequency and bandwidth, jitter and shimmer, and glottal parameters.
On this basis, the present invention proposes a model that performs emotion recognition on the speech signal based on an MLP (multilayer perceptron) neural network. First, the continuous speech signal is segmented to obtain small discrete sound units (as shown in Fig. 6). These units overlap, which allows the model to better analyze the current unit and to understand the preceding and following context units. The model then extracts the speech energy contour information, because energy information plays a very important role in speech recognition and is no less important in emotion recognition; for example, when happy or angry, a person's speech energy is significantly higher than when sad. Fig. 7 shows how the changes of short-term energy (STE) in the sound wave capture the changes of a person's speech energy during emotional changes such as happiness and anger.
In the next step, the system extracts the fundamental frequency (pitch) contour information. Tonal features play a very important role in the speech recognition of most languages, and tonal features can be characterized and constructed from fundamental-frequency features; finding a reliable and effective fundamental-frequency extraction method in real environments is therefore a very difficult task. This embodiment uses an autocorrelation method to extract the pitch contour. Fig. 8 shows the fundamental-frequency information of an angry person extracted with the autocorrelation method used in this embodiment.
In addition, the proposed system also extracts important information such as the Mel Frequency Cepstral Coefficients (MFCC) and the formant frequencies from the speech. Finally, the system uses an MLP (multi-layer perceptron) neural network for deep learning (the model architecture is shown in Fig. 9: deep learning of voiceprint emotion with the MLP neural network used in this embodiment).
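A small sketch of this feature pipeline is shown below, assuming librosa for MFCC, pitch and energy extraction and scikit-learn's MLPClassifier as a stand-in classifier; the embodiment's own MLP deep learning model (Fig. 9) is not reproduced here, and the pooling statistics are illustrative choices:

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def utterance_features(y, sr):
    """Pool frame-level descriptors into one fixed-length vector per utterance."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)     # spectral-envelope features
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)          # pitch contour
    energy = librosa.feature.rms(y=y)[0]                   # energy contour
    stats = lambda v: [np.mean(v), np.std(v), np.min(v), np.max(v)]
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           stats(f0), stats(energy)])

# MLP over the pooled features (layer sizes and label set are assumptions)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
# clf.fit(np.stack([utterance_features(y, sr) for y, sr in train_audio]), train_labels)
```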
3. Text-based emotion recognition:
This embodiment proposes an emotion recognition method improved on a deep convolutional neural network (CNN). The module uses the lexical semantic vectors generated in the target domain to classify the emotion of the text in the problem domain. The core of the module is a deep convolutional neural network system (as shown in Fig. 10).
Its input is a sentence or document represented as a matrix. Each row of the matrix corresponds to one token, usually a word, but it may also be a character; that is, each row is the vector representing one word. In general these vectors are a form of word embeddings (high-dimensional vector representations) obtained from the previous module, but they may also take the form of one-hot vectors, namely the index of the word in the vocabulary. If a 10-word sentence is represented with 100-dimensional word vectors, a 10x100 matrix is obtained as the input.
The second layer of the module is the convolutional neural network layer. This embodiment makes an important improvement in this step. The traditional operation (the yellow convolution windows in Fig. 10) is: if the convolution window width is m (a window size of 3 is used in the figure), take m consecutive words (the example in Fig. 10 is "ordering Beijing"), and concatenate their corresponding word vectors to obtain an m*d-dimensional vector xi:i+m-1 (d denotes the word-vector dimension). The vector xi:i+m-1 is then multiplied by the convolution kernel w (w is also a vector), ci = f(w*xi:i+m-1 + b); sliding the window gives c = [c1, c2, ..., cn-m+1], and a maximum-value selection over c yields a single value; assuming there are K convolution kernels, a K-dimensional vector is finally obtained. These traditional convolution windows only cover m consecutive words. The purpose of the selection operation here is to handle sentences of different lengths, so that no matter how long the sentence is and how wide the convolution kernels are, a fixed-length vector representation is finally obtained, while the maximum-value selection distills the most important feature information, under the assumption that the maximum value represents the most salient feature. Numerous experiments have demonstrated that the convolutional neural network model is suitable for many tasks with very significant effect; compared with conventional methods it needs no tedious feature engineering and no syntactic parse tree, and initializing the model with pre-trained word vectors works much better than random initialization, so word vectors pre-trained with deep learning can be used as input. Compared with the ordinary traditional convolution windows, this embodiment also proposes to convolve over m grammatically consecutive words. These m words may not actually be consecutive (the example in Fig. 10 is the red mark "ordering hotel"), but grammatically they form a continuous semantic structure. For example, for the sentence "John hit the ball" shown in Fig. 11, if the convolution window size is 3, there will be the two complete 3-word windows "John hit the" and "hit the ball", but obviously neither embodies the complete core meaning of the sentence. If instead the words in a "consecutive" window are determined from the syntactic parse tree, the windows "John hit ball" and "hit the ball" are obtained, and obviously both of these convolution windows embody a more complete and reasonable meaning. These two new convolution windows based on the syntactic parse tree, together with the previous traditional convolution windows, jointly undergo the maximum-value selection. The feature information obtained in this way makes it easier for the model to grasp the meaning of a passage.
The third layer of the module is a time-based pooling layer. The words of a text input are strongly associated in sequential or temporal order. The main goal of this layer is to find, among the feature information extracted by the previous convolutional layer, their relations on the time axis; the main mining process summarizes the corresponding changes along the time dimension of each feature matrix in the previous layer, forming more concentrated feature information.
The fourth layer of the module is the final fully connected prediction layer, which actually contains many small detailed analyses. First, it takes the concentrated feature information obtained by the previous layer and performs full interconnection and combination, searching all possible weight combinations so as to find the coefficient patterns between them. The next inner layer is a dropout layer. Dropout means that during model training the weights of some hidden-layer nodes of the network are randomly disabled; the idle nodes are temporarily not regarded as part of the network structure, but their weights must be retained (only temporarily not updated), because they may work again when the next sample is input. The next inner layer is tanh (the hyperbolic tangent), a nonlinear logistic transformation. The last inner layer is softmax, the activation function commonly used in multi-class classification, which is based on logistic regression; it sharpens the probability of each candidate class to be predicted, so that the predicted class stands out.
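A compact PyTorch sketch of the described stack (embedding rows, a convolutional layer, max-over-time pooling, then a fully connected layer with dropout, tanh and softmax) is shown below; the vocabulary size, embedding dimension, kernel widths and class count are illustrative assumptions, and the syntax-tree convolution windows of Fig. 11 are omitted:

```python
import torch
import torch.nn as nn

class TextEmotionCNN(nn.Module):
    def __init__(self, vocab=20000, emb=100, kernels=(3, 4, 5), filters=64, classes=6):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)                       # each row: one word vector
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb, filters, k) for k in kernels])        # convolution over word windows
        self.fc = nn.Sequential(
            nn.Dropout(0.5),                                      # randomly disable hidden units
            nn.Linear(filters * len(kernels), classes),
            nn.Tanh())                                            # nonlinear transform before softmax

    def forward(self, tokens):                                    # tokens: (B, T) word indices
        x = self.emb(tokens).transpose(1, 2)                      # (B, emb, T)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]  # max-over-time
        return torch.softmax(self.fc(torch.cat(pooled, dim=1)), dim=1)

model = TextEmotionCNN()
print(model(torch.randint(0, 20000, (2, 12))).shape)   # torch.Size([2, 6])
```

Note that dropout is placed before the final linear layer, which is the idiomatic position; the document's own ordering of the inner layers is preserved only approximately.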
4. the Emotion identification based on human body attitude:
The present invention proposes an emotion extraction method based on human posture movements and changes. Emotion extraction based on action recognition means that, from the input data source, the motion data is first characterized and modeled, and then the emotion is modeled, yielding two sets of characterization data describing motion and emotion. Existing action recognition methods based on motion data are then used to accurately recognize the continuous movements and obtain the action information of the data. The previously obtained emotion model is then matched against an emotion database, assisted by the action information in the process, and finally the emotion of the input data is extracted. The detailed process is shown in Figure 12.
The system mainly comprises the following steps.
● Human body modeling
First, the joints of the human body are modeled. The human body can be regarded as a rigid system with internal links; it contains bones and joints, and the relative motion of bones and joints constitutes the change of human posture, i.e. what is usually called the description of movement. Among the many joints of the human body, the following processing is applied according to their weight of influence on emotion:
1) Ignore the fingers and toes. Hand information indicates anger only when a fist is clenched, and common motion data cannot simulate or estimate force in the absence of pressure sensors; the information content of the hand is therefore considered small and of low importance, so appropriate simplification is warranted. For the toes, the amount of relevant information is almost zero. Therefore, the present embodiment simplifies the hands and feet to single points, reducing irrelevant interference.
2) The spine of the human body is abstracted into 3 joints: neck, chest and abdomen. The spine has a relatively large range of motion, and its bone composition is complex and cumbersome. These 3 points on the spine, whose positions are clearly distinguishable, are chosen to simulate the spine.
A human body model can be summarized from the above steps, in which the upper body includes the head, neck, chest, abdomen, 2 upper arms and 2 forearms, and the lower body includes 2 thighs and 2 shanks. This model contains 13 rigid segments and 9 degrees of freedom, as shown in Figure 13.
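For illustration only, the simplified body model could be represented by a small data structure such as the following Python sketch; the exact segment naming and the choice of the thirteenth segment are assumptions of this example, not details taken from Figure 13:

```python
from dataclasses import dataclass, field

SEGMENTS = [
    "head", "neck", "chest", "abdomen",
    "left_upper_arm", "right_upper_arm", "left_forearm", "right_forearm",
    "left_thigh", "right_thigh", "left_shank", "right_shank",
    "pelvis",  # thirteenth rigid segment assumed for this sketch
]


@dataclass
class PostureFrame:
    # joint rotations in radians, one entry per modeled degree of freedom
    joint_angles: dict = field(default_factory=dict)
    # center-of-gravity displacement: "forward", "backward" or "natural"
    gravity_shift: str = "natural"
```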
● Emotional state extraction
For the selected emotional states, the expression of each emotional state under normal human conditions is chosen, and the limb reactions are analyzed in detail.
Since the human body is abstracted into a rigid model, the first parameter considered is the movement of the human body's center of gravity. The movement of the center of gravity is extremely rich and can be described in many ways, but the description should be more specific and precise than the gravity-shift description required for emotion. The center of gravity can be encoded into 3 cases: forward, backward and natural. Besides the movement of the center of gravity, the next consideration is the rotation of the joints. The joints whose movement changes are relevant to emotion include the head, chest, shoulders and elbows (the emotional expression of the lower body is extremely limited, so it is not processed for now). The corresponding movements are the bending of the head, the rotation of the chest, the swing and extension direction of the upper arms, and the bending of the elbows. These parameters, combined with the movement of the center of gravity, comprise the movement of 7 degrees of freedom in total, which can express the movement of a person's upper body. A simple expression-action standard can be formed from the set of these parameters. Referring to the experiment conducted by Ekman with a sample size of 61 people, every emotion in the emotion set can be represented by the rotation parameters and the center-of-gravity movement parameters. The sign of a value indicates the direction of motion of the body part relative to the coordinate system: a positive value indicates that the part moves forward in a right-hand-rule coordinate system, and a negative value indicates that the direction of motion of the part is negative.
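As a hedged illustration of matching a posture descriptor against an emotion database, the following Python sketch uses a nearest-template rule over the 7 degrees of freedom; the template values and emotion names are purely illustrative placeholders and are not the parameters reported in the experiment referenced above:

```python
import numpy as np

# order: head bend, chest rotation, L/R upper-arm swing, L/R elbow bend, gravity shift
EMOTION_TEMPLATES = {
    "joy":     np.array([ 0.2, 0.1,  0.6,  0.6, 0.3, 0.3,  1.0]),
    "sadness": np.array([-0.5, 0.0, -0.3, -0.3, 0.1, 0.1, -1.0]),
    "anger":   np.array([ 0.3, 0.4,  0.5,  0.5, 0.8, 0.8,  1.0]),
}


def classify_posture(descriptor: np.ndarray) -> str:
    # nearest template by Euclidean distance; the sign encodes direction of motion
    return min(EMOTION_TEMPLATES,
               key=lambda k: np.linalg.norm(descriptor - EMOTION_TEMPLATES[k]))
```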
5. Emotion recognition based on physiological signals
Emotion recognition from physiological signals exploits the change of light when blood flows in the human body: during a heartbeat, blood passes through the vessels; the larger the blood volume in the vessels, the more light is absorbed by the blood and the less light is reflected from the surface of the skin. Therefore, the heart rate can be estimated through time-frequency analysis of the image (as shown in Figure 14, based on the physiological phenomenon that the larger the blood volume in the vessels, the more light is absorbed by the blood and the less light is reflected from the skin surface).
The so-called Lagrangian perspective analyzes from the angle of tracking the motion trajectories of pixels (particles) of interest in the image. In 2005, Liu et al. first proposed a motion magnification technique for images; this method first clusters the feature points of the target, then tracks the motion trajectories of these points over time, and finally magnifies the motion amplitude of these points. However, the Lagrangian-perspective method has the following shortcomings:
● it needs to accurately track and estimate the motion trajectories of the particles, which consumes more computing resources;
● the tracking of particles is carried out independently and lacks consideration of the whole image, which easily produces images that do not close up properly, affecting the magnification effect;
● magnifying the motion of the target object means modifying the motion trajectories of the particles; since the positions of the particles are changed, the original positions of the particles also need background filling, which likewise increases the complexity of the algorithm.
Different from the Lagrangian perspective, the Eulerian perspective does not explicitly track and estimate the motion of particles; instead, the viewpoint is fixed at one place, for example the whole image. It is then assumed that the whole image is changing, only that the frequency, amplitude and other characteristics of these varying signals differ, and the varying signal of interest to the present embodiment lies among them. In this way, the magnification of "change" becomes the extraction and enhancement of the frequency band of interest. The technical details are explained below.
1) Spatial filtering
The first step of the Eulerian video magnification technique proposed in the present embodiment (hereinafter EVM) is to spatially filter the video sequence to obtain base bands of different spatial frequencies. This is done because:
● It helps reduce noise. Images show different SNR (signal-to-noise ratio) at different spatial frequencies. Generally speaking, the lower the spatial frequency, the higher the signal-to-noise ratio. Therefore, to prevent distortion, these base bands should use different magnification factors. The topmost image, i.e. the one with the lowest spatial frequency and the highest signal-to-noise ratio, can use the largest magnification factor, and the magnification factor decreases layer by layer;
● It facilitates approximating the image signal. Images with higher spatial frequencies (such as the original video frames) may be difficult to approximate with a Taylor series expansion, because in that case the approximation becomes blurred and direct magnification shows obvious distortion. For this case, the present embodiment reduces distortion by introducing a lower limit on the spatial wavelength: if the spatial wavelength of the current base band is smaller than this lower limit, the magnification factor is reduced.
Since the purpose of spatial filtering is simply to "merge" several adjacent pixels into one block, a low-pass filter can be used. To speed up the computation, downsampling can also be carried out at the same time. Anyone familiar with image processing will quickly recognize that the combination of these two operations is exactly an image pyramid. In fact, linear EVM uses a Laplacian pyramid or a Gaussian pyramid to perform the multi-resolution decomposition.
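A minimal Python sketch of this spatial-filtering step, assuming OpenCV is available and that the number of pyramid levels is a free choice, builds a Gaussian pyramid by repeated blurring and downsampling:

```python
import cv2


def gaussian_pyramid(frame, levels=4):
    pyramid = [frame]
    for _ in range(levels):
        frame = cv2.pyrDown(frame)   # Gaussian blur + downsample by a factor of 2
        pyramid.append(frame)
    return pyramid                    # base bands at different spatial frequencies
```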
2) Temporal filtering
After the base bands of different spatial frequencies have been obtained, the next step is bandpass filtering in the time domain for each base band, in order to extract the varying signal of interest. For example, to magnify a heart-rate signal, bandpass filtering in the range 0.4~4 Hz (24~240 bpm) can be chosen, since this frequency band is exactly the range of human heart rates. However, there are many kinds of bandpass filters; common ones include the ideal bandpass filter, the Butterworth bandpass filter, the Gaussian bandpass filter, and so on. Which one should be chosen? This depends on the purpose of the magnification. If subsequent time-frequency analysis of the magnified result is required (for example extracting the heart rate, or analyzing the frequency of a musical instrument), a narrow-passband filter such as the ideal bandpass filter should be chosen, because this kind of filter directly cuts out the frequency band of interest and avoids magnifying other bands; if no time-frequency analysis of the magnified result is needed, a wider-passband filter such as a Butterworth bandpass filter or a second-order IIR filter can be chosen, because this kind of filter better mitigates ringing artifacts.
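For the heart-rate case, an ideal temporal bandpass filter can be sketched as follows (the frame rate and array layout are assumptions of this example); it keeps only the Fourier components between 0.4 and 4 Hz along the time axis:

```python
import numpy as np


def ideal_bandpass(frames, fps, low=0.4, high=4.0):
    # frames: array of shape (T, H, W) or (T, H, W, C), one base band over time
    freqs = np.fft.fftfreq(frames.shape[0], d=1.0 / fps)
    spectrum = np.fft.fft(frames, axis=0)
    mask = (np.abs(freqs) >= low) & (np.abs(freqs) <= high)
    spectrum[~mask] = 0                               # cut everything outside the band
    return np.real(np.fft.ifft(spectrum, axis=0))     # band-limited variation B(x, t)
```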
3) Amplification and synthesis
Through the previous two steps, the "changing" part has been found; that is, the question of what the "change" is has been answered. Next we examine how to amplify this "change". An important basis is that the result of the bandpass filtering in the previous step is precisely an approximation of the variation of interest.
Figure 15 demonstrates the process and result of magnifying a cosine wave α times with the above method. The black curve indicates the original signal f(x), the blue curve indicates the changed signal f(x + δ), the cyan curve indicates the first-order Taylor-series approximation of this signal, f(x) + δ·∂f(x)/∂x, and the green curve indicates the separated part of the change. This part is magnified α times and added back to the original signal to obtain the magnified signal; the red curve in Figure 15 indicates this magnified signal f(x) + (1 + α)B(x, t).
Finally, deep learning is used to optimize the spatio-temporal filtering effect. Assuming that the frequency of the heartbeat is close to the signal change it induces, the RGB information is converted to the YIQ (NTSC) color space, the two color spaces are processed, and the signal is extracted with a suitable bandpass filter. The number of peaks of the signal variation is counted, which approximates the person's physiological heart rate.
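Putting these steps together, a minimal sketch of amplification, add-back and peak counting might look like the following; the amplification factor, the region averaging, and the minimum peak spacing are illustrative assumptions, and ideal_bandpass refers to the sketch shown above:

```python
import numpy as np
from scipy.signal import find_peaks


def estimate_heart_rate(frames, fps, alpha=50):
    band = ideal_bandpass(frames.astype(np.float32), fps)  # band-limited variation
    magnified = frames + alpha * band                       # amplify the change, add it back
    # average the band-limited signal over the region of interest, per frame
    pulse = band.reshape(band.shape[0], -1).mean(axis=1)
    peaks, _ = find_peaks(pulse, distance=fps * 0.25)       # at most ~240 bpm
    duration_min = frames.shape[0] / fps / 60.0
    return magnified, len(peaks) / duration_min             # beats per minute
```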
6. Semantic and emotion understanding based on multi-turn dialogue
Traditional semantic understanding largely ignores the interactive environment; in other words, it is at most single-turn question answering. At present, the main methods of sentiment analysis in conventional machine learning are still based on traditional algorithms such as SVM, information entropy and CRF. Sentiment analysis based on machine learning has the advantage of being able to model various features, but with manually annotated single words as features, the insufficiency of the corpus is often the performance bottleneck.
Once there is "interaction", emotion and mood analysis becomes much more difficult. First: interaction is a lasting process rather than something fixed over a short period, and this in itself changes the evaluation method of emotion judgment. Without interaction, for example for a product review, judging which emotion class a passage belongs to has realizable value and is clearly a classification task. In a dialogue it is different: the affective state keeps changing, and analyzing any single sentence in isolation has little significance; this is no longer a simple classification task. For a lasting process, a simple solution is to add a gain-and-decay function, but such a function is very hard to make accurate, has little theoretical basis, and even evaluating whether the function is well designed is difficult. Second: the existence of interaction hides most of the state information. What is visible on the surface is less than 5%, only the tip of the iceberg (it should be understood in a way similar to a hidden Markov model). Both sides of the interaction assume by default that the other knows a great deal of information, for example the relationship between the parties, each other's needs and goals, emotional states, social relations, the environment, what was talked about before, as well as common sense, personality, worldview and so on. Several phenomena can then be observed: the more information two people share, the harder the analysis becomes, because the effect of the hidden state is larger and its dimensionality is higher. Different people have different communication paradigms, and the variation of these paradigms depends on various other environmental factors (including time, place, relationship status, mutual mood, shared experiences, personal chatting habits, etc.). Even for the same people, the communication paradigm between them is a dynamically changing process; for example, while two people are in love, the way they communicate differs as the relationship warms and cools. Third: interaction involves jumps of information. When a person speaks alone, the statement is usually rather logical and coherent, but chatting and personal statements are two entirely different things: chat has considerable jumpiness. This uncertain information jumping exponentially increases the difficulty of sentiment analysis.
The above 3 main aspects are exactly why sentiment analysis becomes so difficult to judge once the interaction factor is added. First, the evaluation method changes, and this evaluation method is very complicated with almost nothing to refer to. From the second and third reasons it can be seen that, for machine learning, the data dimensionality is too sparse (only the text is observable, while most states such as expression are hidden), and jumpiness is added on top; one can therefore well imagine how difficult it is to achieve high accuracy with a purely statistical approach.
Therefore, the present invention proposes to focus on improving dialogue management, strengthening language understanding and the attention mechanism on emotion words, so that the basic semantics and the emotions in multi-turn dialogue can be effectively grasped and captured. The overall process (as shown in Figure 16) is a cyclic, multi-turn interactive understanding process.
The innovations of the present embodiment lie mainly in 2 aspects: one is to add an emotion-recognition attention mechanism on top of the traditional seq2seq language generation model for the input utterance of the current turn; the other is to add, in dialogue management, the tracking of emotions across the previous dialogue turns along the time series.
In the first module, the framework is as shown in Figure 17: an emotion-recognition attention mechanism is added on top of the traditional seq2seq language generation model for the input utterance of the current turn.
In this framework, each current user utterance is fed into a bidirectional LSTM encoder; then, unlike the traditional language generation model, attention to the emotion in the current sentence is added here. The current screening of the different emotional-state inputs is then merged with the encoder output of the user utterance just produced, and the two are fed jointly into the decoder. In this way the decoder has both the user's utterance and the current emotion, and the system dialogue response generated afterwards is a personalized output specific to the current user's emotional state.
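A minimal Python/PyTorch sketch of this idea is given below; the dimensions, the token-level attention form, and the way the emotion distribution is embedded and concatenated with the sentence encoding are assumptions of this example rather than the exact architecture of Figure 17:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EmotionAwareEncoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden=256, n_emotions=7):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.emotion_embed = nn.Embedding(n_emotions, 2 * hidden)
        self.attn = nn.Linear(2 * hidden, 1)

    def forward(self, token_ids, emotion_logits):
        # token_ids: (batch, seq_len); emotion_logits: (batch, n_emotions),
        # assumed to come from an upstream single-modality emotion recognizer
        states, _ = self.encoder(self.embed(token_ids))          # (b, T, 2h)
        attn = F.softmax(self.attn(states).squeeze(-1), dim=1)   # attention over tokens
        sentence = (attn.unsqueeze(-1) * states).sum(dim=1)      # (b, 2h)
        emotion = F.softmax(emotion_logits, dim=-1) @ self.emotion_embed.weight  # (b, 2h)
        return torch.cat([sentence, emotion], dim=-1)  # context handed to the decoder
```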
The second innovation proposed by the present invention for multi-turn dialogue emotion recognition is a simple dialogue state update method: the Sentiment Aware Information State Update (SAISU) strategy. The SAISU strategy updates the dialogue state whenever new information arrives; specifically, whenever the user, the system, or any participant in the dialogue produces new information, the dialogue state is updated. The update is based on the emotion sensing of the previous turns. See Figure 18 for details.
Figure 18 expresses that the dialogue state s_{t+1} at time t+1 depends on the state s_t at the previous time t, the system behavior a_t at time t, and the user behavior and emotion o_{t+1} corresponding to the current time t+1. This can be written as:
s_{t+1} ← s_t + a_t + o_{t+1}
When the dialogue state is updated, each update is assumed to be deterministic. This assumption therefore implies that the same system state at the previous moment, the same system behavior, and the same user emotional state at the current moment necessarily lead to the same system state at the current moment.
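A deterministic update of this kind can be sketched as follows; the representation of the state as an accumulated history tuple is an assumption of this example, not the internal representation of Figure 18:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DialogueState:
    # accumulated (system_act, user_act, user_emotion) triples from earlier turns
    history: tuple = ()


def update_state(state, system_act, user_act, user_emotion):
    # deterministic: identical inputs always yield an identical next state
    return DialogueState(state.history + ((system_act, user_act, user_emotion),))
```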
7. Multi-modal emotion semantic fusion based on time series:
In recent years, with the development of the field of multi-source and heterogeneous information fusion processing, features referring to emotional states from multiple categories can be fused. Different categories of signals support one another; by fusing the complementary information, the quality of information processing is not a simple compromise among multiple data sources but is often better than any single member, and large improvements can be obtained. The concept of multi-modal emotion analysis has been discussed at recent international conferences on affective computing and intelligent interaction. Therefore, researchers have begun to study recognition problems using the complementarity between the emotional information of multiple channels such as facial expression, speech, eye movement, posture and physiological signals, i.e. multi-modal emotion recognition. Compared with recognition from a single signal, multi-modal information fusion recognition can undoubtedly improve recognition accuracy. In order to improve the emotion recognition rate and the robustness of recognition, different data sources must be selected according to different application environments; for different data sources, effective theories and methods must be used to study efficient and stable emotion recognition algorithms, which are also hot topics for future research in this field.
At present, a small number of systems have begun to combine 1 or 2 single modalities for emotion detection, for example the following categories:
● Emotion recognition based on audio-visual signals
The most common multi-modal recognition methods are based on vision and hearing; these two types of features are convenient to acquire, and speech emotion recognition and facial expression recognition are complementary in recognition performance, so this combination is the most widespread. In cross-cultural multi-modal perception studies supported in Japan, the concern is precisely the relationship between facial expression and emotional voice during emotion expression. The system adaptively adjusts the weights of the speech and facial-action feature parameters in bimodal emotion recognition, and this method achieves an emotion recognition rate of 84% or more. With vision and hearing as input states and asynchronous constraints imposed at the state layer, this fusion method improves the recognition rate by 12.5% and 11.6% respectively.
● Emotion recognition based on multiple physiological signals
Fusion of multiple physiological signals also has wide applications. In 2004, Lee et al. monitored people's stress states using multiple physiological signals including heart rate, skin temperature change and electrodermal activity. The literature mainly extracts useful features from ECG and heart-rate signals for category recognition. Wu Xuekui et al. performed feature extraction and feature classification on three kinds of physiological signals: ECG, respiration and body temperature. Canentol et al. combined multiple emotional physiological characteristics such as ECG, blood volume pulse, electrodermal activity and respiration for emotion recognition. Wagner et al. obtained a fusion recognition rate of 92% by fusing the physiological parameters of four channels: EMG, ECG, skin resistance and respiration. In the literature, fusion of multiple physiological signals raises the recognition accuracy from 30% to 97.5%.
● Emotion recognition based on the combination of speech and ECG
In terms of combining speech and ECG, the literature uses weighted fusion and feature-space transformation methods to fuse speech signals with ECG signals. The average recognition rates obtained by the single-modality emotion classifiers based on the ECG signal and on the speech signal are 71% and 80% respectively, while the recognition rate of the multi-modal classifier reaches 90% or more.
The present embodiment breaks through by connecting the emotion recognition of 5 single modalities, innovatively using deep neural networks to encode the information of the multiple single modalities and to make a comprehensive judgment after deep association and understanding. This significantly improves accuracy while lowering the requirements on environment and hardware, and finally broadens the applicability to the vast majority of normal application scenarios, and especially to some special scenarios such as criminal investigation, hearings, etc.
The main framework of the model is shown in Figure 19: the present embodiment uses deep neural networks to encode the information of the multiple single modalities and to make a comprehensive judgment after deep association and understanding.
The overall framework treats emotion recognition as making a judgment for the current time point on a continuous time axis, according to all relevant preceding and following expressions, actions, text, speech and physiology. This method is therefore built on the basis of the classical seq2seq neural network. Seq2Seq was proposed in 2014, and its main ideas were first independently elaborated by two articles: "Sequence to Sequence Learning with Neural Networks" by the Google Brain team and "Learning Phrase Representation using RNN Encoder-Decoder for Statistical Machine Translation" by Yoshua Bengio's team. These two articles propose remarkably similar solutions to the problem of machine translation, from which Seq2Seq was born. The main idea of Seq2Seq is to map a sequence given as input to a sequence produced as output through a deep neural network model (most commonly an LSTM, a long short-term memory network, which is a kind of recurrent neural network); this process consists of two stages, encoding the input and decoding the output. When the basic seq2seq model is applied to emotion recognition on a continuous time axis, it needs unique innovations to solve the specific problem well. Therefore, in emotion recognition, besides the problems the common seq2seq model has to handle, attention must also be paid to the following key characteristics: 1. the relationships between the different time points within each single modality; 2. the intrinsic influences and relationships between the modalities at the same time point; 3. the comprehensive multi-modal overall emotion recognition. None of these are addressed in the prior art.
Specifically, the model first includes 5 recurrent neural networks (RNN, recurrent neural network). In the actual system the present invention uses long short-term memory (LSTM), a representative RNN. Each RNN is the intermediate neural-network representation of the emotion understanding of one single modality, organized in time order. The neural-network unit at each time point (one blue strip in Figure 19) comes from the output, at the corresponding time point, of the middle layer of the previously described single-modality subsystem's neural network. The output of the neural network at each single time point of each RNN (one blue strip in Figure 19) is then fed to the multi-modal fusion association judgment RNN. Each time point of the multi-modal RNN therefore summarizes the neural-network outputs of the single-modality RNNs at the current time point. After the modalities have been combined, the output at each time point is the final emotion judgment result for that time point (the orange arrows in Figure 19).
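The following Python/PyTorch sketch illustrates this arrangement; the modality names, feature sizes and concatenation-based fusion are assumptions of this example, standing in for the middle-layer outputs of the single-modality subsystems and the fusion RNN of Figure 19:

```python
import torch
import torch.nn as nn


class MultiModalFusion(nn.Module):
    MODALITIES = ["face", "speech", "text", "posture", "physiology"]

    def __init__(self, feat_dim=128, hidden=128, n_emotions=7):
        super().__init__()
        # one LSTM per single modality
        self.single = nn.ModuleDict(
            {m: nn.LSTM(feat_dim, hidden, batch_first=True) for m in self.MODALITIES})
        # fusion LSTM over the concatenated per-modality hidden states
        self.fusion = nn.LSTM(hidden * len(self.MODALITIES), hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_emotions)

    def forward(self, inputs):
        # inputs: dict modality -> (batch, T, feat_dim), middle-layer outputs of the
        # single-modality subsystems, aligned on the same time axis
        per_mod = [self.single[m](inputs[m])[0] for m in self.MODALITIES]  # (b, T, h) each
        fused, _ = self.fusion(torch.cat(per_mod, dim=-1))                 # (b, T, h)
        return self.classifier(fused)        # emotion logits at every time point
```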
The application scenario of the software and hardware system design of the present invention is to provide professional analysts in the field of psychological counseling with a software tool for analyzing and judging a subject's expressions and emotional changes. The whole system consists of the following four parts: micro-expression analysis and judgment software, a dedicated analysis device, a high-definition camera, and a printer.
Figure 20 is the overall product system architecture diagram of the present invention.
The face of the analyzed person is recorded in real time by the "high-definition camera", which provides a video stream accessible through the network. The "dedicated analysis device" deploys the product of the invention; double-clicking the software shortcut icon opens the software interface, and during program operation the video address and the expression alert threshold can be configured as needed. The invention records, analyzes and judges the facial expressions and heart-rate data of a person during psychological counseling, and provides a "data analysis result report" at the end. The operator can print this "data analysis result report" as a document through the "printer" for archiving.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the present invention in any form. Although the present invention has been disclosed by way of a preferred embodiment, it is not limited thereto. Any person skilled in the art may, without departing from the scope of the technical solution of the present invention, use the technical content disclosed above to make minor changes or modifications into equivalent embodiments of equivalent variation; any simple modification, equivalent variation and modification made to the above embodiments according to the technical essence of the present invention, without departing from the content of the technical solution of the present invention, still falls within the scope of the technical solution of the present invention.

Claims (10)

1. A speech signal analysis subsystem based on a multi-modal emotion identification system, comprising data acquisition equipment and output equipment, characterized in that: it further comprises an emotion analysis software system, wherein the emotion analysis software system performs comprehensive analysis and evaluation of the data obtained by the data acquisition equipment and finally outputs the result to the output equipment; the emotion analysis software system comprises an emotion recognition subsystem based on voice signals.
2. The speech signal analysis subsystem based on a multi-modal emotion identification system according to claim 1, characterized in that: in the emotion recognition subsystem based on voice signals, acoustic parameters such as fundamental frequency, duration, voice quality and clarity are the emotional speech feature quantities; establishing an emotional speech database and continuously extracting new speech feature quantities is the basic method of speech emotion recognition.
3. The speech signal analysis subsystem based on a multi-modal emotion identification system according to claim 2, characterized in that: the emotion recognition subsystem based on voice signals is a model that performs emotion recognition on voice signals based on a neural network MLP (multilayer perceptron) model; first, the continuous voice signal is segmented to obtain discrete small sound units; these small units overlap, so that the model can better analyze the current unit and understand the preceding and following context voice units; the model then extracts the speech energy curve information; in the next step, the subsystem extracts the fundamental frequency (pitch) curve information; tonal features are characterized and constructed from the fundamental-frequency features, and the fundamental-frequency curve is extracted using the autocorrelation method.
4. The speech signal analysis subsystem based on a multi-modal emotion identification system according to claim 1, 2 or 3, characterized in that: the emotion analysis software system further comprises an emotion recognition subsystem based on facial-image expression, a sentiment analysis subsystem based on text semantics, an emotion recognition subsystem based on human body posture, an emotion recognition subsystem based on physiological signals, a multi-turn dialogue semantic understanding subsystem, and a multi-modal emotion semantic fusion association judgment subsystem based on time series.
5. The speech signal analysis subsystem based on a multi-modal emotion identification system according to claim 4, characterized in that: in the emotion recognition subsystem based on facial-image expression, people produce specific expression patterns under specific emotional states; based on the motion information of dynamic image sequences and facial expression images, region-based optical flow estimation and a reference optical flow algorithm effectively obtain motion-field information from complex backgrounds and multi-pose expression sequences;
In the sentiment analysis subsystem based on text semantics, text emotion analysis can be divided into three levels: word, sentence and discourse; the word-based method analyzes emotional feature words, judging word polarity according to a threshold or computing lexical semantic similarity; the sentence-based method samples an emotion label for each sentence, and extracts evaluation words or obtains evaluation phrases for analysis; the discourse-based method performs overall emotion tendency analysis of the discourse on the basis of sentence-level emotion tendency analysis;
The emotion recognition subsystem based on human body posture extracts typical postures under the various emotional states of the body, performs discriminant analysis on each posture to distinguish the nuances between close emotions, and establishes a feature library; dynamic properties such as the duration and frequency of human actions serve as the judging basis, and the physical motion information extracted from them is used for recognition;
The sentiment analysis subsystem based on text semantics is an improved emotion recognition method based on the deep convolutional neural network CNN; the subsystem uses the lexical semantic vectors generated in the target domain to classify the emotions of the text in the problem domain; its input is a sentence or document represented as a matrix, each row of the matrix corresponding to one tokenized element and being a vector that represents one word; these vectors are all in the form of word embeddings (a kind of high-dimensional vector representation), obtained from the previous module, or indices of the words in a vocabulary;
The second layer of the subsystem is the convolutional neural network layer;
The third layer of the subsystem is a time-based pooling layer, which finds, within the feature information extracted by the previous convolutional layer, the associations along the time axis, and summarizes the corresponding changes along the time dimension in each feature matrix of the previous layer so as to form more concentrated feature information;
The fourth layer of the subsystem is the final fully connected prediction layer, which first takes the concentrated feature information of the previous layer, fully interconnects and combines it, and searches all possible weight combinations to find the combination patterns among them; the next inner layer is a Dropout layer, which means that during model training the weights of certain hidden-layer nodes are randomly deactivated; those idle nodes can temporarily be regarded as not being part of the network structure, but their weights must be retained (just not updated for the moment), because they may become active again when the next sample is input; the next inner layer is tanh (the hyperbolic tangent), a nonlinear logical transformation; the last inner layer is softmax, a common activation function in multi-class classification based on logistic regression, which sharpens the probability of every candidate class to be predicted so that the predicted class stands out;
In the emotion recognition subsystem based on human body posture, emotion extraction based on action recognition means that, from the input data source, the motion data is first characterized and modeled, and then the emotion is modeled, yielding two sets of characterization data describing motion and emotion; existing action recognition methods based on motion data are then used to accurately recognize the continuous movements and obtain the action information of the data; the previously obtained emotion model is then matched against an emotion database, assisted by the action information in the process, and finally the emotion of the input data is extracted; specifically:
Human body modeling
First, the joints of the human body are modeled; the human body is regarded as a rigid system with internal links, containing bones and joints; the relative motion of bones and joints constitutes the change of human posture, i.e. what is usually called the description of movement; among the many joints of the human body, according to their weight of influence on emotion, the fingers and toes are ignored and the spine of the human body is abstracted into three joints of neck, chest and abdomen, summing up a human body model in which the upper body includes the head, neck, chest, abdomen, two upper arms and two forearms, and the lower body includes two thighs and two shanks;
Emotional state extraction
For the selected emotional states, the expression of each emotional state under normal human conditions is chosen, and the limb reactions are analyzed in detail; since the human body is abstracted into a rigid model, the first parameter is the movement of the human body's center of gravity, divided into forward, backward and natural modes; besides the movement of the center of gravity, the next is the rotation of the joints; the joints whose movement changes and which are relevant to emotion include the head, chest, shoulders and elbows; the corresponding movements are the bending of the head, the rotation of the chest, the swing and extension direction of the upper arms, and the bending of the elbows; these parameters, combined with the movement of the center of gravity, comprise the movement of seven degrees of freedom in total and express the movement of a person's upper body.
6. The speech signal analysis subsystem based on a multi-modal emotion identification system according to claim 5, characterized in that: the emotion recognition subsystem based on facial-image expression is based on an ensemble model built on VGG16 and RESNET50.
7. The speech signal analysis subsystem based on a multi-modal emotion identification system according to claim 4, characterized in that: the emotion recognition subsystem based on physiological signals is a contactless physiological-signal emotion recognition; the physiological mechanisms of emotion include emotion perception (EEG) and the bodily physiological reactions of emotion (ECG, heart rate, EMG, electrodermal response, respiration, vascular pressure, etc.); emotion perception is the main generation mechanism of emotion; the different physiological reactions of the brain are reflected by EEG signals; due to the particularity of this signal, it is identified through three kinds of features in the time domain, the frequency domain and the time-frequency domain, with time-frequency spectral entropy, fractal dimension and the like all serving as feature quantities for measuring brain activity.
8. The speech signal analysis subsystem based on a multi-modal emotion identification system according to claim 7, characterized in that: the emotion recognition subsystem based on physiological signals exploits, in the emotion recognition of physiological signals, the change of light when blood flows in the human body: during a heartbeat, blood passes through the vessels; the larger the blood volume in the vessels, the more light is absorbed by the blood and the less light is reflected from the surface of the skin; the heart rate is estimated through time-frequency analysis of the image;
The first step is to spatially filter the video sequence to obtain base bands of different spatial frequencies;
The second step is bandpass filtering in the time domain for each base band to extract the varying signal of interest;
The third step is amplification and synthesis; the number of peaks of the signal variation is counted, which approximates the person's physiological heart rate.
9. The speech signal analysis subsystem based on a multi-modal emotion identification system according to claim 4, characterized in that: the multi-turn dialogue semantic understanding subsystem adds an emotion-recognition attention mechanism on top of the traditional seq2seq language generation model for the input utterance of the current turn, and adds, in dialogue management, the tracking of emotions across the previous dialogue turns along the time series; each current user utterance is fed into a bidirectional LSTM encoder; the current screening of the different emotional-state inputs is then merged with the encoder output of the user utterance just produced and fed jointly into the decoder, so that the decoder has both the user's utterance and the current emotion, and the system dialogue response generated afterwards is a personalized output specific to the current user's emotional state; the Sentiment Aware Information State Update (SAISU) strategy updates the dialogue state whenever new information arrives; when the dialogue state is updated, each update is deterministic, so that the same system state at the previous moment, the same system behavior, and the same user emotional state at the current moment necessarily lead to the same system state at the current moment.
10. The speech signal analysis subsystem based on a multi-modal emotion identification system according to claim 4, characterized in that: in the multi-modal emotion semantic fusion association judgment subsystem based on time series, each RNN recurrent neural network is the intermediate neural-network representation of the emotion understanding of one single modality, organized in time order, wherein the neural-network unit at each time point comes from the output, at the corresponding time point, of the middle layer of the neural network of the single-modality subsystem; the output of the neural network at each single time point of each RNN recurrent neural network is fed to the multi-modal fusion association judgment RNN recurrent neural network; each time point of the multi-modal RNN recurrent neural network summarizes the neural-network outputs of each single-modality RNN recurrent neural network at the current time point; after the modalities have been combined, the output at each time point is the final emotion judgment result for that time point.
CN201810612829.5A 2018-06-14 2018-06-14 Voice signal analysis subsystem based on multi-modal emotion recognition system Active CN108899050B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810612829.5A CN108899050B (en) 2018-06-14 2018-06-14 Voice signal analysis subsystem based on multi-modal emotion recognition system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810612829.5A CN108899050B (en) 2018-06-14 2018-06-14 Voice signal analysis subsystem based on multi-modal emotion recognition system

Publications (2)

Publication Number Publication Date
CN108899050A true CN108899050A (en) 2018-11-27
CN108899050B CN108899050B (en) 2020-10-02

Family

ID=64345234

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810612829.5A Active CN108899050B (en) 2018-06-14 2018-06-14 Voice signal analysis subsystem based on multi-modal emotion recognition system

Country Status (1)

Country Link
CN (1) CN108899050B (en)

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109508783A (en) * 2018-12-28 2019-03-22 杭州翼兔网络科技有限公司 Mood incorporates roughly model construction and the automatic progress rough acquisition methods of mood into
CN109770925A (en) * 2019-02-03 2019-05-21 闽江学院 A kind of fatigue detection method based on depth time-space network
CN109784023A (en) * 2018-11-28 2019-05-21 西安电子科技大学 Stable state vision inducting brain electricity personal identification method and system based on deep learning
CN110047518A (en) * 2019-04-29 2019-07-23 湖南检信智能科技有限公司 A kind of speech emotional analysis system
CN110556129A (en) * 2019-09-09 2019-12-10 北京大学深圳研究生院 Bimodal emotion recognition model training method and bimodal emotion recognition method
CN110569720A (en) * 2019-07-31 2019-12-13 安徽四创电子股份有限公司 audio and video intelligent identification processing method based on audio and video processing system
CN110781719A (en) * 2019-09-02 2020-02-11 中国航天员科研训练中心 Non-contact and contact cooperative mental state intelligent monitoring system
CN110826637A (en) * 2019-11-11 2020-02-21 广州国音智能科技有限公司 Emotion recognition method, system and computer-readable storage medium
CN110865705A (en) * 2019-10-24 2020-03-06 中国人民解放军军事科学院国防科技创新研究院 Multi-mode converged communication method and device, head-mounted equipment and storage medium
CN110931006A (en) * 2019-11-26 2020-03-27 深圳壹账通智能科技有限公司 Intelligent question-answering method based on emotion analysis and related equipment
CN111476258A (en) * 2019-01-24 2020-07-31 杭州海康威视数字技术股份有限公司 Feature extraction method and device based on attention mechanism and electronic equipment
CN111553460A (en) * 2019-02-08 2020-08-18 富士通株式会社 Information processing apparatus, arithmetic processing device, and method of controlling information processing apparatus
CN111839490A (en) * 2020-05-26 2020-10-30 合肥工业大学 Non-contact heart rate monitoring method and system
CN112069897A (en) * 2020-08-04 2020-12-11 华南理工大学 Knowledge graph-based voice and micro-expression recognition suicide emotion sensing method
CN112083806A (en) * 2020-09-16 2020-12-15 华南理工大学 Self-learning emotion interaction method based on multi-modal recognition
CN112163467A (en) * 2020-09-11 2021-01-01 杭州海康威视数字技术股份有限公司 Emotion analysis method and device, electronic equipment and machine-readable storage medium
CN112204653A (en) * 2019-03-29 2021-01-08 谷歌有限责任公司 Direct speech-to-speech translation through machine learning
CN112446938A (en) * 2020-11-30 2021-03-05 重庆空间视创科技有限公司 Multi-mode-based virtual anchor system and method
CN112618911A (en) * 2020-12-31 2021-04-09 四川音乐学院 Music feedback adjusting system based on signal processing
CN113100768A (en) * 2021-04-14 2021-07-13 中国人民解放军陆军特色医学中心 Computer vision incapability and damage effect evaluation system
CN113421546A (en) * 2021-06-30 2021-09-21 平安科技(深圳)有限公司 Cross-tested multi-mode based speech synthesis method and related equipment
CN113421545A (en) * 2021-06-30 2021-09-21 平安科技(深圳)有限公司 Multi-modal speech synthesis method, device, equipment and storage medium
CN114821401A (en) * 2022-04-07 2022-07-29 腾讯科技(深圳)有限公司 Video auditing method, device, equipment, storage medium and program product
WO2023019612A1 (en) * 2021-08-16 2023-02-23 Hong Kong Applied Science and Technology Research Institute Company Limited Apparatus and method for speech-emotion recognition with quantified emotional states
CN116311539A (en) * 2023-05-19 2023-06-23 亿慧云智能科技(深圳)股份有限公司 Sleep motion capturing method, device, equipment and storage medium based on millimeter waves
CN116662742A (en) * 2023-06-28 2023-08-29 北京理工大学 Brain electrolysis code method based on hidden Markov model and mask empirical mode decomposition
ES2957419A1 (en) * 2022-06-03 2024-01-18 Neurologyca Science & Marketing Sl System and method for real-time detection of emotional states using artificial vision and natural language listening (Machine-translation by Google Translate, not legally binding)
CN117727298A (en) * 2024-02-09 2024-03-19 广州紫麦科技有限公司 Deep learning-based portable computer voice recognition method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510424A (en) * 2009-03-12 2009-08-19 孟智平 Method and system for encoding and synthesizing speech based on speech primitive
US20160322065A1 (en) * 2015-05-01 2016-11-03 Smartmedical Corp. Personalized instant mood identification method and system
CN106503646A (en) * 2016-10-19 2017-03-15 竹间智能科技(上海)有限公司 Multi-modal emotion identification system and method
CN106598948A (en) * 2016-12-19 2017-04-26 杭州语忆科技有限公司 Emotion recognition method based on long-term and short-term memory neural network and by combination with autocoder
CN107220591A (en) * 2017-04-28 2017-09-29 哈尔滨工业大学深圳研究生院 Multi-modal intelligent mood sensing system
CN107705807A (en) * 2017-08-24 2018-02-16 平安科技(深圳)有限公司 Voice quality detecting method, device, equipment and storage medium based on Emotion identification

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510424A (en) * 2009-03-12 2009-08-19 孟智平 Method and system for encoding and synthesizing speech based on speech primitive
US20160322065A1 (en) * 2015-05-01 2016-11-03 Smartmedical Corp. Personalized instant mood identification method and system
CN106503646A (en) * 2016-10-19 2017-03-15 竹间智能科技(上海)有限公司 Multi-modal emotion identification system and method
CN106598948A (en) * 2016-12-19 2017-04-26 杭州语忆科技有限公司 Emotion recognition method based on long-term and short-term memory neural network and by combination with autocoder
CN107220591A (en) * 2017-04-28 2017-09-29 哈尔滨工业大学深圳研究生院 Multi-modal intelligent mood sensing system
CN107705807A (en) * 2017-08-24 2018-02-16 平安科技(深圳)有限公司 Voice quality detecting method, device, equipment and storage medium based on Emotion identification

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
党宏社等 (Dang Hongshe et al.): "信息融合技术在情绪识别领域的研究展望" (Research prospects of information fusion technology in the field of emotion recognition), 《计算机应用研究》 (Application Research of Computers) *

Cited By (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109784023A (en) * 2018-11-28 2019-05-21 西安电子科技大学 Stable state vision inducting brain electricity personal identification method and system based on deep learning
CN109508783B (en) * 2018-12-28 2021-07-20 沛誉(武汉)科技有限公司 Method for constructing rough emotion classifying model and automatically performing rough emotion acquisition
CN109508783A (en) * 2018-12-28 2019-03-22 杭州翼兔网络科技有限公司 Mood incorporates roughly model construction and the automatic progress rough acquisition methods of mood into
CN111476258B (en) * 2019-01-24 2024-01-05 杭州海康威视数字技术股份有限公司 Feature extraction method and device based on attention mechanism and electronic equipment
CN111476258A (en) * 2019-01-24 2020-07-31 杭州海康威视数字技术股份有限公司 Feature extraction method and device based on attention mechanism and electronic equipment
CN109770925A (en) * 2019-02-03 2019-05-21 闽江学院 A kind of fatigue detection method based on depth time-space network
CN109770925B (en) * 2019-02-03 2020-04-24 闽江学院 Fatigue detection method based on deep space-time network
CN111553460A (en) * 2019-02-08 2020-08-18 富士通株式会社 Information processing apparatus, arithmetic processing device, and method of controlling information processing apparatus
CN111553460B (en) * 2019-02-08 2023-12-05 富士通株式会社 Information processing apparatus, arithmetic processing device, and method of controlling information processing apparatus
CN112204653B (en) * 2019-03-29 2024-04-02 谷歌有限责任公司 Direct speech-to-speech translation through machine learning
CN112204653A (en) * 2019-03-29 2021-01-08 谷歌有限责任公司 Direct speech-to-speech translation through machine learning
CN110047518A (en) * 2019-04-29 2019-07-23 湖南检信智能科技有限公司 A kind of speech emotional analysis system
CN110569720A (en) * 2019-07-31 2019-12-13 安徽四创电子股份有限公司 audio and video intelligent identification processing method based on audio and video processing system
CN110781719A (en) * 2019-09-02 2020-02-11 中国航天员科研训练中心 Non-contact and contact cooperative mental state intelligent monitoring system
CN110556129A (en) * 2019-09-09 2019-12-10 北京大学深圳研究生院 Bimodal emotion recognition model training method and bimodal emotion recognition method
CN110556129B (en) * 2019-09-09 2022-04-19 北京大学深圳研究生院 Bimodal emotion recognition model training method and bimodal emotion recognition method
CN110865705B (en) * 2019-10-24 2023-09-19 中国人民解放军军事科学院国防科技创新研究院 Multi-mode fusion communication method and device, head-mounted equipment and storage medium
CN110865705A (en) * 2019-10-24 2020-03-06 中国人民解放军军事科学院国防科技创新研究院 Multi-mode converged communication method and device, head-mounted equipment and storage medium
CN110826637A (en) * 2019-11-11 2020-02-21 广州国音智能科技有限公司 Emotion recognition method, system and computer-readable storage medium
CN110931006A (en) * 2019-11-26 2020-03-27 深圳壹账通智能科技有限公司 Intelligent question-answering method based on emotion analysis and related equipment
CN111839490A (en) * 2020-05-26 2020-10-30 合肥工业大学 Non-contact heart rate monitoring method and system
CN112069897B (en) * 2020-08-04 2023-09-01 华南理工大学 Knowledge-graph-based speech and micro-expression recognition suicide emotion perception method
CN112069897A (en) * 2020-08-04 2020-12-11 华南理工大学 Knowledge graph-based voice and micro-expression recognition suicide emotion sensing method
CN112163467A (en) * 2020-09-11 2021-01-01 杭州海康威视数字技术股份有限公司 Emotion analysis method and device, electronic equipment and machine-readable storage medium
CN112163467B (en) * 2020-09-11 2023-09-26 杭州海康威视数字技术股份有限公司 Emotion analysis method, emotion analysis device, electronic equipment and machine-readable storage medium
CN112083806A (en) * 2020-09-16 2020-12-15 华南理工大学 Self-learning emotion interaction method based on multi-modal recognition
CN112446938A (en) * 2020-11-30 2021-03-05 重庆空间视创科技有限公司 Multi-mode-based virtual anchor system and method
CN112446938B (en) * 2020-11-30 2023-08-18 重庆空间视创科技有限公司 Multi-mode-based virtual anchor system and method
CN112618911A (en) * 2020-12-31 2021-04-09 四川音乐学院 Music feedback adjusting system based on signal processing
CN113100768A (en) * 2021-04-14 2021-07-13 中国人民解放军陆军特色医学中心 Computer vision incapability and damage effect evaluation system
CN113100768B (en) * 2021-04-14 2022-12-16 中国人民解放军陆军特色医学中心 Computer vision incapability and damage effect evaluation system
CN113421545B (en) * 2021-06-30 2023-09-29 平安科技(深圳)有限公司 Multi-mode voice synthesis method, device, equipment and storage medium
CN113421545A (en) * 2021-06-30 2021-09-21 平安科技(深圳)有限公司 Multi-modal speech synthesis method, device, equipment and storage medium
CN113421546B (en) * 2021-06-30 2024-03-01 平安科技(深圳)有限公司 Speech synthesis method based on cross-test multi-mode and related equipment
CN113421546A (en) * 2021-06-30 2021-09-21 平安科技(深圳)有限公司 Cross-tested multi-mode based speech synthesis method and related equipment
WO2023019612A1 (en) * 2021-08-16 2023-02-23 Hong Kong Applied Science and Technology Research Institute Company Limited Apparatus and method for speech-emotion recognition with quantified emotional states
US11810596B2 (en) 2021-08-16 2023-11-07 Hong Kong Applied Science and Technology Research Institute Company Limited Apparatus and method for speech-emotion recognition with quantified emotional states
CN114821401A (en) * 2022-04-07 2022-07-29 腾讯科技(深圳)有限公司 Video auditing method, device, equipment, storage medium and program product
ES2957419A1 (en) * 2022-06-03 2024-01-18 Neurologyca Science & Marketing Sl System and method for real-time detection of emotional states using artificial vision and natural language listening (Machine-translation by Google Translate, not legally binding)
CN116311539B (en) * 2023-05-19 2023-07-28 亿慧云智能科技(深圳)股份有限公司 Sleep motion capturing method, device, equipment and storage medium based on millimeter waves
CN116311539A (en) * 2023-05-19 2023-06-23 亿慧云智能科技(深圳)股份有限公司 Sleep motion capturing method, device, equipment and storage medium based on millimeter waves
CN116662742A (en) * 2023-06-28 2023-08-29 北京理工大学 Brain electrolysis code method based on hidden Markov model and mask empirical mode decomposition
CN117727298A (en) * 2024-02-09 2024-03-19 广州紫麦科技有限公司 Deep learning-based portable computer voice recognition method and system
CN117727298B (en) * 2024-02-09 2024-04-19 广州紫麦科技有限公司 Deep learning-based portable computer voice recognition method and system

Also Published As

Publication number Publication date
CN108899050B (en) 2020-10-02

Similar Documents

Publication Publication Date Title
CN108899050A (en) Speech signal analysis subsystem based on multi-modal Emotion identification system
CN108877801A (en) More wheel dialog semantics based on multi-modal Emotion identification system understand subsystem
CN108805087A (en) Semantic temporal fusion association based on multi-modal Emotion identification system judges subsystem
CN108805089A (en) Based on multi-modal Emotion identification method
CN108805088A (en) Physiological signal analyzing subsystem based on multi-modal Emotion identification system
Wang et al. Human emotion recognition by optimally fusing facial expression and speech feature
Cummins et al. Speech analysis for health: Current state-of-the-art and the increasing impact of deep learning
Imani et al. A survey of emotion recognition methods with emphasis on E-Learning environments
Chen et al. A hierarchical bidirectional GRU model with attention for EEG-based emotion classification
CN110556129B (en) Bimodal emotion recognition model training method and bimodal emotion recognition method
Singh et al. A multimodal hierarchical approach to speech emotion recognition from audio and text
Chen et al. Speech emotion recognition: Features and classification models
CN103996155A (en) Intelligent interaction and psychological comfort robot service system
CN103366618B (en) Scene device for Chinese learning training based on artificial intelligence and virtual reality
CN111583964B (en) Natural voice emotion recognition method based on multimode deep feature learning
CN108334583A (en) Affective interaction method and device, computer readable storage medium, computer equipment
CN108227932A (en) Interaction is intended to determine method and device, computer equipment and storage medium
CN110110169A (en) Man-machine interaction method and human-computer interaction device
Schels et al. Multi-modal classifier-fusion for the recognition of emotions
Xu et al. Multi-type features separating fusion learning for Speech Emotion Recognition
CN107437090A (en) The continuous emotion Forecasting Methodology of three mode based on voice, expression and electrocardiosignal
CN116226372A (en) Bi-LSTM-CNN-based multi-modal voice emotion recognition method
Du et al. A novel emotion-aware method based on the fusion of textual description of speech, body movements, and facial expressions
Gladys et al. Survey on multimodal approaches to emotion recognition
Rammohan et al. Speech signal-based modelling of basic emotions to analyse compound emotion: Anxiety

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant