CN103892792B - Emotion recognition model generation device and method - Google Patents
Abstract
An emotion recognition model generation device includes a signal collection module, a feature extraction module, a selection module, and an establishing module. The signal collection module collects multiple physiological signals from a human body. The feature extraction module extracts six time-domain features from each physiological signal to form a primitive feature set. The selection module selects an optimal feature subset from the primitive feature set. The establishing module builds an emotion recognition model from the optimal feature subset. The generated emotion recognition model achieves a high emotion recognition rate. The invention further provides an emotion recognition model generation method.
Description
Technical field
The present invention relates to emotion recognition technology, and more particularly to an emotion recognition model generation device and a method for generating an emotion recognition model.
Background technology
Emotion recognition is a human-computer interaction technology that gives machines the ability to recognize human emotions, and it has become a research hotspot in the field of human-computer interaction. Current research covers emotion recognition based on facial expressions, speech signals, text, body movement, and physiological signals. Among these, emotion recognition based on physiological signals is the most reliable, but also the most difficult.
A key step in a physiological-signal-based emotion recognition system is selecting a limited feature subset from a large primitive feature set and mapping it onto an emotion model. This key step is feature selection: it effectively removes redundant features, shortens model training time, and improves prediction accuracy, and it can also identify feature subsets that represent particular emotions.
The patent with application number CN200910150458.4 uses speech signals for emotion recognition. Compared with speech signals, the physiological signals of the human body are less susceptible to external factors and to the subject's conscious control, and are therefore more accurate and reliable. In that patent, however, recognizing an emotion requires 12 features.
Document " Using GA-based Feature Selecton for Emotion Recognition from
Physiological Signals " carry out emotion recognition using many physiological signals, but only used from 28 tested 5
Physiological signal is planted, it is that genetic algorithm is classified with reference to KNN to excite the method for material, feature selection and emotional semantic classification as emotion with picture
Device, but the discrimination of emotion is low, is only 78% to the highest discrimination of emotion.
Summary of the invention
In view of this, it is necessary to provide an emotion recognition model generation device, and a method for generating an emotion recognition model, that improve the emotion recognition rate.
The emotion recognition model generation device provided by the present invention includes a signal collection module, a feature extraction module, a selection module, and an establishing module. The signal collection module collects multiple physiological signals from a human body. The feature extraction module extracts 6 time-domain features from each physiological signal to form a primitive feature set, where the 6 time-domain features are: the mean of the signal, the standard deviation of the signal, the mean of the absolute first difference of the signal, the mean of the absolute first difference of the normalized signal, the mean of the absolute second difference of the signal, and the mean of the absolute second difference of the normalized signal. The selection module selects an optimal feature subset from the primitive feature set. The establishing module builds an emotion recognition model from the optimal feature subset.
The method for generating an emotion recognition model provided by the present invention includes the following steps: collecting multiple physiological signals from a human body; extracting 6 time-domain features from each physiological signal to form a primitive feature set, where the 6 time-domain features are the same six statistics listed above; selecting an optimal feature subset from the primitive feature set; and establishing an emotion recognition model from the optimal feature subset.
The emotion recognition model generation device and method of the present invention select an optimal feature subset from the primitive feature set and establish an emotion recognition model from it; using this emotion recognition model effectively improves the emotion recognition rate.
Description of the drawings
Fig. 1 is a block diagram of an emotion recognition model generation device according to an embodiment of the present invention;
Fig. 2 is a flowchart of a method for generating an emotion recognition model with the emotion recognition model generation device according to an embodiment of the present invention;
Fig. 3 is a flowchart of the detailed sub-steps of step S30 in Fig. 2.
Specific embodiment
Embodiments of the invention are described in detail below, with examples shown in the drawings, where identical or similar reference numbers denote identical or similar elements, or elements with identical or similar functions, throughout. The embodiments described with reference to the drawings are exemplary: they explain the present invention and must not be construed as limiting it.
In the description of the invention, terms such as "inner", "outer", "longitudinal", "transverse", "upper", "lower", "top", and "bottom" indicate orientations or positional relationships based on those shown in the drawings; they are used only for convenience of description and do not require the invention to be constructed or operated in a particular orientation, so they must not be construed as limiting the invention.
Referring to Fig. 1, Fig. 1 shows a block diagram of an emotion recognition model generation device 10 according to an embodiment of the present invention.
In this embodiment, the emotion recognition model generation device 10 includes a signal collection module 102, a feature extraction module 104, a selection module 106, an establishing module 108, a memory 110, and a processor 112. The signal collection module 102, feature extraction module 104, selection module 106, and establishing module 108 are stored in the memory 110, and the processor 112 executes the functional modules stored in the memory 110.
The signal collection module 102 collects multiple physiological signals from a human body. In this embodiment, the physiological signals include skin conductance, heart rate, blood volume pulse, EEG, respiration, and facial EMG.
In this embodiment, film clips were used as arousal material to elicit 3 emotions: happiness, sadness, and calm. An MP150 physiological recorder from BIOPAC (USA) was used to acquire 6 physiological signals from 150 subjects (participants) aged 19-25 with no medical history while they watched the films; the 6 signals are skin conductance, heart rate, blood volume pulse, EEG, respiration, and facial EMG. After each viewing, the subjects reported by questionnaire the emotion they felt while watching (calm, happy, or sad) and the intensity with which it was aroused (1 very weak, 2 weak, 3 moderate, 4 strong, 5 very strong). From the questionnaires, data with emotion intensity greater than 3 were selected, finally yielding valid data from 110 subjects.
The feature extraction module 104 extracts 6 time-domain features from each physiological signal to form the primitive feature set. The 6 time-domain features are: the mean of the signal, the standard deviation of the signal, the mean of the absolute first difference of the signal, the mean of the absolute first difference of the normalized signal, the mean of the absolute second difference of the signal, and the mean of the absolute second difference of the normalized signal.
In this embodiment, for a signal X with N sampling points X_1, ..., X_N, the mean of the signal is:
μ_X = (1/N) Σ_{n=1}^{N} X_n
The standard deviation of the signal is:
σ_X = ( (1/(N-1)) Σ_{n=1}^{N} (X_n − μ_X)² )^{1/2}
The mean of the absolute first difference of the signal is:
δ_X = (1/(N-1)) Σ_{n=1}^{N-1} |X_{n+1} − X_n|
The mean of the absolute first difference of the normalized signal X̃_n = (X_n − μ_X)/σ_X is:
δ̃_X = (1/(N-1)) Σ_{n=1}^{N-1} |X̃_{n+1} − X̃_n| = δ_X/σ_X
The mean of the absolute second difference of the signal is:
γ_X = (1/(N-2)) Σ_{n=1}^{N-2} |X_{n+2} − X_n|
The mean of the absolute second difference of the normalized signal is:
γ̃_X = (1/(N-2)) Σ_{n=1}^{N-2} |X̃_{n+2} − X̃_n| = γ_X/σ_X
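A minimal NumPy sketch of these six statistics follows; the sample (N−1) standard deviation and the (X − μ)/σ normalization are assumed conventions, since the patent's formula images are not reproduced in this text:

```python
import numpy as np

def six_features(x):
    """Six time-domain statistics of one physiological signal."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()                       # mean of the signal
    sigma = x.std(ddof=1)               # sample standard deviation
    d1 = np.abs(np.diff(x)).mean()      # mean |first difference|
    d1_norm = d1 / sigma                # same statistic on the normalized signal
    d2 = np.abs(x[2:] - x[:-2]).mean()  # mean |second difference|
    d2_norm = d2 / sigma                # same statistic on the normalized signal
    return np.array([mu, sigma, d1, d1_norm, d2, d2_norm])
```

Because normalization is linear, the normalized-signal statistics reduce to the raw statistics divided by σ, which is what the code exploits.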
The selection module 106 selects an optimal feature subset from the primitive feature set. In this embodiment, the selection module 106 includes an initialization submodule 1060, an acquisition submodule 1062, a solving submodule 1064, a judging submodule 1066, and an updating submodule 1068.
In this embodiment, the initialization submodule 1060 sets the size of the ant colony to the number of features in the primitive feature set, sets the number of iterations to a fixed value, and initializes the pheromone matrix.
In this embodiment, when the initialization submodule 1060 initializes the pheromone matrix, every pheromone value in the matrix is initialized to τ_max = 50.
The acquisition submodule 1062 obtains the mark state of each time-domain feature according to a pseudorandom proportional rule.
In this embodiment, the pseudorandom proportional rule is: when q ≤ q0, s = argmax_{j∈{0,1}} τ_ij; when q > q0, s is chosen at random in proportion to the pheromone values. Here s is the mark state of feature i, τ_ij is the pheromone concentration of time-domain feature i in state j (j = 1 means selected, j = 0 means not selected), q is a random number drawn uniformly from [0,1], and q0 (0 ≤ q0 ≤ 1) is a parameter.
When q ≤ q0, the acquisition submodule 1062 obtains the mark state of feature i from the relative sizes of the pheromone values τ_i0 and τ_i1: if τ_i0 < τ_i1, then s = 1; if τ_i0 > τ_i1, then s = 0. When q > q0, the acquisition submodule 1062 obtains the mark state of feature i probabilistically: it draws a random number r; if r ≤ τ_i0/(τ_i0 + τ_i1), ant k marks feature i as 0, otherwise ant k marks feature i as 1.
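The per-feature decision can be sketched as follows; the default q0 value and the exact roulette threshold are assumptions, reconstructed from the standard ant-colony-system form of the rule because the original thresholds are missing from this text:

```python
import random

def mark_feature(tau0, tau1, q0=0.9, rng=random):
    """Pseudorandom proportional rule for one feature.

    tau0 / tau1: pheromone for state 0 (not selected) / state 1 (selected).
    """
    q = rng.random()
    if q <= q0:
        # exploitation: take the state with the larger pheromone value
        return 1 if tau1 > tau0 else 0
    # exploration: roulette-wheel choice between the two states
    r = rng.random()
    return 0 if r <= tau0 / (tau0 + tau1) else 1
```

Setting q0 close to 1 makes the ants mostly exploit the current pheromone values, with occasional random exploration.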
The solving submodule 1064 obtains a feature subset from the solution built by each ant and its mark states.
In this embodiment, the solving submodule 1064 also computes a fitness value from the classification accuracy and feature number of each feature subset, and sorts the solutions to select a first optimal solution.
In this embodiment, the fitness function F_k of the solution built by ant k is defined as:
F_k = R_k / (1 + λ·N_k)
where R_k is the classification accuracy of the solution built by ant k, N_k is the number of features included in the solution built by ant k, and λ is the weight given to N_k. In this embodiment, λ = 0.01.
In this embodiment, the larger the value of F_k, the better the corresponding feature subset.
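The fitness function is a one-liner; this sketch uses the formula and λ value stated above:

```python
def fitness(accuracy, n_features, lam=0.01):
    """F_k = R_k / (1 + lambda * N_k): rewards accuracy, penalizes subset size."""
    return accuracy / (1.0 + lam * n_features)
```

At λ = 0.01 the size penalty is mild, so fitness is dominated by classification accuracy and the feature count mainly breaks ties between similarly accurate subsets.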
The solving submodule 1064 mutates the first optimal solution according to a mutation rule to obtain multiple mutated solutions.
In this embodiment, the mutation rule is that the solving submodule 1064 flips the mark state of at least one time-domain feature in the first optimal solution, and takes the changed first optimal solution as a mutated solution.
The solving submodule 1064 computes fitness values from the classification accuracy and feature number of the multiple mutated solutions and the first optimal solution, and sorts them to select a second optimal solution.
The solving submodule 1064 searches the neighborhood of the second optimal solution for neighborhood solutions according to a neighborhood exchange rule.
In this embodiment, the neighborhood exchange rule is that the solving submodule 1064 searches for neighborhood solutions within the neighborhood of the second optimal solution.
The solving submodule 1064 computes fitness values from the classification accuracy and feature number of the neighborhood solutions and the second optimal solution, and sorts them to select a third optimal solution.
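One plausible reading of the mutation and neighborhood operations on a 0/1 mark vector is sketched below; flipping exactly one bit per mutation, and defining the neighborhood as all single-bit flips, are assumptions the text does not pin down (it says only "at least one" feature and does not define the neighborhood):

```python
import random

def mutate(solution, rng=random):
    """Flip the mark state of one randomly chosen feature."""
    s = list(solution)
    i = rng.randrange(len(s))
    s[i] ^= 1
    return s

def neighborhood(solution):
    """Yield every solution that differs from `solution` in exactly one mark."""
    for i in range(len(solution)):
        s = list(solution)
        s[i] ^= 1
        yield s
```

Each candidate produced this way would then be scored with the fitness function and sorted, exactly as the second and third optimal solutions are selected above.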
The judging submodule 1066 judges whether the number of iterations has reached the fixed value.
In this embodiment, when the number of iterations reaches the fixed value, the solving submodule 1064 outputs the third optimal solution as the optimal feature subset.
The updating submodule 1068 updates the pheromone matrix according to the third optimal solution when the number of iterations has not reached the fixed value.
In this embodiment, the pheromone update uses the following formula:
τ_ij(t+1) = (1 − ρ)·τ_ij(t) + 1/F_best
where ρ = 0.08, F_best = R_best/(1 + λ·N_best), and λ = 0.01.
In this embodiment, for a time-domain feature marked 0, the "0" entry of its row in the pheromone matrix is updated: because the feature is not selected, its pheromone only evaporates and none is released. For a time-domain feature marked 1, the "1" entry of its row is updated: because the feature is selected, its pheromone both evaporates and receives released pheromone, reinforcing the pheromone of that feature.
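The update can be sketched as follows, using the deposit term 1/F_best exactly as the formula above gives it; applying evaporation only to the marked entry of each row, and the deposit only to selected features, follows the description, while the treatment of the unmarked entries is left untouched as an assumption:

```python
def update_pheromone(tau, best_solution, f_best, rho=0.08):
    """Update a pheromone matrix tau (list of [tau_i0, tau_i1] rows)
    from the best solution's 0/1 marks and its fitness F_best."""
    deposit = 1.0 / f_best
    for i, mark in enumerate(best_solution):
        tau[i][mark] *= (1.0 - rho)   # evaporation on the marked entry
        if mark == 1:
            tau[i][1] += deposit      # release only for selected features
    return tau
```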
The establishing module 108 establishes an emotion recognition model from the optimal feature subset.
Referring to Fig. 2, Fig. 2 shows a flowchart of a method for generating an emotion recognition model with the emotion recognition model generation device 10 according to an embodiment of the present invention.
In this embodiment, the method for generating an emotion recognition model includes the following steps.
In step S10, the signal collection module 102 collects multiple physiological signals from a human body.
In this embodiment, the physiological signals include skin conductance, heart rate, blood volume pulse, EEG, respiration, and facial EMG.
In this embodiment, film clips were used as arousal material to elicit 3 emotions: happiness, sadness, and calm. An MP150 physiological recorder from BIOPAC (USA) was used to acquire 6 physiological signals from 150 subjects (participants) aged 19-25 with no medical history while they watched the films; the 6 signals are skin conductance, heart rate, blood volume pulse, EEG, respiration, and facial EMG. After each viewing, the subjects reported by questionnaire the emotion they felt while watching (calm, happy, or sad) and the intensity with which it was aroused (1 very weak, 2 weak, 3 moderate, 4 strong, 5 very strong). From the questionnaires, data with emotion intensity greater than 3 were selected, finally yielding valid data from 110 subjects.
In step S20, the feature extraction module 104 extracts 6 time-domain features from each physiological signal to form the primitive feature set. The 6 time-domain features are: the mean of the signal, the standard deviation of the signal, the mean of the absolute first difference of the signal, the mean of the absolute first difference of the normalized signal, the mean of the absolute second difference of the signal, and the mean of the absolute second difference of the normalized signal.
In this embodiment, for a signal X with N sampling points X_1, ..., X_N, the mean of the signal is:
μ_X = (1/N) Σ_{n=1}^{N} X_n
The standard deviation of the signal is:
σ_X = ( (1/(N-1)) Σ_{n=1}^{N} (X_n − μ_X)² )^{1/2}
The mean of the absolute first difference of the signal is:
δ_X = (1/(N-1)) Σ_{n=1}^{N-1} |X_{n+1} − X_n|
The mean of the absolute first difference of the normalized signal X̃_n = (X_n − μ_X)/σ_X is:
δ̃_X = (1/(N-1)) Σ_{n=1}^{N-1} |X̃_{n+1} − X̃_n| = δ_X/σ_X
The mean of the absolute second difference of the signal is:
γ_X = (1/(N-2)) Σ_{n=1}^{N-2} |X_{n+2} − X_n|
The mean of the absolute second difference of the normalized signal is:
γ̃_X = (1/(N-2)) Σ_{n=1}^{N-2} |X̃_{n+2} − X̃_n| = γ_X/σ_X
In step S30, the selection module 106 selects an optimal feature subset from the primitive feature set.
In step S40, the establishing module 108 establishes an emotion recognition model from the optimal feature subset.
Referring to Fig. 3, Fig. 3 shows a flowchart of the detailed sub-steps of step S30 in Fig. 2.
In this embodiment, step S30 includes the following steps.
In step S300, the initialization submodule 1060 sets the size of the ant colony to the number of features in the primitive feature set, and sets the number of iterations to a fixed value.
In step S302, the initialization submodule 1060 initializes the pheromone matrix.
In this embodiment, when the initialization submodule 1060 initializes the pheromone matrix, every pheromone value in the matrix is initialized to τ_max = 50.
In step S304, the acquisition submodule 1062 obtains the mark state of each time-domain feature according to the pseudorandom proportional rule.
In this embodiment, the pseudorandom proportional rule is: when q ≤ q0, s = argmax_{j∈{0,1}} τ_ij; when q > q0, s is chosen at random in proportion to the pheromone values. Here s is the mark state of feature i, τ_ij is the pheromone concentration of time-domain feature i in state j (j = 1 means selected, j = 0 means not selected), q is a random number drawn uniformly from [0,1], and q0 (0 ≤ q0 ≤ 1) is a parameter.
Step S304 therefore includes the following sub-steps:
When q ≤ q0, the mark state of feature i is obtained from the relative sizes of the pheromone values τ_i0 and τ_i1: if τ_i0 < τ_i1, then s = 1; if τ_i0 > τ_i1, then s = 0.
When q > q0, the mark state of feature i is obtained probabilistically. In this embodiment, the acquisition submodule 1062 draws a random number r; if r ≤ τ_i0/(τ_i0 + τ_i1), ant k marks feature i as 0, otherwise ant k marks feature i as 1.
In step S306, the solving submodule 1064 obtains a feature subset from the solution built by each ant and its mark states.
In step S308, the solving submodule 1064 computes a fitness value from the classification accuracy and feature number of each feature subset, and sorts the solutions to select a first optimal solution.
In this embodiment, the fitness function F_k of the solution built by ant k is defined as:
F_k = R_k / (1 + λ·N_k)
where R_k is the classification accuracy of the solution built by ant k, N_k is the number of features included in the solution built by ant k, and λ is the weight given to N_k. In this embodiment, λ = 0.01.
In this embodiment, the larger the value of F_k, the better the corresponding feature subset.
In step S310, the solving submodule 1064 mutates the first optimal solution according to the mutation rule to obtain multiple mutated solutions.
In this embodiment, the mutation rule is to flip the mark state of at least one time-domain feature in the first optimal solution and take the changed first optimal solution as a mutated solution.
In step S312, the solving submodule 1064 computes fitness values from the classification accuracy and feature number of the multiple mutated solutions and the first optimal solution, and sorts them to select a second optimal solution.
In step S314, the solving submodule 1064 searches the neighborhood of the second optimal solution for neighborhood solutions according to the neighborhood exchange rule.
In this embodiment, the neighborhood exchange rule is that the solving submodule 1064 searches for neighborhood solutions within the neighborhood of the second optimal solution.
In step S316, the solving submodule 1064 computes fitness values from the classification accuracy and feature number of the neighborhood solutions and the second optimal solution, and sorts them to select a third optimal solution.
In step S318, the judging submodule 1066 judges whether the number of iterations has reached the fixed value.
If the number of iterations has reached the fixed value, then in step S320 the solving submodule 1064 outputs the third optimal solution as the optimal feature subset.
If the number of iterations has not reached the fixed value, then in step S322 the updating submodule 1068 updates the pheromone matrix according to the third optimal solution.
In this embodiment, for a time-domain feature marked 0, the updating submodule 1068 updates the "0" entry of that feature's row in the pheromone matrix: because the feature is not selected, its pheromone only evaporates and none is released. For a time-domain feature marked 1, the updating submodule 1068 updates the "1" entry of that feature's row: because the feature is selected, its pheromone both evaporates and receives released pheromone, reinforcing the pheromone of that feature.
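Steps S300 through S322 can be sketched as a single loop. This is a simplified illustration that omits the mutation and neighborhood-search refinements of steps S310-S316; the `evaluate` callback, the `n_iter` and `q0` defaults, and the empty-subset guard are assumptions not fixed by the text:

```python
import random

def aco_select(n_features, evaluate, n_iter=50, q0=0.9, rho=0.08,
               lam=0.01, tau_max=50.0, rng=random):
    """Ant-colony feature selection sketch.

    evaluate(marks) must return the classification accuracy of the
    feature subset given by the 0/1 list `marks`.
    """
    tau = [[tau_max, tau_max] for _ in range(n_features)]   # S302
    n_ants = n_features                                     # S300: colony size
    best_marks, best_f = None, -1.0
    for _ in range(n_iter):                                 # S318 loop
        for _ in range(n_ants):
            marks = []
            for i in range(n_features):                     # S304
                if rng.random() <= q0:                      # exploitation
                    marks.append(1 if tau[i][1] > tau[i][0] else 0)
                else:                                       # exploration
                    r = rng.random()
                    marks.append(0 if r <= tau[i][0] / (tau[i][0] + tau[i][1]) else 1)
            if not any(marks):                              # guard: keep >= 1 feature
                marks[rng.randrange(n_features)] = 1
            acc = evaluate(marks)                           # S306
            f = acc / (1.0 + lam * sum(marks))              # S308: fitness
            if f > best_f:
                best_marks, best_f = marks, f
        for i, m in enumerate(best_marks):                  # S322: pheromone update
            tau[i][m] *= (1.0 - rho)
            if m == 1:
                tau[i][1] += 1.0 / best_f
    return best_marks                                       # S320
```

In practice `evaluate` would wrap a classifier trained on the subset, whereas this sketch leaves the classifier choice open.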
With the emotion recognition model generation device 10 and its method for generating an emotion recognition model in the embodiments of the present invention, an optimal feature subset is selected from the primitive feature set and an emotion recognition model is established from it; using this emotion recognition model effectively improves the emotion recognition rate.
Although the present invention has been described with reference to the current preferred embodiments, those skilled in the art should understand that the above preferred embodiments are only used to illustrate the present invention and are not intended to limit its scope of protection; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within the scope of protection of the claims of the present invention.
Claims (12)
1. An emotion recognition model generation device, comprising:
a signal collection module, for collecting multiple physiological signals from a human body;
a feature extraction module, for extracting 6 time-domain features from each physiological signal to form a primitive feature set, wherein the 6 time-domain features are: the mean of the signal, the standard deviation of the signal, the mean of the absolute first difference of the signal, the mean of the absolute first difference of the normalized signal, the mean of the absolute second difference of the signal, and the mean of the absolute second difference of the normalized signal;
a selection module, for selecting an optimal feature subset from the primitive feature set, the selection module comprising: an initialization submodule, for setting the size of the ant colony to the number of features in the primitive feature set, setting the number of iterations to a fixed value, and initializing a pheromone matrix; an acquisition submodule, for obtaining the mark state of each time-domain feature according to a pseudorandom proportional rule; and a solving submodule, for obtaining a feature subset from the solution built by each ant and its mark states, wherein the solving submodule is further used to compute a first fitness value from the classification accuracy and feature number of each feature subset and sort the solutions to select a first optimal solution, to mutate the first optimal solution according to a mutation rule to obtain multiple mutated solutions, to compute a second fitness value from the classification accuracy and feature number of the multiple mutated solutions and the first optimal solution and sort them to select a second optimal solution, to search the neighborhood of the second optimal solution for neighborhood solutions according to a neighborhood exchange rule, and to compute a third fitness value from the classification accuracy and feature number of the neighborhood solutions and the second optimal solution and sort them to select a third optimal solution;
a judging submodule, for judging whether the number of iterations has reached the fixed value, wherein the solving submodule outputs the third optimal solution as the optimal feature subset when the number of iterations reaches the fixed value; and
an establishing module, for establishing an emotion recognition model from the optimal feature subset.
2. The emotion recognition model generation device of claim 1, wherein the selection module further comprises:
an updating submodule, for updating the pheromone matrix according to the third optimal solution when the number of iterations has not reached the fixed value.
3. The emotion recognition model generation device of claim 2, wherein the pseudorandom proportional rule is: when q ≤ q0, s = argmax_{j∈{0,1}} τ_ij; when q > q0, s is chosen at random in proportion to the pheromone values; where s denotes the mark state of feature i, τ_ij denotes the pheromone concentration of time-domain feature i in state j, j = 1 means selected and j = 0 means not selected, q is a random number drawn uniformly from [0,1], and q0 is a parameter with 0 ≤ q0 ≤ 1.
4. The emotion recognition model generation device of claim 3, wherein the acquisition submodule obtains the mark state of time-domain feature i from the relative sizes of the pheromone concentration values τ_i0 and τ_i1 when q ≤ q0, and obtains the mark state of time-domain feature i probabilistically from the pheromone concentration values when q > q0.
5. The emotion recognition model generation device of claim 1, wherein the mutation rule is that the solving submodule flips the mark state of at least one time-domain feature in the first optimal solution and takes the changed first optimal solution as a mutated solution.
6. The emotion recognition model generation device of claim 1, wherein the neighborhood exchange rule is that the solving submodule searches for neighborhood solutions within the neighborhood of the second optimal solution.
7. A method for generating an emotion recognition model, comprising the following steps:
collecting multiple physiological signals from a human body;
extracting 6 time-domain features from each physiological signal to form a primitive feature set, wherein the 6 time-domain features are: the mean of the signal, the standard deviation of the signal, the mean of the absolute first difference of the signal, the mean of the absolute first difference of the normalized signal, the mean of the absolute second difference of the signal, and the mean of the absolute second difference of the normalized signal;
selecting an optimal feature subset from the primitive feature set, this step comprising the following sub-steps:
setting the size of the ant colony to the number of features in the primitive feature set, and setting the number of iterations to a fixed value;
initializing a pheromone matrix;
obtaining the mark state of each time-domain feature according to a pseudorandom proportional rule;
obtaining a feature subset from the solution built by each ant and its mark states;
computing a first fitness value from the classification accuracy and feature number of each feature subset, and sorting the solutions to select a first optimal solution;
mutating the first optimal solution according to a mutation rule to obtain multiple mutated solutions;
computing a second fitness value from the classification accuracy and feature number of the multiple mutated solutions and the first optimal solution, and sorting them to select a second optimal solution;
searching the neighborhood of the second optimal solution for neighborhood solutions according to a neighborhood exchange rule;
computing a third fitness value from the classification accuracy and feature number of the neighborhood solutions and the second optimal solution, and sorting them to select a third optimal solution;
judging whether the number of iterations has reached the fixed value;
outputting the third optimal solution as the optimal feature subset when the number of iterations reaches the fixed value; and
establishing an emotion recognition model from the optimal feature subset.
8. The method of claim 7, wherein the step of selecting an optimal feature subset from the primitive feature set further comprises the following sub-step:
updating the pheromone matrix according to the third optimal solution when the number of iterations has not reached the fixed value.
9. The method of claim 8, wherein the pseudorandom proportional rule is: when q ≤ q0, s = argmax_{j∈{0,1}} τ_ij; when q > q0, s is chosen at random in proportion to the pheromone values; where s denotes the mark state of feature i, τ_ij denotes the pheromone concentration of time-domain feature i in state j, j = 1 means selected and j = 0 means not selected, q is a random number drawn uniformly from [0,1], and q0 is a parameter with 0 ≤ q0 ≤ 1.
10. The method of claim 9, wherein the step of obtaining the mark state of each time-domain feature according to the pseudorandom proportional rule comprises the following sub-steps:
obtaining the mark state of time-domain feature i from the relative sizes of the pheromone concentration values τ_i0 and τ_i1 when q ≤ q0; and
obtaining the mark state of time-domain feature i probabilistically from the pheromone concentration values when q > q0.
11. The method for generating an emotion recognition model as claimed in claim 7, characterized in that the mutation rule is to change the flag state of at least one of the time-domain features in the first optimal solution and then take the changed first optimal solution as the mutation solution.
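The mutation rule of claim 11 amounts to flipping the flag state of at least one feature in the first optimal solution. A sketch, where `n_flips` is an assumed parameter (the claim only requires "at least one"):

```python
import random

def mutate(solution, n_flips=1, rng=random):
    """Return the mutation solution: a copy of the first optimal solution
    with the flag state of n_flips randomly chosen features inverted."""
    mutant = solution[:]  # leave the original solution intact
    for i in rng.sample(range(len(solution)), n_flips):
        mutant[i] = 1 - mutant[i]  # 1 -> 0 or 0 -> 1
    return mutant
```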
12. The method for generating an emotion recognition model as claimed in claim 7, characterized in that the neighborhood exchange rule is to search for a neighborhood solution within the neighborhood of the second optimal solution.
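Claim 12 leaves the neighborhood structure unspecified. A common choice for a binary feature mask, assumed here, is the set of all masks at Hamming distance 1 from the second optimal solution:

```python
def neighborhood(solution):
    """Return every mask obtained by toggling exactly one flag state of
    the given solution (Hamming-distance-1 neighborhood, an assumption)."""
    result = []
    for i in range(len(solution)):
        neighbor = solution[:]
        neighbor[i] = 1 - neighbor[i]  # toggle exactly one flag state
        result.append(neighbor)
    return result
```

Scoring each neighbor and the second optimal solution with the same fitness, then sorting, yields the third optimal solution described in claim 7.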
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210567969.8A CN103892792B (en) | 2012-12-24 | 2012-12-24 | Emotion recognition model generation device and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210567969.8A CN103892792B (en) | 2012-12-24 | 2012-12-24 | Emotion recognition model generation device and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103892792A CN103892792A (en) | 2014-07-02 |
CN103892792B true CN103892792B (en) | 2017-05-10 |
Family
ID=50984615
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210567969.8A Active CN103892792B (en) | 2012-12-24 | 2012-12-24 | Emotion recognition model generation device and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103892792B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106691475B (en) * | 2016-12-30 | 2020-03-27 | 中国科学院深圳先进技术研究院 | Emotion recognition model generation method and device |
WO2018120088A1 (en) * | 2016-12-30 | 2018-07-05 | 中国科学院深圳先进技术研究院 | Method and apparatus for generating emotional recognition model |
CN107918487A (en) * | 2017-10-20 | 2018-04-17 | 南京邮电大学 | A kind of method that Chinese emotion word is identified based on skin electrical signal |
CN109685149B (en) * | 2018-12-28 | 2021-04-27 | 江苏智慧工场技术研究院有限公司 | Method for constructing emotion fine scoring model and automatically acquiring emotion fine |
CN109685156B (en) * | 2018-12-30 | 2021-11-05 | 杭州灿八科技有限公司 | Method for acquiring classifier for recognizing emotion |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101887721A (en) * | 2010-07-19 | 2010-11-17 | 东南大学 | Electrocardiosignal and voice signal-based bimodal emotion recognition method |
CN101930735A (en) * | 2009-06-23 | 2010-12-29 | 富士通株式会社 | Speech emotion recognition equipment and speech emotion recognition method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101840644B1 (en) * | 2011-05-31 | 2018-03-22 | 한국전자통신연구원 | System of body gard emotion cognitive-based, emotion cognitive device, image and sensor controlling appararus, self protection management appararus and method for controlling the same |
- 2012-12-24: CN application CN201210567969.8A filed (granted as CN103892792B); status: Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101930735A (en) * | 2009-06-23 | 2010-12-29 | 富士通株式会社 | Speech emotion recognition equipment and speech emotion recognition method |
CN101887721A (en) * | 2010-07-19 | 2010-11-17 | 东南大学 | Electrocardiosignal and voice signal-based bimodal emotion recognition method |
Non-Patent Citations (4)
Title |
---|
Y. Gu et al., "Using GA-based Feature Selection for Emotion Recognition from Physiological Signals," 2008 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS 2008), 2009-02-11, pp. 1-4 * |
Zhang Huiling, "Research on Emotion Recognition Based on Pulse Signals," Master's Thesis, Southwest University, 2011-09-15, pp. 8-40 * |
Cao Jun, "Research on Emotional State Recognition from ECG Signals Based on the Quantum Particle Swarm Algorithm," Master's Thesis, Southwest University, 2012-10-15, pp. 4, 7-8, 11-12, 15-16, 27-28 * |
Lin Shilai et al., "Applied Research of the Ant Colony Algorithm in Emotion Recognition from Respiration Signals," Computer Engineering and Applications, vol. 47, no. 2, 2011-01-06, pp. 169-172 * |
Also Published As
Publication number | Publication date |
---|---|
CN103892792A (en) | 2014-07-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111329474B (en) | Electroencephalogram identity recognition method and system based on deep learning and information updating method | |
CN103892792B (en) | Emotion recognition model generation device and method | |
CN105005777B (en) | Audio and video recommendation method and system based on human face | |
CN109497990B (en) | Electrocardiosignal identity recognition method and system based on canonical correlation analysis | |
CN112101329B (en) | Video-based text recognition method, model training method and model training device | |
CN109934089A (en) | Multistage epileptic EEG Signal automatic identifying method based on supervision gradient lifter | |
CN107194158A (en) | A kind of disease aided diagnosis method based on image recognition | |
McCall et al. | Macro-class Selection for Hierarchical k-NN Classification of Inertial Sensor Data. | |
US20230030419A1 (en) | Machine Learning Model Training Method and Device and Electronic Equipment | |
CN103631941A (en) | Electroencephalogram-based target image retrieval system | |
CN104063721B (en) | A kind of human behavior recognition methods learnt automatically based on semantic feature with screening | |
Bu | Human motion gesture recognition algorithm in video based on convolutional neural features of training images | |
CN110688888B (en) | Pedestrian attribute identification method and system based on deep learning | |
CN108205684A (en) | Image disambiguation method, device, storage medium and electronic equipment | |
CN110633634A (en) | Face type classification method, system and computer readable storage medium for traditional Chinese medicine constitution | |
CN106991409A (en) | A kind of Mental imagery EEG feature extraction and categorizing system and method | |
CN112307975A (en) | Multi-modal emotion recognition method and system integrating voice and micro-expressions | |
CN111126280A (en) | Gesture recognition fusion-based aphasia patient auxiliary rehabilitation training system and method | |
Jiao et al. | Golf swing classification with multiple deep convolutional neural networks | |
CN109086794A (en) | A kind of driving behavior mode knowledge method based on T-LDA topic model | |
CN115827995A (en) | Social matching method based on big data analysis | |
CN113749619A (en) | Mental fatigue assessment method based on K-TRCA | |
CN109086351A (en) | A kind of method and user tag system obtaining user tag | |
CN107045624A (en) | A kind of EEG signals pretreatment rolled into a ball based on maximum weighted and sorting technique | |
CN113435335A (en) | Microscopic expression recognition method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |