CN107169413A - Facial expression recognition method based on feature block weighting - Google Patents
Facial expression recognition method based on feature block weighting
- Publication number
- CN107169413A (application number CN201710234709.1A)
- Authority
- CN
- China
- Prior art keywords
- geometric features
- weighting
- characteristic
- feature
- block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 36
- 230000008921 facial expression Effects 0.000 title claims abstract description 33
- 230000007935 neutral effect Effects 0.000 claims abstract description 27
- 238000004458 analytical method Methods 0.000 claims abstract description 17
- 241000228740 Procrustes Species 0.000 claims abstract description 15
- 238000000605 extraction Methods 0.000 claims abstract description 15
- 230000004927 fusion Effects 0.000 claims abstract description 15
- 230000009467 reduction Effects 0.000 claims abstract description 9
- 238000011017 operating method Methods 0.000 claims abstract description 4
- 238000005457 optimization Methods 0.000 claims description 9
- 239000013598 vector Substances 0.000 claims description 7
- 230000001537 neural effect Effects 0.000 claims description 3
- 230000001815 facial effect Effects 0.000 abstract description 13
- 238000004364 calculation method Methods 0.000 description 5
- 238000005516 engineering process Methods 0.000 description 3
- 230000000694 effects Effects 0.000 description 2
- 238000005267 amalgamation Methods 0.000 description 1
- 238000013528 artificial neural network Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 210000004218 nerve net Anatomy 0.000 description 1
- 210000000056 organ Anatomy 0.000 description 1
- 238000000926 separation method Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G06V40/175—Static expression
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Biomedical Technology (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Artificial Intelligence (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to a facial expression recognition method based on feature block weighting. The operating steps of the method are as follows: 1) extract the Gabor texture features and geometric features of an expression image; 2) reduce the dimensionality of the extracted Gabor texture features with the PCA algorithm, and align the extracted geometric features block by block: the geometric features are divided into three feature blocks (mouth, left eye, right eye) and each geometric feature block is aligned with Procrustes analysis; 3) fuse the PCA-reduced Gabor texture features with the three Procrustes-aligned geometric feature blocks to form a fused feature; 4) input the fused feature into a BP neural network with feature-block weighting, train the network, and obtain suitable weight coefficients for each layer. The invention improves the generality of expression geometric features and solves the problem that different feature forms and features from different facial regions contribute differently to expression recognition.
Description
Technical field
The present invention relates to facial expression recognition technology, and in particular to a method that weights each feature block and uses a weighted BP (backpropagation) neural network.
Background technology
The greatest problem facing facial expression recognition research is how to improve recognition accuracy. Because of the influence of different facial regions, differences in the size of facial organs, skin color and ethnic or cultural background, current facial expression recognition methods lack good generality and are not robust across different people.
Feature extraction is crucial for expression recognition. Different feature extraction methods represent the features from different angles, yet different features contribute differently to the recognition of facial expressions. To distinguish the importance of different features and of different facial regions, many researchers use weight-analysis methods that assign a weight factor to every feature dimension and search for the weight factors under optimization criteria such as maximizing the between-class distance and minimizing the within-class distance, thereby distinguishing the contribution of each feature to expression recognition and improving the recognition rate. However, these methods all face the following three problems:
1. The dimensionality of the features extracted from an expression image can reach several thousand. Weighting every dimension inevitably produces a large number of weight factors, and searching for them adds considerable computational load, so real-time performance suffers.
2. Weighting every dimension separately inevitably causes each feature to lose its original representational form.
3. The optimization of the weight factors and the classifier are two independent processes; the quality of the weight factors must still be verified by the classifier, and only weight factors that help the classifier classify correctly are beneficial.
Based on the above, the present invention proposes a facial expression recognition method based on feature block weighting. For problems 1 and 2, it proposes to weight features of different forms and from different facial regions at the level of feature blocks. For problem 3, it proposes a weighted BP neural network in which the optimization of the weight factors is carried out simultaneously with the optimization of the weights and thresholds of each layer of the network.
Summary of the invention
In view of the defects of the prior art, the object of the present invention is to propose a facial expression recognition method based on feature block weighting that solves the problem that features of different forms and from different facial regions contribute differently to facial expression recognition.
To achieve this object, the concept of the invention is as follows:
The facial expression recognition method based on feature block weighting comprises facial Gabor feature extraction, facial geometric feature extraction, block-wise alignment, and a BP neural network based on feature block weighting.
A Gabor filter bank is built to extract the Gabor texture features of the facial expression; because the dimensionality of the Gabor features is very high, PCA is used for dimensionality reduction. The positions of facial key points are extracted as geometric features with the Face++ function library. Because faces differ in position and size, the geometric features must be aligned to reduce the influence of inaccurate localization and inconsistent size. Many researchers align facial geometric features with Procrustes analysis and achieve good results. It is also known that humans judge expressions mainly from the different shapes of the mouth and eyes, and that the changes of the mouth and of the eyes are independent and do not interfere with each other. This method therefore divides the facial geometric features into three geometric feature blocks (left eye, right eye, mouth) and applies Procrustes analysis to each block separately. Unlike whole-face alignment, this reduces the interference between the feature blocks, so the geometric features of the samples are aligned better than when all facial geometric features are aligned together. This operation helps to overcome the low recognition rate caused by differences in face size and in the size of facial organs.
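As an illustration of this block-wise alignment, the sketch below aligns each geometric feature block to a per-block reference shape using Procrustes analysis from SciPy. The landmark index ranges, the template shapes and the function name are assumptions made for illustration; the patent itself only specifies that landmarks come from the Face++ library and are aligned block by block.

```python
import numpy as np
from scipy.spatial import procrustes

def align_blocks(landmarks, templates, blocks):
    """Align each geometric feature block to its own template with Procrustes
    analysis, removing translation, scale and rotation differences per block.

    landmarks : (N, 2) array of facial key points of one expression image
    templates : dict block name -> (n_k, 2) reference shape for that block
    blocks    : dict block name -> index array into `landmarks`
    """
    aligned = {}
    for name, idx in blocks.items():
        _, mtx2, _ = procrustes(templates[name], landmarks[idx])
        aligned[name] = mtx2          # aligned block shape
    return aligned

# Hypothetical landmark index ranges for the three geometric feature blocks.
blocks = {"left_eye": np.arange(0, 8),
          "right_eye": np.arange(8, 16),
          "mouth": np.arange(16, 36)}
```

Because each block is aligned independently, a large mouth deformation does not distort the alignment of the eye regions, which is exactly the interference the block-wise scheme is meant to avoid.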
Different facial regions and different feature representations contribute differently to expression recognition. The traditional approach weights every feature dimension and iterates the weight factors under principles such as maximizing the between-class distance and minimizing the within-class distance, but it has three shortcomings: 1. it destroys the original representational form of the features and the relationships among them, and weighting each dimension forfeits the advantage of the feature as a whole; 2. weighting every dimension separately inevitably causes each feature to lose its original representation; 3. the optimization of the feature weights and the classifier are separate processes. Therefore, for shortcomings 1 and 2, this method proposes the concept of feature-block weighting: the Gabor features of the facial expression, the left-eye geometric features, the right-eye geometric features and the mouth geometric features are treated as four independent feature blocks, and each block is weighted as a whole. For shortcoming 3, a BP neural network based on feature block weighting is proposed: combined with the BP network, a weighting layer is added before the input layer, the weighting layer weights each feature block, and the block-weighting process is thereby combined with the classifier; through training on the training samples, the weight factors of the weighting layer are iteratively optimized, realizing the weighting of the feature blocks.
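To make the feature-block weighting concrete, the following is a minimal sketch of the weighting layer's forward operation, assuming the four blocks are plain 1-D vectors and the weight factors start at 1; the block dimensions shown are illustrative and are not values given in the patent.

```python
import numpy as np

# Stand-ins for the four feature blocks (dimensions are illustrative).
F_g = np.random.randn(100)                                 # PCA-reduced Gabor block
F_l, F_r, F_m = (np.random.randn(16) for _ in range(3))    # eye and mouth blocks

def weight_layer(blocks, w):
    """Scale each feature block by its weight factor and concatenate the
    result; this is what the BP network's input layer receives."""
    return np.concatenate([w_k * b for w_k, b in zip(w, blocks)])

w = np.ones(4)                         # one weight factor per feature block
x = weight_layer([F_g, F_l, F_r, F_m], w)
```

How these four weight factors are updated together with the network parameters is sketched after embodiment three below.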
According to the above inventive concept, the present invention adopts the following technical solution:
A facial expression recognition method based on feature block weighting, characterized in that the operating steps are as follows: 1) extract the Gabor texture features and geometric features of an expression image; 2) reduce the dimensionality of the extracted Gabor texture features with the PCA algorithm, and align the extracted geometric features block by block: the geometric features are divided into three feature blocks (mouth, left eye, right eye) and each geometric feature block is aligned with Procrustes analysis; 3) fuse the PCA-reduced Gabor texture features with the three Procrustes-aligned geometric feature blocks to form a fused feature; 4) input the fused feature into the BP neural network with feature-block weighting, train the network, and obtain suitable weight coefficients for each layer.
The extraction of the Gabor texture features and geometric features of the expression image is: the Gabor texture features of the expression image are extracted with a Gabor filter, and the geometric features of the expression image are extracted with the Face++ function library.
The alignment of the geometric feature blocks is: the geometric features are divided into a left-eye geometric feature block, a right-eye geometric feature block and a mouth geometric feature block, and Procrustes analysis is then applied to each feature block separately.
The feature fusion is: the Gabor features and each geometric feature block are arranged and combined as column vectors.
The BP neural network method with feature-block weighting is: a weighting layer is added before the input layer of the neural network; the weighting layer contains four weight factors, and these four weight factors are trained and optimized together with the parameters of each layer of the BP neural network, realizing the weighting of the four feature blocks.
Compared with the prior art, the present invention has the following obvious and substantive features and notable technical progress: it improves the generality of expression geometric features, solves the problem that different feature representations and features from different facial regions contribute differently to expression recognition, and thereby improves the recognition accuracy of facial expressions.
Brief description of the drawings
Fig. 1 is the overall flow block diagram of the embodiment of the present invention.
Fig. 2 is the structure diagram of the weighted BP neural network of the embodiment of the present invention.
Fig. 3 is the calculation flowchart for the input features of the weighted BP neural network of the embodiment of the present invention.
Detailed description of the embodiments
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Embodiment one:
Referring to Fig. 1, the facial expression recognition method based on feature block weighting is characterized in that the operating steps are as follows: 1) extract the Gabor texture features and geometric features of an expression image; 2) reduce the dimensionality of the extracted Gabor texture features with the PCA algorithm, and align the extracted geometric features block by block: the geometric features are divided into three feature blocks (mouth, left eye, right eye) and each geometric feature block is aligned with Procrustes analysis; 3) fuse the PCA-reduced Gabor texture features with the three Procrustes-aligned geometric feature blocks to form a fused feature; 4) input the fused feature into the BP neural network with feature-block weighting, train the network, and obtain suitable weight coefficients for each layer.
Embodiment two:
This embodiment is basically the same as embodiment one; its special features are as follows:
The extraction of the Gabor texture features and geometric features of the expression image is: the Gabor texture features of the expression image are extracted with a Gabor filter, and the geometric features of the expression image are extracted with the Face++ function library.
The alignment of the geometric feature blocks is: the geometric features are divided into a left-eye geometric feature block, a right-eye geometric feature block and a mouth geometric feature block, and Procrustes analysis is then applied to each feature block separately.
The feature fusion is: the Gabor features and each geometric feature block are arranged and combined as column vectors.
The BP neural network method with feature-block weighting is: a weighting layer is added before the input layer of the neural network; the weighting layer contains four weight factors, and these four weight factors are trained and optimized together with the parameters of each layer of the BP neural network, realizing the weighting of the four feature blocks.
Embodiment three:
As shown in Fig. 1, the Gabor features of the facial expression are extracted with a Gabor filter bank. Because the dimensionality of the Gabor features is high and, in such a high-dimensional representation, the features are generally linearly correlated and contain useless or rarely useful variables, feature selection is performed on the extracted Gabor features with the PCA algorithm. The facial geometric features of the expression image are extracted with the Face++ function library. Because faces differ in structure and size, the positions and sizes of the eyes and mouth differ, so the extracted expression geometric features are divided into mouth, left-eye and right-eye geometric feature blocks, and Procrustes analysis is then applied to each feature block separately. The extracted Gabor features and the block-aligned geometric features are fused as follows:
F = [F_g; F_l; F_r; F_m]
where F denotes the fused feature, F_g denotes the Gabor features after PCA dimensionality reduction, and F_l, F_r and F_m denote the left-eye, right-eye and mouth geometric features after Procrustes analysis, respectively.
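A rough sketch of steps 1)-3) is given below, assuming a small Gabor filter bank built with scikit-image, PCA from scikit-learn, and fusion by simple concatenation; the filter-bank parameters, the retained PCA dimension and the helper names are assumptions made for illustration, not values specified in the patent.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import PCA

def gabor_features(image, frequencies=(0.1, 0.2, 0.3), n_orientations=8):
    """Concatenate the magnitude responses of a small Gabor filter bank."""
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            real, imag = gabor(image, frequency=f,
                               theta=np.pi * k / n_orientations)
            feats.append(np.hypot(real, imag).ravel())
    return np.concatenate(feats)

# X_gabor: Gabor features of the training images, stacked row-wise, e.g.
#   X_gabor = np.vstack([gabor_features(img) for img in train_images])
#   pca = PCA(n_components=100).fit(X_gabor)       # dimensionality reduction
#   F_g = pca.transform(gabor_features(img)[None])[0]
# Fusion: arrange the reduced Gabor block and the three aligned geometric
# feature blocks as one column vector, F = [F_g; F_l; F_r; F_m]:
#   F = np.concatenate([F_g, F_l.ravel(), F_r.ravel(), F_m.ravel()])
```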
For each feature block in the fused feature, a weight is defined at the level of the feature block, i.e. the features of each region are weighted as a whole. This method divides the extracted features into four independent feature blocks: the Gabor texture feature block, the left-eye geometric feature block, the right-eye geometric feature block and the mouth geometric feature block. Each of the four parts is treated as an independent whole and assigned its own weight. The weight factors are defined as follows:
When a BP neural network receives an input feature vector, it treats every feature variable equally, whereas in fact the main expression regions of the face contribute differently to expression recognition. This method therefore proposes a BP neural network with feature-block weighting, as shown in Fig. 2: a weighting layer is added before the input layer of the original BP network. The weighting layer consists of the four weight factors defined by the two formulas above, one for each of the four feature blocks. The weighting layer first weights each feature block; the weighted features are then passed to the input layer, next to the hidden layer and finally to the output layer. This is the forward propagation of the input features. Afterwards, according to the calculated error, the weights and thresholds of the output layer are updated, followed by the weights of the hidden layer, the input layer and the weighting layer, i.e. the error is propagated backwards. The specific flow is shown in Fig. 3: after the weights and thresholds of the network are initialized, the feature-block weighting is first applied to the input features, which then enter the input layer of the network; the output of each layer is computed in turn, the gap between the actual output and the desired output is analysed, and the weights of each layer are then updated backwards. The calculation of the weighting layer is as follows: each of the four feature blocks is multiplied by its corresponding weight factor, realizing the feature-block-based weighting of the four blocks, as in the following formula:
F' = [w_1*F_g; w_2*F_l; w_3*F_r; w_4*F_m]
where F' denotes the fused feature after block weighting, F is the input feature of the weighting layer, and F' is the output of the weighting layer and will serve as the input of the input layer. The calculation then proceeds step by step according to the calculation procedure of the BP neural network, and the weight coefficients of each layer are updated iteratively until the error requirement is met.
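The following is a minimal NumPy sketch of such a weighted BP network in the spirit of Figs. 2 and 3: a weighting layer scales each feature block by its weight factor before the input layer, and during backpropagation the block weight factors are updated together with the layer weights and thresholds. Layer sizes, the sigmoid and softmax choices, the learning rate and the initialization are assumptions not fixed by the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class WeightedBPNet:
    def __init__(self, block_sizes, n_hidden, n_classes, lr=0.01):
        self.slices, start = [], 0
        for s in block_sizes:                    # index range of each feature block
            self.slices.append(slice(start, start + s))
            start += s
        self.w = np.ones(len(block_sizes))       # one weight factor per block
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, start))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_classes, n_hidden))
        self.b2 = np.zeros(n_classes)
        self.lr = lr

    def forward(self, x):
        z0 = x.astype(float).copy()
        for k, sl in enumerate(self.slices):     # weighting layer
            z0[sl] *= self.w[k]
        h = sigmoid(self.W1 @ z0 + self.b1)      # hidden layer
        y = softmax(self.W2 @ h + self.b2)       # output layer
        return z0, h, y

    def train_step(self, x, target):             # target: one-hot label vector
        z0, h, y = self.forward(x)
        d_out = y - target                        # softmax + cross-entropy gradient
        dW2 = np.outer(d_out, h)
        d_hid = (self.W2.T @ d_out) * h * (1.0 - h)   # back through the hidden layer
        dW1 = np.outer(d_hid, z0)
        dz0 = self.W1.T @ d_hid                   # error reaching the weighting layer
        dw = np.array([dz0[sl] @ x[sl] for sl in self.slices])
        # Update the layer weights, thresholds and block weight factors together.
        self.W2 -= self.lr * dW2;  self.b2 -= self.lr * d_out
        self.W1 -= self.lr * dW1;  self.b1 -= self.lr * d_hid
        self.w  -= self.lr * dw
        return -float(np.log(y[target.argmax()] + 1e-12))   # loss for monitoring

# Hypothetical usage: fused feature with blocks of sizes 100, 16, 16, 20 and
# seven expression classes.
# net = WeightedBPNet([100, 16, 16, 20], n_hidden=64, n_classes=7)
# loss = net.train_step(fused_feature, one_hot_label)
```

In this sketch the gradient of each weight factor is the inner product of the error signal reaching the weighting layer with the corresponding unweighted feature block, which is what optimizing the weight factors jointly with the classifier amounts to.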
Claims (5)
1. A facial expression recognition method based on feature block weighting, characterized in that the operating steps are as follows:
1) extracting the Gabor texture features and geometric features of an expression image;
2) reducing the dimensionality of the extracted Gabor texture features with the PCA algorithm, and aligning the extracted geometric features block by block: the geometric features are divided into three feature blocks, namely mouth, left eye and right eye, and each geometric feature block is aligned with the Procrustes analysis method;
3) fusing the PCA-reduced Gabor texture features with the three Procrustes-aligned geometric feature blocks to form a fused feature;
4) inputting the fused feature into a BP neural network with feature-block weighting, training the network, and obtaining suitable weight coefficients for each layer.
2. The facial expression recognition method based on feature block weighting according to claim 1, characterized in that the extraction of the Gabor texture features and geometric features of the expression image in step 1) is: the Gabor texture features of the expression image are extracted with a Gabor filter, and the geometric features of the expression image are extracted with the Face++ function library.
3. The facial expression recognition method based on feature block weighting according to claim 1, characterized in that the alignment of the geometric feature blocks in step 2) is: the geometric features are divided into a left-eye geometric feature block, a right-eye geometric feature block and a mouth geometric feature block, and Procrustes analysis is then applied to each feature block separately.
4. The facial expression recognition method based on feature block weighting according to claim 1, characterized in that the feature fusion in step 3) is: the Gabor features and each geometric feature block are arranged and combined as column vectors.
5. The facial expression recognition method based on feature block weighting according to claim 1, characterized in that the BP neural network method with feature-block weighting in step 4) is: a weighting layer is added before the input layer of the neural network; the weighting layer contains four weight factors, and these four weight factors are trained and optimized together with the parameters of each layer of the BP neural network, realizing the weighting of the four feature blocks.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710234709.1A CN107169413B (en) | 2017-04-12 | 2017-04-12 | Facial expression recognition method based on feature block weighting |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710234709.1A CN107169413B (en) | 2017-04-12 | 2017-04-12 | Facial expression recognition method based on feature block weighting |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107169413A true CN107169413A (en) | 2017-09-15 |
CN107169413B CN107169413B (en) | 2021-01-12 |
Family
ID=59849968
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710234709.1A Expired - Fee Related CN107169413B (en) | 2017-04-12 | 2017-04-12 | Facial expression recognition method based on feature block weighting |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107169413B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108288023A (en) * | 2017-12-20 | 2018-07-17 | 深圳和而泰数据资源与云技术有限公司 | The method and apparatus of recognition of face |
CN110020580A (en) * | 2018-01-08 | 2019-07-16 | 三星电子株式会社 | Identify the method for object and facial expression and the method for training facial expression |
WO2020244434A1 (en) * | 2019-06-03 | 2020-12-10 | 腾讯科技(深圳)有限公司 | Method and apparatus for recognizing facial expression, and electronic device and storage medium |
CN112464699A (en) * | 2019-09-06 | 2021-03-09 | 富士通株式会社 | Image normalization method, system and readable medium for face analysis |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101276421A (en) * | 2008-04-18 | 2008-10-01 | 清华大学 | Method and apparatus for recognizing human face combining human face part characteristic and Gabor human face characteristic |
CN101388075A (en) * | 2008-10-11 | 2009-03-18 | 大连大学 | Human face identification method based on independent characteristic fusion |
CN101620669A (en) * | 2008-07-01 | 2010-01-06 | 邹采荣 | Method for synchronously recognizing identities and expressions of human faces |
CN101719223A (en) * | 2009-12-29 | 2010-06-02 | 西北工业大学 | Identification method for stranger facial expression in static image |
CN101799919A (en) * | 2010-04-08 | 2010-08-11 | 西安交通大学 | Front face image super-resolution rebuilding method based on PCA alignment |
CN103020654A (en) * | 2012-12-12 | 2013-04-03 | 北京航空航天大学 | Synthetic aperture radar (SAR) image bionic recognition method based on sample generation and nuclear local feature fusion |
CN104517104A (en) * | 2015-01-09 | 2015-04-15 | 苏州科达科技股份有限公司 | Face recognition method and face recognition system based on monitoring scene |
CN105117708A (en) * | 2015-09-08 | 2015-12-02 | 北京天诚盛业科技有限公司 | Facial expression recognition method and apparatus |
CN105512273A (en) * | 2015-12-03 | 2016-04-20 | 中山大学 | Image retrieval method based on variable-length depth hash learning |
CN105892287A (en) * | 2016-05-09 | 2016-08-24 | 河海大学常州校区 | Crop irrigation strategy based on fuzzy judgment and decision making system |
-
2017
- 2017-04-12 CN CN201710234709.1A patent/CN107169413B/en not_active Expired - Fee Related
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101276421A (en) * | 2008-04-18 | 2008-10-01 | 清华大学 | Method and apparatus for recognizing human face combining human face part characteristic and Gabor human face characteristic |
CN101620669A (en) * | 2008-07-01 | 2010-01-06 | 邹采荣 | Method for synchronously recognizing identities and expressions of human faces |
CN101388075A (en) * | 2008-10-11 | 2009-03-18 | 大连大学 | Human face identification method based on independent characteristic fusion |
CN101719223A (en) * | 2009-12-29 | 2010-06-02 | 西北工业大学 | Identification method for stranger facial expression in static image |
CN101799919A (en) * | 2010-04-08 | 2010-08-11 | 西安交通大学 | Front face image super-resolution rebuilding method based on PCA alignment |
CN103020654A (en) * | 2012-12-12 | 2013-04-03 | 北京航空航天大学 | Synthetic aperture radar (SAR) image bionic recognition method based on sample generation and nuclear local feature fusion |
CN104517104A (en) * | 2015-01-09 | 2015-04-15 | 苏州科达科技股份有限公司 | Face recognition method and face recognition system based on monitoring scene |
CN105117708A (en) * | 2015-09-08 | 2015-12-02 | 北京天诚盛业科技有限公司 | Facial expression recognition method and apparatus |
CN105512273A (en) * | 2015-12-03 | 2016-04-20 | 中山大学 | Image retrieval method based on variable-length depth hash learning |
CN105892287A (en) * | 2016-05-09 | 2016-08-24 | 河海大学常州校区 | Crop irrigation strategy based on fuzzy judgment and decision making system |
Non-Patent Citations (2)
Title |
---|
ZHANG ERDONG et al.: "Facial expression recognition research based on blocked local feature", Proceedings of 2016 7th International Conference on Mechatronics, Control and Materials (ICMCM 2016) *
ZHANG Jing: "Research on expression recognition based on facial image block processing and the PCA algorithm", China Master's Theses Full-text Database, Information Science and Technology *
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108288023A (en) * | 2017-12-20 | 2018-07-17 | 深圳和而泰数据资源与云技术有限公司 | The method and apparatus of recognition of face |
CN108288023B (en) * | 2017-12-20 | 2020-10-16 | 深圳和而泰数据资源与云技术有限公司 | Face recognition method and device |
CN110020580A (en) * | 2018-01-08 | 2019-07-16 | 三星电子株式会社 | Identify the method for object and facial expression and the method for training facial expression |
CN110020580B (en) * | 2018-01-08 | 2024-06-04 | 三星电子株式会社 | Method for identifying object and facial expression and method for training facial expression |
WO2020244434A1 (en) * | 2019-06-03 | 2020-12-10 | 腾讯科技(深圳)有限公司 | Method and apparatus for recognizing facial expression, and electronic device and storage medium |
CN112464699A (en) * | 2019-09-06 | 2021-03-09 | 富士通株式会社 | Image normalization method, system and readable medium for face analysis |
CN112464699B (en) * | 2019-09-06 | 2024-08-20 | 富士通株式会社 | Image normalization method, system and readable medium for face analysis |
Also Published As
Publication number | Publication date |
---|---|
CN107169413B (en) | 2021-01-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107169413A (en) | Facial expression recognition method based on feature block weighting | |
CN107679491A (en) | A kind of 3D convolutional neural networks sign Language Recognition Methods for merging multi-modal data | |
CN110163110A (en) | A kind of pedestrian's recognition methods again merged based on transfer learning and depth characteristic | |
CN104036255B (en) | A kind of facial expression recognizing method | |
CN109034210A (en) | Object detection method based on super Fusion Features Yu multi-Scale Pyramid network | |
CN107808132A (en) | A kind of scene image classification method for merging topic model | |
CN107491726A (en) | A kind of real-time expression recognition method based on multi-channel parallel convolutional neural networks | |
CN104850825A (en) | Facial image face score calculating method based on convolutional neural network | |
CN106504064A (en) | Clothes classification based on depth convolutional neural networks recommends method and system with collocation | |
CN107423678A (en) | A kind of training method and face identification method of the convolutional neural networks for extracting feature | |
CN107463920A (en) | A kind of face identification method for eliminating partial occlusion thing and influenceing | |
CN106648103A (en) | Gesture tracking method for VR headset device and VR headset device | |
CN106909887A (en) | A kind of action identification method based on CNN and SVM | |
CN107085704A (en) | Fast face expression recognition method based on ELM own coding algorithms | |
CN108764041A (en) | The face identification method of facial image is blocked for lower part | |
CN107341463A (en) | A kind of face characteristic recognition methods of combination image quality analysis and metric learning | |
CN104182772A (en) | Gesture recognition method based on deep learning | |
CN104268593A (en) | Multiple-sparse-representation face recognition method for solving small sample size problem | |
CN109712095B (en) | Face beautifying method with rapid edge preservation | |
CN106709453A (en) | Sports video key posture extraction method based on deep learning | |
CN106778852A (en) | A kind of picture material recognition methods for correcting erroneous judgement | |
CN107944399A (en) | A kind of pedestrian's recognition methods again based on convolutional neural networks target's center model | |
CN108921047A (en) | A kind of multi-model ballot mean value action identification method based on cross-layer fusion | |
CN109871905A (en) | A kind of plant leaf identification method based on attention mechanism depth model | |
CN108596264A (en) | A kind of community discovery method based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20210112 |