CN101369310A - Robust human face expression recognition method - Google Patents
- Publication number
- CN101369310A (application CN200810223211A)
- Authority
- CN
- China
- Prior art keywords
- formula
- human face
- robust
- expression
- matrix
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
Abstract
The invention provides a robust facial expression recognition method: a face image is reconstructed by robust principal component analysis, saliency analysis is performed on the difference image between the original face image and its reconstruction to detect an occluded region, the image in the occluded region is then reconstructed to remove the occlusion, and finally expression classification is performed on the de-occluded face image to obtain the expression recognition result. The invention removes a wide range of facial occlusions effectively, which is important for improving the facial expression recognition rate under occlusion, and constitutes a feasible robust facial expression recognition method.
Description
(1) Technical field
The present invention relates to a pattern recognition method, and in particular to a robust facial expression recognition method. It belongs to the field of facial expression information extraction and recognition.
(2) Background technology
Facial expression recognition is generally divided into recognition of facial actions and recognition of emotions. For example, some researchers recognize single and combined action units of the facial action coding system from facial expressions, while most researchers recognize emotions such as happiness, surprise, sadness, and fear. Because expression change is a non-rigid motion and is affected by individual differences, viewpoint changes, illumination, and other factors, facial expression recognition is a difficult task, and few facial expression recognition systems can yet be applied in real environments.
Past facial expression recognition was often confined to controlled conditions, for example a uniform background, consistent illumination, and no head motion; under such controlled conditions high recognition rates can be reached, but few researchers have studied robust facial expression recognition under uncontrolled conditions. Since the start of the 21st century, a few researchers have begun to study expression recognition methods that are robust to occlusion, illumination, pose, image resolution, and so on. Among these, the methods robust to occlusion mainly include methods using local features, local geometric mask methods, methods based on state-based facial motion models, and methods based on Gabor wavelet feature extraction, but few researchers perform robust expression recognition after first removing the occlusion from the face. Addressing the shortage of robust recognition methods for occluded faces and the deficiency of not removing the occluder, the present invention proposes a new robust facial expression recognition method.
(3) Summary of the invention
The objective of the invention is to address the deficiency that existing facial expression recognition methods are not robust to occluded faces, by proposing a robust facial expression recognition method that achieves a higher expression recognition rate when the face is occluded.
In the robust facial expression recognition method of the present invention, a face image is reconstructed by robust principal component analysis, saliency analysis is performed on the difference image between the original face image and its reconstruction to detect an occluded region, the occluded region is then repaired by image reconstruction to remove the occlusion, and finally expression classification is performed on the de-occluded face image to obtain the expression recognition result.
The steps of the robust facial expression recognition method of the present invention are as follows:
Step 1: not containing the L class human face expression image normalization that blocks with N is data matrix C
i∈ R
M * n(i=1 ... M), as training sample, the many classification AdaBoost method training facial expression classifier that adopt husky Pierre people such as (Schapire) to propose.
Step 2: Normalize M facial expression images of L classes, comprising both occluded and occlusion-free images, into data matrices A_i ∈ R^{m×n} (i = 1, …, M) as training samples. Let s = m × n, expand each A_i into a one-dimensional column vector d_i ∈ R^{s×1} (i = 1, …, M), and form the input matrix D = [d_1 d_2 … d_M] ∈ R^{s×M}. Apply the robust principal component analysis (Robust Principal Component Analysis, RPCA) method proposed by Fernando to obtain the robust mean vector μ ∈ R^{s×1} and the robust eigenvector matrix B ∈ R^{s×k}, with k < M.
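As an illustrative sketch only (not part of the claimed method), the subspace fit of step 2 can be expressed in NumPy. The patent calls for Fernando's RPCA; the sketch below substitutes ordinary PCA via the SVD, which is not robust to occluded (outlier) pixels, and all function and variable names are hypothetical:

```python
import numpy as np

def pca_subspace(D, k):
    """Sketch of step 2's subspace fit. The patent specifies Fernando's
    robust PCA (RPCA); ordinary PCA via SVD is used here instead, so the
    estimate is NOT robust to occluded (outlier) pixels."""
    mu = D.mean(axis=1, keepdims=True)            # mean vector, s x 1
    U, _, _ = np.linalg.svd(D - mu, full_matrices=False)
    B = U[:, :k]                                  # s x k orthonormal basis, k < M
    return mu, B

# Toy input matrix D = [d_1 ... d_M] with s = 8, M = 6
rng = np.random.default_rng(0)
D = rng.normal(size=(8, 6))
mu, B = pca_subspace(D, k=3)                      # shapes (8, 1) and (8, 3)
```

The columns of B are orthonormal, which is what makes the projection BB^T in formula (1) below a valid subspace reconstruction.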
Step 3: Normalize the facial expression image to be recognized into a data matrix P ∈ R^{m×n}.
Step 4: Expand P into a one-dimensional column vector d ∈ R^{s×1}, compute the reconstruction vector d_rec ∈ R^{s×1} of d according to formula (1), and reshape it into the data matrix P′ ∈ R^{m×n}.
d_rec = μ + B·B^T·(d − μ)    formula (1)
Step 5: Compute the difference image matrix E ∈ R^{m×n} between the reconstructed face image matrix P′ and the face image matrix P to be recognized, as in formula (2).
E = |P′ − P|    formula (2)
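Formulas (1) and (2) translate directly into NumPy. The following is an illustrative sketch with hypothetical names, not the patent's implementation:

```python
import numpy as np

def reconstruct_and_diff(P, mu, B):
    """Steps 4-5: reshape P to a column vector, reconstruct it in the
    subspace per formula (1), and form the difference image per formula (2)."""
    m, n = P.shape
    d = P.reshape(m * n, 1)                  # one-dimensional column vector d
    d_rec = mu + B @ (B.T @ (d - mu))        # formula (1)
    P_rec = d_rec.reshape(m, n)              # the reconstructed image P'
    E = np.abs(P_rec - P)                    # formula (2): E = |P' - P|
    return P_rec, E

# Toy example: if B spans the whole space, reconstruction is exact and E = 0
P = np.arange(4.0).reshape(2, 2)
mu = np.zeros((4, 1))
B = np.eye(4)                                # full basis, illustrative only
P_rec, E = reconstruct_and_diff(P, mu, B)
```

With a truncated basis (k < M, as in step 2), pixels that the subspace cannot explain, such as occluded ones, show up as large entries of E, which is what the saliency scan in step 6 exploits.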
Step 6: Let the scanning window R have height h (1 ≤ h < m) and width w (1 ≤ w < n), with window top-left coordinate (x1, y1) (0 ≤ x1 < n, 0 ≤ y1 < m). Traverse h, w, x1, y1 subject to the constraint in formula (3), and perform saliency detection on the difference image within each scanning window R (formula (4)) to obtain the saliency value H_{E,R}.
0 ≤ x1 + w ≤ n and 0 ≤ y1 + h ≤ m and 2·w·h < m·n    formula (3)
where p_{E,R}(e_i) denotes the probability that the difference image matrix E takes the value e_i (0 ≤ e_i ≤ 255) within the scanning window R.
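Formula (4) itself is not reproduced in this text (it appeared as an image in the original). Since it is defined in terms of the gray-level probabilities p_{E,R}(e_i), an entropy-style saliency is one plausible reading, and that assumption is used in the sketch below; the window enumeration under constraint (3) is taken directly from the text:

```python
import numpy as np

def window_saliency(E, x1, y1, w, h):
    """Assumed stand-in for formula (4), which is not reproduced in the
    text: Shannon entropy of the gray-level distribution p_{E,R}(e_i)
    inside the scanning window R."""
    patch = E[y1:y1 + h, x1:x1 + w].astype(np.uint8).ravel()
    p = np.bincount(patch, minlength=256) / patch.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def scan_windows(E):
    """Enumerate (h, w, x1, y1) under constraint (3) and return the
    maximum saliency H_max together with its window (step 7)."""
    m, n = E.shape
    H_max, best = -1.0, None
    for h in range(1, m):
        for w in range(1, n):
            if 2 * w * h >= m * n:              # constraint (3): 2*w*h < m*n
                continue
            for y1 in range(0, m - h + 1):      # 0 <= y1 + h <= m
                for x1 in range(0, n - w + 1):  # 0 <= x1 + w <= n
                    s = window_saliency(E, x1, y1, w, h)
                    if s > H_max:
                        H_max, best = s, (x1, y1, w, h)
    return H_max, best
```

The brute-force enumeration is exponentially cheaper variants exist (integral histograms), but the quadruple loop mirrors the traversal of h, w, x1, y1 described in the text.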
Step 7: Take the maximum saliency value H_max = max{H_{E,R}} over all scanning windows R and decide the occlusion region as in formula (5): if H_max is greater than the preset threshold H_0, the region associated with H_max is judged to be an occlusion region; otherwise it is judged that no occlusion region exists.
Step 8: Reconstruct the occlusion region of the facial expression image matrix P as in formula (6). If R_occlusion is not empty, jump to step 4; if R_occlusion is empty, continue to step 9.
Step 9: Feed the facial expression image matrix P into the facial expression classifier trained in step 1 to obtain the facial expression recognition result.
The positive effects and advantages of the present invention are:
1. The present invention removes occlusions from occluded facial expression images, which is significant for improving the facial expression recognition rate under occlusion;
2. The present invention removes a wide range of facial occlusions effectively, and is a feasible robust facial expression recognition method.
(4) Description of drawings
Fig. 1: Block diagram of the method steps
(5) Specific implementation
As shown in Fig. 1, the steps of the robust facial expression recognition method of the present invention are as follows:
Step 1: Normalize N occlusion-free facial expression images of L classes into data matrices C_i ∈ R^{m×n} (i = 1, …, N) as training samples, and train a facial expression classifier with the multi-class AdaBoost method proposed by Schapire et al.
Step 2: Normalize M facial expression images of L classes, comprising both occluded and occlusion-free images, into data matrices A_i ∈ R^{m×n} (i = 1, …, M) as training samples. Let s = m × n, expand each A_i into a one-dimensional column vector d_i ∈ R^{s×1} (i = 1, …, M), and form the input matrix D = [d_1 d_2 … d_M] ∈ R^{s×M}. Apply the robust principal component analysis (Robust Principal Component Analysis, RPCA) method proposed by Fernando to obtain the robust mean vector μ ∈ R^{s×1} and the robust eigenvector matrix B ∈ R^{s×k}, with k < M.
Step 3: Normalize the facial expression image to be recognized into a data matrix P ∈ R^{m×n}.
Step 4: Expand P into a one-dimensional column vector d ∈ R^{s×1}, compute the reconstruction vector d_rec ∈ R^{s×1} of d according to formula (1), and reshape it into the data matrix P′ ∈ R^{m×n}.
d_rec = μ + B·B^T·(d − μ)    formula (1)
Step 5: Compute the difference image matrix E ∈ R^{m×n} between the reconstructed face image matrix P′ and the face image matrix P to be recognized, as in formula (2).
E = |P′ − P|    formula (2)
Step 6: Let the scanning window R have height h (1 ≤ h < m) and width w (1 ≤ w < n), with window top-left coordinate (x1, y1) (0 ≤ x1 < n, 0 ≤ y1 < m). Traverse h, w, x1, y1 subject to the constraint in formula (3), and perform saliency detection on the difference image within each scanning window R (formula (4)) to obtain the saliency value H_{E,R}.
0 ≤ x1 + w ≤ n and 0 ≤ y1 + h ≤ m and 2·w·h < m·n    formula (3)
where p_{E,R}(e_i) denotes the probability that the difference image matrix E takes the value e_i (0 ≤ e_i ≤ 255) within the scanning window R.
Step 7: Take the maximum saliency value H_max = max{H_{E,R}} over all scanning windows R and decide the occlusion region as in formula (5): if H_max is greater than the preset threshold H_0, the region associated with H_max is judged to be an occlusion region; otherwise it is judged that no occlusion region exists.
Step 8: Reconstruct the occlusion region of the facial expression image matrix P as in formula (6). If R_occlusion is not empty, jump to step 4; if R_occlusion is empty, continue to step 9.
Step 9: Feed the facial expression image matrix P into the facial expression classifier trained in step 1 to obtain the facial expression recognition result.
Claims (1)
1. A robust facial expression recognition method, characterized in that the steps of the recognition method are as follows:
Step 1: normalize N occlusion-free facial expression images of L classes into data matrices C_i ∈ R^{m×n} (i = 1, …, N) as training samples, and train a facial expression classifier with the multi-class AdaBoost method proposed by Schapire;
Step 2: normalize M facial expression images of L classes, comprising both occluded and occlusion-free images, into data matrices A_i ∈ R^{m×n} (i = 1, …, M) as training samples; let s = m × n, expand each A_i into a one-dimensional column vector d_i ∈ R^{s×1} (i = 1, …, M), and form the input matrix D = [d_1 d_2 … d_M] ∈ R^{s×M}; apply the robust principal component analysis (Robust Principal Component Analysis, RPCA) method proposed by Fernando to obtain the robust mean vector μ ∈ R^{s×1} and the robust eigenvector matrix B ∈ R^{s×k}, with k < M;
Step 3: normalize the facial expression image to be recognized into a data matrix P ∈ R^{m×n};
Step 4: expand P into a one-dimensional column vector d ∈ R^{s×1}, compute the reconstruction vector d_rec ∈ R^{s×1} of d according to formula (1), and reshape it into the data matrix P′ ∈ R^{m×n};
d_rec = μ + B·B^T·(d − μ)    formula (1)
Step 5: compute the difference image matrix E ∈ R^{m×n} between the reconstructed face image matrix P′ and the face image matrix P to be recognized, as in formula (2);
E = |P′ − P|    formula (2)
Step 6: let the scanning window R have height h (1 ≤ h < m) and width w (1 ≤ w < n), with window top-left coordinate (x1, y1) (0 ≤ x1 < n, 0 ≤ y1 < m); traverse h, w, x1, y1 subject to the constraint in formula (3), and perform saliency detection on the difference image within each scanning window R, as in formula (4), to obtain the saliency value H_{E,R};
0 ≤ x1 + w ≤ n and 0 ≤ y1 + h ≤ m and 2·w·h < m·n    formula (3)
where p_{E,R}(e_i) denotes the probability that the difference image matrix E takes the value e_i (0 ≤ e_i ≤ 255) within the scanning window R;
Step 7: take the maximum saliency value H_max = max{H_{E,R}} over all scanning windows R and decide the occlusion region as in formula (5): if H_max is greater than the preset threshold H_0, the region associated with H_max is judged to be an occlusion region, otherwise it is judged that no occlusion region exists;
Step 8: reconstruct the occlusion region of the facial expression image matrix P as in formula (6); if R_occlusion is not empty, jump to step 4; if R_occlusion is empty, continue to step 9;
Step 9: feed the facial expression image matrix P into the facial expression classifier trained in step 1 to obtain the facial expression recognition result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2008102232116A CN101369310B (en) | 2008-09-27 | 2008-09-27 | Robust human face expression recognition method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101369310A true CN101369310A (en) | 2009-02-18 |
CN101369310B CN101369310B (en) | 2011-01-12 |
Family
ID=40413120
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2008102232116A Expired - Fee Related CN101369310B (en) | 2008-09-27 | 2008-09-27 | Robust human face expression recognition method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101369310B (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101980242A (en) * | 2010-09-30 | 2011-02-23 | 徐勇 | Human face discrimination method and system and public safety system |
CN102622584A (en) * | 2012-03-02 | 2012-08-01 | 成都三泰电子实业股份有限公司 | Method for detecting mask faces in video monitor |
CN102855496A (en) * | 2012-08-24 | 2013-01-02 | 苏州大学 | Method and system for authenticating shielded face |
CN103927554A (en) * | 2014-05-07 | 2014-07-16 | 中国标准化研究院 | Image sparse representation facial expression feature extraction system and method based on topological structure |
CN104751108A (en) * | 2013-12-31 | 2015-07-01 | 汉王科技股份有限公司 | Face image recognition device and face image recognition method |
CN105825183A (en) * | 2016-03-14 | 2016-08-03 | 合肥工业大学 | Face expression identification method based on partially shielded image |
CN107705295A (en) * | 2017-09-14 | 2018-02-16 | 西安电子科技大学 | A kind of image difference detection method based on steadiness factor method |
CN108108685A (en) * | 2017-12-15 | 2018-06-01 | 北京小米移动软件有限公司 | The method and apparatus for carrying out face recognition processing |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1313962C (en) * | 2004-07-05 | 2007-05-02 | 南京大学 | Digital human face image recognition method based on selective multi-eigen space integration |
CN1987891A (en) * | 2005-12-23 | 2007-06-27 | 北京海鑫科金高科技股份有限公司 | Quick robust human face matching method |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101980242B (en) * | 2010-09-30 | 2014-04-09 | 徐勇 | Human face discrimination method and system and public safety system |
CN101980242A (en) * | 2010-09-30 | 2011-02-23 | 徐勇 | Human face discrimination method and system and public safety system |
CN102622584A (en) * | 2012-03-02 | 2012-08-01 | 成都三泰电子实业股份有限公司 | Method for detecting mask faces in video monitor |
CN102855496B (en) * | 2012-08-24 | 2016-05-25 | 苏州大学 | Block face authentication method and system |
CN102855496A (en) * | 2012-08-24 | 2013-01-02 | 苏州大学 | Method and system for authenticating shielded face |
CN104751108A (en) * | 2013-12-31 | 2015-07-01 | 汉王科技股份有限公司 | Face image recognition device and face image recognition method |
CN104751108B (en) * | 2013-12-31 | 2019-05-17 | 汉王科技股份有限公司 | Facial image identification device and facial image recognition method |
CN103927554A (en) * | 2014-05-07 | 2014-07-16 | 中国标准化研究院 | Image sparse representation facial expression feature extraction system and method based on topological structure |
CN105825183A (en) * | 2016-03-14 | 2016-08-03 | 合肥工业大学 | Face expression identification method based on partially shielded image |
CN105825183B (en) * | 2016-03-14 | 2019-02-12 | 合肥工业大学 | Facial expression recognizing method based on partial occlusion image |
CN107705295A (en) * | 2017-09-14 | 2018-02-16 | 西安电子科技大学 | A kind of image difference detection method based on steadiness factor method |
CN108108685A (en) * | 2017-12-15 | 2018-06-01 | 北京小米移动软件有限公司 | The method and apparatus for carrying out face recognition processing |
CN108108685B (en) * | 2017-12-15 | 2022-02-08 | 北京小米移动软件有限公司 | Method and device for carrying out face recognition processing |
Also Published As
Publication number | Publication date |
---|---|
CN101369310B (en) | 2011-01-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101369310B (en) | Robust human face expression recognition method | |
Niu et al. | HMM-based segmentation and recognition of human activities from video sequences | |
Sivaraman et al. | A general active-learning framework for on-road vehicle recognition and tracking | |
CN109800643B (en) | Identity recognition method for living human face in multiple angles | |
CN104239856B (en) | Face identification method based on Gabor characteristic and self adaptable linear regression | |
CN102521561B (en) | Face identification method on basis of multi-scale weber local features and hierarchical decision fusion | |
CN102508547A (en) | Computer-vision-based gesture input method construction method and system | |
CN105069447A (en) | Facial expression identification method | |
CN102103698A (en) | Image processing apparatus and image processing method | |
CN111860274A (en) | Traffic police command gesture recognition method based on head orientation and upper half body skeleton characteristics | |
CN106709419B (en) | Video human behavior recognition method based on significant trajectory spatial information | |
CN101561867A (en) | Human body detection method based on Gauss shape feature | |
Choi et al. | Driver drowsiness detection based on multimodal using fusion of visual-feature and bio-signal | |
Cao et al. | Online motion classification using support vector machines | |
Samad et al. | Extraction of the minimum number of Gabor wavelet parameters for the recognition of natural facial expressions | |
Mohamed et al. | Adaptive extended local ternary pattern (aeltp) for recognizing avatar faces | |
Wang et al. | Pyramid-based multi-scale lbp features for face recognition | |
Kim et al. | Optimal feature selection for pedestrian detection based on logistic regression analysis | |
Ahammad et al. | Recognizing Bengali sign language gestures for digits in real time using convolutional neural network | |
CN103661102A (en) | Method and device for reminding passersby around vehicles in real time | |
Jalilian et al. | Persian sign language recognition using radial distance and Fourier transform | |
CN101216878A (en) | Face identification method based on general non-linear discriminating analysis | |
Ishihara et al. | Gesture recognition using auto-regressive coefficients of higher-order local auto-correlation features | |
Zhou et al. | Feature extraction based on local directional pattern with svm decision-level fusion for facial expression recognition | |
Karahoca et al. | Human motion analysis and action recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 2011-01-12; Termination date: 2012-09-27