CN103514442B - Video sequence face identification method based on AAM model - Google Patents

Video sequence face identification method based on AAM model

Info

Publication number
CN103514442B
CN103514442B CN201310445776.XA CN201310445776A
Authority
CN
China
Prior art keywords
face
training
picture
pca
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310445776.XA
Other languages
Chinese (zh)
Other versions
CN103514442A (en)
Inventor
徐向民
陈晓仕
黄卓彬
林旭斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Bo Wei Intelligent Technology Co., Ltd.
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201310445776.XA priority Critical patent/CN103514442B/en
Publication of CN103514442A publication Critical patent/CN103514442A/en
Application granted granted Critical
Publication of CN103514442B publication Critical patent/CN103514442B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a video sequence face identification method based on an AAM model. The method comprises a training step (1) and an identification step (2). The training step (1) comprises (1-1) PCA projection and (1-2) LDA projection, in which the feature vectors produced by the PCA dimensionality reduction are projected through the matrix WLDA, so that the best classification feature of each training picture is obtained. The identification step (2) comprises (2-1) Adaboost detection, (2-2) AAM tracking and pose correction, (2-3) PCA projection, (2-4) LDA projection, which yields the best classification feature of the face picture to be identified, and (2-5) a nearest-neighbour classifier decision, which judges the face picture of the class where γ1 is located to be the identification result, γ1 being the smallest Euclidean distance between the best classification feature of the face picture to be identified and the best classification features of all training pictures. The method can accurately identify a face even when the face pose is variable, and has strong robustness.

Description

A video sequence face identification method based on the AAM model
Technical field
The present invention relates to a face identification method, and in particular to a video sequence face identification method based on the AAM model.
Background technology
In this rapidly developing information age, with the expansion of information and of computer technology, people have begun to hope that the computer can become a machine with which they can communicate in natural language, and to seek novel human-machine interfaces and artificial intelligence techniques, so that they no longer need to rely on the traditional interactive devices of the computer such as the keyboard, mouse and display. However, achieving such natural human-computer interaction requires that the computer can quickly and accurately obtain the identity, state, intention and related characteristic information of the user. Because the face carries a large amount of information, it is an important window for information transfer: the computer can obtain the identity and related information of a person through the uniqueness of the facial features, and can infer the person's state and intention through the rich changes of facial expression. Building an intelligent bridge between people and computers therefore requires effective face-related image processing techniques.
Among existing face identification methods, facial feature extraction is mainly based on geometric features, template matching, subspaces, or neural networks. Among the subspace methods, principal component analysis (PCA) and Fisher linear discriminant analysis are the most commonly used; they achieve a high recognition rate on still images. In video sequences, however, current face identification methods obtain good recognition results only when the user cooperates; if the user does not cooperate during the identification process, the recognition performance can drop sharply. In addition, in video sequences the variability of face pose causes the recognition rate of these methods to decline to varying degrees.
The Active Appearance Model (AAM) is a feature point extraction technique widely used in the field of pattern recognition. Facial feature localization methods based on AAM consider not only local feature information but also global shape and texture information when building the face model: through statistical analysis of the face shape features and texture features, a mixed face model, that is, the final AAM model, is established.
Content of the invention
It is an object of the present invention to overcome the shortcomings and deficiencies of the prior art by providing a video sequence face identification method based on the AAM model. The method can accurately identify a face even when the face pose is variable, and has strong robustness.
The purpose of the present invention is achieved through the following technical solution: a video sequence face identification method based on the AAM model, comprising a training stage and an identification stage;
(1) The training stage comprises:
(1-1) PCA projection:
First the training pictures are normalized, the average face is calculated from the normalized training pictures, and the difference between every normalized training picture and the average face is computed, giving the first differences;
Then a covariance matrix is built from the first differences, and the eigenvectors of the K largest eigenvalues of the covariance matrix form the PCA projection matrix WPCA, which serves as the feature subspace;
Finally the first differences are projected into the low-dimensional space through the PCA projection matrix WPCA, giving the dimension-reduced feature vectors;
(1-2) LDA projection:
First the mean vector m of the PCA dimension-reduced feature vectors of all training picture samples and the mean vector mi of the PCA dimension-reduced feature vectors of the i-th class of training picture samples are calculated;
Then the within-class scatter matrix SW and the between-class scatter matrix SB of the training samples are calculated from the mean vectors m and mi, the eigenvectors of the matrix SW^-1SB are computed, and the L eigenvectors of SW^-1SB with the largest eigenvalues are chosen to form the LDA projection matrix WLDA;
Finally the PCA dimension-reduced feature vectors are projected through the LDA projection matrix WLDA, giving the best classification feature Γij of every training picture;
(2) The identification stage comprises:
(2-1) Adaboost detection: the sub-region of the test video frame containing the face is identified by the Adaboost algorithm;
(2-2) AAM tracking and pose correction: first an AAM model is obtained by training; then the face sub-region is tracked through the trained AAM model; finally the final shape parameter obtained in AAM fitting is used to correct the pose of the face sub-region, giving the pose-corrected face sub-region;
(2-3) PCA projection:
First the pose-corrected face sub-region picture obtained above is normalized, then its difference from the average face obtained in the PCA projection of the training stage is computed, giving the second difference;
Then the second difference is projected through the PCA projection matrix WPCA obtained in the training stage, giving the dimension-reduced feature vector η;
(2-4) LDA projection: the dimension-reduced feature vector obtained in step (2-3) is projected through the LDA projection matrix WLDA obtained in the training stage, giving the best classification feature of the face image to be identified;
(2-5) Nearest-neighbour classifier decision:
First the Euclidean distances between the best classification feature of each training picture and those of the other training pictures are calculated, and the largest Euclidean distance value F is selected; a threshold b is set, whose size is half of the largest Euclidean distance value F;
Then the smallest Euclidean distance γ1 between the best classification feature of the face image to be identified obtained in step (2-4) and the best classification features of the training pictures obtained in the training stage is calculated;
Finally the smallest Euclidean distance γ1 is compared with the threshold b: if it is larger than the threshold b, the face image to be identified is judged not to be in the training library; if it is smaller than the threshold b, the face picture of the class in which γ1 is located is judged to be the identification result.
Preferably, the average face f calculated in step (1-1) is:
f = \frac{1}{M}\sum_{i=1}^{C}\sum_{j=1}^{N} x_{ij}, \quad i = 1,2,\ldots,C,\; j = 1,2,\ldots,N;
where xij is the normalized training picture, C is the total number of classes of the normalized training pictures, N is the total number of training picture samples contained in each class, and M is the total number of training picture samples, with M = N*C;
The difference between the normalized training pictures and the average face gives the first differences dij:
d_{ij} = x_{ij} - f, \quad i = 1,2,\ldots,C,\; j = 1,2,\ldots,N;
The covariance matrix U built from the first differences dij is:
U = \frac{1}{M}\sum_{i=1}^{C}\sum_{j=1}^{N} d_{ij} d_{ij}^{T}, \quad i = 1,2,\ldots,C,\; j = 1,2,\ldots,N;
The first differences are projected into the low-dimensional space through the PCA projection matrix WPCA, giving the dimension-reduced feature vectors ηij:
\eta_{ij} = W_{PCA}^{T} d_{ij}, \quad i = 1,2,\ldots,C,\; j = 1,2,\ldots,N.
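The PCA training of step (1-1) (average face, first differences, covariance eigenvectors, projection) can be sketched as follows. This is a minimal illustration with invented names, not the patent's implementation:

```python
import numpy as np

def pca_train(X, k):
    """PCA training of step (1-1): average face f, first differences,
    covariance matrix U, top-k eigenvectors, projection.
    X: (M, D) matrix, one flattened normalized training picture per row.
    All names here are illustrative, not from the patent."""
    f = X.mean(axis=0)                    # average face f
    D = X - f                             # first differences d_ij
    U = (D.T @ D) / X.shape[0]            # covariance matrix U
    vals, vecs = np.linalg.eigh(U)        # eigenvalues in ascending order
    W_pca = vecs[:, ::-1][:, :k]          # eigenvectors of the k largest
    eta = D @ W_pca                       # eta_ij = W_pca^T d_ij
    return f, W_pca, eta
```

In practice, when D is much larger than M, the eigen-decomposition is usually done on the smaller M x M Gram matrix instead; the sketch keeps the direct form for clarity.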
Further, in step (1-2) the mean vector m of the PCA-projected feature vectors of all training picture samples is:
m = \frac{1}{M}\sum_{i=1}^{C}\sum_{j=1}^{N} \eta_{ij}, \quad i = 1,2,\ldots,C,\; j = 1,2,\ldots,N;
and the mean vector mi of the PCA-projected feature vectors of the i-th class of training picture samples is:
m_{i} = \frac{1}{N}\sum_{j=1}^{N} \eta_{ij}, \quad i = 1,2,\ldots,C;
The within-class scatter matrix SW and between-class scatter matrix SB of the training picture samples are respectively:
S_{W} = \sum_{i=1}^{C}\sum_{j=1}^{N} (\eta_{ij} - m_{i})(\eta_{ij} - m_{i})^{T}, \quad i = 1,2,\ldots,C,\; j = 1,2,\ldots,N;
S_{B} = \sum_{i=1}^{C} n_{i}(m_{i} - m)(m_{i} - m)^{T}, \quad i = 1,2,\ldots,C;
where ni is the number of training samples of the i-th class; the best classification feature Γij of every training picture is:
\Gamma_{ij} = W_{LDA}^{T} W_{PCA}^{T} d_{ij}, \quad i = 1,2,\ldots,C,\; j = 1,2,\ldots,N.
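The LDA step above can likewise be sketched: compute SW and SB from class means, then keep the leading eigenvectors of SW^-1 SB. Using a pseudo-inverse for numerical safety is an assumption beyond the text, and all names are illustrative:

```python
import numpy as np

def lda_train(eta, labels, L):
    """LDA training of step (1-2) on the PCA-reduced features eta.
    Computes S_W, S_B and the L leading eigenvectors of S_W^{-1} S_B."""
    m = eta.mean(axis=0)                       # overall mean vector m
    k = eta.shape[1]
    Sw = np.zeros((k, k))
    Sb = np.zeros((k, k))
    for c in np.unique(labels):
        Ec = eta[labels == c]
        mi = Ec.mean(axis=0)                   # class mean m_i
        Sw += (Ec - mi).T @ (Ec - mi)          # within-class scatter
        d = (mi - m)[:, None]
        Sb += len(Ec) * (d @ d.T)              # between-class scatter
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(vals.real)[::-1]
    return vecs[:, order[:L]].real             # LDA projection matrix W_LDA
```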
Preferably, the training steps of the AAM model used for AAM tracking and pose correction in step (2-2) are as follows:
(2-2-1) Choose as training objects S reliable samples including frontal faces, left and right profile faces, upward-looking faces and downward-looking faces;
(2-2-2) Landmark the reliable samples, calibrating the 68 visible feature points of the face;
(2-2-3) Align the landmarked faces using Procrustes analysis, giving aligned faces with translation, scale and rotation removed;
(2-2-4) Carry out shape modelling on the aligned faces obtained in step (2-2-3) using principal component analysis, giving the shape parameter p (i.e. the deformation coefficients) and the shape model;
(2-2-5) Remove the mean shape face from the shape model, perform Delaunay triangulation on it, then project the texture onto the mean shape using the piecewise affine warp, and finally process it with PCA, giving the texture parameters and the texture model;
(2-2-6) According to the shape model and texture model obtained above, train the shape and texture models using the inverse compositional AAM fitting algorithm, giving the Hessian matrix.
Further, the number S of reliable samples in step (2-2-1) is 100 to 1000.
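Step (2-2-3) aligns the landmarked faces with Procrustes analysis. Below is a minimal sketch of aligning one 68-point shape to a reference, removing translation, scale and rotation; the function name and the use of an SVD for the optimal rotation are illustrative choices, not taken from the patent:

```python
import numpy as np

def procrustes_align(shape, ref):
    """Align a (68, 2) landmark shape to a reference shape (step 2-2-3):
    remove translation (centering), scale (unit Frobenius norm) and
    rotation (orthogonal polar factor of the cross-covariance)."""
    s = shape - shape.mean(axis=0)           # remove translation
    r = ref - ref.mean(axis=0)
    s = s / np.linalg.norm(s)                # remove scale
    r = r / np.linalg.norm(r)
    U, _, Vt = np.linalg.svd(s.T @ r)        # optimal rotation via SVD
    return s @ (U @ Vt)
```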
Further, the process of tracking the face sub-region in step (2-2) is as follows:
(2-2-7) According to the Hessian matrix and the shape parameter p of the shape model, the shape parameter increment Δp is obtained through the following piecewise affine mapping function:
\Delta p = H^{-1}\sum_{x}\left[\nabla I\,\frac{\partial W}{\partial p}\right]^{T}\left[T(x) - I(W(x;p))\right];
where H is the Hessian matrix, W is the piecewise affine warp, T is the aligned face, I is the actual picture, x is a pixel in the actual picture, and p is the shape parameter of the shape model in the corresponding training process;
(2-2-8) The shape parameter p is updated according to the shape parameter increment Δp, with p = p + Δp; then return to step (2-2-7) and continue to calculate Δp through the above mapping function, until the calculated shape parameter increment Δp is smaller than the threshold a or the number of iterations reaches the maximum number of iterations, at which point the calculation stops;
(2-2-9) The updated shape parameter p is combined with the shape model, and principal component analysis is applied to obtain the tracked target face image.
Further, the threshold a is 500 to 2000.
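The iteration of steps (2-2-7) and (2-2-8) is a Gauss-Newton style update, Δp = H^-1 · SD^T · r, repeated until Δp is small or the iteration budget is spent. The sketch below shows only this loop structure, with the image-warping terms abstracted into a steepest-descent matrix and a residual callback; all names are illustrative, not the patent's code:

```python
import numpy as np

def fit_shape(p0, steepest_descent, hessian, residual,
              tol=1e-6, max_iter=50):
    """Fitting loop of steps (2-2-7)/(2-2-8).
    steepest_descent plays the role of [grad I * dW/dp] stacked over
    pixels; residual(p) plays the role of T(x) - I(W(x; p))."""
    p = np.asarray(p0, dtype=float)
    H_inv = np.linalg.inv(hessian)           # H is precomputed in training
    for _ in range(max_iter):
        r = residual(p)                      # T(x) - I(W(x; p))
        dp = H_inv @ (steepest_descent.T @ r)
        p = p + dp                           # p <- p + delta_p
        if np.linalg.norm(dp) < tol:         # stop when the update is small
            break
    return p
```

With a linear residual r(p) = b - A p and H = A^T A, one update already lands on the least-squares solution, which is a convenient sanity check of the loop.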
Further, the pose correction process in step (2-2) is as follows: after the target face image has been accurately tracked, the pose of the face sub-region is corrected according to the final shape parameter p obtained in the tracking iteration, and the inverse-warped shape I(W(x;p)) is the reconstructed face consistent with a frontal face.
Preferably, the dimension-reduced feature vector η obtained in step (2-3) is:
\eta = W_{PCA}^{T} u;
and the best classification feature Γ of the face image to be identified is:
\Gamma = W_{LDA}^{T} W_{PCA}^{T} u;
where u is the second difference obtained in step (2-3), with u = x - f; x is the normalized pose-corrected face sub-region picture obtained in step (2-3), and f is the average face obtained in step (1-1).
Preferably, the smallest Euclidean distance γ1 in step (2-5) between the best classification feature Γ of the face image to be identified and the best classification features Γij of the training pictures obtained in the training stage is:
\gamma_{1} = \min_{i,j} \left\| \Gamma - \Gamma_{ij} \right\|, \quad i = 1,2,\ldots,C,\; j = 1,2,\ldots,N;
where C is the total number of classes of the normalized training pictures and N is the total number of training picture samples contained in each class.
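The nearest-neighbour decision of step (2-5), including the rejection threshold b = F/2 with F the largest pairwise training distance, can be sketched as follows (illustrative names; `None` stands for the not-in-training-library outcome):

```python
import numpy as np

def nn_decide(gamma, train_feats, train_labels):
    """Nearest-neighbour decision of step (2-5).
    gamma: best classification feature of the probe face.
    train_feats: (M, L) best classification features Gamma_ij.
    train_labels: (M,) class label of each training picture."""
    # threshold b: half of the maximum distance F between training features
    diffs = train_feats[:, None, :] - train_feats[None, :, :]
    F = np.sqrt((diffs ** 2).sum(-1)).max()
    b = F / 2.0
    d = np.linalg.norm(train_feats - gamma, axis=1)
    gamma1 = d.min()                         # smallest Euclidean distance
    if gamma1 > b:
        return None                          # not a training-library face
    return train_labels[d.argmin()]          # class where gamma1 is located
```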
Compared with the prior art, the present invention has the following advantages and effects:
In the identification process, the face recognition method of the present invention performs pose correction after tracking with the AAM model, obtaining a reconstructed face consistent with a frontal face as the pose-corrected face sub-region; this process corrects the distorted test face image into a face image facing the camera directly. PCA projection and LDA projection are then used to extract the best classification feature of the face image to be identified, so that the method can accurately identify a face even when the face pose is variable, and it has strong robustness.
Brief description
Fig. 1 is the flow chart of the inventive method.
Fig. 2 shows the 68 feature point locations of the face in the method of the invention.
Specific embodiment
The present invention is described in further detail below with reference to an embodiment and the accompanying drawings, but the embodiments of the present invention are not limited thereto.
Embodiment
As shown in Fig. 1, the present embodiment discloses a video sequence face identification method based on the AAM model, comprising a training step and an identification step;
(1) The training step comprises:
(1-1) PCA projection:
First the training pictures are preprocessed: each picture is converted to grayscale, stretched column-wise into a column vector, and normalized; the average face f is calculated from the normalized training pictures xij, and the difference between every normalized training picture xij and the average face f is computed, giving the first differences dij;
The average face f is:
f = \frac{1}{M}\sum_{i=1}^{C}\sum_{j=1}^{N} x_{ij}, \quad i = 1,2,\ldots,C,\; j = 1,2,\ldots,N;
where xij is the j-th training picture of the i-th class after normalization, C is the total number of classes of the normalized training pictures, N is the total number of training picture samples contained in each class, and M is the total number of training picture samples, with M = N*C;
The first differences dij are:
d_{ij} = x_{ij} - f, \quad i = 1,2,\ldots,C,\; j = 1,2,\ldots,N;
Then a covariance matrix U is built from the first differences dij, and the eigenvectors of the K largest eigenvalues of the covariance matrix, accounting for more than 95% of the total energy, form the PCA projection matrix WPCA, which serves as the feature subspace; the covariance matrix U is:
U = \frac{1}{M}\sum_{i=1}^{C}\sum_{j=1}^{N} d_{ij} d_{ij}^{T}, \quad i = 1,2,\ldots,C,\; j = 1,2,\ldots,N;
Finally the first differences are projected into the low-dimensional space through the PCA projection matrix WPCA, giving the dimension-reduced feature vectors ηij:
\eta_{ij} = W_{PCA}^{T} d_{ij}, \quad i = 1,2,\ldots,C,\; j = 1,2,\ldots,N;
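The embodiment selects K so that the kept eigenvalues account for more than 95% of the total energy. A sketch of that selection rule follows; the function name and the assumption that the eigenvalues arrive sorted in descending order are illustrative:

```python
import numpy as np

def choose_k(eigvals, energy=0.95):
    """Return the number K of leading eigenvalues whose cumulative sum
    reaches the requested fraction of the total energy (here 95%).
    eigvals must be sorted in descending order."""
    frac = np.cumsum(eigvals) / np.sum(eigvals)   # cumulative energy fraction
    return int(np.searchsorted(frac, energy) + 1)  # first index reaching it
```

The same rule applies when choosing the L leading eigenvectors of SW^-1 SB in the LDA step of this embodiment.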
(1-2) LDA projection:
First the mean vector m of the PCA dimension-reduced feature vectors of all training picture samples and the mean vector mi of the PCA dimension-reduced feature vectors of the i-th class of training picture samples are calculated:
m = \frac{1}{M}\sum_{i=1}^{C}\sum_{j=1}^{N} \eta_{ij}, \quad i = 1,2,\ldots,C,\; j = 1,2,\ldots,N;
m_{i} = \frac{1}{N}\sum_{j=1}^{N} \eta_{ij}, \quad i = 1,2,\ldots,C;
Then the within-class scatter matrix SW and the between-class scatter matrix SB of the training samples are calculated from the mean vectors m and mi, the eigenvectors of the matrix SW^-1SB are computed, and the L eigenvectors of SW^-1SB with the largest eigenvalues, accounting for more than 95% of the total energy, are chosen to form the LDA projection matrix WLDA; the within-class scatter matrix SW and between-class scatter matrix SB are respectively:
S_{W} = \sum_{i=1}^{C}\sum_{j=1}^{N} (\eta_{ij} - m_{i})(\eta_{ij} - m_{i})^{T}, \quad i = 1,2,\ldots,C,\; j = 1,2,\ldots,N;
S_{B} = \sum_{i=1}^{C} n_{i}(m_{i} - m)(m_{i} - m)^{T}, \quad i = 1,2,\ldots,C;
where ni is the number of training samples of the i-th class;
Finally the PCA dimension-reduced feature vectors are projected through the LDA projection matrix WLDA, giving the best classification feature Γij of every training picture:
\Gamma_{ij} = W_{LDA}^{T} W_{PCA}^{T} d_{ij}, \quad i = 1,2,\ldots,C,\; j = 1,2,\ldots,N;
(2) The identification step comprises:
(2-1) Adaboost detection: the sub-region of the test video frame containing the face is identified by the Adaboost algorithm;
(2-2) AAM tracking and pose correction: first an AAM model is obtained by training; then the face sub-region is tracked through the trained AAM model; finally the final shape parameter obtained in AAM fitting is used to correct the pose of the face sub-region, giving the pose-corrected face sub-region;
The training steps of the AAM model used for AAM tracking and pose correction are as follows:
(2-2-1) Choose as training objects S reliable samples including frontal faces, left and right profile faces, upward-looking faces and downward-looking faces; in this embodiment the number S of reliable samples is 100 to 1000.
(2-2-2) Landmark the reliable samples, calibrating the 68 visible feature points of the face, as shown in Fig. 2.
(2-2-3) Align the landmarked faces using Procrustes analysis, giving aligned faces with translation, scale and rotation removed;
(2-2-4) Carry out shape modelling on the aligned faces obtained in step (2-2-3) using principal component analysis, giving the shape parameter p (i.e. the deformation coefficients) and the shape model;
(2-2-5) Remove the mean shape face from the shape model, perform Delaunay triangulation on it, then project the texture onto the mean shape using the piecewise affine warp, and finally process it with PCA, giving the texture parameters and the texture model;
(2-2-6) According to the shape model and texture model obtained above, train the shape and texture models using the inverse compositional AAM fitting algorithm, giving the Hessian matrix.
The process of tracking the face sub-region is as follows:
(2-2-7) According to the Hessian matrix and the shape parameter p of the shape model, the shape parameter increment Δp is obtained through the following piecewise affine mapping function:
\Delta p = H^{-1}\sum_{x}\left[\nabla I\,\frac{\partial W}{\partial p}\right]^{T}\left[T(x) - I(W(x;p))\right];
where H is the Hessian matrix, W is the piecewise affine warp, T is the aligned face, I is the actual picture, x is a pixel in the actual picture, and p is the shape parameter of the shape model in the corresponding training process;
(2-2-8) The shape parameter p is updated according to the shape parameter increment Δp, with p = p + Δp; then return to step (2-2-7) and continue to calculate Δp through the above mapping function, until the calculated shape parameter increment Δp is smaller than the threshold a or the number of iterations reaches the maximum number of iterations, at which point the calculation stops;
(2-2-9) The updated shape parameter p is combined with the shape model, and principal component analysis is applied to obtain the tracked target face image.
The pose correction process is as follows: after the target face image has been accurately tracked, the pose of the face sub-region is corrected according to the final shape parameter p obtained in the tracking iteration, and the inverse-warped shape I(W(x;p)) is the reconstructed face consistent with a frontal face, that is, the pose-corrected face sub-region;
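The piecewise affine warp W(x;p) used in the texture model and in the pose correction maps each pixel through the Delaunay triangle that contains it: the pixel's barycentric coordinates in the source triangle are reused in the corresponding destination triangle. Below is a sketch of that per-triangle building block only; names are illustrative and the triangle lookup is omitted:

```python
import numpy as np

def affine_warp_triangle(src_tri, dst_tri, pts):
    """Map points inside one (3, 2) source triangle to the corresponding
    destination triangle by reusing barycentric coordinates.
    pts: (n, 2) array of points inside src_tri."""
    # barycentric coordinates of pts with respect to src_tri
    A = np.vstack([src_tri.T, np.ones(3)])                     # 3x3 system
    bary = np.linalg.solve(A, np.vstack([pts.T, np.ones(pts.shape[0])]))
    # same barycentric combination of the destination vertices
    return (dst_tri.T @ bary).T
```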
(2-3) PCA projection:
First the pose-corrected face sub-region picture obtained above is stretched column-wise into a column vector and normalized, then its difference from the average face obtained in the PCA projection of the training stage is computed, giving the second difference;
Then the second difference is projected through the PCA projection matrix WPCA obtained in the training stage, giving the dimension-reduced feature vector η:
\eta = W_{PCA}^{T} u;
(2-4) LDA projection: the dimension-reduced feature vector obtained in step (2-3) is projected through the LDA projection matrix WLDA obtained in the training stage, giving the best classification feature Γ of the face image to be identified:
\Gamma = W_{LDA}^{T} W_{PCA}^{T} u;
where u is the second difference obtained in step (2-3), with u = x - f; x is the normalized pose-corrected face sub-region picture obtained in step (2-3), and f is the average face obtained in step (1-1).
(2-5) Nearest-neighbour classifier decision:
First the Euclidean distances between the best classification feature of each training picture in the training library and those of the other training pictures are calculated, and the largest Euclidean distance value F is selected; a threshold b is set, whose size is half of the largest Euclidean distance value F.
Then the smallest Euclidean distance γ1 between the best classification feature of the face image to be identified obtained in step (2-4) and the best classification features of the training pictures obtained in the training stage is calculated:
\gamma_{1} = \min_{i,j} \left\| \Gamma - \Gamma_{ij} \right\|, \quad i = 1,2,\ldots,C,\; j = 1,2,\ldots,N;
Finally the smallest Euclidean distance γ1 is compared with the threshold b: if it is larger than the threshold b, the face image to be identified is judged not to be in the training library; if it is smaller than the threshold b, the face picture of the class in which γ1 is located is judged to be the identification result.
The above embodiment is a preferred embodiment of the present invention, but the embodiments of the present invention are not limited by it; any change, modification, substitution, combination or simplification made without departing from the spirit and principle of the present invention shall be an equivalent replacement and shall be included within the scope of protection of the present invention.

Claims (10)

1. A video sequence face identification method based on the AAM model, comprising a training stage and an identification stage;
(1) the training stage comprises:
(1-1) PCA projection:
First the training pictures are normalized, the average face is calculated from the normalized training pictures, and the difference between every normalized training picture and the average face is computed, giving the first differences;
Then a covariance matrix is built from the first differences, and the eigenvectors of the K largest eigenvalues of the covariance matrix form the PCA projection matrix WPCA, which serves as the feature subspace;
Finally the first differences are projected into the low-dimensional space through the PCA projection matrix WPCA, giving the dimension-reduced feature vectors;
(1-2) LDA projection:
First the mean vector m of the PCA dimension-reduced feature vectors of all training picture samples and the mean vector mi of the PCA dimension-reduced feature vectors of the i-th class of training picture samples are calculated;
Then the within-class scatter matrix SW and the between-class scatter matrix SB of the training samples are calculated from the mean vectors m and mi, the eigenvectors of the matrix SW^-1SB are computed, and the L eigenvectors of SW^-1SB with the largest eigenvalues are chosen to form the LDA projection matrix WLDA;
Finally the PCA dimension-reduced feature vectors are projected through the LDA projection matrix WLDA, giving the best classification feature Γij of every training picture;
(2) the identification stage comprises:
(2-1) Adaboost detection: the sub-region of the test video frame containing the face is identified by the Adaboost algorithm;
(2-2) AAM tracking and pose correction: first an AAM model is obtained by training; then the face sub-region is tracked through the trained AAM model; finally the final shape parameter obtained in AAM fitting is used to correct the pose of the face sub-region, giving the pose-corrected face sub-region;
(2-3) PCA projection:
First the pose-corrected face sub-region picture obtained above is normalized, then its difference from the average face obtained in the PCA projection of the training stage is computed, giving the second difference;
Then the second difference is projected through the PCA projection matrix WPCA obtained in the training stage, giving the dimension-reduced feature vector η;
(2-4) LDA projection: the dimension-reduced feature vector obtained in step (2-3) is projected through the LDA projection matrix WLDA obtained in the training stage, giving the best classification feature of the face image to be identified;
(2-5) Nearest-neighbour classifier decision:
First the Euclidean distances between the best classification feature of each training picture and those of the other training pictures are calculated, and the largest Euclidean distance value F is selected; a threshold b is set, whose size is half of the largest Euclidean distance value F;
Then the smallest Euclidean distance γ1 between the best classification feature of the face image to be identified obtained in step (2-4) and the best classification features of the training pictures obtained in the training stage is calculated;
Finally the smallest Euclidean distance γ1 is compared with the threshold b: if it is larger than the threshold b, the face image to be identified is judged not to be in the training library; if it is smaller than the threshold b, the face picture of the class in which γ1 is located is judged to be the identification result.
2. The video sequence face identification method based on the AAM model according to claim 1, characterised in that the average face f calculated in step (1-1) is:
f = \frac{1}{M}\sum_{i=1}^{C}\sum_{j=1}^{N} x_{ij}, \quad i = 1,2,\ldots,C,\; j = 1,2,\ldots,N;
where xij is the normalized training picture, C is the total number of classes of the normalized training pictures, N is the total number of training picture samples contained in each class, and M is the total number of training picture samples, with M = N*C;
the difference between the normalized training pictures and the average face gives the first differences dij:
d_{ij} = x_{ij} - f, \quad i = 1,2,\ldots,C,\; j = 1,2,\ldots,N;
the covariance matrix U built from the first differences dij is:
U = \frac{1}{M}\sum_{i=1}^{C}\sum_{j=1}^{N} d_{ij} d_{ij}^{T}, \quad i = 1,2,\ldots,C,\; j = 1,2,\ldots,N;
and the first differences are projected into the low-dimensional space through the PCA projection matrix WPCA, giving the dimension-reduced feature vectors ηij:
\eta_{ij} = W_{PCA}^{T} d_{ij}, \quad i = 1,2,\ldots,C,\; j = 1,2,\ldots,N.
3. The video sequence face identification method based on the AAM model according to claim 2, characterised in that in step (1-2) the mean vector m of the PCA dimension-reduced feature vectors of all training picture samples is:
m = \frac{1}{M}\sum_{i=1}^{C}\sum_{j=1}^{N} \eta_{ij}, \quad i = 1,2,\ldots,C,\; j = 1,2,\ldots,N;
the mean vector mi of the PCA dimension-reduced feature vectors of the i-th class of training picture samples is:
m_{i} = \frac{1}{N}\sum_{j=1}^{N} \eta_{ij}, \quad i = 1,2,\ldots,C;
the within-class scatter matrix SW and between-class scatter matrix SB of the training picture samples are respectively:
S_{W} = \sum_{i=1}^{C}\sum_{j=1}^{N} (\eta_{ij} - m_{i})(\eta_{ij} - m_{i})^{T}, \quad i = 1,2,\ldots,C,\; j = 1,2,\ldots,N;
S_{B} = \sum_{i=1}^{C} n_{i}(m_{i} - m)(m_{i} - m)^{T}, \quad i = 1,2,\ldots,C;
where ni is the number of training samples of the i-th class; and the best classification feature Γij of every training picture is:
\Gamma_{ij} = W_{LDA}^{T} W_{PCA}^{T} d_{ij}, \quad i = 1,2,\ldots,C,\; j = 1,2,\ldots,N.
4. The video sequence face identification method based on the AAM model according to claim 1, characterized in that in step (2-2) the training steps of the AAM model for AAM tracking and posture correction are as follows:
(2-2-1) selecting as training objects S reliable samples that include frontal faces, left and right profile faces, upward-tilted faces, and downward-tilted faces;
(2-2-2) landmarking the reliable samples, calibrating 68 facial feature points;
(2-2-3) aligning the landmarked faces by Procrustes analysis to obtain aligned faces from which translation, scale, and rotation have been removed;
(2-2-4) carrying out shape modeling on the aligned faces of step (2-2-3) by principal component analysis to obtain the shape parameter p and the shape model;
(2-2-5) subtracting the mean face shape from the shape model, performing Delaunay triangulation on it, projecting the texture into the mean shape by the piecewise affine method, and finally processing with PCA to obtain the texture parameters and the texture model;
(2-2-6) training the shape model and the texture model respectively with the inverse compositional AAM fitting algorithm according to the shape model and texture model obtained above, to obtain the Hessian matrix.
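Steps (2-2-3) and (2-2-4) above (Procrustes alignment followed by PCA shape modeling) can be sketched roughly as below. The 68-point shapes are random stand-ins, the alignment is done against a single reference shape rather than an iterated mean, and the number of retained shape modes is an arbitrary choice, so this is a sketch under assumptions rather than the patent's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
S, P = 10, 68                                  # S reliable samples, 68 landmarks each (toy data)
shapes = rng.normal(size=(S, P, 2)) + rng.normal(size=(S, 1, 2)) * 5

def align(shape, ref):
    """Similarity-align `shape` to `ref`: remove translation, scale, and rotation."""
    a = shape - shape.mean(axis=0)             # remove translation
    b = ref - ref.mean(axis=0)
    a /= np.linalg.norm(a)                     # remove scale
    b /= np.linalg.norm(b)
    # Optimal rotation by SVD of the cross-covariance (orthogonal Procrustes;
    # reflections are not excluded here, for simplicity)
    U, _, Vt = np.linalg.svd(a.T @ b)
    return a @ (U @ Vt)

ref = shapes[0]
aligned = np.stack([align(s, ref) for s in shapes])

# PCA shape model: mean shape s0 plus eigen-shape basis; p are the shape parameters
flat = aligned.reshape(S, -1)
s0 = flat.mean(axis=0)
U, sing, Vt = np.linalg.svd(flat - s0, full_matrices=False)
k = 4                                          # number of retained shape modes (assumption)
basis = Vt[:k]                                 # eigen-shape basis
p = (flat - s0) @ basis.T                      # shape parameters p per sample
recon = s0 + p @ basis                         # approx. shape = mean shape + modes weighted by p
print(p.shape)
```

A production AAM trainer would typically iterate the Procrustes alignment against a re-estimated mean shape and choose k from the explained variance; both refinements are omitted here.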
5. The video sequence face identification method based on the AAM model according to claim 4, characterized in that in step (2-2-1) the quantity S of reliable samples is 100 to 1000.
6. The video sequence face identification method based on the AAM model according to claim 4, characterized in that in step (2-2) the process of tracking the face sub-regions is as follows:
(2-2-7) obtaining the shape parameter increment Δp from the Hessian matrix and the shape model through the following piecewise mapping function:
Δp = H^{-1} Σ_x [∇I ∂W/∂p]^T [T(x) − I(W(x; p))];
wherein H is the Hessian matrix, W is the piecewise affine warp equation, T is the aligned face, I is the actual picture, x is a pixel in the real image, and p is the shape parameter of the shape model from the corresponding training process;
(2-2-8) updating the shape parameter p according to the increment Δp, where p = p + Δp; then returning to step (2-2-7) and continuing to compute Δp by the above piecewise mapping function, and stopping the computation once the computed increment Δp is smaller than the threshold a or the iteration count reaches the maximum number of iterations;
(2-2-9) combining the updated shape parameter p with the shape model and applying principal component analysis to obtain the tracked target face image.
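Steps (2-2-7) and (2-2-8) form a Gauss–Newton-style loop: compute Δp from the gradient and error image, add it to p, and repeat until the increment falls below a threshold. The schematic sketch below applies the same update rule to a toy one-dimensional translation warp W(x; p) = x + p; the sinusoidal "template" and "image" and all numeric values are stand-ins for the patent's piecewise affine warp and face images.

```python
import numpy as np

# Toy setup: template T and image I related by a 1-D translation warp W(x; p) = x + p
xs = np.linspace(0, 2 * np.pi, 200)
T = np.sin(xs)                       # "aligned face" template T(x)
p_true = 0.7
I = lambda x: np.sin(x - p_true)     # "actual picture": I shifted by the unknown p_true

p = 0.0
max_iters, a = 50, 1e-6              # maximum iterations and threshold a (toy values)
for it in range(max_iters):
    Iw = I(xs + p)                   # I(W(x; p))
    grad = np.gradient(Iw, xs)       # image gradient ∇I at the warped points; dW/dp = 1
    err = T - Iw                     # error image T(x) - I(W(x; p))
    H = np.sum(grad * grad)          # 1x1 "Hessian" H = Σ_x (∇I dW/dp)^2
    dp = np.sum(grad * err) / H      # Δp = H^{-1} Σ_x [∇I dW/dp]^T [T(x) - I(W(x; p))]
    p += dp                          # p ← p + Δp
    if abs(dp) < a:                  # stop when the increment falls below threshold a
        break
print(round(p, 3))                   # converges to the true translation 0.7
```

In the patent's setting, p is a vector of shape parameters, H is the precomputed Hessian from step (2-2-6), and ∂W/∂p is the Jacobian of the piecewise affine warp; the loop structure is the same.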
7. The video sequence face identification method based on the AAM model according to claim 6, characterized in that the threshold a is 500 to 2000.
8. The video sequence face identification method based on the AAM model according to claim 6, characterized in that the posture correction process in step (2-2) is as follows: after the target face image has been accurately tracked, posture correction is carried out on the face sub-regions according to the final shape parameter p obtained in the tracking iterations; the inverted shape I(W(x; p)) is a reconstructed face consistent with the frontal face.
9. The video sequence face identification method based on the AAM model according to claim 1, characterized in that the dimensionality-reduced feature vector η obtained in step (2-3) is:
η = W_PCA^T u;
and the optimal classification feature Γ of the face image to be identified is:
Γ = W_LDA^T W_PCA^T u;
wherein u is the second difference obtained in step (2-3), u = x − f; x is the normalized face sub-region picture after the posture correction obtained in step (2-3), and f is the mean face obtained in step (1-1).
10. The video sequence face identification method based on the AAM model according to claim 1, characterized in that in step (2-5) the minimum Euclidean distance γ1 between the optimal classification feature Γ of the face image to be identified and the optimal classification features Γ_ij of the training pictures obtained in the training stage is:
γ1 = min_{i,j} ||Γ − Γ_ij||,  i = 1, 2, …, C, j = 1, 2, …, N;
wherein C is the total number of classes of normalized training pictures, and N is the total number of training picture samples contained in each class.
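The nearest-neighbour decision of claim 10 can be sketched as below. The class centres, sample counts, and feature dimension are toy assumptions chosen so that the query clearly belongs to one class; the real features would be the Γ_ij produced by the training stage.

```python
import numpy as np

rng = np.random.default_rng(2)
C, N, d = 4, 6, 3                     # classes, samples per class, feature dimension (toy)

# Gamma_train[i, j]: optimal classification feature of training picture j of class i;
# each class is placed around a well-separated centre so the toy decision is clear-cut
centres = (np.arange(C) * 10.0).reshape(C, 1, 1)
Gamma_train = centres + rng.normal(size=(C, N, d))

# Gamma: optimal classification feature of the face to be identified (drawn near class 2)
Gamma = 20.0 + rng.normal(size=d)

# gamma1 = min_{i,j} ||Gamma - Gamma_ij||; the class attaining the minimum is the result
dists = np.linalg.norm(Gamma_train - Gamma, axis=2)   # (C, N) Euclidean distances
i_star, j_star = np.unravel_index(np.argmin(dists), dists.shape)
gamma1 = dists[i_star, j_star]
print(int(i_star))                    # → 2: the identified class is the nearest neighbour's
```

The identification result is then the face class i_star whose training sample attains γ1, exactly as the claim states.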
CN201310445776.XA 2013-09-26 2013-09-26 Video sequence face identification method based on AAM model Active CN103514442B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310445776.XA CN103514442B (en) 2013-09-26 2013-09-26 Video sequence face identification method based on AAM model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310445776.XA CN103514442B (en) 2013-09-26 2013-09-26 Video sequence face identification method based on AAM model

Publications (2)

Publication Number Publication Date
CN103514442A CN103514442A (en) 2014-01-15
CN103514442B true CN103514442B (en) 2017-02-08

Family

ID=49897136

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310445776.XA Active CN103514442B (en) 2013-09-26 2013-09-26 Video sequence face identification method based on AAM model

Country Status (1)

Country Link
CN (1) CN103514442B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105138993B (en) * 2015-08-31 2018-07-27 小米科技有限责任公司 Establish the method and device of human face recognition model
CN106570828B (en) * 2015-10-09 2019-04-12 南京理工大学 A kind of interframe registration asymmetric correction method based on subspace projection
CN105354543A (en) * 2015-10-29 2016-02-24 小米科技有限责任公司 Video processing method and apparatus
CN105426857B (en) * 2015-11-25 2019-04-12 小米科技有限责任公司 Human face recognition model training method and device
CN107153807A (en) * 2016-03-03 2017-09-12 重庆信科设计有限公司 A kind of non-greedy face identification method of two-dimensional principal component analysis
CN106548521A (en) * 2016-11-24 2017-03-29 北京三体高创科技有限公司 A kind of face alignment method and system of joint 2D+3D active appearance models
CN106648079A (en) * 2016-12-05 2017-05-10 华南理工大学 Human face identification and gesture interaction-based television entertainment system
CN106778714B (en) * 2017-03-06 2019-08-13 西安电子科技大学 LDA face identification method based on nonlinear characteristic and model combination
CN107506717B (en) * 2017-08-17 2020-11-27 南京东方网信网络科技有限公司 Face recognition method based on depth transformation learning in unconstrained scene
CN108111868B (en) * 2017-11-17 2020-06-09 西安电子科技大学 MMDA-based privacy protection method with unchangeable expression
CN109241890B (en) * 2018-08-24 2020-01-14 北京字节跳动网络技术有限公司 Face image correction method, apparatus and storage medium
CN111104825A (en) * 2018-10-26 2020-05-05 北京陌陌信息技术有限公司 Face registry updating method, device, equipment and medium
CN111553217A (en) * 2020-04-20 2020-08-18 哈尔滨工程大学 Driver call monitoring method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101739555A (en) * 2009-12-01 2010-06-16 北京中星微电子有限公司 Method and system for detecting false face, and method and system for training false face model
CN101819628A (en) * 2010-04-02 2010-09-01 清华大学 Method for performing face recognition by combining rarefaction of shape characteristic
CN102663413A (en) * 2012-03-09 2012-09-12 中盾信安科技(江苏)有限公司 Multi-gesture and cross-age oriented face image authentication method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI226589B (en) * 2003-04-28 2005-01-11 Ind Tech Res Inst Statistical facial feature extraction method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101739555A (en) * 2009-12-01 2010-06-16 北京中星微电子有限公司 Method and system for detecting false face, and method and system for training false face model
CN101819628A (en) * 2010-04-02 2010-09-01 清华大学 Method for performing face recognition by combining rarefaction of shape characteristic
CN102663413A (en) * 2012-03-09 2012-09-12 中盾信安科技(江苏)有限公司 Multi-gesture and cross-age oriented face image authentication method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A robust and efficient facial feature point tracking method; 黄琛 et al.; 《自动化学报》 (Acta Automatica Sinica); 20150515; Vol. 38, No. 5; pp. 788-796 *

Also Published As

Publication number Publication date
CN103514442A (en) 2014-01-15

Similar Documents

Publication Publication Date Title
CN103514442B (en) Video sequence face identification method based on AAM model
CN101964064B (en) Human face comparison method
Yoo et al. Attentionnet: Aggregating weak directions for accurate object detection
CN101777116B (en) Method for analyzing facial expressions on basis of motion tracking
CN101968846B (en) Face tracking method
CN101499128B (en) Three-dimensional human face action detecting and tracing method based on video stream
CN102270308B (en) Facial feature location method based on five sense organs related AAM (Active Appearance Model)
CN101404086B (en) Target tracking method and device based on video
Tang et al. Facial landmark detection by semi-supervised deep learning
CN109359608B (en) Face recognition method based on deep learning model
US20080063263A1 (en) Method for outlining and aligning a face in face processing of an image
CN102819744B (en) Emotion recognition method with information of two channels fused
CN104392241B (en) A kind of head pose estimation method returned based on mixing
CN102654903A (en) Face comparison method
CN103886589A (en) Goal-oriented automatic high-precision edge extraction method
CN103279936A (en) Human face fake photo automatic combining and modifying method based on portrayal
CN101369309B (en) Human ear image normalization method based on active apparent model and outer ear long axis
CN106599810A (en) Head pose estimation method based on stacked auto-encoding
CN105631477A (en) Traffic sign recognition method based on extreme learning machine and self-adaptive lifting
CN103927554A (en) Image sparse representation facial expression feature extraction system and method based on topological structure
CN105608710A (en) Non-rigid face detection and tracking positioning method
CN101714253B (en) Interactive image segmentation correcting method based on geodesic active region models
CN104732247A (en) Human face feature positioning method
CN107784284A (en) Face identification method and system
CN105069430B (en) A kind of method for designing of multi-pose Face detector based on MSNRD feature

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20181211

Address after: 510330 Room 7,500, Haicheng West Street, Xingang East Road, Haizhu District, Guangzhou City, Guangdong Province

Patentee after: Guangzhou Boguanwen Language Technology Co., Ltd.

Address before: 510640 No. five, 381 mountain road, Guangzhou, Guangdong, Tianhe District

Patentee before: South China University of Technology

TR01 Transfer of patent right

Effective date of registration: 20190925

Address after: 510000 820, room 8, 8, 116 Heng Road, Dongguan, Guangzhou, Guangdong.

Patentee after: Guangzhou Bo Wei Intelligent Technology Co., Ltd.

Address before: 510330 Room 7,500, Haicheng West Street, Xingang East Road, Haizhu District, Guangzhou City, Guangdong Province

Patentee before: Guangzhou Boguanwen Language Technology Co., Ltd.