CN100373395C - Human face recognition method based on human face statistics - Google Patents

Human face recognition method based on human face statistics

Info

Publication number
CN100373395C
Authority
CN
China
Prior art keywords
face
three-dimensional
image
pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2005101115412A
Other languages
Chinese (zh)
Other versions
CN1776712A (en)
Inventor
姜嘉言
张立明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CNB2005101115412A priority Critical patent/CN100373395C/en
Publication of CN1776712A publication Critical patent/CN1776712A/en
Application granted granted Critical
Publication of CN100373395C publication Critical patent/CN100373395C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention belongs to the technical field of pattern recognition, and in particular to a face recognition method based on statistical knowledge of the human face. The method registers a person with a single frontal standard face image, from which virtual images of the face under different poses are generated; a two-stage recognition strategy that separates pose recognition from identity recognition then resolves the problem of pose variation between registration and recognition. The method comprises a three-dimensional morphable face model that represents statistical information about facial structure, a reconstruction algorithm that recovers a three-dimensional face from a frontal face image, and the pose-identity two-stage recognition strategy. Even when only one frontal standard face image is used for registration, the invention still achieves a high recognition rate on side-view test images.

Description

A face recognition method based on statistical knowledge of the human face
Technical field
The invention belongs to the technical field of pattern recognition, and specifically relates to an identity recognition method based on facial images.
Technical background
Although face recognition has been studied for decades, it remains, to this day, one of the challenging problems in pattern recognition. Methods based on two-dimensional images have made great progress, including Eigenfaces [1], Fisherfaces [2], and AAM [3]. However, two-dimensional face recognition methods still face a series of unsolved problems; for example, when face pose, expression, or ambient illumination (PIE) varies significantly, the recognition rate of such systems drops sharply. How to recognize faces under different pose, illumination, and expression conditions remains a focus of current research.
For pose-varying face recognition, traditional methods require training images of the face under sufficiently many different poses, yet in many situations such images are unavailable. The human brain, by contrast, needs only a single frontal standard face image to recognize a person, even when the test image exhibits an obvious pose change. This ability of the brain can be attributed to its capacity for association.
To emulate this associative ability of the brain and achieve pose-invariant face recognition, two classes of methods have been proposed: "normalization" and "expansion". A representative of "normalization" is the three-dimensional morphable face model proposed by V. Blanz & T. Vetter in 2003 [4], which jointly fits shape and texture to an input two-dimensional face image and yields pose-invariant three-dimensional facial features, thus handling pose well. However, because shape and texture must be optimized simultaneously, the fit is extremely time-consuming, easily trapped in local minima, and requires manually provided initial feature-point positions, so it is not yet practical. A representative of "expansion" is the method proposed by Yuxiao Hu et al. in 2004 and 2005 [5][6], which uses a three-dimensional morphable face model to reconstruct a person's three-dimensional face from a single standard image. Its characteristics are automatic shape matching via feature-point positions and texture mapping for the texture, which greatly shortens three-dimensional face reconstruction time. The reconstructed three-dimensional face can then synthesize virtual face images under various poses for use by a back-end two-dimensional recognition system.
Compared with the above methods, the principal features of the present invention are: (1) the three-dimensional face reconstruction algorithms of [5][6] consider only in-plane rotation, translation, and scaling of the face model, whereas the present invention also considers in-depth rotation of the model, making the reconstruction more accurate than [5][6] and far less time-consuming than [4]; (2) the reconstruction algorithm is used to generate virtual face images under sufficiently many different poses, without having to acquire such images from the real world, so that the subsequent pose-identity two-stage recognition strategy can be carried out even when only one frontal standard face image is available; (3) a test face image first undergoes pose recognition and is then identified within the recognized pose interval, which improves the overall recognition rate.
Some concepts related to the present invention are introduced below:
1. Three-dimensional morphable face model
The three-dimensional morphable face model proposed by V. Blanz & T. Vetter [4] is built from 200 European three-dimensional face scans; each face comprises about 100,000 vertices, with known coordinates (x, y, z) and texture (R, G, B) at every vertex. The scans are acquired with a 3D scanner. The raw data are first preprocessed to remove non-face parts, and all face scans are then registered point-to-point to establish dense vertex correspondences (the same vertex index carries the same semantics in every scan; for example, vertex No. 1000 is the nose tip for all faces). Finally, the coordinates and texture data are arranged into shape and texture vectors according to equation (1):
$$S_i = (x_i^1, y_i^1, z_i^1, \dots, x_i^M, y_i^M, z_i^M)^T;\qquad T_i = (R_i^1, G_i^1, B_i^1, \dots, R_i^M, G_i^M, B_i^M)^T \qquad (1)$$
where i indexes the i-th face scan and M is the number of model vertices. To obtain a more compact parametric representation, PCA (see concept 2) is applied to the shape vectors and texture vectors of all samples, yielding the three-dimensional morphable face model:
$$S = \bar{s} + \sum_{j=1}^{m_s} \alpha_j s_j;\qquad T = \bar{t} + \sum_{j=1}^{m_t} \beta_j t_j \qquad (2)$$
where $\bar{s}$ is the mean face shape vector, $\alpha_j$ the j-th shape coefficient, $s_j$ the j-th shape eigenvector, and $m_s$ the number of retained shape principal components; likewise, $\bar{t}$ is the mean texture vector, $\beta_j$ the j-th texture coefficient, $t_j$ the j-th texture eigenvector, and $m_t$ the number of retained texture principal components. By varying the coefficients $\alpha_j$ and $\beta_j$, that is, by linearly combining the shape and texture eigenvectors with different weights, three-dimensional faces of different shapes and textures are obtained.
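As a rough illustration, the linear combination of equation (2) can be sketched in a few lines of NumPy. The dimensions and the random "mean shape" and "eigenvectors" below are toy stand-ins, not the model's actual data:

```python
import numpy as np

# Toy sketch of equation (2): a morphable model is a mean shape plus a
# weighted sum of shape eigenvectors. The patent's model has ~100,000
# vertices; here M and m_s are tiny for illustration.
rng = np.random.default_rng(0)
M = 5            # number of vertices (toy)
m_s = 3          # number of retained shape principal components
s_bar = rng.normal(size=3 * M)          # mean shape vector (x1,y1,z1,...)
s_eig = rng.normal(size=(m_s, 3 * M))   # shape eigenvectors s_j (stand-ins)

def morph_shape(alpha):
    """Return S = s_bar + sum_j alpha_j * s_j, as in equation (2)."""
    return s_bar + alpha @ s_eig

S = morph_shape(np.array([0.5, -0.2, 1.0]))
assert S.shape == (3 * M,)
# setting all coefficients to zero recovers the mean face exactly
assert np.allclose(morph_shape(np.zeros(m_s)), s_bar)
```

The texture part T follows the identical pattern with its own mean vector and eigenvectors.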
2. PCA
PCA is a commonly used unsupervised linear dimensionality-reduction method: it seeks a linear subspace such that the variance of the samples projected onto it is as large as possible. Taking the shape part of the three-dimensional morphable face model as an example (with N three-dimensional face scans in total), the procedure is:
Mean shape of the face scans: $\bar{s} = \frac{1}{N}\sum_{i=1}^{N} S_i$
Covariance matrix: $C_x = \frac{1}{N}\sum_{i=1}^{N} (S_i - \bar{s})(S_i - \bar{s})^T$
The basis of the PCA subspace, i.e. the shape eigenvectors $s_j$, is obtained from the eigendecomposition:
$$C_x s_j = \lambda_j s_j,\qquad j = 1, 2, \dots, m_s$$
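The PCA construction above (mean, covariance, eigendecomposition) can be sketched directly in NumPy on toy data; the scales below are arbitrary stand-ins chosen so the leading components dominate:

```python
import numpy as np

# PCA of toy "shape vectors": compute the mean, the covariance matrix,
# and solve C_x s_j = lambda_j s_j as described above.
rng = np.random.default_rng(1)
N, D = 50, 6
S = rng.normal(size=(N, D)) * np.array([5, 2, 1, 0.5, 0.2, 0.1])
s_bar = S.mean(axis=0)
C = (S - s_bar).T @ (S - s_bar) / N        # covariance matrix C_x
lam, vecs = np.linalg.eigh(C)              # eigh: ascending eigenvalues
order = np.argsort(lam)[::-1]              # reorder to descending
lam, vecs = lam[order], vecs[:, order]
m_s = 3
basis = vecs[:, :m_s]                      # top m_s principal components

assert basis.shape == (D, m_s)
# each retained vector satisfies the eigen-equation C v = lambda v
assert np.allclose(C @ basis[:, 0], lam[0] * basis[:, 0])
```

For ~100,000-vertex shape vectors one would in practice eigendecompose the small N x N Gram matrix instead of the huge covariance matrix, but the subspace obtained is the same.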
3. LDA
LDA is a commonly used supervised linear dimensionality-reduction method: it seeks a linear subspace such that the projected samples scatter tightly within each class and widely between classes. Taking face images as an example, the procedure is as follows:
All two-dimensional face images are first flattened, in row or column order, into column vectors $x_i$, $i = 1, 2, \dots, N$, so that each image corresponds to one sample in a high-dimensional space. Suppose the samples fall into c classes, with $N_i$ samples in class i; then:
Grand mean: $m = \frac{1}{N}\sum_{i=1}^{N} x_i$
Class means: $m_i = \frac{1}{N_i}\sum_{x_j \in X_i} x_j,\quad i = 1, 2, \dots, c$
Within-class scatter matrix: $S_w = \sum_{i=1}^{c}\sum_{x_j \in X_i} (x_j - m_i)(x_j - m_i)^T \qquad (3)$
Between-class scatter matrix: $S_b = \sum_{i=1}^{c} N_i (m_i - m)(m_i - m)^T \qquad (4)$
The basis of the LDA subspace, $W_{LDA} = \arg\max_W \frac{|W^T S_b W|}{|W^T S_w W|} = [w_1\ w_2\ \cdots\ w_m]$, is obtained from the generalized eigendecomposition:
$$S_b w_i = \lambda_i S_w w_i$$
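The LDA recipe above maps directly to code: build the two scatter matrices from labeled samples, then solve the generalized eigenproblem. A sketch on toy data (the class count and dimensions are illustrative only):

```python
import numpy as np
from scipy.linalg import eigh

# LDA sketch: S_w (eq. 3) and S_b (eq. 4) from labeled toy samples,
# then the generalized eigenproblem S_b w = lambda S_w w.
rng = np.random.default_rng(2)
D, c, Ni = 4, 3, 20
centers = rng.normal(scale=5, size=(c, D))
X = np.vstack([centers[i] + rng.normal(size=(Ni, D)) for i in range(c)])
y = np.repeat(np.arange(c), Ni)

m = X.mean(axis=0)                         # grand mean
Sw = np.zeros((D, D))
Sb = np.zeros((D, D))
for i in range(c):
    Xi = X[y == i]
    mi = Xi.mean(axis=0)                   # class mean m_i
    Sw += (Xi - mi).T @ (Xi - mi)          # within-class scatter (3)
    Sb += len(Xi) * np.outer(mi - m, mi - m)  # between-class scatter (4)

lam, W = eigh(Sb, Sw)                      # generalized symmetric eigenproblem
W_lda = W[:, ::-1][:, :c - 1]              # top c-1 discriminant directions

assert W_lda.shape == (D, c - 1)
assert np.allclose(Sb @ W_lda[:, 0], lam[-1] * (Sw @ W_lda[:, 0]))
```

For 10000-dimensional image vectors $S_w$ is singular, so practical systems (including, presumably, the one described here) first reduce dimension, e.g. with PCA, before solving the eigenproblem.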
References
[1]. M. Turk and A. Pentland, "Face Recognition Using Eigenfaces", Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 1991
[2]. P. Belhumeur, J. Hespanha & D. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection", IEEE Trans. on PAMI, July 1997
[3]. T. Cootes & C. Taylor, "Statistical Models of Appearance for Computer Vision", Oct. 2001
[4]. V. Blanz and T. Vetter, "Face Recognition Based on Fitting a 3D Morphable Model", IEEE Trans. on PAMI, Sept. 2003
[5]. Yuxiao Hu, et al., "Automatic 3D Reconstruction for Face Recognition", Proc. Int'l Conf. Automatic Face and Gesture Recognition, 2004
[6]. Dalong Jiang, et al., "Efficient 3D Reconstruction for Face Recognition", Pattern Recognition, 2004
[7]. P. Viola & M. Jones, "Robust Real-Time Face Detection", Proc. IEEE Conf. on Computer Vision, 2001
[8]. I. Matthews & S. Baker, "Active Appearance Models Revisited", International Journal of Computer Vision, Nov. 2004
[9]. Chengjun Liu & H. Wechsler, "Gabor Feature Based Classification Using the Enhanced Fisher Linear Discriminant Model for Face Recognition", IEEE Trans. on Image Processing, Apr. 2002
Summary of the invention
The purpose of this method is to enable recognition of side-view face images while using only a single frontal standard face image for registration.
The inventive method comprises building a three-dimensional morphable face model, a fast and accurate three-dimensional face reconstruction algorithm, and a recognition strategy that splits face recognition into two stages, pose recognition and identity recognition. In practice the method is divided into three phases: training, registration, and testing. Fig. 1 shows the flow charts of the registration and testing phases. The corresponding steps and the three-dimensional face reconstruction algorithm are described below.
1. Training phase:
The main purpose of this phase is to obtain the LDA bases needed for pose recognition and identity recognition.
Pose is first partitioned into p predefined intervals (for example -30°, 0°, and 30°). In the testing phase, recognition of a face region is divided into two stages, pose recognition and identity recognition: pose recognition classifies a test face image into the corresponding pose interval, and identity recognition then identifies the face within that interval. This two-stage strategy reduces the recognition difficulty caused by large pose variation, since identification is easier within a single pose interval.
For pose recognition, the face images of the same pose form one class. The within-class scatter matrix $S_w$ and the between-class scatter matrix $S_b$ are computed by equations (3) and (4), and the LDA pose-recognition basis is obtained. Each sample is projected onto this basis to obtain its reduced-dimension feature, and the features of the samples of each class are averaged to form the feature of that pose.
For identity recognition, within each pose interval the face images of the same identity form one class. The within-class scatter matrices $S_w^i$, $i = 1, 2, \dots, p$ and the between-class scatter matrices $S_b^i$, $i = 1, 2, \dots, p$ are computed by equations (3) and (4) respectively, and the LDA identity-recognition basis of each pose interval is obtained.
2. Registration phase:
(1) For the input frontal standard face image, the AdaBoost method [7] is used for face detection, identifying the image subregion containing the face;
(2) The real-time AAM method [8] is used to automatically locate the coordinates of n feature points on the face region. The feature points are chosen at salient, easily located positions on the face, e.g. eye corners, nose tip, mouth corners, and face contour; n is generally between 40 and 100.
(3) Three-dimensional face reconstruction is performed on this face image; the specific algorithm is given in part 4.
(4) In each pose interval, several virtual images of this face with small pose perturbations are generated. They are projected onto the LDA identity-recognition basis of the corresponding pose to obtain reduced-dimension features, and these features are averaged to form the feature of this face under that pose.
3. Testing phase:
(1) For the input test face image, the AdaBoost method is used for face detection, identifying the image subregion containing the face;
(2) Pose recognition is performed on the face region: it is projected onto the LDA pose-recognition basis to obtain a reduced-dimension feature, which is compared with the stored pose features and classified by the nearest-neighbor rule, yielding the pose of this face image;
(3) Identity recognition is performed on the face region: it is projected onto the LDA identity-recognition basis of the recognized pose to obtain a reduced-dimension feature, which is compared with the stored identity features and classified by the nearest-neighbor rule, yielding the identity of this face image.
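The two-stage decision just described (nearest pose first, then nearest identity within that pose) can be sketched as follows. The LDA bases, pose features, and gallery features here are random stand-ins for illustration, not trained values:

```python
import numpy as np

# Sketch of the pose -> identity two-stage nearest-neighbor decision.
# W_pose / W_id stand in for the trained LDA bases; gallery holds the
# averaged per-identity features stored at registration.
rng = np.random.default_rng(3)
D, d = 100, 5
poses = [-30, 0, 30]
W_pose = rng.normal(size=(D, d))                     # pose-LDA basis (stand-in)
pose_feats = {p: rng.normal(size=d) for p in poses}  # mean feature per pose
W_id = {p: rng.normal(size=(D, d)) for p in poses}   # identity-LDA basis per pose
gallery = {p: {name: rng.normal(size=d) for name in ("A", "B")} for p in poses}

def recognize(x):
    # stage 1: project onto pose basis, nearest pose feature wins
    f = W_pose.T @ x
    pose = min(poses, key=lambda p: np.linalg.norm(f - pose_feats[p]))
    # stage 2: project onto that pose's identity basis, nearest identity wins
    g = W_id[pose].T @ x
    ident = min(gallery[pose], key=lambda n: np.linalg.norm(g - gallery[pose][n]))
    return pose, ident

pose, ident = recognize(rng.normal(size=D))
assert pose in poses and ident in ("A", "B")
```

Restricting stage 2 to a single pose interval is what keeps the identity comparison well-conditioned despite large pose variation across the whole gallery.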
4. Three-dimensional face reconstruction algorithm:
The indices of the feature points are first calibrated offline on the three-dimensional morphable face model; they carry the same semantics as the feature points located on the frontal standard face image by the real-time AAM method. Fig. 2 shows the feature-point positions on the model. Since all three-dimensional face scans were registered during modeling, the semantics of these feature points do not change during deformation.
During model deformation and spatial transformation, the projected coordinates of these feature points on the image plane are needed. For the k-th feature point, its three-dimensional coordinates are:
$$(x^k, y^k, z^k)^T = (\bar{x}^k, \bar{y}^k, \bar{z}^k)^T + \sum_{i=1}^{m_s} \alpha_i (x_i^k, y_i^k, z_i^k)^T \qquad (5)$$
where $(\bar{x}^k, \bar{y}^k, \bar{z}^k)^T$ are the coordinates of this point on the mean face shape and $(x_i^k, y_i^k, z_i^k)^T$ are its coordinates on the i-th shape eigenvector. After the spatial transformation, its coordinates become:
$$(\tilde{x}^k, \tilde{y}^k, \tilde{z}^k)^T = R_\theta \times R_\varphi \times R_\gamma \times s \times (x^k, y^k, z^k)^T + (t_x, t_y, 0)^T \qquad (6)$$
$$R_\theta = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix},\quad R_\varphi = \begin{pmatrix} \cos\varphi & 0 & \sin\varphi \\ 0 & 1 & 0 \\ -\sin\varphi & 0 & \cos\varphi \end{pmatrix},\quad R_\gamma = \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
which are the rotation matrices of the model about the x, y, and z axes respectively; s is a scale factor, $t_x$ the translation along the x axis, and $t_y$ the translation along the y axis. Since an orthographic projection model is used, depth translation along the z axis is ignored. The projected coordinates of the k-th feature point on the image plane are:
$$P_x^k = \tilde{x}^k \times (width/edge) + width/2;\qquad P_y^k = \tilde{y}^k \times (height/edge) + height/2 \qquad (7)$$
where width is the width of the two-dimensional image, height its height, and edge the side length of the three-dimensional viewing volume. Fig. 3 shows the projection of the three-dimensional model.
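Equations (6) and (7) amount to three axis rotations, a scaling, an in-plane translation, and an orthographic mapping into pixel coordinates. A minimal sketch of that transform chain:

```python
import numpy as np

# Equations (6)-(7): rotate a 3D point about x, y, z, scale, translate
# in-plane, then orthographically project to pixel coordinates.
def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def project(p, theta, phi, gamma, scale, tx, ty, width, height, edge):
    # equation (6): rotations, scaling, in-plane translation
    q = rot_x(theta) @ rot_y(phi) @ rot_z(gamma) @ (scale * p) + np.array([tx, ty, 0.0])
    # equation (7): orthographic mapping to pixel coordinates
    px = q[0] * (width / edge) + width / 2
    py = q[1] * (height / edge) + height / 2
    return np.array([px, py])

# sanity check: with identity pose, unit scale, and no translation,
# the model origin lands at the image center
center = project(np.zeros(3), 0, 0, 0, 1.0, 0, 0, width=128, height=128, edge=2.0)
assert np.allclose(center, [64.0, 64.0])
```

Note the depth coordinate $\tilde{z}^k$ drops out under orthographic projection, which is why no z translation is estimated.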
To reconstruct the shape of the three-dimensional face, the projection error of the k-th feature point is defined as:
$$e_k = (P_x^k - \hat{P}_x^k,\ P_y^k - \hat{P}_y^k) \in R^2 \qquad (8)$$
where $(P_x^k, P_y^k)$ are the projected image-plane coordinates of the k-th feature point on the three-dimensional face model given by equation (7), and $(\hat{P}_x^k, \hat{P}_y^k)$ are the coordinates of the same feature point located by the real-time AAM method in step (2) of the registration phase. The projection errors of all n feature points are concatenated into a vector $e = (e_1, e_2, \dots, e_n)^T \in R^{2n}$, and the cost function is defined as:
$$E = \frac{1}{2} e^T e = \frac{1}{2} \sum_{k=1}^{n} \left[ (P_x^k - \hat{P}_x^k)^2 + (P_y^k - \hat{P}_y^k)^2 \right]$$
This is a function of $p = (\alpha_1, \dots, \alpha_{m_s}, \theta, \varphi, \gamma, t_x, t_y, s)^T \in R^{m_s+6}$, which accounts for the shape of the face in the registration image and its three-dimensional rotation, translation, and scaling. The derivative of the error vector e with respect to p (i.e. the Jacobian matrix; derivation omitted) can be obtained explicitly from equations (5)(6)(7)(8), and the Levenberg-Marquardt algorithm is used to optimize p. On convergence, the coefficients $\alpha_i$ characterize the three-dimensional shape of the face in the registration image.
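The least-squares fit over p can be sketched with SciPy's Levenberg-Marquardt solver. The linear shape basis below is a random toy stand-in for the morphable model, the pose here is reduced to an in-plane rotation for brevity, and the residual stacks the per-point errors as in equation (8):

```python
import numpy as np
from scipy.optimize import least_squares

# Toy sketch of the shape/pose fit: minimize the stacked projection error
# e(p) with Levenberg-Marquardt. p = (alpha_1, alpha_2, gamma, tx, ty, s).
rng = np.random.default_rng(4)
n, m_s = 8, 2
base = rng.normal(size=(n, 3))        # mean positions of n feature points (toy)
eig = rng.normal(size=(m_s, n, 3))    # toy shape eigenvectors

def rot_z(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def project(p):
    alpha, gamma = p[:m_s], p[m_s]
    tx, ty, scale = p[m_s + 1], p[m_s + 2], p[m_s + 3]
    pts = base + np.tensordot(alpha, eig, axes=1)      # equation (5)
    q = (scale * pts) @ rot_z(gamma).T + np.array([tx, ty, 0.0])
    return q[:, :2].ravel()            # stacked (Px, Py), cf. equation (8)

p_true = np.array([0.3, -0.5, 0.1, 0.2, -0.1, 1.2])
target = project(p_true)               # synthetic "AAM-detected" points

fit = least_squares(lambda p: project(p) - target,
                    x0=np.array([0, 0, 0, 0, 0, 1.0]), method="lm")
assert fit.cost < 1e-6                 # residual driven to ~zero
```

The full algorithm additionally carries the two in-depth rotations and supplies the analytic Jacobian; `least_squares` falls back to finite differences here.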
After the three-dimensional face shape has been reconstructed, the registration image is used to texture the model: for any vertex on the three-dimensional model with spatial coordinates $(\tilde{x}, \tilde{y}, \tilde{z})$, its projected image-plane coordinates $(P_x, P_y)$ are obtained from equation (7), and the pixel value $I(P_x, P_y)$ of the registration image at these coordinates is taken as the texture of the corresponding model vertex. This completes the texture reconstruction of the three-dimensional face.
Once reconstruction is complete, two-dimensional projected images of this face under different poses can be obtained simply by changing the rotation parameters θ, φ, and γ. Fig. 4 shows the complete three-dimensional face reconstruction process. Compared with the earlier methods [4][5][6], this reconstruction algorithm achieves higher reconstruction accuracy in a shorter time.
Advantages of the invention:
The method adopts a three-dimensional morphable face model to embody statistical knowledge of facial structure and a face reconstruction algorithm that takes three-dimensional information into account, so reconstruction is completed accurately at low computational cost. It further adopts a two-stage recognition strategy: pose recognition is performed first, and identity recognition is then performed within the corresponding pose interval. Experiments show that, with only one frontal standard face image for registration, the method can recognize side-view face images with a high recognition rate.
Brief description of the drawings
Fig. 1: the face recognition framework based on statistical knowledge of facial structure; Fig. 1(a) is the registration flow chart, Fig. 1(b) the testing flow chart.
Fig. 2: the three-dimensional morphable face shape model (mean shape) and the feature points on the model.
Fig. 3: the projection of the three-dimensional model.
Fig. 4: the three-dimensional face reconstruction process. Fig. 4(a): the input standard face image and the feature points marked on it; Fig. 4(b): the reconstructed three-dimensional face shape; Fig. 4(c): the three-dimensional face model after texture mapping; Fig. 4(d): synthesized face images under new poses.
Fig. 5: three face images of the same person at -30°, 0°, and 30° from the large-scale Chinese face image database.
Fig. 6: the 128 × 128 face region detected by AdaBoost, and the 100 × 100 grayscale image after gray conversion and cropping.
Embodiment
Implementation example
The implementation of the method is illustrated below on a large-scale face image database containing 2000 Chinese faces, with three images per person at -30°, 0°, and 30°. Fig. 5 shows the three images of one person. Our three-dimensional morphable face model is derived from 60 Chinese three-dimensional face scans, synthesized by optical principles from photographs taken by two cameras; after preprocessing and registration, each face comprises 26498 vertices. We retain 9 shape eigenvectors and have manually calibrated 60 feature points on the model, as shown in Fig. 2. The implementation proceeds as follows:
1. Training phase:
a) LDA pose-recognition basis:
Pose is divided into three intervals, -30°, 0°, and 30°; for each pose class, 200 face images detected by the AdaBoost method are used as samples to train the LDA pose-recognition basis. AdaBoost outputs a 128 × 128 color face subimage; to further reduce the influence of the background on pose recognition, each detected face image is converted to grayscale and cropped to a 100 × 100 grayscale image before training. The cropping standard is shown in Fig. 6. This yields an LDA pose-recognition basis of two 10000-dimensional vectors. All grayscale images are then projected onto this basis, and the reduced-dimension features are averaged per class to obtain the feature of each pose.
b) LDA identity-recognition basis:
Each frontal face image is reconstructed into a three-dimensional face by the reconstruction algorithm, and 5 virtual images with small pose perturbations are generated in each pose interval (on top of the original pose: lateral rotations of -5°, 0°, 5°, and vertical rotations of -5°, 5°). The 5 virtual images of the same person form one class, and the 600 classes in total are used to train the LDA identity-recognition basis. To further improve robustness to illumination and expression, we use the Gabor features [9] of the images, so each face image becomes a sample in a 10240-dimensional space. Finally we use the first 250 of the 10240-dimensional LDA identity-recognition basis vectors.
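Gabor feature extraction of the kind cited in [9] can be sketched as follows: filter the face image with a small bank of Gabor kernels and stack the response magnitudes into one vector. The kernel parameters, bank size, and image size below are illustrative assumptions, not those of the cited system:

```python
import numpy as np

# Sketch of Gabor feature extraction: convolve the image with Gabor
# kernels at several orientations (one scale, for brevity) and
# concatenate the response magnitudes into a feature vector.
def gabor_kernel(theta, sigma=4.0, lam=8.0, size=15):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    # Gaussian envelope times an oriented cosine carrier
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, n_orient=4):
    feats = []
    for k in range(n_orient):
        ker = gabor_kernel(np.pi * k / n_orient)
        # circular convolution via FFT (same output size as the image)
        resp = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(ker, img.shape)))
        feats.append(np.abs(resp).ravel())
    return np.concatenate(feats)

img = np.random.default_rng(5).random((32, 32))
f = gabor_features(img)
assert f.shape == (4 * 32 * 32,)
```

A real system would use several scales, complex (even and odd) kernels, and downsampling to reach a fixed feature length such as the 10240 dimensions mentioned above.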
2. Registration phase:
a) For the input frontal standard face image, the AdaBoost method is used for face detection, identifying a 128 × 128 face region.
b) The real-time AAM method automatically locates the coordinates of 60 feature points on the face image.
c) Three-dimensional face reconstruction is performed on this face image.
d) On the basis of the -30°, 0°, and 30° poses, lateral rotations of -5°, 0°, 5° and vertical rotations of -5°, 5° are applied, generating 5 virtual 128 × 128 face images per pose.
e) Gabor features are extracted from the virtual face images and projected onto the corresponding LDA identity-recognition basis to obtain a 250-dimensional feature per image; these features are averaged to form the feature of this face under that pose.
3. Testing phase:
a) For the input face image, the AdaBoost method is used for face detection, identifying a 128 × 128 face region.
b) The subimage is converted to grayscale and cropped (Fig. 6) into a 100 × 100 grayscale image, projected onto the LDA pose-recognition basis, and compared with the stored pose features by the nearest-neighbor rule to obtain the face pose.
c) Gabor features of the 128 × 128 face image are extracted and projected onto the LDA identity-recognition basis of the corresponding pose to obtain a 250-dimensional feature, which is compared with the stored identity features by the nearest-neighbor rule to obtain the face identity. In this case, recognizing side-view images (-30° and 30°) with this method achieved a recognition rate of 82.8%.

Claims (2)

1. A face recognition method based on facial statistical knowledge, characterized by comprising building a three-dimensional morphable face model, a fast three-dimensional face reconstruction algorithm, and a recognition strategy that splits face recognition into two stages, pose recognition and identity recognition, the concrete steps being divided into three phases, training, registration, and testing:
(1) Training phase:
Pose is first partitioned into p predefined intervals;
For pose recognition, the face images of the same pose form one class; the within-class scatter matrix $S_w$ and the between-class scatter matrix $S_b$ are computed by equations (3) and (4), and the LDA pose-recognition basis is obtained; each sample is projected onto this basis to obtain its reduced-dimension feature; the features of the samples of the same class are averaged to form the feature of that pose;
For identity recognition, within each pose interval the face images of the same identity form one class; the within-class scatter matrices $S_w^i$, $i = 1, 2, \dots, p$ and the between-class scatter matrices $S_b^i$, $i = 1, 2, \dots, p$ are computed by equations (3) and (4) respectively, and the LDA identity-recognition basis of each pose interval is obtained;
(2) Registration phase:
1. For the input frontal standard face image, the AdaBoost method is used for face detection, identifying the image subregion containing the face;
2. The real-time AAM method is used to automatically locate the coordinates of n feature points on the face subregion, where n is between 40 and 100;
3. Three-dimensional face reconstruction is performed on this face image;
4. In each pose interval, several virtual images of this face with small pose perturbations are generated; they are projected onto the LDA identity-recognition basis of the corresponding pose to obtain reduced-dimension features, which are averaged to form the feature of this face under that pose;
(3) Testing phase:
1. For the input test face image, the AdaBoost method is used for face detection, identifying the image subregion containing the face;
2. Pose recognition is performed on the face subregion: it is projected onto the LDA pose-recognition basis to obtain a reduced-dimension feature, which is compared with the stored pose features and classified by the nearest-neighbor rule, yielding the pose of this face image;
3. Identity recognition is performed on the face subregion: it is projected onto the LDA identity-recognition basis of the corresponding pose to obtain a reduced-dimension feature, which is compared with the stored identity features and classified by the nearest-neighbor rule, yielding the identity of this face image;
Wherein:
Within-class scatter matrix: $S_w = \sum_{i=1}^{c}\sum_{x_j \in X_i} (x_j - m_i)(x_j - m_i)^T \qquad (3)$
Between-class scatter matrix: $S_b = \sum_{i=1}^{c} N_i (m_i - m)(m_i - m)^T \qquad (4)$
Here,
Grand mean: $m = \frac{1}{N}\sum_{i=1}^{N} x_i$
Class means: $m_i = \frac{1}{N_i}\sum_{x_j \in X_i} x_j,\quad i = 1, 2, \dots, c$
$x_i$, $i = 1, 2, \dots, N$ are the vectors obtained by flattening all two-dimensional face images in row or column order, each image corresponding to one sample in a high-dimensional space; c is the number of classes and $N_i$ the number of samples in each class.
2. The face recognition method according to claim 1, characterized in that the three-dimensional face reconstruction algorithm is as follows:
The indices of the feature points are first calibrated offline on the three-dimensional morphable face model; for the k-th feature point, its three-dimensional coordinates are:
$$(x^k, y^k, z^k)^T = (\bar{x}^k, \bar{y}^k, \bar{z}^k)^T + \sum_{i=1}^{m_s} \alpha_i (x_i^k, y_i^k, z_i^k)^T \qquad (5)$$
where $(\bar{x}^k, \bar{y}^k, \bar{z}^k)^T$ are the coordinates of this point on the mean face shape and $(x_i^k, y_i^k, z_i^k)^T$ are its coordinates on the i-th shape eigenvector; after the spatial transformation, its coordinates become:
$$(\tilde{x}^k, \tilde{y}^k, \tilde{z}^k)^T = R_\theta \times R_\varphi \times R_\gamma \times s \times (x^k, y^k, z^k)^T + (t_x, t_y, 0)^T \qquad (6)$$
$$R_\theta = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix},\quad R_\varphi = \begin{pmatrix} \cos\varphi & 0 & \sin\varphi \\ 0 & 1 & 0 \\ -\sin\varphi & 0 & \cos\varphi \end{pmatrix},\quad R_\gamma = \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
which are the rotation matrices of the model about the x, y, and z axes respectively; s is a scale factor, $t_x$ the translation along the x axis, and $t_y$ the translation along the y axis; the projected coordinates of the k-th feature point on the image plane are:
$$P_x^k = \tilde{x}^k \times (width/edge) + width/2;\qquad P_y^k = \tilde{y}^k \times (height/edge) + height/2 \qquad (7)$$
where width is the width of the two-dimensional image, height its height, and edge the side length of the three-dimensional viewing volume;
The projection error of the k-th feature point is defined as:
$$e_k = (P_x^k - \hat{P}_x^k,\ P_y^k - \hat{P}_y^k) \in R^2 \qquad (8)$$
where $(P_x^k, P_y^k)$ are the projected image-plane coordinates of the k-th feature point on the three-dimensional face model given by equation (7), and $(\hat{P}_x^k, \hat{P}_y^k)$ are the coordinates of the same feature point located by the real-time AAM method in step 2 of the registration phase; the projection errors of all n feature points are concatenated into a vector $e = (e_1, e_2, \dots, e_n)^T \in R^{2n}$, and the cost function is defined as:
$$E = \frac{1}{2} e^T e = \frac{1}{2} \sum_{k=1}^{n} \left[ (P_x^k - \hat{P}_x^k)^2 + (P_y^k - \hat{P}_y^k)^2 \right]$$
which is a function of $p = (\alpha_1, \dots, \alpha_{m_s}, \theta, \varphi, \gamma, t_x, t_y, s)^T \in R^{m_s+6}$ and accounts for the shape of the face in the registration image and its three-dimensional rotation, translation, and scaling; the derivative of the error vector e can be obtained explicitly from equations (5)(6)(7)(8), and the Levenberg-Marquardt algorithm is used to optimize p; on convergence, the coefficients $\alpha_i$ characterize the three-dimensional shape of the face in the registration image;
After the reconstruction of the three-dimensional face shape is finished, the registered image is used to texture the model: for any vertex on the three-dimensional model with spatial coordinates $(\tilde{x}, \tilde{y}, \tilde{z})$, its projection coordinates $(P_x, P_y)$ on the image plane are obtained from formula (7); taking the pixel value $I(P_x, P_y)$ of the registered image at $(P_x, P_y)$ as the texture of the corresponding vertex on the three-dimensional model completes the texture reconstruction of the three-dimensional face;
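The texture step above amounts to sampling the registered image at each projected vertex. A minimal sketch, assuming NumPy arrays and nearest-neighbour sampling (the patent does not specify the interpolation), with illustrative names:

```python
import numpy as np

def sample_texture(vertices2d, image):
    """Take the registered image's pixel value I(P_x, P_y) at each projected
    vertex as that vertex's texture, rounding to the nearest pixel and
    clamping coordinates that fall outside the image."""
    h, w = image.shape[:2]
    px = np.clip(np.round(vertices2d[:, 0]).astype(int), 0, w - 1)
    py = np.clip(np.round(vertices2d[:, 1]).astype(int), 0, h - 1)
    return image[py, px]
```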
After reconstruction is complete, the two-dimensional projected images of this face under different poses can be obtained simply by changing the rotation parameters θ, φ and γ.
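The pose-synthesis step can be sketched as follows, varying only the yaw angle φ for brevity; projecting the rotated vertices with formula (7) yields one virtual two-dimensional view per pose. Names are illustrative:

```python
import numpy as np

def yaw(phi):
    """Rotation about the y axis by angle phi."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def virtual_views(vertices, yaw_angles, width, height, edge):
    """Project the reconstructed 3D vertices under several yaw angles,
    yielding one (n, 2) array of image coordinates per pose."""
    views = []
    for phi in yaw_angles:
        r = vertices @ yaw(phi).T
        px = r[:, 0] * (width / edge) + width / 2
        py = r[:, 1] * (height / edge) + height / 2
        views.append(np.stack([px, py], axis=1))
    return views
```

In a full renderer the rotated textured mesh would be rasterized with hidden-surface removal; this sketch only shows where each vertex lands in each virtual view.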
CNB2005101115412A 2005-12-15 2005-12-15 Human face recognition method based on human face statistics Expired - Fee Related CN100373395C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2005101115412A CN100373395C (en) 2005-12-15 2005-12-15 Human face recognition method based on human face statistics

Publications (2)

Publication Number Publication Date
CN1776712A CN1776712A (en) 2006-05-24
CN100373395C true CN100373395C (en) 2008-03-05

Family

ID=36766194

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100409249C (en) * 2006-08-10 2008-08-06 中山大学 Three-dimensional human face identification method based on grid
CN100414562C (en) * 2006-10-10 2008-08-27 南京搜拍信息技术有限公司 Method for positioning feature points of human face in human face recognition system
CN101196984B (en) * 2006-12-18 2010-05-19 北京海鑫科金高科技股份有限公司 Fast face detecting method
CN101383001B (en) * 2008-10-17 2010-06-02 中山大学 Quick and precise front human face discriminating method
KR101608253B1 (en) * 2011-08-09 2016-04-01 인텔 코포레이션 Image-based multi-view 3d face generation
CN105096377B (en) 2014-05-14 2019-03-19 华为技术有限公司 A kind of image processing method and device
CN104504405A (en) * 2014-12-02 2015-04-08 苏州福丰科技有限公司 Method for recognizing three-dimensional face
CN104463237B (en) * 2014-12-18 2018-03-06 中科创达软件股份有限公司 A kind of face verification method and device based on multi-pose identification
CN104850838B (en) * 2015-05-19 2017-12-08 电子科技大学 Three-dimensional face identification method based on expression invariant region
CN106803054B (en) * 2015-11-26 2019-04-23 腾讯科技(深圳)有限公司 Faceform's matrix training method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050045774A (en) * 2003-11-12 2005-05-17 (주)버추얼미디어 Apparatus and method for reconstructing 3d face from facial image with various viewpoints
US20050123202A1 (en) * 2003-12-04 2005-06-09 Samsung Electronics Co., Ltd. Face recognition apparatus and method using PCA learning per subgroup
US20050147280A1 (en) * 2000-12-01 2005-07-07 Microsoft Corporation System and method for face recognition using synthesized images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20080305

Termination date: 20101215