CN102043966A - Face recognition method based on combination of partial principal component analysis (PCA) and attitude estimation - Google Patents


Info

Publication number
CN102043966A
Authority
CN
China
Prior art keywords
face
attitude
human face
principal component
component analysis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201010588311
Other languages
Chinese (zh)
Other versions
CN102043966B (en)
Inventor
潘翔
王玲玲
郭小虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN2010105883116A priority Critical patent/CN102043966B/en
Publication of CN102043966A publication Critical patent/CN102043966A/en
Application granted granted Critical
Publication of CN102043966B publication Critical patent/CN102043966B/en
Legal status: Expired - Fee Related

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face recognition method based on the combination of partial principal component analysis (PCA) and attitude (pose) estimation, comprising the following steps: (1) storing virtual three-dimensional face images in an original sample library in advance, computing an expanded face space over the original sample library with PCA, and estimating the attitudes of two two-dimensional training face images with different attitudes by partial PCA combined with an automatic eye-and-mouth localization algorithm; (2) locating the corresponding sub-face space within the expanded face space according to the attitude estimates; (3) generating a new virtual three-dimensional face image from the two two-dimensional training face images, the expanded face space, and the sub-face space; (4) updating the original sample library with the new virtual three-dimensional face image; (5) estimating the attitude of the two-dimensional face image to be recognized by partial PCA combined with the automatic eye-and-mouth localization algorithm; (6) recognizing the two-dimensional face image to be recognized by partial PCA.

Description

Face recognition method based on the combination of partial principal component analysis and attitude estimation
Technical field
The invention belongs to the field of face recognition technology, and relates to an attitude-robust face recognition method based on the combination of partial principal component analysis and attitude estimation that can generate a virtual three-dimensional face image from two-dimensional face images with different attitudes.
Background technology
In recent decades, face recognition has remained a focus of computer vision research, with wide practical applications in human-computer interaction, video surveillance, and other fields. Variations in illumination, expression, and attitude increase the difficulty of face recognition; attitude variation in particular is the largest bottleneck and remains a challenging problem. Many methods have been proposed to address it, falling into three broad categories: single-view, multi-view, and three-dimensional-model-based. Single-view methods mainly extract invariant features; they work for small attitude changes but fail under large ones. Multi-view methods store images of every person at every angle and build one classifier per angle; although recognition is good, photos of every person at every angle cannot be obtained in practice. Face recognition methods based on three-dimensional models have also been proposed in the prior art, and such models can handle illumination and attitude variation well. However, this approach has problems: first, the computation is heavy and unsuited to real-time processing; second, a large number of three-dimensional scan models are needed to build the database, while in practical applications such as video surveillance most obtainable images are two-dimensional and three-dimensional information is generally unavailable, so the applicability of this approach is limited.
Summary of the invention
The object of the invention is to overcome the above deficiencies and provide a simple face recognition method based on the combination of partial principal component analysis and attitude estimation, so that faces under different attitudes can be recognized.
To achieve the above object, the main steps of the technical solution adopted by the invention are as follows:
(1) Store virtual three-dimensional face images in an original sample library in advance, and compute the expanded face space of the original sample library with principal component analysis; then estimate the attitudes of two two-dimensional training face images with different attitudes using partial principal component analysis combined with the automatic eye-and-mouth localization algorithm;
(2) According to the attitude estimates obtained in step (1), locate the corresponding sub-face space within the expanded face space;
(3) Generate a new virtual three-dimensional face image from the two two-dimensional training face images, the expanded face space, and the sub-face space;
(4) Update the original sample library with the newly generated virtual three-dimensional face image;
(5) Estimate the attitude of the two-dimensional face image to be recognized using partial principal component analysis combined with the automatic eye-and-mouth localization algorithm;
(6) Using the attitude estimate from step (5) and a preset look-up table, crop the two-dimensional face image to be recognized to obtain an occluded two-dimensional test face image, and recognize it with partial principal component analysis.
Further, in step (1), the method of estimating the attitude of a two-dimensional training face image using partial principal component analysis combined with the automatic eye-and-mouth localization algorithm is as follows:
1) Estimate the attitude of the two-dimensional training face image with partial principal component analysis; denote the estimate P1;
2) Using P1 and the preset look-up table, crop the two-dimensional training face image to obtain an occluded two-dimensional training face image;
3) Estimate the attitude of the occluded two-dimensional training face image with partial principal component analysis; denote the estimate P2;
4) Compute the difference between P1 and P2. If its absolute value is less than a preset threshold, take P1 as the final attitude estimate of the two-dimensional training face image; otherwise locate the eyes and mouth of the two-dimensional training face image, estimate the attitude from the localization result, and denote that estimate P3;
5) Compare P3 with P1. If they agree, take P1 as the final attitude estimate of the two-dimensional training face image; otherwise take P2.
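The verification procedure in sub-steps 1)-5) above can be sketched as a small decision function. All four function arguments are hypothetical stand-ins for the partial-PCA estimator, the look-up-table cropping, and the eye-and-mouth attitude estimator; the 5-pixel default threshold follows the embodiment described later.

```python
def verified_pose(image, estimate_pose_pca, crop_by_lookup,
                  estimate_pose_eyes_mouth, threshold=5):
    """Return the final attitude estimate Q for one training image."""
    p1 = estimate_pose_pca(image)          # sub-step 1): estimate on the full image
    cropped = crop_by_lookup(image, p1)    # sub-step 2): crop via the look-up table
    p2 = estimate_pose_pca(cropped)        # sub-step 3): estimate on the cropped image
    if abs(p1 - p2) < threshold:           # sub-step 4): the two estimates agree
        return p1
    p3 = estimate_pose_eyes_mouth(image)   # fall back to eye/mouth localization
    return p1 if p3 == p1 else p2          # sub-step 5): P3 breaks the tie
```

The same logic applies unchanged to the recognition-side estimates P4/P5/P6 in step (5).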
Further, in step (5), the method of estimating the attitude of the two-dimensional face image to be recognized using partial principal component analysis combined with the automatic eye-and-mouth localization algorithm is as follows:
A) Estimate the attitude of the two-dimensional face image to be recognized with partial principal component analysis; denote the estimate P4;
B) Using P4 and the preset look-up table, crop the two-dimensional face image to be recognized to obtain an occluded two-dimensional face image;
C) Estimate the attitude of the occluded two-dimensional face image with partial principal component analysis; denote the estimate P5;
D) Compute the difference between P4 and P5. If its absolute value is less than the preset threshold, take P4 as the final attitude estimate of the two-dimensional face image to be recognized; otherwise locate the eyes and mouth, estimate the attitude from the localization result, and denote that estimate P6;
E) Compare P6 with P4. If they agree, take P4 as the final attitude estimate of the two-dimensional face image to be recognized; otherwise take P5.
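The patent does not spell out the classifier used in step (6) beyond "recognize with partial principal component analysis". A minimal nearest-neighbor sketch under that assumption: project the occluded test face and the (equally cropped) gallery faces into the sub-face space and return the label of the closest match. `recognize`, `V_sub`, and the gallery preparation are all hypothetical.

```python
import numpy as np

def recognize(test_vec, V_sub, gallery_vecs, labels):
    """Nearest-neighbor recognition in the sub-face space.
    test_vec: flattened occluded test face; V_sub: d x k sub-face space;
    gallery_vecs: d x n matrix of flattened occluded gallery faces."""
    w = V_sub.T @ test_vec                  # k-dim feature weights of the test face
    W = V_sub.T @ gallery_vecs              # k x n weights of the gallery faces
    d = np.linalg.norm(W - w[:, None], axis=0)
    return labels[int(np.argmin(d))]
```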
Compared with the prior art, the main advantages of the invention are: (1) two two-dimensional training face images with different attitudes are used to generate a new virtual three-dimensional face image to update the original sample library, so that, unlike multi-view methods, training face images at every attitude are not needed as the sample library, giving the invention broad applicability; (2) compared with face recognition based on three-dimensional models, the invention needs neither three-dimensional scan models to build the sample library nor three-dimensional feature-point processing, which significantly reduces computational complexity and aids real-time processing without losing multi-attitude recognition accuracy; (3) compared with plain principal component analysis, the invention first estimates the attitude of the image to be recognized and then recognizes it with partial principal component analysis according to the estimate, which is more robust to attitude; (4) the invention estimates attitude by partial principal component analysis combined with the automatic eye-and-mouth localization algorithm, which is more accurate than the original attitude estimation method without greatly increasing computational complexity; (5) for the purpose of multi-attitude face recognition, the image to be recognized is cropped according to the preset look-up table, removing the parts of the image unrelated to the virtual three-dimensional face image and improving the recognition rate; (6) the invention is well suited to the field of video surveillance.
Description of drawings
Fig. 1 is a flowchart of the face recognition method based on the combination of partial principal component analysis and attitude estimation;
Fig. 2 shows a two-dimensional training face image and a virtual three-dimensional face image, where (a) is the two-dimensional training face image and (b) is the virtual three-dimensional face image;
Fig. 3 is a flowchart of estimating the attitude of a two-dimensional training face image by partial principal component analysis combined with the automatic eye-and-mouth localization algorithm;
Fig. 4 shows the results of automatic eye and mouth localization, where (a)-(d) are the localization results for two-dimensional training face images at attitudes of 0°, 30°, 60°, and 90°, respectively;
Fig. 5 shows the generation of a virtual three-dimensional face image from two-dimensional training face images at attitudes of 0° and 90°, where (a) is the 0° image, (b) is the 90° image, and (c) is the new virtual three-dimensional face image generated from (a) and (b).
Embodiment
Below, the invention is further illustrated with reference to the accompanying drawings and a specific embodiment, taking the UPC face database as an example. The UPC face database consists of face images of 44 people, 27 per person: 9 different attitudes (0°, ±30°, ±45°, ±60°, ±90°) under each of 3 lighting conditions (natural light, strong light from the 45° direction, strong light from the 0° direction). The direction toward the left side of the face is taken as positive. The present embodiment considers only attitude variation under natural light and does not consider other illumination variations.
As shown in Fig. 1, the steps of the present embodiment are as follows:
Step 1: store virtual three-dimensional face images in the original sample library in advance, and compute the expanded face space of the original sample library with principal component analysis:
In the present embodiment the two-dimensional face images of the UPC face database are normalized to a resolution of 122 × 100 pixels, as shown in Fig. 2(a). Fifty virtual three-dimensional face images synthesized from the UPC face database are stored in the original sample library in advance, as shown in Fig. 2(b), consisting of 25 people's virtual three-dimensional face images and their mirror images. Each virtual three-dimensional face image is stitched from five face images (frontal, ±45°, ±90°), and its resolution is normalized to 122 × 240 pixels. The basic purpose of principal component analysis is to find, by a linear transformation, an optimal set of orthonormal basis vectors whose linear combinations reconstruct the samples with minimum error. The expanded face space of the original sample library is exactly this optimal orthonormal basis. It is computed as follows:
S_T = Σ_{i=1}^{M} (x_i − μ)(x_i − μ)^T    (1)
In formula (1), S_T is the total scatter matrix, M is the number of samples in the original sample library, x_i is the column vector obtained by unrolling the i-th virtual three-dimensional face image row by row, and μ is the mean of x_i (i = 1, 2, ..., 50). The orthonormalized eigenvectors of S_T form the orthogonal basis of the subspace; taking the principal eigenvectors as the basis is what is called principal component analysis. Because principal component analysis operates on one-dimensional signals, the main steps are as follows. Unroll the 50 virtual three-dimensional face images in the original sample library row by row into column vectors of dimension N = 122 × 240, where 122 and 240 are the height and width of a virtual three-dimensional face image and 50 is the number of samples; denote the library x_i (i = 1, 2, ..., 50). Compute the mean μ of the library, i.e. the average face; compute the difference of each sample from the average face, φ_i = x_i − μ; construct the total scatter matrix C = A·A^T with A = [φ_1, φ_2, ..., φ_M]. Computing the eigenvectors v_i of C directly would be very expensive because C is N × N (N = 122 × 240); instead compute the eigenvalues λ_i and orthonormal eigenvectors u_i of the M × M matrix A^T·A, from which the eigenvectors of C are obtained by the following formula:
v_i = (1/√λ_i) A·u_i    (2)
Sort the eigenvalues λ_i in descending order and keep the nonzero eigenvectors v_i corresponding to the k largest eigenvalues, where, in a preferred implementation, k is chosen so that the energy of the first k eigenvalues exceeds 90% of the total eigenvalue energy of C. In the present embodiment k = 20. The subspace V spanned by the eigenvectors v_i (i = 1, 2, ..., 20) is the feature vector space, i.e. the expanded face space.
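The small-matrix shortcut of formulas (1)-(2) and the 90% energy rule can be sketched in NumPy. `expansion_face_space` is a hypothetical name, and the data used below is arbitrary.

```python
import numpy as np

def expansion_face_space(samples, energy=0.90):
    """samples: M x N matrix, one flattened virtual 3D face per row.
    Returns the mean face, the top-k orthonormal eigenvectors of the
    scatter matrix (as columns), and k, via the small M x M matrix A^T A."""
    M, N = samples.shape
    mu = samples.mean(axis=0)              # average face
    A = (samples - mu).T                   # N x M matrix of differences phi_i
    lam, U = np.linalg.eigh(A.T @ A)       # M x M problem instead of N x N
    order = np.argsort(lam)[::-1]          # sort eigenvalues descending
    lam, U = lam[order], U[:, order]
    k = int(np.searchsorted(np.cumsum(lam) / lam.sum(), energy)) + 1
    V = A @ U[:, :k] / np.sqrt(lam[:k])    # formula (2): v_i = A u_i / sqrt(lambda_i)
    return mu, V, k
```

The returned columns are orthonormal because v_i^T v_j = u_i^T (A^T A) u_j / √(λ_i λ_j) = δ_ij.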
Step 2: estimate the attitudes of the two two-dimensional training face images with different attitudes by partial principal component analysis combined with the automatic eye-and-mouth localization algorithm; denote the final attitude estimates Q and Q′. The main flowchart is shown in Fig. 3.
The accuracy of attitude estimation affects both the precision of the synthesized virtual three-dimensional face image and the recognition rate. Because partial principal component analysis alone does not estimate the attitude of a two-dimensional training face image very accurately and can be further improved, the procedure shown in Fig. 3 is adopted, as follows:
1) Estimate the attitudes of the two two-dimensional training face images with partial principal component analysis; denote the estimates P1 and P1′ respectively;
The main step is to obtain the optimal projection axes z_k′ (k′ = 1, ..., N) from the original sample library with two-dimensional principal component analysis, as follows:
G_t = E[(I − Ī)^T (I − Ī)] = (1/M) Σ_{i=1}^{M} (I_i − Ī)^T (I_i − Ī)    (3)
In formula (3), G_t is the image covariance matrix, M is the number of samples in the original sample library, I_i is the i-th virtual three-dimensional face image in the library, and Ī is the mean of I_i (i = 1, 2, ..., 50). Compute the eigenvalues of G_t and sort them in descending order; the eigenvectors corresponding to the k′ largest eigenvalues are the desired optimal projection axes z_k′ (k′ = 1, ..., N), where, in a preferred implementation, k′ is chosen so that the energy of the first k′ eigenvalues exceeds 90% of the total eigenvalue energy of G_t. The present embodiment takes k′ = 16. The j obtained from formula (4) is the attitude of the two-dimensional training face image:
min_j Σ_{k=1}^{N} Σ_{l=1}^{P} (s_k(l) − r_k(l − j))²    (4)
In formula (4), N is the number of optimal projection axes (N = k′), and the vector s_k is the projection of the two-dimensional training face image on the optimal projection axis z_k′. Each vector s_k has P elements, where P is the width of the two-dimensional training face image, and r_k is the projection of the mean virtual three-dimensional face image Ī on z_k′. Applying this to the two two-dimensional training face images with different attitudes yields their attitudes j and j′, where j is the attitude estimate P1 of one training image and j′ is the attitude estimate P1′ of the other.
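Formula (4) is a sliding comparison of projections: the P-wide projection of the 2D image is slid across the wider projection of the mean 3D face, and the best-matching offset j is the attitude. A minimal sketch, assuming the projections s and r have already been computed with the optimal axes (`estimate_pose_offset` is a hypothetical name):

```python
import numpy as np

def estimate_pose_offset(s, r):
    """s: k x P projections of the 2D image on the optimal axes.
    r: k x W projections of the mean virtual 3D face (W >= P).
    Returns the offset j minimizing formula (4)."""
    k, P = s.shape
    W = r.shape[1]
    costs = [np.sum((s - r[:, j:j + P]) ** 2) for j in range(W - P + 1)]
    return int(np.argmin(costs))
```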
2) Using the attitude estimates P1 and P1′ and the preset look-up table, crop the two two-dimensional training face images to obtain two occluded two-dimensional training face images:
Comparing a virtual three-dimensional face image with a two-dimensional training face image shows that some regions of the two-dimensional training face image are highly correlated with the virtual three-dimensional face image while others are completely uncorrelated; using the uncorrelated information would reduce both the accuracy of the new virtual three-dimensional face image and the accuracy of recognition. Therefore, in the algorithm that generates the new virtual three-dimensional face image, the two-dimensional training face image is replaced by the part highly correlated with the virtual three-dimensional face image, i.e. the occluded two-dimensional training face image. The preset look-up table specifies the cropping width range of the occluded two-dimensional training face image for each attitude; the choice of range depends on the resolutions of the two-dimensional training face images and the virtual three-dimensional face images, is an empirical setting, and is measured in pixels. The width ranges in the preset look-up table were chosen experimentally to maximize the recognition rate for two-dimensional face images at the various attitudes in the UPC face database; Table 1 shows one such look-up table. Here the resolution of the two-dimensional training face images and the images to be recognized in the UPC face database is 122 × 100 pixels, and the resolution of the virtual three-dimensional face images is 122 × 240 pixels. Because the differences among 30°, 45°, and 60° and among −30°, −45°, and −60° are very small, the cropping width ranges for attitudes of 30°, 45°, and 60° are the same in the look-up table, as are those for −30°, −45°, and −60°.
Table 1: look-up table

Estimated attitude    Cropping width range (pixels)
0°                    20-80
30°                   30-90
45°                   30-90
60°                   30-90
90°                   40-100
−30°                  0-60
−45°                  0-60
−60°                  0-60
−90°                  0-50
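The look-up-table cropping amounts to a simple column slice of the image. A sketch with the width ranges transcribed from Table 1 (`crop_by_lookup` and `CROP_TABLE` are hypothetical names):

```python
import numpy as np

# Width ranges from Table 1, keyed by estimated attitude in degrees.
CROP_TABLE = {0: (20, 80), 30: (30, 90), 45: (30, 90), 60: (30, 90),
              90: (40, 100), -30: (0, 60), -45: (0, 60), -60: (0, 60),
              -90: (0, 50)}

def crop_by_lookup(image, pose):
    """image: H x W array; pose: estimated attitude in degrees.
    Returns the occluded (cropped) face image."""
    lo, hi = CROP_TABLE[pose]
    return image[:, lo:hi]
```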
3) Estimate the attitudes of the two occluded two-dimensional training face images with the partial principal component analysis method of 1); denote the estimates P2 and P2′;
4) Compute the difference between P1 and P2. If its absolute value is less than a preset threshold (in general this threshold can be set to 5% of the width of the two-dimensional training face image, i.e. 5 pixels), take P1 as the final attitude estimate Q of that training image; otherwise locate the eyes and mouth of that training image, estimate the attitude from the localization result, and denote the estimate P3. Likewise compute the difference between P1′ and P2′; if its absolute value is less than the preset threshold, take P1′ as the final attitude estimate Q′ of the other training image; otherwise locate the eyes and mouth of the other training image, estimate the attitude from the localization result, and denote the estimate P3′;
The eyes and mouth of a two-dimensional training face image are located as follows:
In the YC_bC_r color space, C_b is high and C_r is low near the eyes, so the eye map EyeMap is created by formula (5):
EyeMap = (1/3){ C_b² + (255 − C_r)² + C_b/C_r }    (5)
In formula (5), C_b and C_r are the two chrominance components of the YC_bC_r color space, the blue-difference and red-difference components respectively.
In the YC_bC_r color space, the mouth region has a stronger red component and a weaker blue component than other facial regions, so C_r dominates C_b there, and the mouth map MouthMap is created by formula (6):
MouthMap = C_r²·(C_r² − η·C_r/C_b)²    (6)
where the coefficient η = 0.95 · [(1/n) Σ_{(x,y)∈FG} C_r(x,y)²] / [(1/n) Σ_{(x,y)∈FG} C_r(x,y)/C_b(x,y)]    (7)
In formula (7), FG denotes the face region and n is the number of pixels in the face region.
Binarize the eye map and the mouth map with adaptive thresholds, then apply erosion followed by dilation to obtain the positions of the eyes and mouth; the localization results are shown in Fig. 4. Figs. 4(a)-(d) show the eye and mouth localization results for two-dimensional training face images at attitudes of 0°, 30°, 60°, and 90°, respectively. As can be seen from Figs. 4(a)-(d), eye and mouth localization works well at every attitude. The attitude can then be roughly estimated from the positions of the eyes and mouth, with main steps as follows: if only one eye is detected, its x coordinate lies in the left half, and the mouth center is also on the left, the image is taken as a left profile; if its x coordinate lies in the right half and the mouth center is on the right, it is taken as a right profile. If two eyes are detected and the average x coordinate of the eyes is at the horizontal center of the image (about pixel 50), with the x coordinate of the mouth center also about pixel 50, the image is taken as frontal; if the average x coordinate of the eyes and the x coordinate of the mouth center both lie to the left of the horizontal center, the image is taken as a left-deflected face, where left deflection covers 30°, 45°, and 60°; if both lie to the right of the horizontal center, the image is taken as a right-deflected face, where right deflection covers −30°, −45°, and −60°;
5) Compare P3 with P1. If they agree, take P1 as the final attitude estimate Q of that training image; otherwise take P2 as Q. Likewise compare P3′ with P1′; if they agree, take P1′ as the final attitude estimate Q′ of the other training image; otherwise take P2′ as Q′.
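Formulas (5)-(7) above can be transcribed directly. This sketch follows the patent's formulas literally (published variants of these chrominance maps normalize the components differently); `eye_map` and `mouth_map` are hypothetical names, and the inputs are floating-point C_b/C_r planes over the face region FG.

```python
import numpy as np

def eye_map(cb, cr):
    """Formula (5): EyeMap = (1/3){ Cb^2 + (255 - Cr)^2 + Cb/Cr }."""
    cb = cb.astype(float)
    cr = cr.astype(float)
    return (cb ** 2 + (255.0 - cr) ** 2 + cb / cr) / 3.0

def mouth_map(cb, cr):
    """Formulas (6)-(7): MouthMap = Cr^2 * (Cr^2 - eta * Cr/Cb)^2,
    with eta computed over the face region per formula (7)."""
    cb = cb.astype(float)
    cr = cr.astype(float)
    eta = 0.95 * (cr ** 2).mean() / (cr / cb).mean()
    return cr ** 2 * (cr ** 2 - eta * cr / cb) ** 2
```

Thresholding and morphology (e.g. erosion then dilation) would then be applied to these maps to localize the eyes and mouth.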
Step 3: according to the final attitude estimates Q and Q′ obtained in step 2, choose the corresponding sub-face spaces V₁′ and V₂′ within the expanded face space V, as follows:
Convert each column of the expanded face space V into a 122 × 240 matrix, giving 20 expanded-face-space matrices, each of which resembles a face. According to the final attitude estimate Q, take from each of the 20 expanded-face-space matrices the submatrix whose width range is Q to Q+100 (100 being the width of a two-dimensional training face image); convert each 122 × 100 submatrix into a column vector row by row and merge them into a 12200 × 20 matrix, which is the sub-face space V₁′ of one of the two-dimensional training face images. Similarly, according to the final attitude estimate Q′, take the submatrices with width range Q′ to Q′+100 from the 20 expanded-face-space matrices and merge them into a 12200 × 20 matrix, the sub-face space V₂′ of the other two-dimensional training face image.
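The sub-face-space selection in Step 3 amounts to slicing a column band out of each reshaped eigenface. A sketch assuming row-major unrolling, which the text's row-by-row expansion suggests (`sub_face_space` is a hypothetical name):

```python
import numpy as np

def sub_face_space(V, pose, H=122, W3=240, W2=100):
    """V: (H*W3) x k expanded face space, one flattened eigenface per column.
    Select columns [pose, pose+W2) of each reshaped H x W3 eigenface and
    re-flatten, giving the (H*W2) x k sub-face space."""
    k = V.shape[1]
    faces = V.T.reshape(k, H, W3)        # k eigenfaces as H x W3 images
    sub = faces[:, :, pose:pose + W2]    # width range Q .. Q+100
    return sub.reshape(k, H * W2).T      # merge back into an (H*W2) x k matrix
```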
Step 4: generate the new virtual three-dimensional face image from the two two-dimensional training face images with different attitudes, the expanded face space, and the sub-face spaces. Compute coefficients a_1, a_2, ..., a_n from the sub-face spaces so as to minimize the reconstruction error of the two-dimensional training face images, then project the coefficients a_1, a_2, ..., a_n back into the expanded face space to obtain the new virtual three-dimensional face image. The steps are as follows:
1) By principal component analysis, a two-dimensional training face image can be reconstructed from the coefficients and the sub-face-space eigenfaces:
a_1·v_1′ + a_2·v_2′ + … + a_n·v_n′ = I_input    (8)
In formula (8), a_1, a_2, ..., a_n are the coefficients and v_1′, ..., v_n′ are the eigenvectors of the sub-face space.
2) Compute the coefficients a_1, a_2, ..., a_n that minimise the reconstruction error of the two training face images:

    min e^2 = ||Ĩ_input1 − I_input1||^2 + ||Ĩ_input2 − I_input2||^2        (9)

In formula (9), Ĩ_input1 is the reconstruction of the first two-dimensional training face image I_input1, and Ĩ_input2 is the reconstruction of the second two-dimensional training face image I_input2.
The values of a_1, a_2, ..., a_n solving formula (9), collected into the vector α, are given by formula (10):

    α = [V'_1^T·V'_1 + V'_2^T·V'_2]^(−1) · [V'_1^T·I_input1 + V'_2^T·I_input2]        (10)

In formula (10), V'_1 and V'_2 are the sub-face spaces of the first and the second two-dimensional training face image, respectively.
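Formula (10) is the normal-equation solution of the joint least-squares problem in formula (9). A minimal NumPy sketch, with array shapes and the `fit_coefficients` name assumed for illustration:

```python
import numpy as np

def fit_coefficients(V1, V2, I1, I2):
    """Solve formula (10): the coefficient vector alpha minimising the
    joint reconstruction error of the two training face images.

    V1, V2 : (d, n) sub-face spaces of the two training images.
    I1, I2 : (d,) flattened training images.
    Returns alpha of shape (n,), the solution of the normal equations
    [V1^T V1 + V2^T V2] alpha = V1^T I1 + V2^T I2.
    """
    A = V1.T @ V1 + V2.T @ V2
    b = V1.T @ I1 + V2.T @ I2
    return np.linalg.solve(A, b)
```

In the patent's setting d = 12200 and n = 20, so A is only 20 × 20 and the solve is cheap.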
3) Projecting the vector α back onto the expanded face space generates the new virtual three-dimensional face image:

    I_180° = Σ_{k=1..n} α_k·V_k        (11)

In formula (11), α_k (k = 1, 2, ..., n) are the coefficients obtained from formula (10), n is the dimension of the vector α, V_k is the expanded face space, and I_180° is the newly generated virtual three-dimensional face image.
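The back-projection of formula (11) is a single matrix product. A sketch, reusing the assumed row-wise layout of the expanded face space from the earlier example:

```python
import numpy as np

def virtual_face(alpha, V, h=122, full_w=240):
    """Formula (11): re-project alpha onto the expanded face space V
    ((n, h*full_w), one flattened eigen-face per row) and reshape the
    result into the h x full_w virtual three-dimensional face image."""
    return (alpha @ V).reshape(h, full_w)
```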
Figure 5 shows the virtual three-dimensional face image generated from two-dimensional training face images at poses of 0° and 90°; as can be seen, the new virtual three-dimensional face image contains the information of the training face images at each of the different poses.
Step 5: update the original sample library with the newly generated virtual three-dimensional face image;
Step 6: in the recognition stage, apply the partial principal component analysis method, combined with the automatic eye and mouth localisation algorithm, to estimate the pose of the two-dimensional test face image; the final pose estimate is denoted Q2. The procedure matches step 3 and runs as follows:
A) Estimate the pose of the test face image with the partial principal component analysis method; denote the estimate P4.
The main steps are: the optimal projection axes z_k' (k' = 1, ..., N) are obtained from the original sample library by the two-dimensional principal component analysis method; the j obtained from formula (12) is then the pose P4 of the test face image:
    min_j Σ_{k=1..N} Σ_{l=1..P} (s_k(l) − r_k(l + j))^2        (12)

In formula (12), N is the number of optimal projection axes; the vector s_k is the projection of the test face image onto the optimal axis z_k'. Each vector s_k has P elements, where P is the width of the test face image; r_k is the projection onto z_k' of the mean of the virtual three-dimensional face images in the original sample library.
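Formula (12) slides the probe's projections along the mean library projections and keeps the offset with the smallest squared difference. A sketch, with array shapes assumed for illustration:

```python
import numpy as np

def estimate_pose(s, r):
    """Formula (12): slide the probe projections along the mean library
    projections and return the offset j with the smallest squared error.

    s : (N, P) projections of the probe image on the N optimal axes.
    r : (N, R) mean library projections, with R >= P so the window slides.
    """
    N, P = s.shape
    R = r.shape[1]
    # sum over both k and l for every candidate offset j
    errors = [np.sum((s - r[:, j:j + P]) ** 2) for j in range(R - P + 1)]
    return int(np.argmin(errors))
```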
B) Using the pose estimate P4 and the preset look-up table, crop the test face image to obtain a cropped test face image;
C) Apply the partial principal component analysis method of a) to the cropped test face image; denote this pose estimate P5;
D) Compute the difference between P4 and P5. If its absolute value is below a preset threshold (typically 5% of the test image width, i.e. 5 pixels), take P4 as the final pose estimate Q2 of the test face image; otherwise, locate the eyes and mouth in the test face image, estimate the pose from the localisation result, and denote that estimate P6;
The eyes and mouth of the test face image are located as follows:
In the YCbCr colour space, Cb is high and Cr is low near the eyes, so an eye map is created by formula (5). Also in YCbCr, the mouth region has a stronger red component and a weaker blue component than the rest of the face, so a mouth map is created by formula (6). The eye map and mouth map are each binarised with an adaptive threshold and then eroded and dilated, yielding the positions of the eyes and mouth. The pose can then be roughly estimated from these positions, as follows: if only one eye is detected, its x coordinate lies in the left half of the image, and the mouth centre is also on the left, the face is taken as a left profile; if its x coordinate lies in the right half and the mouth centre is on the right, a right profile; if both eyes are detected, their mean x coordinate is near the image centre along x (about pixel 50), and the x coordinate of the mouth centre is also about pixel 50, a frontal view; if the mean x coordinate of the eyes and the x coordinate of the mouth centre both lie to the left of the image centre, a left-turned face (covering 30°, 45°, and 60°); if both lie to the right of the image centre, a right-turned face (covering −30°, −45°, and −60°);
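Formulas (5) and (6) themselves are not reproduced in this excerpt, so the following NumPy sketch only encodes the stated intuition (eyes: high Cb, low Cr; mouth: strong red, weak blue). The map expressions, the BT.601 conversion, and the simple thresholding are assumptions; the erosion/dilation step from the text is omitted here:

```python
import numpy as np

def eye_mouth_maps(rgb):
    """Hypothetical eye/mouth maps in YCbCr, following the intuition above.

    rgb : (H, W, 3) uint8 image.  Returns two (H, W) binary maps.
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    # ITU-R BT.601 RGB -> Cb, Cr conversion
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    eye_map = cb / (cr + 1e-6)                    # large where Cb high, Cr low
    mouth_map = cr * np.clip(cr - 0.5 * cb, 0.0, None)  # strong red, weak blue
    def binarise(m):                              # crude adaptive threshold
        return (m > m.mean() + m.std()).astype(np.uint8)
    return binarise(eye_map), binarise(mouth_map)
```

Centroids of the connected regions in the two maps would then feed the left/right/frontal rules listed above.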
F) Compare P6 with P4. If they agree, take P4 as the final pose estimate Q2 of the test face image; otherwise, take P5 as the final pose estimate Q2.
Step 7: using the pose estimate Q2 obtained in step (6) and the preset look-up table, crop the test face image to obtain a cropped two-dimensional test face image, and recognise it with the partial principal component analysis method, as follows:
1) According to the pose estimate Q2 and the look-up table shown in Table 1, crop the test face image to obtain the cropped two-dimensional test face image.
2) Recognise the cropped two-dimensional test face image with the partial principal component analysis method, as given by:
    min_i Σ_{k=1..N} Σ_{l=1..P} (r_k(l) − r^i_k(l + j))^2        (13)

In formula (13), j is the final pose estimate Q2 of the test face image; N is the number of optimal projection axes z_k' (k' = 1, ..., N) obtained by the two-dimensional principal component analysis method; r_k(l) is the projection of the cropped test face image onto z_k', each r_k having P elements, where P is the width of the cropped test face image; r^i_k is the projection onto z_k' of the i-th virtual three-dimensional face image in the updated sample library. The i that solves formula (13) is the recognition result for the test face image.
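Formula (13) is a nearest-neighbour search over the library projections at the estimated pose offset. A sketch, with the `recognise` name and array shapes assumed:

```python
import numpy as np

def recognise(probe, library, j):
    """Formula (13): compare the cropped probe projections with every
    library face at pose offset j; the arg-min index is the identity.

    probe   : (N, P) projections of the cropped test image on the N axes.
    library : (M, N, R) projections of the M library faces, with R >= P + j.
    """
    N, P = probe.shape
    # squared error against each library face, summed over axes k and l
    scores = [np.sum((probe - lib[:, j:j + P]) ** 2) for lib in library]
    return int(np.argmin(scores))
```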
II. Verification of the technical effect
(1) Verification method:
To verify the effect of the present invention on multi-pose face recognition, two experiments were run on the UPC face database, using an Intel Pentium 4 processor with a 3.06 GHz clock frequency and 2 GB of memory.
Experiment 1: comparison of the multi-pose face recognition performance of the prior-art principal component analysis method with that of the present invention.
In experiment 1, the test images are 216 face images: 24 people at 9 angles each under natural light. For the algorithm of the present invention, the sample library holds 24 virtual three-dimensional face images, each generated by the present algorithm from two-dimensional training face images at poses of 0° and 90°. For the principal component analysis method, the sample library holds the 48 two-dimensional training face images of the 24 people at poses of 0° and 90° under natural light. The recognition rates are shown in Table 2 and the running times in Table 3, in milliseconds (ms).
Table 2. Recognition rates of the principal component analysis method and the present invention

                      Principal component analysis    The present invention
  Recognition rate    39.8%                           86.6%

Table 3. Running time of each algorithm

                      Principal component analysis    The present invention
  Time (ms)           296                             406
Experiment 2: comparison of the multi-pose face recognition performance of the prior-art partial principal component analysis method with that of the present invention.
In experiment 2, the test images are again 216 face images: 24 people at 9 angles each under natural light. For the present invention, the sample library holds 24 virtual three-dimensional face images generated by the present method from training face images at poses of 0° and 90°, denoted training sample A; for the partial principal component analysis method, the sample library holds 24 virtual three-dimensional face images generated by the partial principal component analysis method from the same training face images, denoted training sample B. Pose-estimation accuracy is shown in Table 4, recognition rates in Table 5, and running times, in milliseconds (ms), in Table 6.
Table 4. Pose-estimation accuracy

                             Partial principal component analysis    The present invention
  Pose-estimation accuracy   92.13%                                  94.44%

Table 5. Recognition rates of the partial principal component analysis method and the present invention

                      Partial principal component analysis    The present invention
  Recognition rate    82.8%                                   86.6%

Table 6. Running time of each algorithm

  Mean time (ms)                          Pose estimation    Recognition    Total
  The present invention                   328                78             406
  Partial principal component analysis    312                78             390
(2) Experimental conclusions:
Table 2 shows that the recognition rate of the present method is more than double that of the prior-art principal component analysis method, making it a pose-robust face recognition algorithm. Tables 4 and 5 show that combining the partial principal component analysis method with the automatic eye and mouth localisation algorithm, as in the present invention, gives more accurate pose estimates and a higher recognition rate than pose estimation with the prior-art partial principal component analysis method alone. Tables 3 and 6 show that although the present method takes longer than the principal component analysis method, its running time is close to that of the partial principal component analysis method, so it remains broadly suitable for real-time processing.

Claims (3)

1. A face recognition method based on the combination of partial principal component analysis and pose estimation, characterised by comprising the steps of:
(1) storing virtual three-dimensional face images in an original sample library in advance, computing the expanded face space of the original sample library with a principal component analysis method, and estimating the poses of two two-dimensional training face images of different poses with a partial principal component analysis method combined with an automatic eye and mouth localisation algorithm;
(2) locating, according to the pose estimates obtained in step (1), the corresponding sub-face spaces within the expanded face space;
(3) generating a new virtual three-dimensional face image from the two training face images of different poses, the expanded face space, and the sub-face spaces;
(4) updating the original sample library with the newly generated virtual three-dimensional face image;
(5) estimating the pose of a two-dimensional test face image with the partial principal component analysis method combined with the automatic eye and mouth localisation algorithm;
(6) cropping the test face image, using the pose estimate obtained in step (5) and a preset look-up table, to obtain a cropped two-dimensional test face image, and recognising the cropped test face image with the partial principal component analysis method.
2. The face recognition method based on the combination of partial principal component analysis and pose estimation according to claim 1, characterised in that, in step (1), the pose of a two-dimensional training face image is estimated with the partial principal component analysis method combined with the automatic eye and mouth localisation algorithm as follows:
1) estimating the pose of the training face image with the partial principal component analysis method, the estimate being denoted P1;
2) cropping the training face image, using the pose estimate P1 and the preset look-up table, to obtain a cropped training face image;
3) estimating the pose of the cropped training face image with the partial principal component analysis method, this estimate being denoted P2;
4) computing the difference between P1 and P2; if its absolute value is below a preset threshold, taking P1 as the final pose estimate of the training face image; otherwise, locating the eyes and mouth in the training face image and estimating the pose from the localisation result, this estimate being denoted P3;
5) comparing P3 with P1; if they agree, taking P1 as the final pose estimate of the training face image; otherwise, taking P2 as the final pose estimate.
3. The face recognition method based on the combination of partial principal component analysis and pose estimation according to claim 1, characterised in that, in step (5), the pose of the two-dimensional test face image is estimated with the partial principal component analysis method combined with the automatic eye and mouth localisation algorithm as follows:
A) estimating the pose of the test face image with the partial principal component analysis method, this estimate being denoted P4;
B) cropping the test face image, using the pose estimate P4 and the preset look-up table, to obtain a cropped test face image;
C) estimating the pose of the cropped test face image with the partial principal component analysis method, this estimate being denoted P5;
D) computing the difference between P4 and P5; if its absolute value is below a preset threshold, taking P4 as the final pose estimate of the test face image; otherwise, locating the eyes and mouth in the test face image and estimating the pose from the localisation result, this estimate being denoted P6;
F) comparing P6 with P4; if they agree, taking P4 as the final pose estimate of the test face image; otherwise, taking P5 as the final pose estimate.
CN2010105883116A 2010-12-07 2010-12-07 Face recognition method based on combination of partial principal component analysis (PCA) and attitude estimation Expired - Fee Related CN102043966B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010105883116A CN102043966B (en) 2010-12-07 2010-12-07 Face recognition method based on combination of partial principal component analysis (PCA) and attitude estimation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010105883116A CN102043966B (en) 2010-12-07 2010-12-07 Face recognition method based on combination of partial principal component analysis (PCA) and attitude estimation

Publications (2)

Publication Number Publication Date
CN102043966A true CN102043966A (en) 2011-05-04
CN102043966B CN102043966B (en) 2012-11-28

Family

ID=43910091

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010105883116A Expired - Fee Related CN102043966B (en) 2010-12-07 2010-12-07 Face recognition method based on combination of partial principal component analysis (PCA) and attitude estimation

Country Status (1)

Country Link
CN (1) CN102043966B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103198330A (en) * 2013-03-19 2013-07-10 东南大学 Real-time human face attitude estimation method based on depth video streaming
CN103959299A (en) * 2011-09-28 2014-07-30 谷歌公司 Login to a computing device based on facial recognition
CN105678241A (en) * 2015-12-30 2016-06-15 四川川大智胜软件股份有限公司 Cascaded two dimensional image face attitude estimation method
CN106022228A (en) * 2016-05-11 2016-10-12 东南大学 Three-dimensional face recognition method based on vertical and horizontal local binary pattern on the mesh
CN106503615A (en) * 2016-09-20 2017-03-15 北京工业大学 Indoor human body detecting and tracking and identification system based on multisensor
CN109785322A (en) * 2019-01-31 2019-05-21 北京市商汤科技开发有限公司 Simple eye human body attitude estimation network training method, image processing method and device
CN110020578A (en) * 2018-01-10 2019-07-16 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN111814516A (en) * 2019-04-11 2020-10-23 上海集森电器有限公司 Driver fatigue detection method
CN113157084A (en) * 2020-01-14 2021-07-23 苹果公司 Positioning user-controlled spatial selectors based on limb tracking information and eye tracking information

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1325662A (en) * 2001-07-13 2001-12-12 清华大学 Method for detecting moving human face
CN1341401A (en) * 2001-10-19 2002-03-27 清华大学 Main unit component analysis based multimode human face identification method
CN1945595A (en) * 2006-10-30 2007-04-11 邹采荣 Human face characteristic positioning method based on weighting active shape building module
CN101038622A (en) * 2007-04-19 2007-09-19 上海交通大学 Method for identifying human face subspace based on geometry preservation
CN101236598A (en) * 2007-12-28 2008-08-06 北京交通大学 Independent component analysis human face recognition method based on multi- scale total variation based quotient image


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103959299A (en) * 2011-09-28 2014-07-30 谷歌公司 Login to a computing device based on facial recognition
CN103198330B (en) * 2013-03-19 2016-08-17 东南大学 Real-time human face attitude estimation method based on deep video stream
CN103198330A (en) * 2013-03-19 2013-07-10 东南大学 Real-time human face attitude estimation method based on depth video streaming
CN105678241B (en) * 2015-12-30 2019-02-26 四川川大智胜软件股份有限公司 A kind of cascade two dimensional image face pose estimation
CN105678241A (en) * 2015-12-30 2016-06-15 四川川大智胜软件股份有限公司 Cascaded two dimensional image face attitude estimation method
CN106022228A (en) * 2016-05-11 2016-10-12 东南大学 Three-dimensional face recognition method based on vertical and horizontal local binary pattern on the mesh
CN106022228B (en) * 2016-05-11 2019-04-09 东南大学 A kind of three-dimensional face identification method based on grid local binary patterns in length and breadth
CN106503615A (en) * 2016-09-20 2017-03-15 北京工业大学 Indoor human body detecting and tracking and identification system based on multisensor
CN106503615B (en) * 2016-09-20 2019-10-08 北京工业大学 Indoor human body detecting and tracking and identification system based on multisensor
CN110020578A (en) * 2018-01-10 2019-07-16 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN109785322A (en) * 2019-01-31 2019-05-21 北京市商汤科技开发有限公司 Simple eye human body attitude estimation network training method, image processing method and device
CN111814516A (en) * 2019-04-11 2020-10-23 上海集森电器有限公司 Driver fatigue detection method
CN113157084A (en) * 2020-01-14 2021-07-23 苹果公司 Positioning user-controlled spatial selectors based on limb tracking information and eye tracking information

Also Published As

Publication number Publication date
CN102043966B (en) 2012-11-28

Similar Documents

Publication Publication Date Title
CN102043966B (en) Face recognition method based on combination of partial principal component analysis (PCA) and attitude estimation
US10484663B2 (en) Information processing apparatus and information processing method
CN104036546B (en) Method for carrying out face three-dimensional reconstruction at any viewing angle on basis of self-adaptive deformable model
Obdrzalek et al. Object Recognition using Local Affine Frames on Distinguished Regions.
Concha et al. Using superpixels in monocular SLAM
US20160217318A1 (en) Image processing device, image processing method, and program
US20160048970A1 (en) Multi-resolution depth estimation using modified census transform for advanced driver assistance systems
CN109215085B (en) Article statistical method using computer vision and image recognition
CN104157010A (en) 3D human face reconstruction method and device
WO2012077286A1 (en) Object detection device and object detection method
CN101866497A (en) Binocular stereo vision based intelligent three-dimensional human face rebuilding method and system
CN104573614A (en) Equipment and method for tracking face
CN103824050A (en) Cascade regression-based face key point positioning method
CN106778660B (en) A kind of human face posture bearing calibration and device
Sinha et al. Detecting and reconstructing 3d mirror symmetric objects
Argyros et al. Binocular hand tracking and reconstruction based on 2D shape matching
Fan et al. Convex hull aided registration method (CHARM)
CN103824076A (en) Detecting and extracting method and system characterized by image dimension not transforming
Azad et al. Accurate shape-based 6-dof pose estimation of single-colored objects
CN105931231A (en) Stereo matching method based on full-connection random field combination energy minimization
Caunce et al. Improved 3D Model Search for Facial Feature Location and Pose Estimation in 2D images.
Mendonça et al. Analysis and Computation of an Affine Trifocal Tensor.
Dopfer et al. 3D Active Appearance Model alignment using intensity and range data
Wang et al. 3D AAM based face alignment under wide angular variations using 2D and 3D data
Li et al. Two-phase approach—Calibration and iris contour estimation—For gaze tracking of head-mounted eye camera

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121128

Termination date: 20151207

EXPY Termination of patent right or utility model