CN102043966B - Face recognition method based on combination of partial principal component analysis (PCA) and attitude estimation - Google Patents


Info

Publication number
CN102043966B
CN102043966B CN2010105883116A CN201010588311A
Authority
CN
China
Prior art keywords
face
attitude
human face
principal component
component analysis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2010105883116A
Other languages
Chinese (zh)
Other versions
CN102043966A (en)
Inventor
潘翔
王玲玲
郭小虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN2010105883116A priority Critical patent/CN102043966B/en
Publication of CN102043966A publication Critical patent/CN102043966A/en
Application granted granted Critical
Publication of CN102043966B publication Critical patent/CN102043966B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a face recognition method based on the combination of partial principal component analysis (PCA) and pose estimation, comprising the following steps: (1) storing virtual three-dimensional face images in an original sample library in advance, computing an extended face space over the original sample library with the PCA method, and estimating the pose of each of two two-dimensional training face images in different poses with the partial PCA method combined with an automatic eye-and-mouth localization algorithm; (2) selecting the corresponding sub-face space in the extended face space according to each pose estimate; (3) generating a new virtual three-dimensional face image from the two two-dimensional training face images, the extended face space, and the sub-face spaces; (4) updating the original sample library with the new virtual three-dimensional face image; (5) estimating the pose of the two-dimensional face image to be recognized with the partial PCA method combined with the automatic eye-and-mouth localization algorithm; (6) recognizing the two-dimensional face image to be recognized with the partial PCA method.

Description

Face recognition method based on the combination of partial principal component analysis and pose estimation
Technical field
The invention belongs to the field of face recognition technology and relates to a pose-robust face recognition method, based on the combination of partial principal component analysis and pose estimation, that can generate a virtual three-dimensional face image from two-dimensional face images in different poses.
Background technology
In recent decades, face recognition has been a constant focus of research in computer vision. It has wide practical application, for example in human-computer interaction and video surveillance. Variations in illumination, expression and pose increase the difficulty of face recognition; among these, pose variation is the biggest bottleneck and remains a challenging problem. Many methods have been proposed to address it, and they fall into three broad categories: single-view, multi-view, and three-dimensional-model-based. Single-view methods mainly extract pose-invariant features; they work for small pose angles but fail under large angle variations. Multi-view methods store images of every person at every angle and build a classifier for each angle; although recognition is good, images of every person at every angle cannot be obtained in practice. Face recognition methods based on three-dimensional models have also been proposed in the prior art; such models handle illumination and pose variation well. However, they have two problems: first, the computational cost is large, which is unsuitable for real-time processing; second, a large number of three-dimensional scan models are needed to build the database, whereas in practical applications such as video surveillance most obtainable images are two-dimensional and three-dimensional information generally cannot be acquired, so the use of such methods is limited.
Summary of the invention
The objective of the invention is to overcome the above deficiencies and provide a simple face recognition method based on the combination of partial principal component analysis and pose estimation, so that faces in different poses can be recognized.
To achieve this objective, the main steps of the technical scheme adopted by the present invention are as follows:
(1) Store virtual three-dimensional face images in the original sample library in advance and compute the extended face space over the original sample library with the principal component analysis method; and estimate the pose of each of two two-dimensional training face images in different poses with the partial principal component analysis method combined with an automatic eye-and-mouth localization algorithm;
(2) For each pose estimate obtained in step (1), select the corresponding sub-face space in the extended face space;
(3) Generate a new virtual three-dimensional face image from the two two-dimensional training face images in different poses, the extended face space, and the sub-face spaces;
(4) Update the original sample library with the newly generated virtual three-dimensional face image;
(5) Estimate the pose of the two-dimensional face image to be recognized with the partial principal component analysis method combined with the automatic eye-and-mouth localization algorithm;
(6) Using the pose estimate obtained in step (5) and a preset look-up table, crop the two-dimensional face image to be recognized to obtain a cropped two-dimensional test face image; the look-up table maps each estimated pose of a two-dimensional face image to a cropping width range; recognize the cropped two-dimensional test face image with the partial principal component analysis method.
Further, in step (1), the method of estimating the pose of a two-dimensional training face image with the partial principal component analysis method combined with the automatic eye-and-mouth localization algorithm is as follows:
1) Estimate the pose of the two-dimensional training face image with the partial principal component analysis method; the pose estimate is denoted P1;
2) Crop the two-dimensional training face image using the pose estimate P1 and the preset look-up table to obtain a cropped two-dimensional training face image; the look-up table maps each estimated pose of a two-dimensional face image to a cropping width range;
3) Estimate the pose of the cropped two-dimensional training face image with the partial principal component analysis method; this pose estimate is denoted P2;
4) Compute the difference between P1 and P2; if its absolute value is less than a preset threshold, take P1 as the final pose estimate of the two-dimensional training face image; otherwise, locate the eyes and mouth in the two-dimensional training face image and estimate the pose from the localization result, denoting this estimate P3;
5) Compare whether P3 is consistent with P1; if so, take P1 as the final pose estimate of the two-dimensional training face image; otherwise, take P2 as the final pose estimate of the two-dimensional training face image (see the sketch following this list).
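As an illustration of steps 1) to 5), here is a minimal Python sketch of the decision logic; pose_estimate_ppca, crop_by_lut, pose_from_eye_mouth and consistent are hypothetical helper names standing in for the partial-PCA pose estimator, the look-up-table crop, the eye-and-mouth pose rules and the consistency check described later in this text, and the 5-pixel threshold follows the embodiment.

```python
def estimate_pose(img, threshold=5):
    """Arbitrate between partial-PCA and eye/mouth pose estimates.
    All four helpers are hypothetical stand-ins for procedures
    described elsewhere in this specification."""
    p1 = pose_estimate_ppca(img)                   # step 1): estimate P1
    p2 = pose_estimate_ppca(crop_by_lut(img, p1))  # steps 2)-3): estimate P2
    if abs(p1 - p2) < threshold:                   # step 4): 5 px threshold
        return p1
    p3 = pose_from_eye_mouth(img)                  # coarse estimate P3
    return p1 if consistent(p3, p1) else p2       # step 5): arbitrate
```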
Further, in step (5), the method of estimating the pose of the two-dimensional face image to be recognized with the partial principal component analysis method combined with the automatic eye-and-mouth localization algorithm is as follows:
A) Estimate the pose of the two-dimensional face image to be recognized with the partial principal component analysis method; the pose estimate is denoted P4;
B) Crop the two-dimensional face image to be recognized using the pose estimate P4 and the preset look-up table to obtain a cropped two-dimensional face image to be recognized;
C) Estimate the pose of the cropped two-dimensional face image to be recognized with the partial principal component analysis method; this pose estimate is denoted P5;
D) Compute the difference between P4 and P5; if its absolute value is less than a preset threshold, take P4 as the final pose estimate of the two-dimensional face image to be recognized; otherwise, locate the eyes and mouth in the two-dimensional face image to be recognized and estimate the pose from the localization result, denoting this estimate P6;
E) Compare whether P6 is consistent with P4; if so, take P4 as the final pose estimate of the two-dimensional face image to be recognized; otherwise, take P5 as the final pose estimate of the two-dimensional face image to be recognized.
The major advantages of the present invention over the prior art are: (1) Two two-dimensional training face images in different poses are used to generate a new virtual three-dimensional face image to update the original sample library; compared with multi-view methods, training face images of every pose are not needed as the sample library, so the invention is widely applicable. (2) Compared with face recognition methods based on three-dimensional models, the invention needs neither three-dimensional scan models to build the sample library nor three-dimensional feature point processing, which greatly reduces the computational complexity and facilitates real-time processing, without losing multi-pose recognition accuracy. (3) Compared with the principal component analysis method, the invention first estimates the pose of the two-dimensional face image to be recognized and then recognizes it with the partial principal component analysis method according to the pose estimate, which is more robust to pose. (4) The invention estimates pose with the partial principal component analysis method combined with the automatic eye-and-mouth localization algorithm, which is more accurate than the original pose estimation method while the computational complexity does not increase much. (5) For the purpose of multi-pose face recognition, the two-dimensional face image to be recognized is cropped according to the preset look-up table, removing the parts of the image that are uncorrelated with the virtual three-dimensional face image and improving the recognition rate. (6) The invention is well suited to the field of video surveillance.
Description of drawings
Fig. 1 is a flowchart of the face recognition method based on the combination of partial principal component analysis and pose estimation;
Fig. 2 is a schematic diagram of a two-dimensional training face image and a virtual three-dimensional face image, where (a) is the two-dimensional training face image and (b) is the virtual three-dimensional face image;
Fig. 3 is a flowchart of estimating the pose of a two-dimensional training face image with the partial principal component analysis method combined with the automatic eye-and-mouth localization algorithm according to the present invention;
Fig. 4 shows results of automatic eye and mouth localization, where (a), (b), (c) and (d) are the eye and mouth localization results for two-dimensional training face images at poses of 0°, 30°, 60° and 90°, respectively;
Fig. 5 shows the result of generating a virtual three-dimensional face image from two-dimensional training face images at poses of 0° and 90° according to the present invention, where (a) is the training face image at 0°, (b) is the training face image at 90°, and (c) is the new virtual three-dimensional face image generated from (a) and (b).
Embodiment
Below, the present invention is further illustrated with reference to the accompanying drawings and a specific embodiment, taking the UPC face database as an example. The UPC face database consists of face images of 44 people; each person has 27 face images, covering 3 lighting conditions (natural light, strong light from the 45° direction, strong light from the 0° direction) and 9 poses (0°, ±30°, ±45°, ±60°, ±90°). The direction in which the face turns to the left is taken as the positive direction. This embodiment only considers pose variation under natural light and does not consider other illumination variations.
I. As shown in Fig. 1, the steps of this embodiment are as follows:
Step 1: Store virtual three-dimensional face images in the original sample library in advance and compute the extended face space over the original sample library with the principal component analysis method:
In this embodiment the two-dimensional face images of the UPC face database are normalized to a resolution of 122 × 100 pixels, as shown in Fig. 2(a). Fifty virtual three-dimensional face images synthesized from the UPC face database are stored in the original sample library in advance, as shown in Fig. 2(b); they consist of the virtual three-dimensional face images of 25 people and their mirror images. Each virtual three-dimensional face image is spliced from five face images (frontal view, ±45°, ±90°), and its resolution is normalized to 122 × 240 pixels. The fundamental purpose of the principal component analysis method is to find, through a linear transform, a set of optimal orthonormal basis vectors and to reconstruct samples with their linear combinations so that the error between the reconstructed and original samples is minimal. The extended face space over the original sample library is exactly this set of optimal orthonormal basis vectors. It is computed as follows:
S_T = \sum_{i=1}^{M} (x_i - \mu)(x_i - \mu)^T    (1)
In formula (1), S_T is the total scatter matrix, M is the number of samples in the original sample library, x_i is the column vector obtained by expanding the i-th virtual three-dimensional face image row by row, and μ is the mean of x_i (i = 1, 2, ..., 50) over the original sample library. The orthonormalized eigenvectors of the total scatter matrix S_T are the orthogonal basis of the subspace; taking the principal eigenvectors as the orthogonal basis is what is called the principal component analysis method. Because principal component analysis operates on one-dimensional signals, the main steps are as follows. The 50 virtual three-dimensional face images in the original sample library are expanded row by row into column vectors of dimension N = 122 × 240, where 122 and 240 are respectively the height and width of a virtual three-dimensional face image and 50 is the number of samples in the original sample library; with x_i denoting the column vector of the i-th image, the original sample library can be written x_i (i = 1, 2, ..., 50). Compute the mean μ of the sample library x_i (i = 1, 2, ..., 50), i.e. the average face; compute the difference of each sample from the average face, φ_i = x_i - μ; construct the total scatter matrix C = AA^T with A = [φ_1, φ_2, ..., φ_M]; and compute the eigenvectors v_i of C. Because C is an N × N (N = 122 × 240) matrix, N is so large that the computation would be enormous; instead, the eigenvalues λ_i and orthonormal eigenvectors u_i of the M × M matrix A^T A are computed, and the eigenvectors v_i of C are obtained from the following formula:
v_i = \frac{1}{\sqrt{\lambda_i}} A u_i    (2)
Sort the eigenvalues λ_i in descending order and take the k non-zero eigenvectors v_i corresponding to the k largest eigenvalues, where, as a preferred implementation of the present invention, k is chosen such that the energy of the first k eigenvalues exceeds 90% of the total energy of all eigenvalues of C. In this embodiment k = 20. The subspace V formed by the eigenvectors v_i (i = 1, 2, ..., 20) is the feature vector space, that is, the extended face space.
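The computation of formulas (1) and (2) can be illustrated with a short NumPy sketch; the 122 × 240 image size, 50 samples and 90% energy criterion follow this embodiment, the function and variable names are ours, and the small-matrix trick of formula (2) (eigendecomposing the M × M matrix A^T A instead of the N × N matrix C) is used exactly as described above.

```python
import numpy as np

def extended_face_space(samples, energy=0.90):
    """samples: (M, 122*240) array, one flattened virtual 3D face per row.
    Returns the average face mu and the extended face space V of shape (N, k)."""
    X = samples.astype(np.float64).T        # columns x_i, shape (N, M)
    mu = X.mean(axis=1, keepdims=True)      # average face
    A = X - mu                              # differences phi_i = x_i - mu
    lam, U = np.linalg.eigh(A.T @ A)        # M x M problem instead of N x N
    order = np.argsort(lam)[::-1]           # eigenvalues in descending order
    lam, U = lam[order], U[:, order]
    k = int(np.searchsorted(np.cumsum(lam) / lam.sum(), energy)) + 1  # ~20 here
    V = A @ U[:, :k] / np.sqrt(lam[:k])     # formula (2): unit eigenvectors of C
    return mu.ravel(), V
```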
Step 2: Estimate the poses of the two two-dimensional training face images in different poses with the partial principal component analysis method combined with the automatic eye-and-mouth localization algorithm; the final pose estimates are denoted Q and Q'. The main flowchart is shown in Fig. 3.
The accuracy of the pose estimation affects not only the precision of the synthesized virtual three-dimensional face image but also the recognition rate. Because the accuracy of estimating the pose of a two-dimensional training face image with the partial principal component analysis method alone is not very high and can be improved further, the procedure shown in Fig. 3 is adopted, as follows:
1) Estimate the pose of each of the two two-dimensional training face images in different poses with the partial principal component analysis method; the pose estimates are denoted P1 and P1' respectively.
The main step is to obtain the optimal projection axes z_{k'} (k' = 1, ..., N) from the original sample library with the two-dimensional principal component analysis method, as follows:
G_t = E\left[ (I - \bar{I})^T (I - \bar{I}) \right] = \frac{1}{M} \sum_{i=1}^{M} (I_i - \bar{I})^T (I_i - \bar{I})    (3)
In formula (3), G_t is the image covariance matrix, M is the number of samples in the original sample library, I_i is the i-th virtual three-dimensional face image in the original sample library, and Ī denotes the mean of I_i (i = 1, 2, ..., 50) over the original sample library. Compute the eigenvalues of G_t and sort them in descending order; the eigenvectors corresponding to the first k' largest eigenvalues are the desired optimal projection axes z_{k'} (k' = 1, ..., N), where, as a preferred implementation of the present invention, k' is chosen such that the energy of the first k' largest eigenvalues exceeds 90% of the total energy of all eigenvalues of G_t. In this embodiment k' = 16. The j obtained from formula (4) is the pose of the two-dimensional training face image:
\min_j \left\{ \sum_{k=1}^{N} \sum_{l=1}^{P} \left( s_k(l) - r_k(l+j) \right)^2 \right\}    (4)
In formula (4), N denotes the number of optimal projection axes (N = k'), the vector s_k denotes the projection of the two-dimensional training face image onto the optimal projection axis z_{k'}, each vector s_k has P elements where P is the width of the two-dimensional training face image, and r_k denotes the projection of Ī onto the optimal projection axis z_{k'}. Applying this to the two two-dimensional training face images in different poses yields their poses j and j', where j is the pose estimate P1 of one training face image and j' is the pose estimate P1' of the other.
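The shift search of formula (4) can be sketched as follows. The exact projection geometry is ambiguous in the extracted text; the sketch assumes each axis yields one projection value per image column, which is what makes the P-element vectors s_k and the shiftable vectors r_k of formula (4) line up, and the mapping from the best offset j to a pose angle then comes from the known positions of the frontal, ±45° and ±90° views inside the spliced 122 × 240 three-dimensional face.

```python
import numpy as np

def best_offset(S, R):
    """Formula (4): offset j at which the projections S of the 2D face image
    best match the projections R of the mean virtual 3D face.
    S: (Nk, P) projections of the input image, one row per axis z_k.
    R: (Nk, W) projections of the mean 3D face, with W > P (240 vs. 100)."""
    P, W = S.shape[1], R.shape[1]
    costs = [np.sum((S - R[:, j:j + P]) ** 2) for j in range(W - P + 1)]
    return int(np.argmin(costs))            # offset j, then mapped to a pose
```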
2) Crop the two two-dimensional training face images using the pose estimates P1, P1' and the preset look-up table, obtaining two cropped two-dimensional training face images:
Comparing virtual three-dimensional face images with two-dimensional training face images shows that some regions of a two-dimensional training face image have very high correlation with the virtual three-dimensional face image while other regions are completely uncorrelated; using the uncorrelated information would reduce the accuracy of the new virtual three-dimensional face image and, at the same time, the recognition accuracy. Therefore, in the algorithm that generates the new virtual three-dimensional face image, each two-dimensional training face image is replaced by the part that is highly correlated with the virtual three-dimensional face image, i.e. the cropped two-dimensional training face image. The preset look-up table gives the cropping width range for two-dimensional training face images of each pose; the choice of cropping width range is related to the resolutions of the two-dimensional training face image and the virtual three-dimensional face image, is an empirical setting, and is given in pixels. The cropping width ranges in the preset look-up table are chosen experimentally so that the recognition rate for two-dimensional face images of different poses in the UPC face database is best; Table 1 shows one such look-up table. Here the resolution of the two-dimensional training face images and of the two-dimensional face images to be recognized in the UPC face database is 122 × 100 pixels, and the resolution of the virtual three-dimensional face images is 122 × 240 pixels. Considering that the differences among 30°, 45° and 60°, and among -30°, -45° and -60°, are very small, the cropping width range in the look-up table is the same for training face images at 30°, 45° and 60°, and likewise the same for training face images at -30°, -45° and -60°.
Table 1. Look-up table (cropping width range in pixels)

Estimated pose    Cropping width range
0°                20-80
30°               30-90
45°               30-90
60°               30-90
90°               40-100
-30°              0-60
-45°              0-60
-60°              0-60
-90°              0-50
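Table 1 can be applied as a simple dictionary lookup; the sketch below merely transcribes the table (widths in pixels for 122 × 100 images), with names of our choosing.

```python
# Cropping width ranges from Table 1, keyed by estimated pose in degrees.
CROP_LUT = {0: (20, 80), 30: (30, 90), 45: (30, 90), 60: (30, 90),
            90: (40, 100), -30: (0, 60), -45: (0, 60), -60: (0, 60),
            -90: (0, 50)}

def crop_by_lut(img, pose):
    """img: (122, 100) face image; returns the cropped face image."""
    lo, hi = CROP_LUT[pose]
    return img[:, lo:hi]
```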
3) Estimate the poses of the two cropped two-dimensional training face images with the partial principal component analysis method of 1); the pose estimates are denoted P2 and P2'.
4) Compute the difference between P1 and P2. If its absolute value is less than a preset threshold (in general this threshold can be set to 5% of the width of the two-dimensional training face image, i.e. 5 pixels), take P1 as the final pose estimate Q of the first training face image; otherwise, locate the eyes and mouth in that training face image and estimate the pose from the localization result, denoting this estimate P3. Likewise, compute the difference between P1' and P2'; if its absolute value is less than the preset threshold, take P1' as the final pose estimate Q' of the second training face image; otherwise, locate the eyes and mouth in that training face image and estimate the pose from the localization result, denoting this estimate P3'.
The method of locating the eyes and mouth in a two-dimensional training face image is as follows:
In the YC_bC_r color space, C_b is high and C_r is low around the eyes, so the eye map EyeMap is created by formula (5):
EyeMap = \frac{1}{3} \left\{ C_b^2 + (255 - C_r)^2 + \frac{C_b}{C_r} \right\}    (5)
In formula (5), C_b and C_r are the two chrominance components of the YC_bC_r color space, the blue-difference and red-difference components respectively.
In the YC_bC_r color space, the mouth region has a stronger red component and a weaker blue component than the other regions of the face, so C_r dominates C_b there, and the mouth map MouthMap is created by formula (6):
MouthMap = C_r^2 \cdot \left( C_r^2 - \eta \cdot \frac{C_r}{C_b} \right)^2    (6)

where the coefficient

\eta = 0.95 \cdot \frac{ \frac{1}{n} \sum_{(x,y) \in FG} C_r(x,y)^2 }{ \frac{1}{n} \sum_{(x,y) \in FG} C_r(x,y) / C_b(x,y) }    (7)
In formula (7), FG denotes the face region and n is the number of pixels in the face region.
The eye map and the mouth map are each binarized with an adaptive threshold and then processed with erosion and dilation to obtain the positions of the eyes and the mouth; the localization results are shown in Fig. 4. Figs. 4(a), 4(b), 4(c) and 4(d) show the eye and mouth localization results for two-dimensional training face images at poses of 0°, 30°, 60° and 90° respectively; as can be seen, the eye and mouth localization works well and is effective for each of the different poses. The pose can then be roughly estimated from the positions of the eyes and mouth, with the following main rules: if only one eye is detected, its x coordinate is in the left half, and the mouth center is also on the left, the face is judged a left profile; if its x coordinate is in the right half and the mouth center is on the right, the face is judged a right profile; if two eyes are detected, the mean x coordinate of the two eyes is at the horizontal center of the image (i.e. around pixel 50), and the x coordinate of the mouth center is also around pixel 50, the face is judged frontal; if the mean x coordinate of the two eyes is left of the horizontal center and the x coordinate of the mouth center is also left of the horizontal center, the face is judged left-turned, where left-turned covers 30°, 45° and 60°; if the mean x coordinate of the two eyes is right of the horizontal center and the x coordinate of the mouth center is also right of the horizontal center, the face is judged right-turned, where right-turned covers -30°, -45° and -60°.
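Formulas (5) to (7) translate directly into NumPy; this is a minimal sketch that computes the two maps over a YC_bC_r image, assuming 8-bit chrominance components and leaving out the range normalization, adaptive thresholding and morphology described above (the small epsilon guarding division by zero is our addition).

```python
import numpy as np

def eye_mouth_maps(ycbcr, face_mask):
    """ycbcr: (H, W, 3) uint8 image in YCbCr channel order; face_mask: boolean
    (H, W) mask of the face region FG. Returns the eye map and the mouth map."""
    eps = 1e-6                                    # guards division by zero
    Cb = ycbcr[..., 1].astype(np.float64)
    Cr = ycbcr[..., 2].astype(np.float64)
    eye_map = (Cb ** 2 + (255.0 - Cr) ** 2 + Cb / (Cr + eps)) / 3.0    # (5)
    fg_cr, fg_cb = Cr[face_mask], Cb[face_mask]
    eta = 0.95 * np.mean(fg_cr ** 2) / np.mean(fg_cr / (fg_cb + eps))  # (7)
    mouth_map = Cr ** 2 * (Cr ** 2 - eta * Cr / (Cb + eps)) ** 2       # (6)
    return eye_map, mouth_map
```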
5) Compare whether P3 is consistent with P1; if so, take P1 as the final pose estimate Q of the first training face image; otherwise, take P2 as Q. Likewise, compare whether P3' is consistent with P1'; if so, take P1' as the final pose estimate Q' of the second training face image; otherwise, take P2' as Q'.
Step 3: According to the final pose estimates Q and Q' obtained in step 2, select the corresponding sub-face spaces V̄'_1 and V̄'_2 in the extended face space V, as follows:
Convert each column of the extended face space V into a 122 × 240 matrix, which yields 20 extended face space matrices, each resembling a face. According to the final pose estimate Q, take from each of the 20 extended face space matrices the submatrix whose width range is Q to Q+100 (100 being the width of a two-dimensional training face image), convert each of these 122 × 100 submatrices row by row into a column vector, and merge them into a 12200 × 20 matrix; this is the sub-face space V̄'_1 of the first two-dimensional training face image. According to the final pose estimate Q', take the submatrices whose width range is Q' to Q'+100, convert them row by row into column vectors, and merge them into a 12200 × 20 matrix; this is the sub-face space V̄'_2 of the second two-dimensional training face image.
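The sub-face space selection is just a slice of each reshaped eigenface; here is a minimal sketch under this embodiment's dimensions (122 × 240 eigenfaces, 100-pixel-wide training images), with names of our choosing.

```python
import numpy as np

def sub_face_space(V, q, height=122, width3d=240, width2d=100):
    """V: (122*240, 20) extended face space; q: final pose estimate, used as a
    column offset. Returns the (122*100, 20) sub-face space."""
    k = V.shape[1]
    sub = np.empty((height * width2d, k))
    for i in range(k):
        face = V[:, i].reshape(height, width3d)     # eigenface as an image
        sub[:, i] = face[:, q:q + width2d].ravel()  # slice columns q..q+100
    return sub
```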
Step 4: Generate a new virtual three-dimensional face image from the two two-dimensional training face images in different poses, the extended face space and the sub-face spaces. Coefficients a_1, a_2, ..., a_n are computed from the sub-face spaces so that the reconstruction error of the two-dimensional training face images is minimal, and the coefficients a_1, a_2, ..., a_n are then re-projected back into the extended face space, yielding the new virtual three-dimensional face image. The steps are as follows:
1) By the principal component analysis method, a two-dimensional training face image can be reconstructed from the sub-face space with coefficients a_1, a_2, ..., a_n:

\alpha_1 \bar{v}'_1 + \alpha_2 \bar{v}'_2 + \cdots + \alpha_n \bar{v}'_n = I_{input}    (8)

In formula (8), a_1, a_2, ..., a_n are the coefficients and the v̄'_i are the eigenvectors of the sub-face space.
2) Compute the coefficients a_1, a_2, ..., a_n that minimize the reconstruction error of the two two-dimensional training face images:
\min e^2 = \left\| \tilde{I}_{input1} - \bar{I}_{input1} \right\|^2 + \left\| \tilde{I}_{input2} - \bar{I}_{input2} \right\|^2    (9)

In formula (9), Ĩ_{input1} is the reconstruction of the first two-dimensional training face image, Ī_{input1} is the first two-dimensional training face image, Ĩ_{input2} is the reconstruction of the second two-dimensional training face image, and Ī_{input2} is the second two-dimensional training face image.
Solving formula (9) for a_1, a_2, ..., a_n, the vector ᾱ formed by a_1, a_2, ..., a_n can be expressed by formula (10):

\bar{\alpha} = \left[ \bar{V}_1'^{T} \bar{V}_1' + \bar{V}_2'^{T} \bar{V}_2' \right]^{-1} \cdot \left[ \bar{V}_1'^{T} \bar{I}_{input1} + \bar{V}_2'^{T} \bar{I}_{input2} \right]    (10)

In formula (10), V̄'_1 and V̄'_2 are the sub-face spaces of the first and second two-dimensional training face images respectively.
3) Re-project the vector ᾱ back into the extended face space to generate the new virtual three-dimensional face image:

I_{180°} = \sum_{k=1}^{n} \alpha_k \bar{v}_k    (11)

In formula (11), the α_k (k = 1, 2, ..., n) are the coefficients a_1, a_2, ..., a_n obtained from formula (10), n is the dimension of the vector ᾱ, the v̄_k are the eigenvectors of the extended face space, and I_{180°} is the newly generated virtual three-dimensional face image.
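Formulas (10) and (11) amount to solving one least-squares system and re-projecting; here is a sketch under this embodiment's dimensions, reusing sub_face_space from the previous sketch. Centering the input images on the matching slice of the average face, and adding the average face back in the re-projection, is our reading of the bar notation and of standard PCA reconstruction, not something the text states explicitly.

```python
import numpy as np

def virtual_3d_face(V, mu, img1, q1, img2, q2):
    """V: (122*240, 20) extended face space; mu: average face, shape (122*240,);
    img1, img2: (122, 100) training face images with final pose estimates
    q1, q2 (column offsets). Returns the new virtual 3D face image."""
    sub1 = sub_face_space(V, q1)                   # sub-face space V'_1
    sub2 = sub_face_space(V, q2)                   # sub-face space V'_2
    mu_img = mu.reshape(122, 240)
    b1 = (img1 - mu_img[:, q1:q1 + 100]).ravel()   # centred first image
    b2 = (img2 - mu_img[:, q2:q2 + 100]).ravel()   # centred second image
    lhs = sub1.T @ sub1 + sub2.T @ sub2            # formula (10): normal eqs
    rhs = sub1.T @ b1 + sub2.T @ b2
    alpha = np.linalg.solve(lhs, rhs)              # coefficient vector
    face = mu + V @ alpha                          # formula (11): re-project
    return face.reshape(122, 240)
```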
Fig. 5 shows the result of generating a virtual three-dimensional face image from two-dimensional training face images at poses of 0° and 90°; it can be seen that the new virtual three-dimensional face image contains the training face information of each of the different poses.
Step 5: Update the original sample library with the newly generated virtual three-dimensional face image.
Step 6: At the recognition stage, estimate the pose of the two-dimensional face image to be recognized with the partial principal component analysis method combined with the automatic eye-and-mouth localization algorithm; the final pose estimate is denoted Q2. The procedure is consistent with step 2, as follows:
A) Estimate the pose of the two-dimensional face image to be recognized with the partial principal component analysis method; the pose estimate is denoted P4.
The main steps are as follows: obtain the optimal projection axes z_{k'} (k' = 1, ..., N) from the original sample library with the two-dimensional principal component analysis method; the j obtained from formula (12) is the pose P4 of the two-dimensional face image to be recognized:
\min_j \left\{ \sum_{k=1}^{N} \sum_{l=1}^{P} \left( s_k(l) - r_k(l+j) \right)^2 \right\}    (12)
In formula (12), N is the number of optimal projection axes (N = k'), the vector s_k denotes the projection of the two-dimensional face image to be recognized onto the optimal projection axis z_{k'}, each vector s_k has P elements where P is the width of the two-dimensional face image to be recognized, and r_k denotes the projection of the mean of the virtual three-dimensional face images in the original sample library onto the optimal projection axis z_{k'}.
B) Crop the two-dimensional face image to be recognized using the pose estimate P4 and the preset look-up table to obtain a cropped two-dimensional face image to be recognized;
C) Estimate the pose of the cropped two-dimensional face image to be recognized with the partial principal component analysis method of A); the pose estimate is denoted P5;
D) Compute the difference between P4 and P5. If its absolute value is less than a preset threshold (in general this threshold is set to 5% of the width of the two-dimensional face image to be recognized, i.e. 5 pixels), take P4 as the final pose estimate Q2 of the two-dimensional face image to be recognized; otherwise, locate the eyes and mouth in the two-dimensional face image to be recognized and estimate the pose from the localization result, denoting this estimate P6.
The method of locating the eyes and mouth in the two-dimensional face image to be recognized is as follows: in the YC_bC_r color space, C_b is high and C_r is low around the eyes, so the eye map is created by formula (5); the mouth region has a stronger red component and a weaker blue component than the other regions of the face, so the mouth map is created by formula (6). The eye map and the mouth map are each binarized with an adaptive threshold and then processed with erosion and dilation to obtain the positions of the eyes and the mouth. The pose is then roughly estimated from the positions of the eyes and mouth with the same rules as in step 2: a single detected eye in the left half with the mouth center on the left indicates a left profile; a single detected eye in the right half with the mouth center on the right indicates a right profile; two detected eyes whose mean x coordinate, together with the mouth center, lies at the horizontal center of the image (around pixel 50) indicate a frontal view; two detected eyes whose mean x coordinate, together with the mouth center, lies left of the horizontal center indicate a left-turned face (30°, 45°, 60°); and two detected eyes whose mean x coordinate, together with the mouth center, lies right of the horizontal center indicate a right-turned face (-30°, -45°, -60°).
E) Compare whether P6 is consistent with P4; if so, take P4 as the final pose estimate Q2 of the two-dimensional face image to be recognized; otherwise, take P5 as Q2.
Step 7: Using the pose estimate Q2 obtained in step 6 and the preset look-up table, crop the two-dimensional face image to be recognized to obtain a cropped two-dimensional test face image, and recognize the cropped two-dimensional test face image with the partial principal component analysis method, as follows:
1) Crop the two-dimensional face image to be recognized according to the pose estimate Q2 and the look-up table shown in Table 1, obtaining the cropped two-dimensional test face image.
2) Recognize the cropped two-dimensional test face image with the partial principal component analysis method, as given by the following formula:
\min_i \left\{ \sum_{k=1}^{N} \sum_{l=1}^{P} \left( r_k(l) - r_k^i(l+j) \right)^2 \right\}    (13)

In formula (13), j is the final pose estimate Q2 of the two-dimensional face image to be recognized, N is the number of optimal projection axes z_{k'} (k' = 1, ..., N) obtained by the two-dimensional principal component analysis method, r_k(l) is the projection of the cropped two-dimensional test face image onto z_{k'}, each r_k(l) has P elements where P is the width of the cropped two-dimensional test face image, and r_k^i is the projection of the i-th virtual three-dimensional face image in the updated sample library onto z_{k'}. The i obtained is the recognition result for the two-dimensional face image to be recognized.
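The matching of formula (13) reuses the shift-matching of formula (4), now over every virtual three-dimensional face in the updated sample library; here is a minimal sketch, assuming the projections have been precomputed as before and that the final pose estimate Q2 has been converted to a column offset j.

```python
import numpy as np

def recognize(test_proj, gallery_projs, j):
    """test_proj: (Nk, P) projections of the cropped test face; gallery_projs:
    list of (Nk, 240) projection arrays, one per virtual 3D face; j: column
    offset from the final pose estimate Q2. Returns the best-matching index i."""
    P = test_proj.shape[1]
    costs = [np.sum((test_proj - R[:, j:j + P]) ** 2) for R in gallery_projs]
    return int(np.argmin(costs))               # formula (13): minimum over i
```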
II. Verification of the technical effect
(1) Verification method:
To verify the effect of the present invention on multi-pose face recognition, two experiments were performed on the UPC face database. The experiments used an Intel Pentium 4 processor with a clock frequency of 3.06 GHz and 2 GB of memory.
Experiment 1: The multi-pose face recognition performance of the present invention is compared with that of the prior-art principal component analysis method.
In experiment 1, the two-dimensional face images to be recognized are the images of 24 people at 9 angles under natural light, 216 face images in total. For the algorithm of the present invention, the sample library consists of 24 virtual three-dimensional face images, each generated according to the algorithm of the present invention from two two-dimensional training face images at poses of 0° and 90°. For the principal component analysis method, the sample library consists of the two-dimensional training face images of the 24 people at poses of 0° and 90° under natural light, 48 training face images in total. The recognition rates are shown in Table 2 and the required times in Table 3; times are in milliseconds (ms).
Table 2. Recognition rates of the principal component analysis method and of the present invention

                    Principal component analysis    The present invention
Recognition rate    39.8%                           86.6%

Table 3. Algorithm running time

            Principal component analysis    The present invention
Time (ms)   296                             406
Experiment 2: The multi-pose face recognition performance of the present invention is compared with that of the prior-art partial principal component analysis method.
In experiment 2, the two-dimensional face images to be recognized are the images of 24 people at 9 angles under natural light, 216 face images in total. For the present invention, the sample library consists of 24 virtual three-dimensional face images, each generated by the method of the present invention from two two-dimensional training face images at poses of 0° and 90°, denoted training sample A. For the partial principal component analysis method, the sample library consists of 24 virtual three-dimensional face images, each generated by the partial principal component analysis method from two two-dimensional training face images at poses of 0° and 90°, denoted training sample B. The pose estimation accuracies are shown in Table 4, the recognition rates in Table 5, and the required times in Table 6; times are in milliseconds (ms).
Table 4. Pose estimation accuracy

                            Partial principal component analysis    The present invention
Pose estimation accuracy    92.13%                                  94.44%

Table 5. Recognition rates of the partial principal component analysis method and of the present invention

                    Partial principal component analysis    The present invention
Recognition rate    82.8%                                   86.6%

Table 6. Algorithm running time

Average time (ms)                       Pose estimation    Recognition    Total
The present invention                   328                78             406
Partial principal component analysis    312                78             390
(2) Experimental conclusions:
Table 2 shows that the recognition rate of the method of the invention is more than double that of the prior-art principal component analysis method, so it is a pose-robust face recognition algorithm. Tables 4 and 5 show that, compared with pose estimation based on the partial principal component analysis method alone in the prior art, the pose estimation of the present invention, which combines the partial principal component analysis method with the automatic eye-and-mouth localization algorithm, is more accurate, and the face recognition rate is higher. Tables 3 and 6 show that although the time required by the present invention is greater than that of the principal component analysis method, it differs little from that of the partial principal component analysis method and remains basically suitable for real-time processing.

Claims (3)

1. A face recognition method based on the combination of partial principal component analysis and pose estimation, characterized by comprising the steps of:
(1) storing virtual three-dimensional face images in an original sample library in advance and computing an extended face space over the original sample library with the principal component analysis method; and estimating the pose of each of two two-dimensional training face images in different poses with the partial principal component analysis method combined with an automatic eye-and-mouth localization algorithm;
(2) for each pose estimate obtained in step (1), selecting the corresponding sub-face space in the extended face space;
(3) generating a new virtual three-dimensional face image from the two two-dimensional training face images in different poses, the extended face space, and the sub-face spaces;
(4) updating the original sample library with the newly generated virtual three-dimensional face image;
(5) estimating the pose of the two-dimensional face image to be recognized with the partial principal component analysis method combined with the automatic eye-and-mouth localization algorithm;
(6) using the pose estimate obtained in step (5) and a preset look-up table, cropping the two-dimensional face image to be recognized to obtain a cropped two-dimensional test face image, the look-up table mapping each estimated pose of a two-dimensional face image to a cropping width range; and recognizing the cropped two-dimensional test face image with the partial principal component analysis method.
2. The face recognition method based on the combination of partial principal component analysis and pose estimation according to claim 1, characterized in that, in step (1), the pose of a two-dimensional training face image is estimated with the partial principal component analysis method combined with the automatic eye-and-mouth localization algorithm as follows:
1) estimating the pose of the two-dimensional training face image with the partial principal component analysis method, the pose estimate being denoted P1;
2) cropping the two-dimensional training face image using the pose estimate P1 and the preset look-up table to obtain a cropped two-dimensional training face image, the look-up table mapping each estimated pose of a two-dimensional face image to a cropping width range;
3) estimating the pose of the cropped two-dimensional training face image with the partial principal component analysis method, this pose estimate being denoted P2;
4) computing the difference between P1 and P2; if the absolute value of the difference is less than a preset threshold, taking P1 as the final pose estimate of the two-dimensional training face image; otherwise, locating the eyes and mouth in the two-dimensional training face image and estimating the pose from the localization result, this pose estimate being denoted P3;
5) comparing whether P3 is consistent with P1; if so, taking P1 as the final pose estimate of the two-dimensional training face image; otherwise, taking P2 as the final pose estimate of the two-dimensional training face image.
3. The face recognition method based on the combination of partial principal component analysis and pose estimation according to claim 1, characterized in that, in step (5), the pose of the two-dimensional face image to be recognized is estimated with the partial principal component analysis method combined with the automatic eye-and-mouth localization algorithm as follows:
A) estimating the pose of the two-dimensional face image to be recognized with the partial principal component analysis method, this pose estimate being denoted P4;
B) cropping the two-dimensional face image to be recognized using the pose estimate P4 and the preset look-up table to obtain a cropped two-dimensional face image to be recognized;
C) estimating the pose of the cropped two-dimensional face image to be recognized with the partial principal component analysis method, this pose estimate being denoted P5;
D) computing the difference between P4 and P5; if the absolute value of the difference is less than a preset threshold, taking P4 as the final pose estimate of the two-dimensional face image to be recognized; otherwise, locating the eyes and mouth in the two-dimensional face image to be recognized and estimating the pose from the localization result, this pose estimate being denoted P6;
E) comparing whether P6 is consistent with P4; if so, taking P4 as the final pose estimate of the two-dimensional face image to be recognized; otherwise, taking P5 as the final pose estimate of the two-dimensional face image to be recognized.
CN2010105883116A 2010-12-07 2010-12-07 Face recognition method based on combination of partial principal component analysis (PCA) and attitude estimation Expired - Fee Related CN102043966B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010105883116A CN102043966B (en) 2010-12-07 2010-12-07 Face recognition method based on combination of partial principal component analysis (PCA) and attitude estimation


Publications (2)

Publication Number Publication Date
CN102043966A CN102043966A (en) 2011-05-04
CN102043966B true CN102043966B (en) 2012-11-28

Family

ID=43910091

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010105883116A Expired - Fee Related CN102043966B (en) 2010-12-07 2010-12-07 Face recognition method based on combination of partial principal component analysis (PCA) and attitude estimation

Country Status (1)

Country Link
CN (1) CN102043966B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8261090B1 (en) * 2011-09-28 2012-09-04 Google Inc. Login to a computing device based on facial recognition
CN103198330B (en) * 2013-03-19 2016-08-17 东南大学 Real-time human face attitude estimation method based on deep video stream
CN105678241B (en) * 2015-12-30 2019-02-26 四川川大智胜软件股份有限公司 A kind of cascade two dimensional image face pose estimation
CN106022228B (en) * 2016-05-11 2019-04-09 东南大学 A kind of three-dimensional face identification method based on grid local binary patterns in length and breadth
CN106503615B (en) * 2016-09-20 2019-10-08 北京工业大学 Indoor human body detecting and tracking and identification system based on multisensor
CN110020578A (en) * 2018-01-10 2019-07-16 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN109785322B (en) * 2019-01-31 2021-07-02 北京市商汤科技开发有限公司 Monocular human body posture estimation network training method, image processing method and device
CN111814516A (en) * 2019-04-11 2020-10-23 上海集森电器有限公司 Driver fatigue detection method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1325662A (en) * 2001-07-13 2001-12-12 清华大学 Method for detecting moving human face
CN1341401A (en) * 2001-10-19 2002-03-27 清华大学 Main unit component analysis based multimode human face identification method
CN1945595A (en) * 2006-10-30 2007-04-11 邹采荣 Human face characteristic positioning method based on weighting active shape building module
CN101038622A (en) * 2007-04-19 2007-09-19 上海交通大学 Method for identifying human face subspace based on geometry preservation
CN101236598A (en) * 2007-12-28 2008-08-06 北京交通大学 Independent component analysis human face recognition method based on multi- scale total variation based quotient image


Also Published As

Publication number Publication date
CN102043966A (en) 2011-05-04


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121128

Termination date: 20151207

EXPY Termination of patent right or utility model