CN101587543B - Face recognition method - Google Patents

Face recognition method

Info

Publication number
CN101587543B
CN101587543B (application CN200910059666A)
Authority
CN
China
Prior art keywords
people
face
light
facial image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 200910059666
Other languages
Chinese (zh)
Other versions
CN101587543A (en)
Inventor
李建平
林劼
郝玉洁
廖建明
顾小丰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN 200910059666 priority Critical patent/CN101587543B/en
Publication of CN101587543A publication Critical patent/CN101587543A/en
Application granted granted Critical
Publication of CN101587543B publication Critical patent/CN101587543B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Collating Specific Patterns (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a face recognition method comprising the following steps: S1, quotient image computation; S2, lighting compensation; S3, feature extraction, in which each face image of the lighting-compensated face image database is divided into sub-blocks and features are extracted from each; S4, model training, in which face models under different lighting conditions are trained from the features extracted in step S3 and stored to form a face model database; S5, feature extraction, in which the face image to be recognized is divided into sub-blocks and features are extracted; S6, face recognition, in which matches are computed from the face model database and the features extracted in step S5, and the best matching result is selected; S7, post-processing. The beneficial effect of the invention is that it suppresses the influence on the recognition rate of local feature mismatches caused by local image corruption, thereby solving the problem of face recognition under complicated lighting and with partial distortion or occlusion of the face image.

Description

Face recognition method
Technical field
The present invention relates to biometric identification technology, and more particularly to a face recognition method.
Background technology
Existing face recognition methods and systems perform unsatisfactorily in practical applications. The main reason is that variations in factors such as lighting conditions, face rotation angle, expression, hairstyle and background cause a mismatch between the face model database and the face image to be recognized, which dramatically degrades the performance of the recognition method. In addition, partial distortion and occlusion of the face image also degrade the performance of existing face recognition systems. Such distortion or occlusion is often caused by accessories such as sunglasses and scarves, whose presence is unavoidable, particularly in anti-terrorism applications, where terrorists usually wear accessories to disguise themselves. As the most widespread and most natural biometric identification technology, face recognition and authentication will be widely applied under complicated environmental conditions in fields such as identity authentication, network security, online banking, Internet transactions, secure e-commerce transactions, and multimedia information retrieval; these applications require the recognition system to maintain stable performance under complex environmental variations and under partial distortion or occlusion of the face image. How to solve the problem of face recognition under complicated lighting and with partially distorted or occluded face images has therefore become a pressing technical problem in current face recognition technology.
Summary of the invention
The object of the invention is to provide a face recognition method that solves the problem of face recognition under complicated lighting and with partially distorted or occluded face images.
To achieve this object, the following scheme is used. A face recognition method comprises the following steps (a skeleton of the pipeline is sketched after this list):
S1. Quotient image computation: compute the face lighting model from the reference face image database and store it.
S2. Lighting compensation: synthesize face images under different lighting conditions from the original training face image database and the face lighting model, and store them to form the lighting-compensated face image database.
S3. Feature extraction: divide each face image of the lighting-compensated face image database into sub-blocks and extract features.
S4. Model training: train face models under different lighting conditions from the features extracted in step S3 and store them to form the face model database.
S5. Feature extraction: divide the face image to be recognized into sub-blocks and extract features.
S6. Face recognition: compute matches between the constructed face model database and the features extracted in step S5, and select the best matching result.
S7. Post-processing: compare the best matching result with a predefined decision threshold and output the decision.
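The following is a minimal Python skeleton of the S1-S7 pipeline. All function and variable names here (build_light_model, synthesize, extract_features, train_models, match, and the database arguments) are hypothetical placeholders for the procedures detailed later in this document, not part of the patent.

```python
# Hypothetical skeleton of steps S1-S7; every helper named here is a
# placeholder for the corresponding procedure described in this document.

def recognize(query_image, reference_db, training_db, threshold):
    light_model = build_light_model(reference_db)          # S1: quotient image
    compensated_db = synthesize(training_db, light_model)  # S2: lighting compensation
    train_feats = {key: extract_features(img)              # S3: block features
                   for key, img in compensated_db.items()}
    model_db = train_models(train_feats)                   # S4: L x M face models
    query_feats = extract_features(query_image)            # S5
    best_score, best_id = match(model_db, query_feats)     # S6: best match
    return best_id if best_score >= threshold else None    # S7: accept / reject
```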
In the above method, the face recognition step S6 comprises the following sub-steps:
S61. Compute the sub-block likelihood between every sub-block of the face image to be recognized and the corresponding sub-block of each submodel of every person class under each lighting condition in the face model database.
S62. Compute the joint sub-block likelihood between the face image to be recognized and each submodel of every person class under each lighting condition in the face model database.
S63. Fuse the joint sub-block likelihoods of the submodels of every person class under each lighting condition to obtain the joint model likelihood between every person class under each lighting condition and the face image to be recognized.
S64. Fuse the joint model likelihoods of every person class over all lighting conditions to obtain the global likelihood between every person class and the face image to be recognized.
S65. Select the class with the largest global likelihood value as the recognition result.
Beneficial effects of the present invention: the system first builds a face lighting model from a reference face image database collected in advance under different lighting conditions, and then forms a new lighting-compensated face image database from this lighting model and the initially given original training face image database. The lighting-compensated face image database is the union of the data in the original training database and the face images representing different lighting conditions synthesized by the face lighting model. Features are then extracted both from the face image to be recognized and from the images of the lighting-compensated database. In the recognition step, the features that best match the image to be recognized under each lighting condition are selected automatically, and recognition is based only on these best-matching features. That is, when the local features of the face model whose lighting best matches the image to be recognized are selected, uncontaminated features are selected at the same time; the invention thereby suppresses the influence on the recognition rate of local feature mismatches caused by local image contamination, and solves the problem of face recognition under complicated lighting and with partially distorted or occluded face images.
Description of drawings
Fig. 1 is the flowchart of the face recognition method of the present invention.
Fig. 2 illustrates the effect of applying lighting compensation to the original training face image database.
Fig. 3 is the flowchart of the model training step.
Fig. 4 is the flowchart of the face recognition step.
Fig. 5 shows the correspondence between $w_{lm}(n)$ and $w_{lm}$.
Embodiment
The basic idea of the present invention is as follows. A face lighting model is built from the reference face image database, and from this model and the original training face image database several new face images under different lighting conditions are synthesized; these form the lighting-compensated face image database used to build the face models under different lighting conditions. During recognition, the recognizer matches the features of the face image to be recognized against every person's face model under every lighting condition, obtaining the similarity between each person class under each lighting condition and the image to be recognized; the similarities obtained for each person over all of that person's lighting conditions are then fused into a final similarity between each class and the image, and the class with the largest similarity value is selected as the recognition result. During recognition the method automatically computes similarity over an optimal subset of facial features, which effectively reduces the local mismatches between the face models built under different lighting conditions and the features of the image to be recognized, whether caused by local lighting mismatch or by partial distortion and occlusion of the face image, and thereby effectively improves the recognition rate.
Face lighting model: a model of representative face lighting built from the reference face image database by the quotient image computation.
Reference face image database: a database given in advance, consisting of face images of N classes of people collected under 3 different lighting conditions and normalized, i.e. aligned by shifting the image pixels up, down, left and right according to the eye positions. The database requires that each of the 3 lighting conditions be identical from person to person.
Original training face image database: composed of face images of the M classes of people to be recognized, given in advance; it is used to build the lighting-compensated face image database. The number of images of class m in the database is $N_m$, m = 1, 2, ..., M.
Lighting-compensated face image database: composed of the original training face image database together with the face images of the M classes of people synthesized under L different lighting conditions from the original training database and the face lighting model. The total number of images in the database is

$$\sum_{m=1}^{M} N_m + L\sum_{m=1}^{M} N_m = (L+1)\sum_{m=1}^{M} N_m,$$

where $\sum_{m=1}^{M} N_m$ is the number of face images of the M classes in the original training face image database and $L\sum_{m=1}^{M} N_m$ is the number of face images of the M classes synthesized under the L different lighting conditions. The number of images of class m (m = 1, 2, ..., M) is $N_m \times (L+1)$.
Face model database: composed of the face models of the M classes of people under the L lighting conditions, built from the lighting-compensated face image database by the training process. The complete model bank consists of L × M face models; each model represents the face model of one person class under one lighting condition and is denoted $w_{lm}$, where l indexes the lighting condition and m the person class. Each model is further divided into R submodels, each consisting of a mixture weight $p(r|w_{lm})$, a mean vector $\mu_{lmr}$ and the diagonal vector $\Sigma_{lmr}$ of a covariance matrix, with r = 1, 2, ..., R.
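As a minimal sketch of this model layout (the field and container names are illustrative, not from the patent):

```python
# Illustrative data layout for the face model database: L x M models,
# each made of R submodels with a weight, a mean vector and the diagonal
# of a covariance matrix, as described above.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class SubModel:
    weight: float            # mixture weight p(r | w_lm)
    mean: np.ndarray         # mean vector mu_lmr
    var_diag: np.ndarray     # diagonal vector of covariance Sigma_lmr

@dataclass
class FaceModel:             # w_lm: person class m under lighting condition l
    submodels: list[SubModel] = field(default_factory=list)  # R entries

# model_db[(l, m)] -> FaceModel for lighting condition l and person class m
model_db: dict[tuple[int, int], FaceModel] = {}
```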
The present invention is described further below in conjunction with the accompanying drawings and a specific embodiment.
As shown in Fig. 1 to Fig. 5, a specific embodiment of the face recognition method of the present invention is as follows:
S1. Quotient image computation: compute the face lighting model from the reference face image database and store it.
The face lighting model is first built from the reference face image database; face images under L (1 ≤ L ≤ 20) lighting conditions are then synthesized from the original training face image database and the face lighting model and stored. In this embodiment L = 5. The concrete synthesis steps are as follows:
1. From the reference face image database of N people (N a natural number), build the matrices $A_1, A_2, \ldots, A_N$ (each $A_i$ is the pattern matrix of one person class). Each matrix $A_i$ has 3 columns, each column containing one face image vector under one lighting condition.
2. Let y be the vector of one face image of the original training face image database, of length m (m is the number of image pixels).
3. Compute N three-dimensional vectors $v_1, v_2, \ldots, v_N$ by

$$v_i = \Big(\sum_{j=1}^{N} A_j^T A_j\Big)^{-1} A_i^T y \qquad \text{(Formula 1)}$$

where $i = 1, \ldots, N$; the $v_i$ are three-dimensional vectors used for intermediate computation, and T denotes matrix transpose.
4. Solve the following system of equations for $\alpha_1, \alpha_2, \ldots, \alpha_N$, where the solution satisfies $\sum_i \alpha_i = N$:

$$\begin{cases} \alpha_1\big(v_1^T A_1^T y - y^T y\big) + \alpha_2 v_2^T A_1^T y + \cdots + \alpha_N v_N^T A_1^T y = 0 \\ \alpha_1 v_1^T A_2^T y + \alpha_2\big(v_2^T A_2^T y - y^T y\big) + \cdots + \alpha_N v_N^T A_2^T y = 0 \\ \qquad\qquad\vdots \\ \alpha_1 v_1^T A_N^T y + \cdots + \alpha_N\big(v_N^T A_N^T y - y^T y\big) = 0 \end{cases} \qquad \text{(Formula 2)}$$

The parameters $\alpha$ obtained by solving Formula (2) are used for intermediate computation.
5. Compute

$$x = \sum_i \alpha_i v_i \qquad \text{(Formula 3)}$$

where x is a three-dimensional vector used for intermediate computation.
6. Compute the face lighting model (quotient image) $Q_y = y/(Ax)$ (elementwise division), where $A = \frac{1}{N}\sum_{i=1}^{N} A_i$.
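The following is a minimal NumPy sketch of steps 1-6, assuming each $A_i$ is an m × 3 array (one column per lighting condition) and y is an m-vector. The homogeneous system of Formula (2) is solved here via SVD and rescaled to satisfy $\sum_i \alpha_i = N$; that solver choice is ours, not specified by the patent.

```python
import numpy as np

def quotient_image(A_list, y):
    """A_list: N reference matrices, each m x 3; y: m-vector (one image)."""
    N = len(A_list)
    # Formula (1): v_i = (sum_j A_j^T A_j)^(-1) A_i^T y
    S_inv = np.linalg.inv(sum(A.T @ A for A in A_list))        # 3 x 3
    v = [S_inv @ (A.T @ y) for A in A_list]

    # Formula (2): homogeneous linear system in alpha;
    # row i: sum_k alpha_k v_k^T A_i^T y - alpha_i y^T y = 0
    M = np.array([[v[k] @ (A_list[i].T @ y) for k in range(N)]
                  for i in range(N)]) - np.eye(N) * (y @ y)
    _, _, Vt = np.linalg.svd(M)
    alpha = Vt[-1]                   # null-space direction of M
    alpha *= N / alpha.sum()         # enforce the constraint sum(alpha) = N

    # Formula (3) and the quotient image Q_y = y / (A_bar x), elementwise
    x = sum(a * vi for a, vi in zip(alpha, v))
    A_bar = sum(A_list) / N
    return y / (A_bar @ x)
```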
S2. Lighting compensation: synthesize face images under different lighting conditions from the original training face image database and the face lighting model, and store them to form the lighting-compensated face image database.
The concrete implementation of this step is as follows. Given different vectors z (z is a three-dimensional vector used for intermediate computation), the face image vector $y_s$ of the lighting-compensated face image database under a new synthesized lighting condition is obtained by Formula (4): $y_s$ is the elementwise product of $Az$, where A is the matrix constructed from the reference face image database in step S1 and $z = \{z_1, z_2, z_3\}$ is the parameter set, with the face lighting model $Q_y$. With different z, several face images under different lighting conditions can be generated and stored to form the lighting-compensated face image database.

$$y_s = Az \otimes Q_y \qquad \text{(Formula 4)}$$

Using Formula (4), face images under L = 5 lighting conditions are generated in this embodiment; the z values of the five lighting conditions are {0.2, 2, 0.8}, {0.6, 1.5, 0.9}, {0.9, 1.5, 0.6}, {0.8, 2, 0.2} and {1.2, 0.6, 1.2} respectively. Fig. 2 shows 3 training face images and, beside each, the corresponding face images synthesized under the different lighting conditions.
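A short sketch of Formula (4) with the five z parameter sets of this embodiment; A_bar and Q_y are as computed in the quotient_image sketch above.

```python
import numpy as np

# The five z parameter sets used in this embodiment
Z_SETS = [(0.2, 2.0, 0.8), (0.6, 1.5, 0.9), (0.9, 1.5, 0.6),
          (0.8, 2.0, 0.2), (1.2, 0.6, 1.2)]

def synthesize_lightings(A_bar, Q_y):
    """Formula (4): y_s = (A_bar z) * Q_y, elementwise, for each z."""
    return [(A_bar @ np.asarray(z)) * Q_y for z in Z_SETS]
```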
S3. Feature extraction: divide each face image of the lighting-compensated face image database into sub-blocks and extract features.
The concrete implementation of this process comprises the following sub-steps:
1. Read each face image I of the lighting-compensated face image database and divide it without overlap into N1 sub-face images $I_1, I_2, \ldots, I_{N1}$ of identical size, where N1 is the number of sub-image blocks.
2. For each sub-face image $I_n$, extract the Gabor feature vector $O_n$, n = 1, 2, ..., N1. Concatenate all sub-block feature vectors into one feature vector $O = \{O_1, O_2, \ldots, O_{N1}\}$ and store it for intermediate computation.
3. Taking the feature vector $O_n$ of the n-th sub-image (n = 1, 2, ..., N1) of every face image of the lighting-compensated face image database as a column, form a matrix sequence $U_n$, n = 1, 2, ..., N1, for intermediate computation. Each column of the n-th matrix $U_n$ of the sequence is the feature vector $O_n$ of the n-th sub-image of one image of the lighting-compensated face image database.
4. Apply eigenvalue decomposition to each matrix of the sequence $U_n$, n = 1, 2, ..., N1, obtaining the eigenvalues and the eigenvector matrix sequence $U'_n$, n = 1, 2, ..., N1.
5. From each matrix of the eigenvector matrix sequence, select the first K eigenvector columns corresponding to the largest eigenvalues, forming N1 matrices $Vector_1, Vector_2, \ldots, Vector_{N1}$, and store them. Here K is at least 1 and at most the number of columns of the eigenvector matrix.
6. From the matrix sequence $Vector_n$, n = 1, 2, ..., N1, and the feature vector O of each face image of the lighting-compensated face image database, compute by the formula below the K-dimensional face feature vector $X_n$ of every sub-block of every face image of the database, and concatenate the feature vectors of all sub-blocks into the feature vector $X = \{X_1, X_2, \ldots, X_{N1}\}$ of one image and store it.

$$X_n = O_n \times Vector_n \qquad \text{(Formula 5)}$$
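A sketch of this block feature pipeline follows, under assumptions of ours: images are equal-sized grayscale arrays split into a square grid of sub-blocks, a single Gabor kernel stands in for the unspecified Gabor bank, and the eigenvalue decomposition of step 4 is taken over the block scatter matrix $U_n U_n^T$. OpenCV's getGaborKernel and filter2D are used for the filtering.

```python
import numpy as np
import cv2

def block_gabor_features(img, grid=4):
    """Split img into grid x grid sub-blocks; one Gabor magnitude per block."""
    # getGaborKernel(ksize, sigma, theta, lambd, gamma); parameter values are ours
    kernel = cv2.getGaborKernel((9, 9), 2.0, 0.0, 8.0, 0.5)
    resp = np.abs(cv2.filter2D(img.astype(np.float32), cv2.CV_32F, kernel))
    bh, bw = resp.shape[0] // grid, resp.shape[1] // grid
    return [resp[i*bh:(i+1)*bh, j*bw:(j+1)*bw].ravel()       # O_n per block
            for i in range(grid) for j in range(grid)]

def fit_block_pca(all_feats, K):
    """all_feats[image][n]: feature O_n of block n; returns Vector_n per n."""
    projections = []
    for n in range(len(all_feats[0])):
        U = np.stack([f[n] for f in all_feats], axis=1)      # columns = O_n
        vals, vecs = np.linalg.eigh(U @ U.T)                 # ascending order
        projections.append(vecs[:, -K:][:, ::-1])            # top-K eigenvectors
    return projections

def project_features(feats, projections):
    # Formula (5): X_n = O_n x Vector_n (a K-vector per block)
    return [f @ V for f, V in zip(feats, projections)]
```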
S4. Model training: train the face models $w_{lm}$ under different lighting conditions from the features of every face image under every lighting condition of the lighting-compensated face image database extracted in step S3, and store them to form the face model database. Here l = 1, 2, ..., L indexes the lighting conditions and m = 1, 2, ..., M indexes the M different people.
The concrete implementation of this process is shown in Fig. 3 and comprises the following sub-steps:
S41. Perform local unsupervised training.
The concrete steps of local unsupervised training are:
1. Initialize the face model $w_{lm}$ of every person class under every lighting condition, with l = 1, 2, ..., L and m = 1, 2, ..., M. In this example of the invention a Gaussian mixture model is adopted as the basic model; the mixture is composed of 2 Gaussian submodels, i.e. the submodel count is R = 2. The initialization assigns the weights $p(r|w_{lm})$, r = 1, 2, the mean vectors $\mu_{lmr}$ and the diagonal vectors $\Sigma_{lmr}$ of the covariance matrices of the 2 Gaussian submodels as follows:
1) Set the weight $p(r|w_{lm})$ of each Gaussian submodel, r = 1, 2, to 0.5.
2) Set the mean of each of the 2 Gaussian submodels of model $w_{lm}$, r = 1, 2, to the mean of the face features of all face images of person class m under lighting condition l in the lighting-compensated face image database.
3) Set the diagonal vector $\Sigma_{lmr}$ of the covariance matrix of each of the 2 Gaussian submodels of model $w_{lm}$, r = 1, 2, to the diagonal of the covariance matrix obtained from the face features of all face images of person class m under lighting condition l in the lighting-compensated face image database.
After the models of every person class under every lighting condition have been initialized, the system iteratively re-estimates each initialized face model with the features of all face images of that person class under that lighting condition in the lighting-compensated face image database. This process iteratively trains the face model of every person class under every lighting condition until every person's face models under all lighting conditions have been trained. This embodiment adopts the Expectation-Maximization (EM) method as the iterative training method.
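A minimal sketch of S41 using scikit-learn's GaussianMixture as the EM trainer (the patent does not name a library). Initializing both submodel means to the same feature mean, exactly as prescribed above, would leave EM at a symmetric fixed point, so the sketch adds a small perturbation; that perturbation is our addition.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_local_model(features, seed=0):
    """features: (num_images, D) features of one person class under one
    lighting condition; returns a 2-component diagonal GMM (R = 2)."""
    rng = np.random.default_rng(seed)
    means = np.tile(features.mean(axis=0), (2, 1))    # both means = feature mean
    means += 1e-3 * rng.standard_normal(means.shape)  # break EM symmetry (ours)
    gmm = GaussianMixture(n_components=2, covariance_type="diag",
                          weights_init=np.array([0.5, 0.5]),
                          means_init=means, max_iter=100)
    return gmm.fit(features)
```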
S42. Perform global supervised training.
After local unsupervised training of the face model of every person class under every lighting condition, global supervised training is applied to these models. The concrete implementation is as follows:
Let J denote the number of training iterations and j the j-th iteration.
1. Starting from j = 1: using the face models of every person class under every lighting condition constructed by the local unsupervised training above, compute with Formula (6) the likelihood between the feature of one image of the lighting-compensated face image database and each face model, and save it as an intermediate result.

$$P(X|w_{lm}^j) = \sum_{r=1}^{R} p(r|w_{lm}^j)\, p(X|w_{lm}^j, r) \qquad \text{(Formula 6)}$$

Here X is the feature vector of any image of the lighting-compensated face image database, l = 1, 2, ..., L, m = 1, 2, ..., M, and $p(X|w_{lm}^j, r)$ is the conditional probability of X given the r-th submodel of model $w_{lm}^j$.
2. If the image feature vector X is the feature vector of an image of person m under lighting condition l, and the likelihood value $P(X|w_{lm})$ computed from this feature and model $w_{lm}$ is greater than the likelihood values $P(X|w_{l'm'})$ produced with all other models, l' = 1, 2, ..., L with l' ≠ l and m' = 1, 2, ..., M with m' ≠ m, the decision is correct; otherwise it is an error. On a decision error, record the misjudged model index (l', m') and the true model index (l, m) and update the models $w_{lm}^j$ and $w_{l'm'}^j$ with step 3; otherwise go to step 4.
3. Take the partial derivatives of Formula (6) for $w_{lm}^j$ and $w_{l'm'}^j$ with respect to the mean vectors $\mu_{lmr}^j$ and $\mu_{l'm'r}^j$ and the diagonal covariance vectors $\Sigma_{lmr}^j$ and $\Sigma_{l'm'r}^j$ of each submodel, obtaining the gradient vectors $\nabla P(X|w_{lm}^j)$ and $\nabla P(X|w_{l'm'}^j)$ for intermediate computation. Here

$$\nabla P(X|w_{lm}^j) = \big\{\partial P(X|w_{lm}^j)/\partial \mu_{lmr}^j,\ \partial P(X|w_{lm}^j)/\partial \Sigma_{lmr}^j\big\}$$
$$\nabla P(X|w_{l'm'}^j) = \big\{\partial P(X|w_{l'm'}^j)/\partial \mu_{l'm'r}^j,\ \partial P(X|w_{l'm'}^j)/\partial \Sigma_{l'm'r}^j\big\}$$

From the obtained gradient vectors, recompute the models $w_{lm}^{j+1}$ and $w_{l'm'}^{j+1}$ by Formulas (7) and (8) and store them.

$$w_{lm}^{(j+1)} = w_{lm}^{(j)} + \eta\, \nabla P(X|w_{lm}^j) \qquad \text{(Formula 7)}$$
$$w_{l'm'}^{(j+1)} = w_{l'm'}^{(j)} - \eta\, \nabla P(X|w_{l'm'}^j) \qquad \text{(Formula 8)}$$
In Formulas (7) and (8), η is an update parameter, set to 0.002 in this embodiment of the invention.
4. If unselected images remain in the lighting-compensated face image database, select the feature of another image and return to step 1; otherwise execute step 5.
5. If the number of recorded misjudged model indices falls below a preset threshold, or the iteration count exceeds the maximum preset number J, exit the loop and save all trained face models; otherwise set j = j + 1 and return to step 1. Here the threshold is 5% of the number of images in the lighting-compensated face image database.
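The corrective update of Formulas (7)-(8) can be sketched as below, restricted for brevity to the mean vectors (the patent updates the diagonal covariances by the same gradient rule). The SubModel/FaceModel containers are those sketched earlier; η = 0.002 as in this embodiment.

```python
import numpy as np

def gmm_likelihood(model, X):
    """Formula (6): P(X | w) = sum_r p(r | w) p(X | w, r), diagonal Gaussians."""
    total = 0.0
    for sm in model.submodels:
        d = X - sm.mean
        log_p = -0.5 * np.sum(d * d / sm.var_diag
                              + np.log(2 * np.pi * sm.var_diag))
        total += sm.weight * np.exp(log_p)
    return total

def corrective_step(true_model, rival_model, X, eta=0.002):
    """Formulas (7)-(8): gradient step up on the true model, down on the
    misjudged rival; mean vectors only in this sketch."""
    for model, sign in ((true_model, +1.0), (rival_model, -1.0)):
        for sm in model.submodels:
            d = X - sm.mean
            p_r = sm.weight * np.exp(-0.5 * np.sum(
                d * d / sm.var_diag + np.log(2 * np.pi * sm.var_diag)))
            sm.mean += sign * eta * p_r * d / sm.var_diag  # dP/d(mu_r)
```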
S5. Feature extraction: divide the face image to be recognized into sub-blocks and extract features.
The concrete implementation of this process comprises the following sub-steps:
1. Read the face image I to be recognized and, identically to step 1 of step S3, divide it without overlap into N1 sub-face images $I_1, I_2, \ldots, I_{N1}$ of identical size; N1 is the number of sub-image blocks, identical to the sub-block count in step S3.
2. For each sub-face image $I_n$, extract the Gabor feature vector $O_n$, n = 1, 2, ..., N1. Concatenate all sub-block feature vectors into one feature vector $O = \{O_1, O_2, \ldots, O_{N1}\}$ and store it for intermediate computation.
3. Read the stored matrices $Vector_n$, n = 1, 2, ..., N1, compute by the formula below the K-dimensional feature vector $Y_n$ of each sub-image of the face image to be recognized, and concatenate all sub-block feature vectors into the K × N1-dimensional face feature $Y = \{Y_1, Y_2, \ldots, Y_{N1}\}$ and store it.

$$Y_n = O_n \times Vector_n \qquad \text{(Formula 9)}$$
S6. Face recognition: compute matches between the constructed face model database and the features extracted in step S5, and select the best matching result.
S61. Compute by the formula below the sub-block likelihood $P(Y_n|w_{lm}(n), r)$ between every sub-block of the face image to be recognized and the corresponding sub-block of each submodel of every person class under each lighting condition in the face model database:

$$P(Y_n|w_{lm}(n), r) = \frac{1}{(2\pi)^{K/2}\prod_{k_1=1}^{K}\sigma_{rk_1}} \exp\!\Big(-\frac{1}{2}\sum_{k_1=1}^{K}\frac{\big(Y_n(k_1) - \mu_{lmr}(n, k_1)\big)^2}{\sigma_{rk_1}^2}\Big) \qquad \text{(Formula 10)}$$

Here $Y_n$ is the feature vector of the n-th sub-image of the face image to be recognized, n = 1, 2, ..., N1, with feature dimension K, and $Y_n(k_1)$ is the $k_1$-th component of that feature vector. $w_{lm}(n)$ is the n-th part (n = 1, 2, ..., N1) of model $w_{lm}$; it is composed of R submodels, each consisting of a mixture weight $p(r|w_{lm})$, a mean vector $\mu_{lmr}(n)$ and the diagonal covariance vector $\Sigma_{lmr}(n) = \{\sigma_{r1}, \sigma_{r2}, \ldots, \sigma_{rK}\}$. The correspondence between $w_{lm}(n)$ and $w_{lm}$ is shown in Fig. 5.
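Formula (10) is an ordinary diagonal-Gaussian density; a direct sketch follows (the per-block slicing of mean and variance, i.e. $w_{lm}(n)$ in Fig. 5, is assumed to be done by the caller):

```python
import numpy as np

def block_likelihood(Y_n, mean_n, var_n):
    """Formula (10): Y_n, mean_n, var_n are K-vectors for one sub-block."""
    K = Y_n.shape[0]
    norm = (2 * np.pi) ** (K / 2) * np.prod(np.sqrt(var_n))
    return float(np.exp(-0.5 * np.sum((Y_n - mean_n) ** 2 / var_n)) / norm)
```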
S62. Compute the joint sub-block likelihood between the face image to be recognized and each submodel of every person class under each lighting condition in the face model database.
This step can be divided into the following sub-steps:
1. For each value of Q, compute by the formula below the likelihood $G(Y, Q, w_{lm}, r)$ formed from the block likelihoods of any Q sub-blocks between the face image to be recognized and each submodel of every person class under each lighting condition in the face model database, for intermediate computation.

$$G(Y, Q, w_{lm}, r) = \sum_{n_1, n_2, \ldots, n_Q} p\big(Y_{n_1}|w_{lm}(n_1), r\big)\, p\big(Y_{n_2}|w_{lm}(n_2), r\big) \cdots p\big(Y_{n_Q}|w_{lm}(n_Q), r\big) \qquad \text{(Formula 11)}$$

In Formula (11), $n_1, n_2, \ldots, n_Q$ are the indices of any Q sub-blocks; Q may take any value with 1 ≤ Q ≤ N1.
2. From the Q-block likelihoods $G(Y, Q, w_{lm}, r)$ computed in step 1 for the different values of Q, compute by the formula below the joint sub-block likelihood $P(w_{lm}, r|Y, Q)$ for each Q, for intermediate computation.

$$P(w_{lm}, r|Y, Q) = \frac{G(Y, Q, w_{lm}, r)}{\sum_{m'} \sum_{r=1}^{R} G(Y, Q, w_{lm'}, r)\, p(r|w_{lm'})} \qquad \text{(Formula 12)}$$

In Formula (12), $w_{lm'}$ is the face model of person class m' (m' ≠ m) under lighting condition l.
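Reading the sum in Formula (11) as running over unordered Q-subsets of distinct blocks, G is the Q-th elementary symmetric polynomial of the per-block likelihoods and can be computed by a standard dynamic-programming recurrence instead of enumerating subsets; Formula (12) is then a straightforward normalization. This subset reading and the DP are our interpretation, not spelled out in the patent.

```python
import numpy as np

def joint_block_likelihood(block_liks, Q):
    """Formula (11): sum over Q-subsets of products of block likelihoods,
    i.e. the Q-th elementary symmetric polynomial e_Q(block_liks)."""
    e = np.zeros(Q + 1)
    e[0] = 1.0
    for p in block_liks:
        for q in range(Q, 0, -1):   # descending so each block is used once
            e[q] += p * e[q - 1]
    return e[Q]

def joint_posterior(G_by_person, weights_by_person, m):
    """Formula (12): G_by_person[m'][r] and weights_by_person[m'][r] over
    all person classes m' under one lighting condition l."""
    denom = sum(g * w
                for mp in G_by_person
                for g, w in zip(G_by_person[mp], weights_by_person[mp]))
    return [g / denom for g in G_by_person[m]]
```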
S63. Fuse the joint sub-block likelihoods of the submodels of every person class under each lighting condition to obtain the joint model likelihood between every person class under each lighting condition and the face image to be recognized.
This step can be divided into the following sub-steps:
1. For each value of Q, fuse by the formula below the joint sub-block likelihoods $P(w_{lm}, r|Y, Q)$ of the submodels of every person class under each lighting condition obtained in step S62 into the joint model likelihood between every person class under each lighting condition and the face image to be recognized:

$$P(Y|w_{lm}, Q) = \sum_{r=1}^{R} P(w_{lm}, r|Y, Q)\, p(r|w_{lm}) \qquad \text{(Formula 13)}$$

2. Over the joint model likelihoods obtained for the different values of Q, select the maximum value as the joint model likelihood between every person class under each lighting condition and the face image to be recognized:

$$P(Y|w_{lm}) = \max_{Q} P(Y|w_{lm}, Q) \qquad \text{(Formula 14)}$$
S64. Fuse the joint model likelihoods of every person class over all lighting conditions to obtain the global likelihood between every person class and the face image to be recognized.
This step fuses the joint model likelihoods of every person class under all lighting conditions by summation, yielding the global likelihood between every person class and the face image to be recognized.
S65. Select the class with the largest global likelihood value as the recognition result.
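A sketch of steps S63-S65 under assumed containers of ours: post[(l, m)][Q] holds the joint sub-block posteriors of Formula (12) over the submodels r, weights[(l, m)] the mixture weights $p(r|w_{lm})$, and Q_values the preset candidate Q values.

```python
import numpy as np

def recognize_class(post, weights, L, M, Q_values):
    scores = np.zeros(M)                      # global likelihood per class
    for m in range(M):
        for l in range(L):
            # Formula (13) fuses submodels; Formula (14) keeps the best Q
            scores[m] += max(
                sum(p * w for p, w in zip(post[(l, m)][Q], weights[(l, m)]))
                for Q in Q_values)
        # summing over l implements the S64 fusion across lighting conditions
    return int(np.argmax(scores))             # S65: class with largest score
```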
S7. Post-processing: compare the best matching result with a predefined decision threshold and output the decision.
In the comparison experiments, the existing PCA, DCT, Gabor+Cosine, Gabor+SUM, GaborMPCA and D-subspace systems were tested against our new system. The Gabor+Cosine system adopts block Gabor features as face features and the cosine correlation function as the recognizer; the Gabor+SUM system adopts block Gabor features and uses the sum of the cosine similarities of all block features as the final recognition score; the GaborMPCA method applies an improved PCA to compress the feature dimensionality of the Gabor-transformed features.
The experiments adopt two well-known face databases, Yale B and AR, as experimental databases. The Yale B database consists of face images of 10 people collected under 64 different lighting conditions. In the experiments the image set is divided by lighting angle into 5 subsets: subset 1 (0 to 12 degrees), subset 2 (12 to 25 degrees), subset 3 (25 to 50 degrees), subset 4 (50 to 77 degrees) and subset 5 (others). Subset 1 is used as the training set and the other subsets as test sets.
The AR database contains real local contamination of face images (sunglasses and scarves) as well as different lighting conditions. The experiments adopt 650 frontal face images of 50 people from the AR database, 13 images per person; 25 of the 50 people are male and 25 are female. Of the 13 images per person, 4 were collected under different lighting conditions but with clean faces and are used as training images. Of the remaining 9 images, 6 are used as images to be recognized: 2 represent different lighting conditions, 2 represent sunglasses contamination combined with different lighting conditions, and 2 represent scarf contamination combined with different lighting conditions.
Table 1 shows the recognition results obtained by the different systems on the different subsets of the Yale B database; Table 2 shows the recognition results obtained by each system on the AR database.
Recognition system   DCT      Gabor+Cosine   Gabor+SUM   NEW
Subset 2             100      100            100         100
Subset 3             100      100            100         100
Subset 4             99.82    98.34          98.34       100
Subset 5             98.29    97.24          97.24       99.47
Average              99.52    98.89          98.89       99.86

Table 1. Recognition rates (%) of each system on the subsets of the Yale B database.
Recognition system   Gabor+Cosine   Gabor+SUM   GaborMPCA   D-subspace   NEW
Lighting change      94             93.3        93.3        N/A          100
Sunglasses           79             83          82          84           89
Scarf                81             88          89          93           98

Table 2. Recognition rates (%) of each system on the AR database.
From Tables 1 and 2 we can see:
1) Every recognition system achieves a high recognition rate when the face images are clean, without local contamination, and the lighting variation is mild.
2) When the face images contain coverings such as sunglasses or scarves, or the lighting changes strongly (subsets 4 and 5), the recognition rates of the existing systems drop considerably. This is because the existing systems recognize on the basis of the whole feature set without selecting optimal features, and do not compensate for lighting. By contrast, the present invention achieves a better recognition effect, and its performance is also comparatively stable. This is because the embodiment of the invention performs an optimized selection of the sub-block face features through the joint posterior model and recognizes using only the optimal features, thereby suppressing the influence of contaminated features on recognition; at the same time, the invention compensates for lighting by building face models under multiple lighting conditions from the reference face set, which effectively compensates the lighting and improves recognition accuracy.
In summary, when the face image to be recognized is partially contaminated or under different lighting, the embodiment of the invention achieves a better recognition effect than the other systems.
Those of ordinary skill in the art will appreciate that the embodiment described here is intended to help the reader understand the principle of the present invention, and it should be understood that the protection scope of the invention is not limited to these particular statements and embodiments. Any possible equivalent replacement or modification made according to the foregoing description is considered to belong to the protection scope of the claims of the present invention.

Claims (3)

1. A face recognition method, characterized by comprising the following steps:
S1. Quotient image computation: compute the face lighting model from the reference face image database and store it; the reference face image database is defined as a database given in advance, consisting of face images of N classes of people collected under 3 different lighting conditions and normalized, i.e. aligned by shifting the image pixels up, down, left and right according to the eye positions; each of the 3 lighting conditions is required to be identical from person to person in the database;
S2. Lighting compensation: synthesize face images under different lighting conditions from the original training face image database and the face lighting model, and store them to form the lighting-compensated face image database; the original training face image database is defined as being composed of face images of the M classes of people to be recognized, given in advance, and is used to build the lighting-compensated face image database; the number of images of class m in the database is $N_m$, m = 1, 2, ..., M;
S3. Feature extraction: divide each face image of the lighting-compensated face image database into sub-blocks and extract features;
S4. Model training: train face models under different lighting conditions from the features extracted in step S3 and store them to form the face model database; here the face model database is composed of the face models of the M classes of people under the L lighting conditions, built from the lighting-compensated face image database by the training process; the complete model bank consists of L × M face models, each representing the face model of one person class under one lighting condition and denoted $w_{lm}$, where l indexes the lighting condition and m the person class;
S41. Perform local unsupervised training; the concrete steps of local unsupervised training are:
S411. Initialize the face model $w_{lm}$ of every person class under every lighting condition, where l and m are natural numbers, l = 1, 2, ..., L and m = 1, 2, ..., M; adopt a Gaussian mixture model as the basic model, the mixture being composed of 2 Gaussian submodels, i.e. the submodel count is R = 2; the initialization assigns the weights $p(r|w_{lm})$, r = 1, 2, the mean vectors $\mu_{lmr}$ and the diagonal vectors $\Sigma_{lmr}$ of the covariance matrices of the 2 Gaussian submodels as follows:
1) set the weight $p(r|w_{lm})$ of each Gaussian submodel to 0.5, with r = 1, 2;
2) set the mean of each of the 2 Gaussian submodels of face model $w_{lm}$, r = 1, 2, to the mean of the face features of all face images of person class m under lighting condition l in the lighting-compensated face image database;
3) set the diagonal vector $\Sigma_{lmr}$ of the covariance matrix of each of the 2 Gaussian submodels of face model $w_{lm}$, r = 1, 2, to the diagonal of the covariance matrix obtained from the face features of all face images of person class m under lighting condition l in the lighting-compensated face image database;
S412. After the models of every person class under every lighting condition have been initialized, the system iteratively re-estimates each initialized face model with the features of all face images of that person class under that lighting condition in the lighting-compensated face image database;
S42. Perform global supervised training; after local unsupervised training of the face model of every person class under every lighting condition, apply global supervised training to these models; the concrete implementation of global supervised training is as follows:
Let J denote the number of training iterations and j the j-th iteration;
S421. Starting from j = 1: using the face models of every person class under every lighting condition constructed by the local unsupervised training above, compute the likelihood between one image feature of the lighting-compensated face image database and each face model, and save it as an intermediate result:
$$P(X|w_{lm}^j) = \sum_{r=1}^{R} p(r|w_{lm}^j)\, p(X|w_{lm}^j, r);$$

where X is the feature vector of any image of the lighting-compensated face image database, l = 1, 2, ..., L, m = 1, 2, ..., M, and $p(X|w_{lm}^j, r)$ is the conditional probability of X given the r-th submodel of model $w_{lm}^j$;
S422. If the image feature vector X is the feature vector of an image of person m under lighting condition l, and the likelihood value $P(X|w_{lm})$ computed from this feature and model $w_{lm}$ is greater than the likelihood values $P(X|w_{l'm'})$ produced with all other models, l' = 1, 2, ..., L with l' ≠ l and m' = 1, 2, ..., M with m' ≠ m, the decision is correct; otherwise it is an error; on a decision error, record the misjudged model index (l', m') and the true model index (l, m) and update the models $w_{lm}^j$ and $w_{l'm'}^j$ with step S423; otherwise go to step S424;
S423. Take the partial derivatives of the formula above for $w_{lm}^j$ and $w_{l'm'}^j$ with respect to the mean vectors $\mu_{lmr}^j$ and $\mu_{l'm'r}^j$ and the diagonal covariance vectors $\Sigma_{lmr}^j$ and $\Sigma_{l'm'r}^j$ of each submodel; obtain the gradient vectors $\nabla P(X|w_{lm}^j)$ and $\nabla P(X|w_{l'm'}^j)$ for intermediate computation; here

$$\nabla P(X|w_{lm}^j) = \big\{\partial P(X|w_{lm}^j)/\partial \mu_{lmr}^j,\ \partial P(X|w_{lm}^j)/\partial \Sigma_{lmr}^j\big\}$$
$$\nabla P(X|w_{l'm'}^j) = \big\{\partial P(X|w_{l'm'}^j)/\partial \mu_{l'm'r}^j,\ \partial P(X|w_{l'm'}^j)/\partial \Sigma_{l'm'r}^j\big\}$$

and from the obtained gradient vectors recompute the models $w_{lm}^{j+1}$ and $w_{l'm'}^{j+1}$ by Formulas (7) and (8) and store them;

$$w_{lm}^{(j+1)} = w_{lm}^{(j)} + \eta\, \nabla P(X|w_{lm}^j) \qquad \text{(Formula 7)}$$
$$w_{l'm'}^{(j+1)} = w_{l'm'}^{(j)} - \eta\, \nabla P(X|w_{l'm'}^j) \qquad \text{(Formula 8)}$$

where η in Formulas (7) and (8) is an update parameter;
S424. If unselected images remain in the lighting-compensated face image database, select the feature of another image and return to step S421; otherwise execute step S425;
S425. If the number of recorded misjudged model indices falls below a preset threshold, or the iteration count exceeds the maximum preset number J, exit the loop and save all trained face models; otherwise set j = j + 1 and return to step S421;
S5. Feature extraction: divide the face image to be recognized into sub-blocks and extract features;
S6. Face recognition: compute matches between the constructed face model database and the features extracted in step S5, and select the best matching result;
S61. compute the sub-block likelihood between every sub-block of the face image to be recognized and the corresponding sub-block of each submodel of every person class under each lighting condition in the face model database;
S62. compute the joint sub-block likelihood between the face image to be recognized and each submodel of every person class under each lighting condition in the face model database;
S63. fuse the joint sub-block likelihoods of the submodels of every person class under each lighting condition to obtain the joint model likelihood between every person class under each lighting condition and the face image to be recognized;
S64. fuse the joint model likelihoods of every person class over all lighting conditions to obtain the global likelihood between every person class and the face image to be recognized;
S65. select the class with the largest global likelihood value as the recognition result;
S7. Post-processing: compare the best matching result with a predefined decision threshold and output the decision.
2. The face recognition method according to claim 1, characterized in that the sub-block counts in the feature extraction of steps S3 and S5 are equal, the number of sub-blocks per image being 2 to 64.
3. The face recognition method according to claim 1, characterized in that the number of jointly combined sub-blocks in step S62 is presettable, the preset value Q being in the range 1 ≤ Q ≤ N, where N is the number of sub-blocks into which the image is divided during feature extraction.
CN 200910059666 2009-06-19 2009-06-19 Face recognition method Expired - Fee Related CN101587543B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200910059666 CN101587543B (en) 2009-06-19 2009-06-19 Face recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200910059666 CN101587543B (en) 2009-06-19 2009-06-19 Face recognition method

Publications (2)

Publication Number Publication Date
CN101587543A CN101587543A (en) 2009-11-25
CN101587543B true CN101587543B (en) 2012-12-05

Family

ID=41371785

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200910059666 Expired - Fee Related CN101587543B (en) 2009-06-19 2009-06-19 Face recognition method

Country Status (1)

Country Link
CN (1) CN101587543B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102477190B1 (en) * 2015-08-10 2022-12-13 삼성전자주식회사 Method and apparatus for face recognition

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101976339B (en) * 2010-11-12 2015-07-15 北京邮电大学 Local characteristic extraction method for face recognition
CN101986328B (en) * 2010-12-06 2012-06-27 东南大学 Local descriptor-based three-dimensional face recognition method
CN103136504B (en) * 2011-11-28 2016-04-20 汉王科技股份有限公司 Face identification method and device
KR101241625B1 (en) * 2012-02-28 2013-03-11 인텔 코오퍼레이션 Method, apparatus for informing a user of various circumstances of face recognition, and computer-readable recording medium for executing the method
JP5702751B2 (en) * 2012-05-18 2015-04-15 株式会社ユニバーサルエンターテインメント Game equipment
US9183429B2 (en) * 2012-08-15 2015-11-10 Qualcomm Incorporated Method and apparatus for facial recognition
CN103902961B (en) * 2012-12-28 2017-02-15 汉王科技股份有限公司 Face recognition method and device
CN103903004B (en) * 2012-12-28 2017-05-24 汉王科技股份有限公司 Method and device for fusing multiple feature weights for face recognition
CN103208012B (en) * 2013-05-08 2016-12-28 重庆邮电大学 A kind of illumination face recognition method
CN106295571B (en) * 2016-08-11 2020-03-24 深圳市赛为智能股份有限公司 Illumination self-adaptive face recognition method and system
CN106846778A (en) * 2017-03-08 2017-06-13 美的集团股份有限公司 Appliances equipment control method and system, mobile terminal, server
CN107729886B (en) * 2017-11-24 2021-03-02 北京小米移动软件有限公司 Method and device for processing face image
CN110084016A (en) * 2019-04-23 2019-08-02 努比亚技术有限公司 A kind of method, apparatus, mobile terminal and the storage medium of recognition of face unlock
CN110458134B (en) * 2019-08-17 2020-06-16 南京昀趣互动游戏有限公司 Face recognition method and device
CN110866443B (en) * 2019-10-11 2023-06-16 厦门身份宝网络科技有限公司 Portrait storage method, face recognition equipment and storage medium
CN110991228A (en) * 2019-10-24 2020-04-10 青岛中科智保科技有限公司 Improved PCA face recognition algorithm resistant to illumination influence
CN112767073A (en) * 2021-01-07 2021-05-07 北京码牛科技有限公司 Check-in management control method and device, mobile terminal and storage medium
CN113657297A (en) * 2021-08-20 2021-11-16 华能国际电力股份有限公司上海石洞口第二电厂 Intelligent operation violation identification method and device based on characteristic analysis
CN115457644B (en) * 2022-11-10 2023-04-28 成都智元汇信息技术股份有限公司 Picture identification method and device for obtaining target based on expansion space mapping
CN115840834B (en) * 2023-02-20 2023-05-23 深圳市视美泰技术股份有限公司 Face database quick search method and system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101236598A (en) * 2007-12-28 2008-08-06 北京交通大学 Independent component analysis human face recognition method based on multi- scale total variation based quotient image

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101236598A (en) * 2007-12-28 2008-08-06 北京交通大学 Independent component analysis human face recognition method based on multi- scale total variation based quotient image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Amnon Shashua et al., "The Quotient Image: Class-Based Re-Rendering and Recognition with Varying Illuminations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 2, 2001, pp. 129-139. *
Marius Tico et al., "Fingerprint Recognition Using Wavelet Features," Circuits and Systems, 2001 (ISCAS 2001), The 2001 IEEE International Symposium on, vol. 2, 2001, pp. II-21 to II-24. *


Also Published As

Publication number Publication date
CN101587543A (en) 2009-11-25

Similar Documents

Publication Publication Date Title
CN101587543B (en) Face recognition method
Alif et al. Isolated Bangla handwritten character recognition with convolutional neural network
Li et al. Linestofacephoto: Face photo generation from lines with conditional self-attention generative adversarial networks
Nam et al. Local decorrelation for improved pedestrian detection
Shao et al. A hierarchical scheme of multiple feature fusion for high-resolution satellite scene categorization
Fitzgibbon et al. On affine invariant clustering and automatic cast listing in movies
Karlinsky et al. Using linking features in learning non-parametric part models
Li et al. Uni-perceiver v2: A generalist model for large-scale vision and vision-language tasks
CN104268593A (en) Multiple-sparse-representation face recognition method for solving small sample size problem
Yeap et al. Image forensic for digital image copy move forgery detection
CN108564061A (en) A kind of image-recognizing method and system based on two-dimensional principal component analysis
Zhang et al. Person re-identification by mid-level attribute and part-based identity learning
Tan et al. Style interleaved learning for generalizable person re-identification
CN108898153B (en) Feature selection method based on L21 paradigm distance measurement
Xin et al. Random part localization model for fine grained image classification
CN116704612A (en) Cross-visual-angle gait recognition method based on contrast domain self-adaptive learning
CN103336974A (en) Flower and plant category recognition method based on local constraint sparse characterization
Wang et al. Two-stage multi-scale resolution-adaptive network for low-resolution face recognition
Teng et al. Unimodal face classification with multimodal training
Islam et al. A preliminary study of lower leg geometry as a soft biometric trait for forensic investigation
Wang et al. Human interaction recognition based on sparse representation of feature covariance matrices
Fu et al. Pedestrian detection by feature selected self-similarity features
Liu et al. Attend, correct and focus: a bidirectional correct attention network for image-text matching
Lin et al. Robust person identification with face and iris by modified PUM method
Xue et al. Informed non-convex robust principal component analysis with features

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121205

Termination date: 20170619