CN101916384A - Facial image reconstruction method and device and face recognition system - Google Patents

Facial image reconstruction method and device and face recognition system

Info

Publication number
CN101916384A
CN101916384A (application CN201010268543A)
Authority
CN
China
Prior art keywords
image
target image
illumination
aligned target
standard light
Prior art date
Legal status
Granted
Application number
CN 201010268543
Other languages
Chinese (zh)
Other versions
CN101916384B (en)
Inventor
熊鹏飞
黄磊
刘昌平
Current Assignee
Hanwang Technology Co Ltd
Original Assignee
Hanwang Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hanwang Technology Co Ltd filed Critical Hanwang Technology Co Ltd
Priority to CN2010102685433A priority Critical patent/CN101916384B/en
Publication of CN101916384A publication Critical patent/CN101916384A/en
Application granted granted Critical
Publication of CN101916384B publication Critical patent/CN101916384B/en
Current legal status: Expired - Fee Related

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention discloses a facial image reconstruction method, a facial image reconstruction device and a face recognition system, and relates to the pattern recognition technology. The facial image reconstruction method comprises the following steps of: performing pixel level alignment on a target image; judging the illumination type of the aligned target image; acquiring a reconstruction model of the illumination type of the aligned target image; reconstructing the aligned target image according to the reconstruction model, and acquiring a standard illumination image of the aligned target image; and correcting the standard illumination image of the aligned target image into an original shape, and acquiring a standard illumination image with the same shape as the target image. The facial image reconstruction method, the facial image reconstruction device and the face recognition system are mainly used for face recognition.

Description

Facial image reconstruction method, device, and face recognition system
Technical field
The invention belongs to the field of pattern recognition technology, and in particular relates to a facial image reconstruction method, a facial image reconstruction device, and a face recognition system.
Background art
As an important identity authentication technique, face recognition is widely used in fields such as security, finance, human-computer interaction, information, and education. Because the capture environment is uncontrollable, the illumination conditions of the template image and the test image are rarely identical, which makes current image-based face recognition systems struggle to cope with the effects of illumination variation.
Improvements to face recognition under illumination variation fall mainly into two classes: (1) illumination template subtraction, which decomposes an illuminated face image into an illumination-invariant component and an illumination template; and (2) illumination subspaces, which fit an illumination subspace from face images under varying illumination so that an image under arbitrary illumination can be expressed linearly by that subspace.
In the first class of methods, the illumination template is the projection of the light source onto the face image along the facial normal directions and is not independent of the illumination-invariant component; directly subtracting the template therefore greatly weakens the facial structure and texture information in the resulting illumination-invariant image, often leaving only contour information that cannot describe the overall image features. In the second class of methods, because the illumination subspace of the target image is trained from reference images, the reconstructed target image also depends on those reference images, which causes image distortion; moreover, these methods register images using only the eye positions, and since individual faces differ, the inaccurate alignment makes the reconstruction distortion even more severe.
In the course of realizing the present invention, the inventors found that images under different illumination differ widely in appearance, yet both illumination template subtraction and illumination subspace methods handle all illumination conditions in the same way, so face recognition results under different illumination inevitably vary widely.
Summary of the invention
Embodiments of the invention provide a facial image reconstruction method, device, and face recognition system that can improve face recognition results under illumination variation.
The embodiments of the invention adopt the following technical solutions:
A facial image reconstruction method comprises:
performing pixel-level alignment on a target image;
determining the illumination class of the aligned target image;
obtaining a reconstruction model for the illumination class of the aligned target image; and
reconstructing the aligned target image according to the reconstruction model to obtain a standard illumination image of the aligned target image.
A facial image reconstruction device comprises:
an alignment unit, configured to perform pixel-level alignment on a target image;
an illumination class judging unit, configured to determine the illumination class of the aligned target image;
a reconstruction model acquiring unit, configured to obtain a reconstruction model for the illumination class of the aligned target image; and
a reconstruction unit, configured to reconstruct the aligned target image according to the reconstruction model and obtain a standard illumination image of the aligned target image.
A face recognition system comprises a face recognition device and a facial image reconstruction device, wherein:
the facial image reconstruction device is configured to reconstruct a target image to obtain a standard illumination image of the target image; and
the face recognition device is configured to perform face recognition on the standard illumination image of the target image reconstructed by the facial image reconstruction device.
In the facial image reconstruction method, device, and face recognition system of the embodiments of the invention, performing pixel-level alignment on the target image yields a better alignment than aligning by eye positions alone, so the aligned image does not suffer large distortion. By determining the illumination class of the aligned target image, obtaining the reconstruction model of that class, and reconstructing the aligned target image with this model to obtain its standard illumination image, images with the same illumination projection are grouped into one class and the target image is reconstructed as a whole. Compared with the prior art, this avoids both the loss of image features caused by illumination template subtraction and the reconstruction error introduced by illumination subspaces, and therefore improves face recognition results under illumination variation.
Brief description of the drawings
To describe the technical solutions of the embodiments of the invention more clearly, the accompanying drawings used in the description of the embodiments are briefly introduced below.
Fig. 1 is a schematic flowchart of the facial image reconstruction method provided by an embodiment of the invention;
Fig. 2 is a schematic diagram of face alignment according to an embodiment of the invention;
Fig. 3 is a schematic diagram of illumination classification according to an embodiment of the invention;
Fig. 4 is a flowchart of the facial image reconstruction method of an embodiment of the invention;
Fig. 5 is a flowchart of obtaining a standard illumination image of a specific facial image using the facial image reconstruction method of an embodiment of the invention;
Fig. 6(a) compares the face recognition rates obtained by the method of the embodiment of the invention with those of other algorithms on the Yale B database under different illumination classes;
Fig. 6(b) compares the image appearance obtained by the method of the embodiment of the invention with that of other algorithms on the Yale B database under different illumination classes;
Fig. 7(a) compares face recognition rates with and without face alignment;
Fig. 7(b) compares reconstruction results with and without face alignment, where the first row shows reconstruction without alignment and the second row shows reconstruction with alignment;
Fig. 8 is a functional unit structural diagram of the facial image reconstruction device provided by an embodiment of the invention;
Fig. 9 is a schematic composition diagram of the face recognition system provided by an embodiment of the invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the invention are described below clearly and completely with reference to the accompanying drawings.
As shown in Fig. 1, the facial image reconstruction method provided by the embodiment of the invention comprises:
Step 11: perform pixel-level alignment on the target image.
Specifically, this step may be: deforming a generic face model according to the key points of each training image, in turn obtaining the realistic face model corresponding to each training image; obtaining a standard face model iteratively from the average of the realistic face models of all training images; and deforming the target image according to the standard face model so that the target image is aligned with the standard face model at the pixel level.
Here, the key points are predefined facial pixels. In the present invention, 81 key points are defined on the face, covering the facial contour, eyes, nose, mouth, and other organs, and are used to describe the face shape. The key points can be located automatically with an Active Shape Model (ASM).
The generic face model is derived from a three-dimensional database; it is a head mesh model containing 13,936 vertices and 27,538 faces in total. Some of the sparse points on this generic face model correspond one-to-one with the predefined facial pixels; in the present invention these points are called the model corresponding points.
First, the generic face model is deformed according to the key points of each training image to obtain a set of realistic face models.
Suppose the key point sequence of a training image is $p_I$ and the corresponding point sequence of the generic face model is $p_M$. An RBF (radial basis function) warp can be applied so that, under the warping function $F(x)$, $p_M$ coincides with $p_I$:
$$F(p_M) = p_I, \qquad F(x) = \sum_{i=1}^{n} \omega_i\, h(\lVert x - x_i \rVert),$$
where $h(\cdot)$ is the selected RBF kernel and $\lVert x - x_i \rVert$ denotes the distance between points $x$ and $x_i$.
Given the corresponding point sets $p_M$ and $p_I$, the deformation weights $\omega$ can be computed from
$$p_I^j = \sum_{i=1}^{n} \omega_i\, h\!\left(\lVert p_M^j - p_M^i \rVert\right).$$
Once the warping function is obtained, the remaining sparse points $p_M'$ of the generic face model (other than the model corresponding points) are substituted into the warping function to give the corresponding image pixels $p_I'$ after model deformation, thereby achieving a complete match between the points of the generic face model and the pixels of the training image:
$$p_I' = \sum_{i=1}^{n} \omega_i\, h\!\left(\lVert p_M' - p_M^i \rVert\right).$$
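The following is a minimal numpy sketch of this RBF warping step. The Gaussian kernel, its bandwidth, and the random test points are illustrative assumptions; the patent does not specify the kernel $h(\cdot)$.

```python
import numpy as np

def rbf_kernel(r, sigma=30.0):
    # Gaussian radial basis kernel; the actual kernel h(.) is not specified
    # in the patent text, so this choice is an assumption.
    return np.exp(-(r ** 2) / (2.0 * sigma ** 2))

def fit_rbf_warp(p_M, p_I, sigma=30.0):
    """Solve F(p_M) = p_I for the weights omega (one weight column per coordinate)."""
    # Pairwise distances between the model corresponding points (n x n).
    d = np.linalg.norm(p_M[:, None, :] - p_M[None, :, :], axis=-1)
    H = rbf_kernel(d, sigma)                            # kernel matrix
    omega = np.linalg.lstsq(H, p_I, rcond=None)[0]      # (n, 2) weights
    return omega

def apply_rbf_warp(points, p_M, omega, sigma=30.0):
    """Warp arbitrary sparse model points to their image positions."""
    d = np.linalg.norm(points[:, None, :] - p_M[None, :, :], axis=-1)
    return rbf_kernel(d, sigma) @ omega                 # p_I' = sum_i omega_i * h(||p' - p_M^i||)

# Hypothetical usage with 81 key points in 2D.
p_M = np.random.rand(81, 2) * 64                        # model corresponding points
p_I = p_M + np.random.randn(81, 2)                      # detected image key points
omega = fit_rbf_warp(p_M, p_I)
dense_model_pts = np.random.rand(500, 2) * 64           # remaining sparse model points
warped = apply_rbf_warp(dense_model_pts, p_M, omega)
```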
Because the feature points are chosen at the facial organs and the contour, the key facial positions can be aligned accurately while the other, relatively smooth facial regions are constrained. The deformed generic face model is the three-dimensional realistic face model corresponding to each training image.
It should be noted that, since the pose of each training image may differ, to ensure that all realistic face models have the same pose, the pose of each training image must be corrected before deforming the generic face model.
The pose of a face image can rotate in three directions. For a set of key points $p_I$, rotating them to frontal feature points $p_I^n$ gives
$$p_I^n = s \cdot R(\alpha, \beta, \gamma) \cdot p_I + t,$$
where $s$, $R$, and $t$ are the scale, rotation, and translation factors of the corresponding rigid transform, and $\alpha$, $\beta$, $\gamma$ are the rotation angles about the three axes. Since the generic face model is fully frontal, the distance between the aligned feature points $p_I^n$ and the model corresponding points $p_M$ is minimal, so the pose rotation angles can be obtained by minimizing the corresponding energy function:
$$(\alpha, \beta, \gamma) = \arg\min_{\alpha, \beta, \gamma} \lVert p_I^n - p_M \rVert,$$
where the energy function measures the distance between the first two coordinates of the model corresponding points and the image key points. The face pose rotation angles are easily obtained with the Levenberg-Marquardt (LM) algorithm, so the face is corrected to a frontal pose.
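A hedged sketch of this pose fit using scipy's Levenberg-Marquardt routine. The exact parameterization (here: scale, three rotation angles, and a 2-D translation applied to the 3-D model points, whose first two coordinates are matched against the 2-D image key points) is an assumption, not a detail fixed by the patent.

```python
import numpy as np
from scipy.optimize import least_squares

def rotation_matrix(alpha, beta, gamma):
    # Rotations about the x, y, z axes composed together.
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def estimate_pose(p_I2d, p_M3d):
    """Fit s, (alpha, beta, gamma), t so that the first two coordinates of the
    transformed model points match the detected image key points."""
    def residual(x):
        s, alpha, beta, gamma, tx, ty = x
        proj = s * (rotation_matrix(alpha, beta, gamma) @ p_M3d.T).T[:, :2] + np.array([tx, ty])
        return (proj - p_I2d).ravel()
    x0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
    res = least_squares(residual, x0, method="lm")      # Levenberg-Marquardt
    return res.x

# Hypothetical usage with 81 key points.
p_M3d = np.random.rand(81, 3)
p_I2d = p_M3d[:, :2] + 0.01 * np.random.randn(81, 2)
s, a, b, g, tx, ty = estimate_pose(p_I2d, p_M3d)
```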
Then, after all realistic face models are obtained, the standard face model is obtained by iteration as follows (a sketch of the iteration is given after the list):
(1) according to the pose-corrected face shape of each training image, deform the generic face model to each face shape to obtain the realistic face model corresponding to each training image;
(2) compute the mean of all realistic face models to obtain an average face model, and deform the pose-corrected face shape of each training image to the average face model;
(3) repeat step (2) until the difference between two successive average face models is below a threshold; the average face model at that point is the standard face model.
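A compact sketch of this iteration, assuming a hypothetical `deform_to_shape(reference_model, face_shape)` helper that performs the RBF deformation described above and returns a realistic model as a numpy array; the threshold and iteration cap are illustrative.

```python
import numpy as np

def train_standard_face_model(generic_model, training_shapes, deform_to_shape,
                              threshold=1e-3, max_iters=20):
    """Iteratively average the per-image realistic models until the mean stabilizes."""
    reference = generic_model
    for _ in range(max_iters):
        realistic = [deform_to_shape(reference, shape) for shape in training_shapes]
        mean_model = np.mean(realistic, axis=0)
        if np.linalg.norm(mean_model - reference) < threshold:
            return mean_model            # converged: this is the standard face model
        reference = mean_model
    return reference
```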
Next, the target image is deformed according to the standard face model so that the target image is aligned with the standard face model at the pixel level.
Because the standard face model reduces the difference between the face model and the face shapes, deforming the target image according to the standard face model and aligning it with the standard face model at the pixel level does not cause large distortion of the aligned face shape.
Referring to Fig. 2, which is a schematic diagram of face alignment according to the embodiment of the invention: the first row of Fig. 2 shows the generic face model, a realistic face model, and the standard face model; the following four rows correspond to four different faces, and each row shows, in order, the original face image, the realistic face model, the eye-aligned image, and the pixel-level aligned image. It can be seen from Fig. 2 that the standard face model is close to the face shapes, and that the pixel-level aligned face images achieve one-to-one pixel correspondence, giving a better alignment than images aligned only by eye position.
Step 12: determine the illumination class of the aligned target image.
According to the projection of illumination onto the image, images with the same illumination projection distribution are grouped into one class, which makes it convenient to fit the reconstruction relation for each illumination class. Illumination appears as the projection of the light source along the facial normal directions; in the facial regions whose normals coincide with the light source direction, the illumination projection intensity is greatest and an illumination center is formed. The illumination center is approximately the center of the brighter region of the image, and the same illumination center corresponds to approximately the same illumination direction and the same light-and-shade pattern, so the present invention classifies illumination by the position of the illumination center.
First, the aligned target image is segmented into bright and dark regions by binarization about its pixel mean.
Suppose the face image $I$ has size $h \times w$ and its pixel mean is
$$\bar{I} = \frac{1}{h \cdot w} \sum_{x,y} I(x, y).$$
The binary image $I_b$ is then
$$I_b(x, y) = \begin{cases} 1, & I(x, y) \ge \bar{I} \\ 0, & I(x, y) < \bar{I} \end{cases}$$
where bright pixels take the value 1 and dark pixels take the value 0.
Next, the centroid of all bright-valued pixels in the aligned target image is computed and taken as the illumination center of the aligned target image.
After binarization, the pixels with value 1 are the brighter points of the original image, and the centroid of all 1-valued pixels is the illumination center of the image.
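A minimal sketch of the binarization and illumination-center computation for a grayscale image stored as a numpy array; the (x, y) ordering of the returned centroid is a convention chosen here.

```python
import numpy as np

def illumination_center(image):
    """Binarize a face image about its mean and return the centroid of the bright pixels."""
    binary = (image >= image.mean()).astype(np.uint8)   # bright -> 1, dark -> 0
    ys, xs = np.nonzero(binary)
    return xs.mean(), ys.mean()                         # illumination center (x, y)
```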
Finally, the illumination class of the aligned target image is determined from the position of the illumination center.
Because facial structures differ, the illumination centers of face images under the same illumination are not exactly identical. The present invention therefore divides the image into blocks by a fixed rule and assigns images whose illumination centers fall in the same block to the same illumination class.
Suppose the face image is evenly divided into $n_h \times n_w$ blocks; the illumination class of the face image is then
$$c_L = \{\, c \mid P_L \in I_c \,\},$$
where $I_c$ is the $c$-th sub-image block, $P_L$ is the illumination center of the image, and $c_L$ is its illumination class. The sub-image block size is determined experimentally. In theory, the smaller the block, the finer the illumination classes and the better the classification; however, because images differ from one another, the illumination centers of images under the same illumination deviate slightly, so too small a block instead introduces classification errors. In the present invention all images are normalized to 64×64 pixels and the sub-image blocks are 8×8.
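A sketch of mapping the illumination center to a class index under the 64×64 image and 8×8 block setting described above; the row-major numbering of blocks is an assumption.

```python
def illumination_class(center_xy, image_size=64, block_size=8):
    """Map the illumination center to the index of the sub-image block containing it."""
    x, y = center_xy
    n_w = image_size // block_size                 # blocks per row (8 for 64x64 / 8x8)
    col = min(int(x) // block_size, n_w - 1)
    row = min(int(y) // block_size, n_w - 1)
    return row * n_w + col                         # class index c_L in [0, 63]
```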
For a standard illumination image, the 0 and 1 values of its binary image are evenly distributed, so the illumination center is generally the image center, consistent with the standard illumination direction. For a single-light-source image, the projection of the light source is not a point-wise brightness change but a brightening of a certain region, so the illumination center reflects the direction of the light source. For a multi-light-source image, the illumination center of the binary image is the center of the multiple light sources. Since the purpose of the illumination classification in the present invention is not to estimate the illumination direction accurately but to group images with the same illumination projection into one class, any illumination condition can be classified correctly.
Referring to Fig. 3, which is a schematic diagram of the illumination classification of the embodiment of the invention: Fig. 3 shows the face blocking and samples of images of different illumination classes, where each sample corresponds to an original image and its binary image. It can be seen from Fig. 3 that the illumination classification results of the embodiment of the invention are basically consistent with the original illumination directions of the images.
Step 13: obtain the reconstruction model for the illumination class of the aligned target image.
To obtain a reconstruction model for each illumination class, the embodiment of the invention classifies all training images by illumination class and pairs each training image in a class with a corresponding standard illumination image, obtaining a set of training image pairs for that illumination class; the reconstruction relation between original illumination images and standard illumination images is then fitted from these pairs.
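A small sketch of assembling the per-class training pairs, assuming a `classify` callable that composes the illumination-center and block-classification helpers sketched earlier (an assumption; the patent does not prescribe the data structures).

```python
from collections import defaultdict

def group_training_pairs(originals, standards, classify):
    """Group (original, standard-illumination) training pairs by illumination class.

    `classify` maps an image to its class index, e.g.
    lambda img: illumination_class(illumination_center(img))."""
    pairs_by_class = defaultdict(list)
    for orig, std in zip(originals, standards):
        pairs_by_class[classify(orig)].append((orig, std))
    return pairs_by_class
```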
Specifically, the set of training images and the set of corresponding standard illumination images of the class are first used to build an original illumination subspace and a standard illumination subspace, respectively. The training images are projected onto the original illumination subspace with a principal component analysis (PCA) dimensionality reduction to obtain one set of projection coefficients, and the corresponding standard illumination images are projected onto the standard illumination subspace with PCA dimensionality reduction to obtain another set of projection coefficients. Finally, the two sets of projection coefficients are fitted linearly to obtain the reconstruction matrix and reconstruction error between them, which constitute the reconstruction model for this illumination class. For any image, its original illumination image $I_t^L$ (where $t$ is any index in the image sequence $1 \ldots n$) and its standard illumination image $I_t^N$ are approximately linearly related:
$$I_t^N = F(I_t^L) = R \cdot I_t^L + E,$$
where $F(x) = R\,x + E$ is the correspondence relation, $R$ is the reconstruction matrix, and $E$ is the reconstruction error. Suppose $I^L = [I_1^L, I_2^L, \ldots, I_n^L]$ are all the original illumination images of illumination class $L_c$ and $I^N = [I_1^N, I_2^N, \ldots, I_n^N]$ are the corresponding standard illumination images, where $n$ is the number of images in this illumination class and $R_c$ and $E_c$ form the corresponding reconstruction relation; then
$$[I_1^N, I_2^N, \ldots, I_n^N] = F(I^L) = R_c \cdot [I_1^L, I_2^L, \ldots, I_n^L] + E_c.$$
If the number of image pixels is $m$, the reconstruction matrix $R_c$ has dimension $m \times m$ and the reconstruction error $E_c$ has dimension $m \times 1$. $R_c$ and $E_c$ can be computed from the sets $I^L$ and $I^N$ by least squares. However, in most cases the number of pixels $m$ is far greater than the number of images $n$ (i.e., $I^N$ cannot be of full row rank), so the computed $R_c$ and $E_c$ can have large errors; the images therefore need dimensionality reduction.
According to PCA theory, any image is projected to a point in its corresponding subspace. Building subspaces from the original illumination images and from the standard illumination images respectively gives
$$I_t^N = \overline{I^N} + \sum_i \alpha_i B_i^N, \qquad I_t^L = \overline{I^L} + \sum_i \beta_i B_i^L,$$
where $\alpha$, $\beta$ and $B^N$, $B^L$ are the projection coefficients and projection bases of the two image sets, respectively. The projected coefficients $\alpha$ and $\beta$ describe the standard illumination image and the original illumination image respectively, so the original images are reduced in dimension. Let the projection coefficients be $k$-dimensional; since the original illumination subspace and the standard illumination subspace correspond to each other, the two sets of projection coefficients can have the same dimension.
Let the projection-coefficient reconstruction matrix and reconstruction error be $R_p$ and $E_p$; the correspondence between original and standard illumination images can then be converted into a relation between the corresponding projection coefficients:
$$\alpha' = \begin{bmatrix} \alpha \\ 1 \end{bmatrix} = \begin{bmatrix} R_p & E_p \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \beta \\ 1 \end{bmatrix} = R_p' \cdot \beta', \qquad \text{i.e. } \alpha = R_p \cdot \beta + E_p,$$
where $R_p'$ has dimension $(k+1) \times (k+1)$. Since after PCA dimensionality reduction the number of images $n$ is comparable to $k+1$, the reconstruction matrix can be obtained directly as
$$R_p' = \begin{bmatrix} R_p & E_p \\ 0 & 1 \end{bmatrix} = \alpha' \cdot \beta'^{T} \cdot \left( \beta' \cdot \beta'^{T} \right)^{-1}.$$
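A hedged numpy sketch of fitting one class's reconstruction model by PCA plus least squares, following the equations above. The number of retained components k, the use of SVD for the PCA bases, and the pseudo-inverse are illustrative choices, not details fixed by the patent.

```python
import numpy as np

def fit_reconstruction_model(I_L, I_N, k=20):
    """Fit the reconstruction model of one illumination class.

    I_L, I_N: (m, n) matrices whose columns are the n vectorized original-light
    images of the class and their standard-light counterparts."""
    mean_L = I_L.mean(axis=1, keepdims=True)
    mean_N = I_N.mean(axis=1, keepdims=True)
    # PCA bases of the two subspaces via SVD of the centered image matrices.
    B_L = np.linalg.svd(I_L - mean_L, full_matrices=False)[0][:, :k]
    B_N = np.linalg.svd(I_N - mean_N, full_matrices=False)[0][:, :k]
    beta = B_L.T @ (I_L - mean_L)             # (k, n) original-light coefficients
    alpha = B_N.T @ (I_N - mean_N)            # (k, n) standard-light coefficients
    ones = np.ones((1, I_L.shape[1]))
    beta_aug = np.vstack([beta, ones])        # beta' = [beta; 1]
    alpha_aug = np.vstack([alpha, ones])      # alpha' = [alpha; 1]
    # R_p' = alpha' * beta'^T * (beta' * beta'^T)^-1  (least-squares fit)
    R_p_aug = alpha_aug @ beta_aug.T @ np.linalg.pinv(beta_aug @ beta_aug.T)
    return {"mean_L": mean_L, "mean_N": mean_N, "B_L": B_L, "B_N": B_N, "R_p_aug": R_p_aug}
```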
Step 14: reconstruct the aligned target image according to the reconstruction model to obtain the standard illumination image of the aligned target image.
Specifically, this step may be: projecting the aligned target image onto the original illumination subspace of its illumination class to obtain its original illumination projection coefficients; using these coefficients with the reconstruction model of the class to obtain its standard illumination projection coefficients; and finally reconstructing the standard illumination image of the aligned target image from the standard illumination image subspace of the class using the standard illumination projection coefficients.
For an input image $I_{in}^L$, project it onto the original illumination subspace of its class to obtain the original illumination projection coefficients $\beta_{in}$, then compute the corresponding standard illumination projection coefficients from the class's reconstruction model, $\alpha_{in} = R_p \cdot \beta_{in} + E_p$, so that the reconstructed standard illumination image is
$$I_{in}^N = \overline{I^N} + \sum_i \alpha_{in,i}\, B_i^N.$$
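A matching sketch of this reconstruction step for one aligned input image, assuming the model dictionary returned by the fitting sketch above (hypothetical field names).

```python
import numpy as np

def reconstruct_standard_illumination(image_vec, model):
    """Map an aligned image vector to its standard-illumination version."""
    beta = model["B_L"].T @ (image_vec.reshape(-1, 1) - model["mean_L"])   # original-light coefficients
    beta_aug = np.vstack([beta, [[1.0]]])
    alpha = (model["R_p_aug"] @ beta_aug)[:-1]                             # standard-light coefficients
    return (model["mean_N"] + model["B_N"] @ alpha).ravel()                # reconstructed image vector
```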
Thus, by performing pixel-level alignment on the target image, the facial image reconstruction method of the embodiment of the invention obtains a better alignment than aligning by eye positions alone, so the aligned image does not suffer large distortion. By determining the illumination class of the aligned target image, obtaining the reconstruction model of that class, and reconstructing the aligned target image with this model to obtain its standard illumination image, images with the same illumination projection are grouped into one class and the target image is reconstructed as a whole. Compared with the prior art, this avoids both the loss of image features caused by illumination template subtraction and the reconstruction error introduced by illumination subspaces, and therefore improves face recognition results under illumination variation.
Further, the facial image reconstruction method of the embodiment of the invention may also comprise:
Step 15: correct the standard illumination image of the aligned target image back to the original shape to obtain a standard illumination image with the same shape as the target image.
Because the target image was aligned to the standard face model at the pixel level before reconstruction, the pixels of the reconstructed standard illumination image correspond one-to-one with the sparse points of the standard face model. It is therefore necessary to deform the reconstructed image according to a radial basis function (RBF) so that its key points coincide with the key points of the target image, thereby obtaining a standard illumination image with the same shape as the target image before alignment. This further improves face recognition results under illumination variation.
In summary, the flow of the facial image reconstruction method of the embodiment of the invention, shown in Fig. 4, can comprise the following steps:
Step 101: train the generic face model with all training images to obtain the standard face model, and align the target image of arbitrary illumination with this standard face model at the pixel level.
Step 102: determine the illumination class of all aligned training images and of the target image.
Step 103: establish training image pairs for each illumination class and obtain the reconstruction models.
Specifically, for each illumination class, a set of training images and the set of corresponding standard illumination images are used to build the original illumination subspace and the standard illumination subspace of that class, respectively; the training images and the standard illumination images are projected onto their respective subspaces and reduced with PCA to obtain two sets of projection coefficients, and the reconstruction relation trained from these two sets of coefficients gives the reconstruction model of the class.
Step 104: select the reconstruction model of the corresponding illumination class to reconstruct the aligned target image and obtain the standard illumination image of the aligned target image.
Step 105: perform shape correction on the reconstructed image to obtain a standard illumination image with the same shape as the target image.
Referring to Fig. 5, which shows the flow of obtaining a standard illumination image for a specific facial image with the facial image reconstruction method of the embodiment of the invention: the processing comprises image alignment, standard illumination image reconstruction, and shape correction. Before processing, the generic face model must be trained into the standard face model from the training images, and the reconstruction models of the different illumination classes must be obtained by training the reconstruction relations from training images under different illumination.
The facial image reconstruction method of the embodiment of the invention was verified experimentally on the Yale B database. Referring to Fig. 6(a) and 6(b): Fig. 6(a) compares the face recognition rates obtained by the method of the embodiment of the invention with those of other algorithms on the Yale B database under different illumination classes, and Fig. 6(b) compares the image appearance. The compared algorithms include image brightness adjustment, Discrete Cosine Transform (DCT), Shadow Illuminator Process (SIP), and Quotient Image (QI). The Yale B database is divided into five subsets according to illumination direction; in the experiment the first subset is used as the template and the other subsets are used for testing. Fig. 6(a) and 6(b) show, in order, the original image, the brightness-adjusted image, the DCT image, the SIP image, the QI image, and the standard illumination image obtained by the embodiment of the invention, together with the recognition results under the different illumination directions. As can be seen from Fig. 6(a) and 6(b), in both recognition rate and image appearance, the facial image reconstruction method of the embodiment of the invention achieves the best reconstruction effect among the compared methods.
Referring to Fig. 7(a) and 7(b): Fig. 7(a) compares face recognition rates with and without face alignment, and Fig. 7(b) compares reconstruction results with and without face alignment, where the first row shows reconstruction without alignment and the second row shows reconstruction with alignment. As can be seen from Fig. 7(a) and 7(b), the face alignment method provided by the invention achieves more effective image reconstruction and reduces image distortion.
It should be noted that the face alignment method and illumination classification method provided in the embodiments of the invention can also be used in other image processing tasks and are not limited to the facial image reconstruction method of the embodiments of the invention.
Based on the facial image reconstruction method of the embodiments of the invention, the embodiments of the invention also provide a facial image reconstruction device and a face recognition system.
Referring to Fig. 8, the facial image reconstruction device provided by the embodiment of the invention comprises:
an alignment unit 81, configured to perform pixel-level alignment on a target image;
an illumination class judging unit 82, configured to determine the illumination class of the aligned target image;
a reconstruction model acquiring unit 83, configured to obtain the reconstruction model for the illumination class of the aligned target image; and
a reconstruction unit 84, configured to reconstruct the aligned target image according to the reconstruction model and obtain the standard illumination image of the aligned target image.
Further, still referring to Fig. 8, the facial image reconstruction device provided by the embodiment of the invention may also comprise:
a correcting unit 85, configured to correct the standard illumination image of the aligned target image obtained by the reconstruction unit 84 back to the original shape, obtaining a standard illumination image with the same shape as the target image.
Referring to Fig. 9, the face recognition system provided by the embodiment of the invention comprises the facial image reconstruction device 80 and the face recognition device 90 of the embodiment of the invention, wherein:
the facial image reconstruction device 80 is configured to reconstruct a target image to obtain the standard illumination image of the target image; specifically, it may comprise the alignment unit 81, illumination class judging unit 82, reconstruction model acquiring unit 83, and reconstruction unit 84 shown in Fig. 8, and optionally the correcting unit 85;
the face recognition device 90 is configured to perform face recognition on the standard illumination image of the target image reconstructed by the facial image reconstruction device 80.
For the principles of the functional units in the facial image reconstruction device and the face recognition system of the embodiments of the invention, reference may be made to the method embodiment described with reference to Fig. 1; details are not repeated here.
In the facial image reconstruction device and face recognition system of the embodiments of the invention, the alignment unit performs pixel-level alignment on the target image, which obtains a better alignment than aligning by eye positions alone, so the aligned image does not suffer large distortion. The illumination class judging unit determines the illumination class of the aligned target image, the reconstruction model acquiring unit obtains the reconstruction model of that class, and the reconstruction unit reconstructs the aligned target image with this model to obtain its standard illumination image, so images with the same illumination projection are grouped into one class and the target image is reconstructed as a whole. Compared with the prior art, this avoids both the loss of image features caused by illumination template subtraction and the reconstruction error introduced by illumination subspaces, and therefore improves face recognition results under illumination variation.
The above specific embodiments are not intended to limit the present invention. For those skilled in the art, any reconstruction of facial images achieved through the face alignment and illumination classification proposed by the present invention, without departing from the principles of the invention, shall fall within the protection scope of the present invention.

Claims (12)

1. A facial image reconstruction method, characterized by comprising:
performing pixel-level alignment on a target image;
determining the illumination class of the aligned target image;
obtaining a reconstruction model for the illumination class of the aligned target image; and
reconstructing the aligned target image according to the reconstruction model to obtain a standard illumination image of the aligned target image.
2. The method according to claim 1, characterized in that performing pixel-level alignment on the target image comprises:
deforming a generic face model according to the key points of each training image, in turn obtaining the realistic face model corresponding to each training image;
obtaining a standard face model iteratively from the average of the realistic face models of all training images; and
deforming the target image according to the standard face model so that the target image is aligned with the standard face model at the pixel level.
3. The method according to claim 2, characterized in that, before deforming the generic face model according to the key points of each training image, performing pixel-level alignment on the target image further comprises:
adjusting each training image to a frontal pose.
4. The method according to claim 1, characterized in that determining the illumination class of the aligned target image comprises:
segmenting the aligned target image into bright and dark regions by binarization according to its pixel mean;
computing the centroid of all bright-valued pixels in the aligned target image as the illumination center of the aligned target image; and
determining the illumination class of the aligned target image according to the position of the illumination center.
5. The method according to claim 4, characterized in that determining the illumination class of the aligned target image further comprises:
dividing the target image into blocks by a fixed rule;
wherein determining the illumination class of the aligned target image according to the position of the illumination center is specifically:
determining the illumination class of the aligned target image according to the block in which the illumination center is located.
6. The method according to claim 1, characterized in that obtaining the reconstruction model for the illumination class of the aligned target image comprises:
classifying all training images by illumination class;
pairing each training image of the illumination class of the aligned target image with a corresponding standard illumination image;
building an original illumination subspace and a standard illumination subspace from the set of training images of the illumination class of the aligned target image and the set of corresponding standard illumination images, respectively;
projecting the set of training images and the set of corresponding standard illumination images onto their respective subspaces, performing principal component analysis dimensionality reduction, and obtaining two sets of projection coefficients; and
linearly fitting the two sets of projection coefficients to obtain the reconstruction model for the illumination class of the aligned target image.
7. The method according to claim 1, characterized in that reconstructing the aligned target image according to the reconstruction model to obtain the standard illumination image of the aligned target image comprises:
projecting the aligned target image onto the original illumination subspace of its illumination class to obtain the original illumination projection coefficients of the aligned target image;
obtaining the standard illumination projection coefficients of the aligned target image from the reconstruction model using the original illumination projection coefficients; and
reconstructing the standard illumination image of the aligned target image from the standard illumination image subspace of the illumination class of the aligned target image using the standard illumination projection coefficients.
8. The method according to claim 1, characterized in that the method further comprises:
correcting the standard illumination image of the aligned target image back to the original shape to obtain a standard illumination image with the same shape as the target image.
9. The method according to claim 8, characterized in that correcting the standard illumination image of the aligned target image back to the original shape comprises:
deforming the standard illumination image of the aligned target image according to a radial basis function so that the key points of the standard illumination image of the aligned target image coincide with the key points of the target image before alignment.
10. A facial image reconstruction device, characterized by comprising:
an alignment unit, configured to perform pixel-level alignment on a target image;
an illumination class judging unit, configured to determine the illumination class of the aligned target image;
a reconstruction model acquiring unit, configured to obtain the reconstruction model for the illumination class of the aligned target image; and
a reconstruction unit, configured to reconstruct the aligned target image according to the reconstruction model and obtain the standard illumination image of the aligned target image.
11. The device according to claim 10, characterized in that the device further comprises:
a correcting unit, configured to correct the standard illumination image of the aligned target image obtained by the reconstruction unit back to the original shape, obtaining a standard illumination image with the same shape as the target image.
12. A face recognition system, characterized by comprising a face recognition device and a facial image reconstruction device, wherein:
the facial image reconstruction device is configured to reconstruct a target image to obtain a standard illumination image of the target image; and
the face recognition device is configured to perform face recognition on the standard illumination image of the target image reconstructed by the facial image reconstruction device.
CN2010102685433A 2010-09-01 2010-09-01 Facial image reconstruction method and device and face recognition system Expired - Fee Related CN101916384B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010102685433A CN101916384B (en) 2010-09-01 2010-09-01 Facial image reconstruction method and device and face recognition system


Publications (2)

Publication Number Publication Date
CN101916384A 2010-12-15
CN101916384B CN101916384B (en) 2012-11-28

Family

ID=43323893

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010102685433A Expired - Fee Related CN101916384B (en) 2010-09-01 2010-09-01 Facial image reconstruction method and device and face recognition system

Country Status (1)

Country Link
CN (1) CN101916384B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6701026B1 (en) * 2000-01-26 2004-03-02 Kent Ridge Digital Labs Method and apparatus for cancelling lighting variations in object recognition
CN101046847A (en) * 2007-04-29 2007-10-03 中山大学 Human face light alignment method based on secondary multiple light mould
CN101261678A (en) * 2008-03-18 2008-09-10 中山大学 A method for normalizing face light on feature image with different size

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhao Yingnan et al., "Preprocessing for Face Recognition under Varying Illumination," Journal of Jishou University (Natural Science Edition), vol. 28, no. 2, pp. 63-66, 25 March 2007. *
Liu Lihua, "A Survey of Research on the Face Illumination Problem," Computer Development & Applications, vol. 21, no. 4, pp. 36-39, 5 April 2008. *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104424483A (en) * 2013-08-21 2015-03-18 中移电子商务有限公司 Face image illumination preprocessing method, face image illumination preprocessing device and terminal
CN108932458A (en) * 2017-05-24 2018-12-04 上海云从企业发展有限公司 Restore the facial reconstruction method and device of glasses occlusion area
CN108932458B (en) * 2017-05-24 2020-09-11 恒睿(重庆)人工智能技术研究院有限公司 Face reconstruction method and device for recovering glasses shielding area
CN107945137A (en) * 2017-12-06 2018-04-20 深圳云天励飞技术有限公司 Method for detecting human face, electronic equipment and storage medium
CN108537199A (en) * 2018-04-18 2018-09-14 西安第六镜网络科技有限公司 Based on the 3D facial image correction gain apparatus rebuild and method
CN108946349A (en) * 2018-07-23 2018-12-07 广州跨行网络科技有限公司 A kind of elevator control system and its implementation method of recognition of face and cloud video intercommunication
CN109214324A (en) * 2018-08-27 2019-01-15 曜科智能科技(上海)有限公司 Most face image output method and output system based on polyphaser array
CN109087429A (en) * 2018-09-19 2018-12-25 重庆第二师范学院 The method of library ticket testimony of a witness consistency check based on face recognition technology
CN109087429B (en) * 2018-09-19 2020-12-04 重庆第二师范学院 Method for checking consistency of library book-borrowing testimony of witness based on face recognition technology
CN109086752A (en) * 2018-09-30 2018-12-25 北京达佳互联信息技术有限公司 Face identification method, device, electronic equipment and storage medium
CN111325851A (en) * 2020-02-28 2020-06-23 腾讯科技(深圳)有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN112434616A (en) * 2020-11-26 2021-03-02 成都新希望金融信息有限公司 User classification method and device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN101916384B (en) 2012-11-28

Similar Documents

Publication Publication Date Title
CN101916384B (en) Facial image reconstruction method and device and face recognition system
CN101499128B (en) Three-dimensional human face action detecting and tracing method based on video stream
CN112418074B (en) Coupled posture face recognition method based on self-attention
CN109359526B (en) Human face posture estimation method, device and equipment
CN103824050B (en) A kind of face key independent positioning method returned based on cascade
CN101714262B (en) Method for reconstructing three-dimensional scene of single image
US8811744B2 (en) Method for determining frontal face pose
CN110490158B (en) Robust face alignment method based on multistage model
CN104504376A (en) Age classification method and system for face images
CN105550658A (en) Face comparison method based on high-dimensional LBP (Local Binary Patterns) and convolutional neural network feature fusion
CN105069746A (en) Video real-time human face substitution method and system based on partial affine and color transfer technology
CN104700076A (en) Face image virtual sample generating method
CN104537630A (en) Method and device for image beautifying based on age estimation
CN106303233A (en) A kind of video method for secret protection merged based on expression
CN101751689A (en) Three-dimensional facial reconstruction method
CN102043966B (en) Face recognition method based on combination of partial principal component analysis (PCA) and attitude estimation
Wu et al. Facial gender classification using shape-from-shading
CN100403333C (en) Personal identity verification process and system
CN106096551A (en) The method and apparatus of face part Identification
Hsu et al. A novel eye center localization method for multiview faces
CN102799872A (en) Image processing method based on face image characteristics
Nederhouser et al. The deleterious effect of contrast reversal on recognition is unique to faces, not objects
CN105426882A (en) Method for rapidly positioning human eyes in human face image
CN103593639A (en) Lip detection and tracking method and device
CN102236786A (en) Light adaptation human skin colour detection method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121128
