CN105243386A - Face living judgment method and system - Google Patents

Face living judgment method and system

Info

Publication number
CN105243386A
Authority
CN
China
Prior art keywords
face
eye
eye image
sight line
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410327039.4A
Other languages
Chinese (zh)
Other versions
CN105243386B (en)
Inventor
黄磊 (Huang Lei)
蔡利君 (Cai Lijun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hanwang Technology Co Ltd
Original Assignee
Hanwang Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hanwang Technology Co Ltd filed Critical Hanwang Technology Co Ltd
Priority to CN201410327039.4A
Publication of CN105243386A
Application granted
Publication of CN105243386B
Legal status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

One aspect of the invention provides a face liveness judgment method for distinguishing a real face from a deceptive face. The method comprises the following steps: acquiring a certain number of face images; acquiring an eye image and its features from each face image; predicting the line of sight from the acquired eye-image features and a line-of-sight estimation model to obtain a corresponding number of line-of-sight positions; quantizing those line-of-sight positions according to a line-of-sight dictionary to establish a line-of-sight histogram; and obtaining an information entropy from the line-of-sight histogram to make the liveness judgment, wherein the face is judged to be real if the entropy is greater than 0, and deceptive otherwise. The method requires neither additional equipment nor the user's cooperation. Because line-of-sight behavior is uncertain and difficult for others to capture through a camera, a real face and a deceptive face can be distinguished effectively.

Description

Face liveness determination method and system
Technical field
The present invention relates to the field of identity authentication technology, and in particular to a face liveness determination method and system.
Background technology
Face recognition is an effective identity authentication technology, but as it has come into wide use, various methods of impersonating a face to pass authentication have appeared; authentication with such a fabricated, deceptive face is called a "face attack". Common forms of face attack include deceptive identification with photos, video replay, or 3D models. Photos and video replay are the most common attack patterns, since the relevant data of a legitimate user can easily be obtained from mobile devices or surveillance cameras. With the development of modern technology, synthesizing a 3D face and obtaining a model are no longer difficult; for example, the service of ThatsMyFace.com can reconstruct a 3D face and produce a customized 3D model from one uploaded frontal photo and one side photo. For information security, therefore, adding a liveness detection function to face recognition systems has received increasing attention.
Analysis of common face attack patterns shows that, compared with a real face, a photo face is planar and suffers quality loss and blur introduced by secondary acquisition; a video face exhibits phenomena such as LCD reflection; and the motion of a 3D model face is rigid. These are the weaknesses of current face attack patterns, and accordingly current liveness detection methods in face recognition fall into three main classes: motion-based methods, texture-based methods, and methods fusing the two.
Motion-based methods mainly analyze the movement trends in an image frame sequence, for example using optical flow to estimate the motion of different face regions to distinguish a real face from a photo attack, or using optical flow to estimate whether an object's motion is rigid to judge liveness. Besides unconscious head motion, other biological behaviors of a legitimate user, such as blinking, are also used to judge liveness. Texture-based methods rely on the quality degradation and micro-texture changes, such as blur, that a deceptive face undergoes during secondary acquisition, and make the liveness judgment by finding salient discriminative features between a single-frame real face and a deceptive face.
Although these two kinds of methods can resist face attacks to some extent, many problems remain: motion-based methods can resist photo attacks but are essentially ineffective against warped photos and video, while texture-based methods can effectively learn a discriminative model of real versus deceptive faces but lose the information between image frames.
In addition, there are liveness detection methods based on multiple modalities and multiple spectra. Multi-modal liveness detection combines the face with other biometric features, such as posture, voice, fingerprint, and hand shape, for authentication; in general, a multi-modal biometric authentication system resists attacks better than one based on a single feature. Multi-spectral liveness detection exploits the different reflectance of real and deceptive faces under near infrared. Although both classes achieve good liveness detection performance, they require extra equipment, and multi-modal systems may even require the user's cooperation, increasing the user's burden.
Summary of the invention
In view of the above problems, the present invention provides a face liveness determination method for distinguishing a real face from a deceptive face in face recognition, a method that requires neither extra equipment nor the user's cooperation and can effectively distinguish a real face from a deceptive face.
One aspect of the present invention provides a face liveness determination method for distinguishing a real face from a deceptive face in face recognition, comprising: a step of acquiring a certain number of face images; a step of acquiring an eye image of each face image and features of the eye image; a step of performing sight line prediction according to the acquired eye-image features and a sight line estimation model to obtain a corresponding number of gaze positions; a step of quantizing the corresponding number of gaze positions according to a sight line dictionary to establish a sight line histogram; and a step of obtaining an information entropy according to the sight line histogram and making a liveness judgment, wherein if the entropy is greater than 0 the face is judged to be a real face, and otherwise a deceptive face.
The face liveness determination method provided by the invention requires neither extra equipment nor the user's cooperation; because sight line behavior is uncertain and very difficult for others to capture through a camera, it can effectively distinguish a real face from a deceptive face.
Another aspect of the present invention provides a liveness judgment system comprising: an image acquisition unit for collecting a certain number of face images; a feature extraction unit for acquiring an eye image of each face image and features of the eye image; a sight line estimation unit that performs sight line prediction according to the acquired eye-image features and a sight line estimation model to obtain a corresponding number of gaze positions; a sight line statistics unit that quantizes the corresponding number of gaze positions according to a sight line dictionary and establishes a sight line histogram; and a liveness judging unit that obtains an information entropy according to the sight line histogram and makes the liveness judgment, wherein if the entropy is greater than 0 the face is judged to be a real face, and otherwise a deceptive face.
The liveness judgment system provided by the invention exploits the uncertainty of sight line behavior and the difficulty of capturing it through a camera to effectively distinguish a real face from a deceptive face.
Brief description of the drawings
Fig. 1 is a flowchart of the method of building the sight line estimation model according to an embodiment of the present invention;
Fig. 2 shows the data acquisition system used to build the sight line estimation model in an embodiment of the present invention;
Fig. 3 is a schematic diagram of acquiring the eye image and features in an embodiment of the present invention;
Fig. 4 is a flowchart of the method of acquiring the eye image and features in an embodiment of the present invention;
Fig. 5 is a flowchart of the face liveness determination method of an embodiment of the present invention;
Fig. 6 is a schematic diagram of sight line dictionary construction in an embodiment of the present invention;
Fig. 7 is a structural diagram of the face liveness judgment system of an embodiment of the present invention;
Fig. 8 is a detailed structural diagram of the feature extraction unit of the face liveness judgment system shown in Fig. 7.
Detailed description of the embodiments
To help those skilled in the art better understand the technical scheme of the present invention, the face liveness determination method and system provided by the invention are described in detail below with reference to the drawings and specific embodiments. In the drawings, identical or equivalent components carry the same labels. The following is only the preferred mode of the face liveness determination method and system of the present invention; the invention is not limited to the following structure.
The face liveness determination method and system of the present invention serve to distinguish a real face from a deceptive face in face recognition. Because the process of shifting one's gaze is uncertain, sight line behavior is a kind of biological information that is difficult for others to obtain through equipment such as a surveillance camera; at the same time, compared with a deceptive face, the sight line motion of a real face has greater uncertainty. The present invention therefore introduces the sight line into the liveness decision problem to effectively distinguish a real face from a deceptive face. The basic idea of the method and system is to acquire the eye image and its features, perform sight line estimation from the eye-image features and a sight line estimation model, then quantize and aggregate the predicted sight lines, and finally make the liveness judgment, that is, decide whether the identified object is a real face or a deceptive face.
To this end, the present invention first needs to build a suitable sight line estimation model with which sight line changes can be detected well. Existing sight line estimation models fall mainly into appearance-based methods and feature-based methods. Feature-based methods extract local features of the eyes (such as the iris contour, pupil, and glints) and establish a mapping between the features and the viewpoint; however, such methods generally require a high-definition camera and may even require multiple light sources or multiple cameras. Appearance-based methods first locate the eye region and then directly establish a mapping between the eye image and gaze coordinates on the screen, making full use of the eye's sight line information. Considering that the face liveness determination method proposed by the present invention operates under natural light and without relying on extra equipment or light sources (in fact, only a single USB camera with a resolution of 640 × 480 pixels), the present embodiment models the gaze direction with an appearance-based method. Furthermore, considering the micro-texture differences between real and deceptive faces, the present invention first extracts local texture features from the eye image and then establishes the mapping between the local texture feature vector and the viewpoint by a regression model. In the present embodiment, the sight line estimation model is built mainly by having the user gaze in turn at 9 viewpoints on the computer screen, collecting frontal images of the user for the 9 gaze directions, and establishing the mapping between the user's eye-image features and the viewpoints on the screen by a regression model. The building and solving of the sight line estimation model of the present invention are described below with reference to Figs. 1-4.
Fig. 1 is a flowchart of the method of building the sight line estimation model according to an embodiment of the present invention; Fig. 2 shows the data acquisition system used to build the model. As shown in Fig. 1, the building method comprises: step S1, data acquisition; step S2, extraction of the eye image and features; and step S3, building and solving the model.
To build the statistical model for sight line estimation, the relevant data are collected first. The data acquisition system of the present embodiment, shown in Fig. 2(a), consists of one 19-inch liquid crystal display (resolution 1440 × 900 pixels) and one USB camera with a resolution of 640 × 480 pixels. The display shows 9 fixed points, called viewpoints; their positions are shown in Fig. 2(b), with each viewpoint coordinate in pixels. In Fig. 2(b), the labels 1, 2, 3, ..., 9 denote the viewpoints, and the values in the brackets to the lower right, upper right, or lower left of each label give the position of the corresponding viewpoint on the computer screen; for example, the position of the first viewpoint is (88, 83).
In step S1, the subject sits about 50-60 cm from the computer screen and keeps the head as still as possible. During acquisition, the subject is first asked to gaze at the 1st viewpoint (88, 83) while keeping the head fixed and to hold this gaze direction; the frontal images captured by the camera during this period are saved, and saving stops after 30 frames. The subject's gaze is then guided to the 2nd viewpoint (552, 83), and another 30 frames of frontal images are saved; this continues until the 270 image frames for the 9 viewpoints are complete. In the same manner, images of 50 subjects gazing at the 9 viewpoints were collected, 13500 frontal images in total; after manually removing images with closed eyes, 12698 valid images remain.
It should be noted that, as described above, 30 frames may be collected per viewpoint during data acquisition, but any other suitable number of images may be used; the method is not limited to 30 frames. There is no definite range for model building: the more samples, the more accurate the model can be, but also the more complex. Balancing accuracy against complexity, the present embodiment chooses 30 frames.
After step S1 is complete, step S2 extracts the eye image and features. The extraction method of the present embodiment is described below with reference to Figs. 3-4.
As shown in Figure 3, extracting the eye image and features in the present embodiment comprises:
Step S20, face detection and inner/outer eye corner detection, for example with the classic Viola and Jones method; Fig. 4(a) shows the result for the left eye.
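By way of illustration, the following is a minimal sketch of this detection step using OpenCV's Haar-cascade implementation of the Viola and Jones detector; the patent does not prescribe an implementation, and the eye-corner localization is assumed to come from a separate landmark detector (represented here only by a comment).

```python
# Minimal sketch of step S20 with OpenCV's Viola-Jones (Haar cascade)
# face detector. Eye-corner localization is NOT part of this cascade;
# a separate landmark detector would supply the inner/outer corners.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(gray):
    """Return the largest detected face rectangle (x, y, w, h), or None."""
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda r: r[2] * r[3])  # keep the largest face
```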
Step S21, eye image alignment, that is, cropping and aligning the eye region according to the eye corner positions. The concrete alignment method is: from the statistically collected face regions, define an eye image standard template of 64 × 32 pixels with the inner and outer corner positions set to (54, 20) and (9, 20) respectively; then apply scale and rotation transformations to the face region according to the detected corner positions to obtain the aligned eye image, as shown in Fig. 4(b).
Further, taking the left eye as an example, the concrete steps of the scale and rotation transformations are: 1) connect the inner and outer eye corners and rotate this segment so that both corners lie on the same horizontal line, applying the same rotation to the other pixels of the face region; 2) apply a scale change so that the now-horizontal corners coincide with the corner positions defined in the standard template, likewise applying the same scale change to the other pixels of the face region; 3) finally crop the rotated and scaled face region to the size of the defined eye image standard template, obtaining the aligned eye image.
It should be noted that the above eye image standard template suits everyone: as long as the inner and outer corner positions of the left eye are located, rotation and scaling can bring the detected corners into agreement with the corner positions defined in the template, yielding the aligned eye image. Eye image alignment is an important preprocessing step in face recognition; it is performed before modeling in the present embodiment to compensate for the small head movements that occur while images are captured.
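A minimal sketch of this alignment, assuming OpenCV: a two-point similarity transform stands in for the rotate-then-scale procedure described above, since mapping the two detected corners onto the two template corners fixes exactly a rotation, uniform scale, and translation.

```python
# Sketch of step S21: align the eye to the 64x32 template whose inner
# and outer corners sit at (54, 20) and (9, 20), per the left-eye
# convention above.
import cv2
import numpy as np

TEMPLATE_SIZE = (64, 32)                 # (width, height) of the eye template
INNER_DST, OUTER_DST = (54, 20), (9, 20)

def align_eye(face_img, inner_corner, outer_corner):
    """Map the detected eye corners onto the template corners with a
    similarity transform, then crop to the template size."""
    src = np.float32([inner_corner, outer_corner])
    dst = np.float32([INNER_DST, OUTER_DST])
    M, _ = cv2.estimateAffinePartial2D(src, dst)  # exact fit for 2 points
    return cv2.warpAffine(face_img, M, TEMPLATE_SIZE)
```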
Step S22, acquiring the features of the eye image. In the feature extraction part, to make full use of the micro-texture differences between real and deceptive faces, the eye image is further divided into r × c subregions (4 × 2 in the present embodiment, as shown in Fig. 4(c)), and a Dual-Histogram Local Binary Pattern (DH-LBP) feature is extracted from each subregion; DH-LBP is an improvement of the Local Binary Pattern (LBP) histogram that retains LBP's good discriminative power while greatly reducing the feature dimensionality. The DH-LBP features of all subregions are concatenated to form the feature of the eye image, which is 128-dimensional, as shown in Fig. 4(d).
It should be understood that the eye-image feature serves two purposes: on the one hand it represents the user's gaze direction; on the other hand, because of printing, screen reflection, and similar issues, real and deceptive faces differ markedly in micro-texture, so a local texture feature also helps distinguish a real face from a deceptive face.
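The sketch below illustrates the per-subregion texture descriptor using a plain LBP histogram as a stand-in; the patent's dual-histogram compression (DH-LBP) is not reproduced here, but 4 × 2 subregions × 16 bins matches the 128-dimensional feature described above (scikit-image is assumed for the LBP operator).

```python
# Stand-in for step S22: per-subregion LBP histograms over a 4x2 grid.
# The actual DH-LBP descriptor compresses the LBP histogram further;
# that refinement is omitted in this sketch.
import numpy as np
from skimage.feature import local_binary_pattern

def eye_feature(eye_img, rows=4, cols=2, n_bins=16):
    """Concatenate normalized LBP histograms of r x c subregions of a
    grayscale eye image; 4 * 2 * 16 = 128 dimensions."""
    lbp = local_binary_pattern(eye_img, P=8, R=1, method="default")
    h, w = eye_img.shape
    feats = []
    for i in range(rows):
        for j in range(cols):
            patch = lbp[i*h//rows:(i+1)*h//rows, j*w//cols:(j+1)*w//cols]
            hist, _ = np.histogram(patch, bins=n_bins, range=(0, 256))
            feats.append(hist / max(hist.sum(), 1))  # per-subregion normalize
    return np.concatenate(feats)
```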
After step S2 is complete, step S3 builds and solves the model. The building and solving of the sight line estimation model are described in detail below.
Steps S1 and S2 yield a data set $\{(x_i, y_i)\}_{i=1}^{N}$, where $x_i \in \mathbb{R}^n$ ($n = 128$ in the present embodiment, matching the eye-image feature dimension) is the eye-image feature of a sample and $y_i = (p_{x,i}, p_{y,i}) \in \mathbb{R}^2$ is the two-dimensional coordinate of the corresponding viewpoint. The present embodiment establishes the mapping between eye-image features and the corresponding gaze coordinates by the following two-dimensional regression model:

$$y = w^T x + b, \quad w = (w_1, w_2) \in \mathbb{R}^{n \times 2}, \quad b = (b_1, b_2) \in \mathbb{R}^2 \qquad (1)$$

where $w$ and $b$ are the model parameters to be solved, and the corresponding loss function is defined as:

$$\mathrm{loss} = \sum_{i=1}^{N} \| y_i - \hat{y}_i \|^2 \qquad (2)$$
where $\hat{y}_i$ is the corresponding predicted value. In general, formula (1) can be solved by least squares. In the actual conditions of this problem, however, least squares has the following shortcomings: 1) sight line estimation with a non-high-definition camera is a complicated nonlinear problem, especially for 9 calibration points spanning a large range; 2) least squares is sensitive to points lying off the fitted line; 3) least squares only minimizes the empirical risk, so its generalization ability is weak. For these reasons, the present embodiment adopts support vector regression (SVR), which is more robust, to solve the problem. To apply SVR, formula (1) is first reduced to two one-dimensional regression models:
$$p_x = w_1^T x + b_1, \qquad p_y = w_2^T x + b_2 \qquad (3)$$
That is, the x and y coordinates are treated separately. To keep the fitted functions as smooth as possible, formula (1) is further converted into the following two optimization problems:
$$\text{minimize } \tfrac{1}{2}\|w_1\|^2 \quad \text{subject to } |p_{x,i} - w_1^T x_i - b_1| \le \varepsilon_1 \qquad (4)$$

and

$$\text{minimize } \tfrac{1}{2}\|w_2\|^2 \quad \text{subject to } |p_{y,i} - w_2^T x_i - b_2| \le \varepsilon_2 \qquad (5)$$
where $\varepsilon_1$, $\varepsilon_2$ are the maximum admissible deviations between predicted and actual values. The present embodiment uses the libsvm toolbox to solve (4) and (5), obtaining $w_1^*, b_1^*$ and $w_2^*, b_2^*$; that is, for a test sample $z$, the x and y coordinates of its gaze position are predicted as $f_x(z)$ and $f_y(z)$ respectively:

$$f_x(z) = \sum_{i=1}^{n_1} w_1^* \langle z, x_i \rangle + b_1^*, \qquad f_y(z) = \sum_{i=1}^{n_2} w_2^* \langle z, y_i \rangle + b_2^* \qquad (6)$$
where $x_i$, $y_i$ are the support vectors and $n_1$, $n_2$ are the numbers of support vectors. Compared with linear regression, nonlinear regression can model the data more effectively; the usual way to turn a linear method into a nonlinear one is the kernel trick. By defining a kernel function $k(\cdot,\cdot)$, the solution of formula (1) becomes:
$$f_x(z) = \sum_{i=1}^{n_x} w_x^* k(z, x_i) + b_x^*, \qquad f_y(z) = \sum_{i=1}^{n_y} w_y^* k(z, y_i) + b_y^* \qquad (7)$$
The present embodiment uses a radial basis function (RBF) kernel. For test data $z$, the predicted gaze position is obtained from formula (7) as $\hat{y} = (\hat{p}_x, \hat{p}_y) = (f_x(z), f_y(z))$.
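As an illustration, the two ε-SVRs of formulas (4)-(7) can be fitted as below with scikit-learn's libsvm-backed SVR and an RBF kernel; the ε value of 10 pixels is an illustrative choice, not taken from the patent.

```python
# Sketch of step S3: one RBF-kernel epsilon-SVR per screen coordinate.
# X: (N, 128) eye-image features; Y: (N, 2) viewpoint coordinates in pixels.
import numpy as np
from sklearn.svm import SVR

def fit_gaze_model(X, Y, eps=10.0):
    """Fit f_x and f_y of formula (7); eps (in pixels) plays the role
    of epsilon_1, epsilon_2 in (4) and (5)."""
    svr_x = SVR(kernel="rbf", epsilon=eps).fit(X, Y[:, 0])
    svr_y = SVR(kernel="rbf", epsilon=eps).fit(X, Y[:, 1])
    return svr_x, svr_y

def predict_gaze(svr_x, svr_y, Z):
    """Predicted gaze positions (f_x(z), f_y(z)) for test features Z."""
    return np.stack([svr_x.predict(Z), svr_y.predict(Z)], axis=1)
```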
The sight line estimation model (7) built by steps S1-S3 emphasizes detecting changes in the sight line rather than the accuracy of the estimate; under natural light it needs only a single 640 × 480 pixel USB camera to detect sight line changes well.
It should be understood that although the present embodiment uses nonlinear regression in step S3 to build and solve the sight line estimation model, other regression methods, such as partial least squares regression, may also be used.
Further, the biggest difference between the sight line estimation model of the present embodiment and existing models lies in the application setting and hence the emphasis. Existing sight line estimation models are mainly used for human-computer interaction and therefore focus on the accuracy of gaze estimation; they either need multiple calibration points or a high-definition camera, and some even need multiple cameras. For the application of the present invention, liveness judgment, the model proposed here emphasizes changes in the sight line rather than the accuracy of the estimate, and under natural light it needs only a 640 × 480 pixel USB camera to detect sight line changes well.
As mentioned above, once the sight line estimation model has been built, it can be used for liveness judgment. The face liveness determination method of the present invention is described below with reference to Figs. 5 and 6.
Fig. 5 is a flowchart of the face liveness determination method of an embodiment of the present invention; Fig. 6 is a schematic diagram of sight line dictionary construction. As shown in Fig. 5, the face liveness determination method of the present embodiment comprises the following steps:
Step S100, acquiring face images. In step S100, a certain number of face images are acquired by an image acquisition unit such as a USB camera, for example capturing for 10 seconds to obtain 100 frames.
Step S200, acquiring the eye image and the eye-image features. In step S200 the eye image of each face image and its features are acquired, for example the eye image and features of each of the aforementioned 100 frames. The acquisition method is as described above and is not repeated here.
Step S300, performing sight line prediction, that is, predicting the sight line according to the acquired eye-image features and the sight line estimation model to obtain a corresponding number of gaze positions. Specifically, in step S300 the sequence of the user's gaze positions over a period of time is predicted with model formula (7). For example, for a certain user, suppose M image frames are collected in the current period; by acquiring the eye image and features from the image sequence, M predicted gaze positions $\{\hat{y}_i = (\hat{p}_{x,i}, \hat{p}_{y,i})\}_{i=1}^{M}$ are obtained.
Step S400, sight line quantization and statistics, that is, quantizing the corresponding number of gaze positions according to a sight line dictionary and establishing the sight line histogram.
The bag-of-words (BOW) model is usually applied in information retrieval, where a document is represented by a histogram of the occurrence frequency of each word. The present embodiment extends the words of a document to "visual words" in an image, turning the bag-of-words model into a visual bag-of-words model; since what is counted here is the gaze direction, the "visual word" is embodied as a "gaze direction". This is referred to in the present invention as a "bag-of-sight-lines model", and an entry in the sight line dictionary is called a "sight line entry". As with the bag-of-words model, the construction of the bag-of-sight-lines model divides into 2 steps: constructing the sight line dictionary (codebook) and generating the sight line histogram. In the present invention the sight line entries are obtained by gridding the user's gaze range: suppose the gaze range is of size m × n, with m and n in pixels, and divide it evenly into a grid of r × c cells; the set of (r+1) × (c+1) grid points then constitutes the sight line dictionary. For example, as shown in Fig. 6, when the face liveness determination method of the present embodiment is applied in a window of 640 × 480 pixels, the user's gaze range is first determined (assumed to be a rectangular area of 600 × 400 pixels, the fringe region not being considered); 600 × 400 is then divided into 2 × 2 subregions (the values of r and c need not be 2 here), and the 9 red points constitute the sight line dictionary. Preferably, the present embodiment constructs the sight line dictionary from the 9 viewpoints used when building the sight line estimation model, i.e. {(88, 83), (552, 83), (1016, 83), (88, 440), (552, 440), (1016, 440), (88, 797), (552, 797), (1016, 797)}.
After the sight line dictionary has been constructed, the predicted gaze position sequence is quantized against it: each predicted position votes for its nearest sight line entry (the present embodiment uses Euclidean distance), the votes for each entry are counted to form a histogram, and the histogram is normalized to give the sight line histogram.
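A minimal sketch of this quantization and histogram step, using the nine calibration viewpoints as the sight line dictionary as in the preferred embodiment above:

```python
# Sketch of step S400: vote each predicted gaze position to the nearest
# sight line entry (Euclidean distance) and normalize the vote counts.
import numpy as np

GAZE_DICT = np.float32([(88, 83), (552, 83), (1016, 83),
                        (88, 440), (552, 440), (1016, 440),
                        (88, 797), (552, 797), (1016, 797)])

def gaze_histogram(pred_positions):
    """pred_positions: (M, 2) array of predicted gaze coordinates.
    Returns the normalized 9-bin sight line histogram."""
    d = np.linalg.norm(pred_positions[:, None, :] - GAZE_DICT[None, :, :],
                       axis=2)                      # (M, 9) distances
    votes = np.bincount(d.argmin(axis=1), minlength=len(GAZE_DICT))
    return votes / votes.sum()
```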
Step S500, liveness judgment, that is, obtaining the information entropy from the sight line histogram and making the liveness judgment.
Specifically, for the normalized sight line histogram $H = \{p_1, \ldots, p_9\}$ (satisfying $\sum_{i=1}^{9} p_i = 1$), the entropy is obtained from its definition:

$$\mathrm{entropy} = -\sum_{i=1}^{9} p_i \log(p_i) \qquad (8)$$

If there exist $\{p_{i_k} > 0\}_{k=1}^{l}$ with $1 < l \le 9$, then

$$\mathrm{entropy} = -\sum_{i=1}^{9} p_i \log(p_i) = -\sum_{k=1}^{l} p_{i_k} \log(p_{i_k}) > 0 \qquad (9)$$

which shows that the gaze direction has been quantized into $l$ different sight line entries, and the corresponding entropy is greater than 0.

If instead only a single $p_{i_0} = 1$ exists, then

$$\mathrm{entropy} = -\sum_{i=1}^{9} p_i \log(p_i) = -p_{i_0} \log(p_{i_0}) = 0 \qquad (10)$$

which shows that the gaze direction has been quantized into only one sight line entry, and the corresponding entropy equals 0. That is, after quantization the entropy is greater than 0 when the gaze direction changes, and equals 0 when it does not.
In summary, the present embodiment sets the threshold to 0 and the liveness condition as follows: if the entropy obtained from formula (8) satisfies entropy > 0, the face is judged to be a real face; otherwise it is a deceptive face.
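The decision rule of formulas (8)-(10) then amounts to a few lines; this sketch follows the threshold of 0 set above (in practice a small tolerance could guard against floating-point noise, an implementation detail the patent does not address).

```python
# Sketch of step S500: Shannon entropy of the sight line histogram,
# with the patent's rule "entropy > 0 => real face".
import numpy as np

def is_live(hist):
    """hist: normalized sight line histogram. The entropy is positive
    exactly when the gaze was quantized into more than one entry."""
    p = hist[hist > 0]            # 0 * log(0) terms contribute nothing
    entropy = -np.sum(p * np.log(p))
    return entropy > 0
```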
In addition to the face liveness determination method, the present invention provides a liveness judgment system for distinguishing a real face from a deceptive face in face recognition. The system is described below with reference to Figs. 7 and 8.
Fig. 7 is a structural diagram of the liveness judgment system of an embodiment of the present invention; Fig. 8 is a detailed structural diagram of the feature extraction unit of the system shown in Fig. 7.
As shown in Fig. 7, the liveness judgment system of the present embodiment comprises an image acquisition unit 100, a feature extraction unit 200, a sight line estimation unit 300, a sight line statistics unit 400, and a liveness judging unit 500.
The image acquisition unit 100, for example a 640 × 480 pixel camera, collects a certain number of face images.
The feature extraction unit 200 acquires the eye image of each face image and the features of the eye image. It comprises a detection and localization module 201, an eye image alignment module 202, and a feature extraction module 203. The detection and localization module 201 performs face detection and inner/outer eye corner detection, for example with the classic Viola and Jones method, as shown for the left eye in Fig. 4(a). The eye image alignment module 202 crops and aligns the eye region according to the corner positions to obtain the aligned eye image; specifically, it defines an eye image standard template from the statistically collected face images, sets the inner and outer corner positions, and applies scale and rotation transformations to the face region according to the corner positions to obtain the aligned eye image, the detailed process being as described above and not repeated here. The feature extraction module 203 acquires the features of the eye image from the aligned eye image; specifically, it divides the eye image into r × c subregions, extracts a Dual-Histogram Local Binary Pattern feature from each subregion, and concatenates the features of all subregions to form the feature of the eye image.
The sight line estimation unit 300 performs sight line prediction according to the acquired eye-image features and the sight line estimation model to obtain a corresponding number of gaze positions; that is, it predicts the sequence of the user's gaze positions over a period of time with model formula (7). For example, for a certain user, suppose M image frames are collected in the current period; by acquiring the eye image and features from the image sequence, M predicted gaze positions $\{\hat{y}_i = (\hat{p}_{x,i}, \hat{p}_{y,i})\}_{i=1}^{M}$ are obtained.
The sight line statistics unit 400 quantizes the corresponding number of gaze positions according to the sight line dictionary and establishes the sight line histogram; the quantization and statistics process is as described above and not repeated here.
The liveness judging unit 500 obtains the information entropy from the sight line histogram and makes the liveness judgment: if the entropy is greater than 0, the face is judged to be a real face, and otherwise a deceptive face.
The above embodiments are merely exemplary embodiments adopted to explain the principle of the present invention, but the invention is not limited thereto. Those skilled in the art can make various modifications and improvements without departing from the spirit and substance of the invention, and such modifications and improvements are also regarded as falling within the scope of protection of the invention.

Claims (12)

1. A face liveness determination method for distinguishing a real face from a deceptive face in face recognition, characterized by comprising:
a step of acquiring a certain number of face images;
a step of acquiring an eye image of each face image and features of the eye image;
a step of performing sight line prediction according to the acquired eye-image features and a sight line estimation model to obtain a corresponding number of gaze positions;
a step of quantizing said corresponding number of gaze positions according to a sight line dictionary to establish a sight line histogram; and
a step of obtaining an information entropy according to the sight line histogram and making a liveness judgment, wherein
if the entropy is greater than 0, the face is judged to be a real face, and otherwise a deceptive face.
2. The face liveness determination method according to claim 1, characterized in that the step of acquiring the eye image of each face image and the features of the eye image comprises:
a step of performing face detection and inner/outer eye corner detection;
a step of cropping and aligning the eye region according to the eye corner positions to obtain an aligned eye image; and
a step of acquiring the features of the eye image from the aligned eye image.
3. The face liveness determination method according to claim 2, characterized in that the step of cropping and aligning the eye region according to the eye corner positions to obtain the aligned eye image comprises:
a step of defining an eye image standard template from the statistically collected face images and setting the inner and outer eye corner positions; and
a step of applying scale and rotation transformations to the face region according to the eye corner positions to obtain the aligned eye image.
4. The face liveness determination method according to claim 2, characterized in that the step of acquiring the features of the eye image from the aligned eye image is specifically:
dividing the eye image into r × c subregions, extracting a Dual-Histogram Local Binary Pattern feature from each subregion, and concatenating the Dual-Histogram Local Binary Pattern features of all subregions to form the features of the eye image.
5. The face liveness determination method according to any one of claims 1-4, characterized by further comprising:
a step of establishing the sight line estimation model, which comprises:
a step of collecting data, said data being face images of a user gazing in turn at a set number of viewpoints;
a step of acquiring the eye image of each face image and the features of the eye image; and
a step of establishing and solving the sight line estimation model according to the eye-image features and the coordinates of the corresponding viewpoints.
6. The face liveness determination method according to claim 5, characterized in that the sight line estimation model is:

$$f_x(z) = \sum_{i=1}^{n_x} w_x^* k(z, x_i) + b_x^*, \qquad f_y(z) = \sum_{i=1}^{n_y} w_y^* k(z, y_i) + b_y^*$$

wherein $(f_x(z), f_y(z))$ is the predicted gaze position for a test sample $z$, $x_i \in \mathbb{R}^n$ is an eye-image feature, and $y_i = (p_{x,i}, p_{y,i}) \in \mathbb{R}^2$ is the two-dimensional coordinate of the corresponding viewpoint.
7. A liveness judgment system for distinguishing a real face from a deceptive face in face recognition, characterized by comprising:
an image acquisition unit for collecting a certain number of face images;
a feature extraction unit for acquiring an eye image of each face image and features of the eye image;
a sight line estimation unit that performs sight line prediction according to the acquired eye-image features and a sight line estimation model to obtain a corresponding number of gaze positions;
a sight line statistics unit that quantizes said corresponding number of gaze positions according to a sight line dictionary and establishes a sight line histogram; and
a liveness judging unit that obtains an information entropy according to the sight line histogram and makes a liveness judgment, wherein
if the entropy is greater than 0, the face is judged to be a real face, and otherwise a deceptive face.
8. The liveness judgment system according to claim 7, characterized in that the feature extraction unit comprises:
a detection and localization module for performing face detection and inner/outer eye corner detection;
an eye image alignment module for cropping and aligning the eye region according to the eye corner positions to obtain an aligned eye image; and
a feature extraction module for acquiring the features of the eye image from the aligned eye image.
9. The liveness judgment system according to claim 8, characterized in that the eye image alignment module specifically performs the following operations:
defining an eye image standard template from the statistically collected face images and setting the inner and outer eye corner positions; and
applying scale and rotation transformations to the face region according to the eye corner positions to obtain the aligned eye image.
10. The liveness judgment system according to claim 8, characterized in that the feature extraction module specifically performs the following operations:
dividing the eye image into r × c subregions, extracting a Dual-Histogram Local Binary Pattern feature from each subregion, and concatenating the Dual-Histogram Local Binary Pattern features of all subregions to form the features of the eye image.
11. The liveness judgment system according to claim 7, characterized in that the image acquisition unit is a 640 × 480 pixel camera.
12. The liveness judgment system according to any one of claims 7-11, characterized in that the sight line estimation model is:

$$f_x(z) = \sum_{i=1}^{n_x} w_x^* k(z, x_i) + b_x^*, \qquad f_y(z) = \sum_{i=1}^{n_y} w_y^* k(z, y_i) + b_y^*$$

wherein $(f_x(z), f_y(z))$ is the predicted gaze position for a test sample $z$, $x_i \in \mathbb{R}^n$ is an eye-image feature, and $y_i = (p_{x,i}, p_{y,i}) \in \mathbb{R}^2$ is the two-dimensional coordinate of the corresponding viewpoint.
CN201410327039.4A 2014-07-10 2014-07-10 Face living body judgment method and system Active CN105243386B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410327039.4A CN105243386B (en) 2014-07-10 2014-07-10 Face living body judgment method and system

Publications (2)

Publication Number Publication Date
CN105243386A true CN105243386A (en) 2016-01-13
CN105243386B CN105243386B (en) 2019-02-05

Family

ID=55041027

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410327039.4A Active CN105243386B (en) 2014-07-10 2014-07-10 Face living body judgment method and system

Country Status (1)

Country Link
CN (1) CN105243386B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7027617B1 (en) * 1999-01-19 2006-04-11 Humanscan Gmbh Method and system for preventing fraud in the biometric detection of persons
JP2005277567A (en) * 2004-03-23 2005-10-06 Olympus Corp Image recording device
CN101710383A (en) * 2009-10-26 2010-05-19 北京中星微电子有限公司 Method and device for identity authentication
CN103106397A (en) * 2013-01-19 2013-05-15 华南理工大学 Human face living body detection method based on bright pupil effect

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705507B (en) * 2016-06-30 2022-07-08 北京七鑫易维信息技术有限公司 Identity recognition method and device
CN110705507A (en) * 2016-06-30 2020-01-17 北京七鑫易维信息技术有限公司 Identity recognition method and device
CN106169075A (en) * 2016-07-11 2016-11-30 北京小米移动软件有限公司 Auth method and device
CN106203373B (en) * 2016-07-19 2019-04-16 中山大学 A kind of human face in-vivo detection method based on deep vision bag of words
CN106203372A (en) * 2016-07-19 2016-12-07 奇酷互联网络科技(深圳)有限公司 Eye-based living body detection method and device and terminal equipment
CN106203373A (en) * 2016-07-19 2016-12-07 中山大学 A kind of human face in-vivo detection method based on deep vision word bag model
CN106682578A (en) * 2016-11-21 2017-05-17 北京交通大学 Human face recognition method based on blink detection
CN106682578B (en) * 2016-11-21 2020-05-05 北京交通大学 Weak light face recognition method based on blink detection
CN106778518A (en) * 2016-11-24 2017-05-31 汉王科技股份有限公司 A kind of human face in-vivo detection method and device
CN107330914A (en) * 2017-06-02 2017-11-07 广州视源电子科技股份有限公司 Face position method for testing motion and device and vivo identification method and system
CN108875468A (en) * 2017-06-12 2018-11-23 北京旷视科技有限公司 Biopsy method, In vivo detection system and storage medium
CN107992811A (en) * 2017-11-27 2018-05-04 北京小米移动软件有限公司 Face identification method and device
CN108229376A (en) * 2017-12-29 2018-06-29 百度在线网络技术(北京)有限公司 For detecting the method and device of blink
CN108229376B (en) * 2017-12-29 2022-06-03 百度在线网络技术(北京)有限公司 Method and device for detecting blinking
CN108445505A (en) * 2018-03-29 2018-08-24 南京航空航天大学 Feature significance detection method based on laser radar under thread environment
CN110738072A (en) * 2018-07-18 2020-01-31 浙江宇视科技有限公司 Living body judgment method and device
CN108549887A (en) * 2018-07-23 2018-09-18 北京智芯原动科技有限公司 A kind of living body faces detection method and device
CN109190522B (en) * 2018-08-17 2021-05-07 浙江捷尚视觉科技股份有限公司 Living body detection method based on infrared camera
CN109190522A (en) * 2018-08-17 2019-01-11 浙江捷尚视觉科技股份有限公司 A kind of biopsy method based on infrared camera
CN110705569A (en) * 2019-09-18 2020-01-17 西安交通大学 Image local feature descriptor extraction method based on texture features
CN110929705A (en) * 2020-02-17 2020-03-27 京东数字科技控股有限公司 Living body detection method and device, identity authentication method and system and storage medium
CN111582238A (en) * 2020-05-28 2020-08-25 上海依图网络科技有限公司 Living body detection method and device applied to face shielding scene
CN111898553A (en) * 2020-07-31 2020-11-06 成都新潮传媒集团有限公司 Method and device for distinguishing virtual image personnel and computer equipment
CN111898553B (en) * 2020-07-31 2022-08-09 成都新潮传媒集团有限公司 Method and device for distinguishing virtual image personnel and computer equipment
CN112633217A (en) * 2020-12-30 2021-04-09 苏州金瑞阳信息科技有限责任公司 Human face recognition living body detection method for calculating sight direction based on three-dimensional eyeball model
CN113869219A (en) * 2021-09-29 2021-12-31 平安银行股份有限公司 Face living body detection method, device, equipment and storage medium
CN113869219B (en) * 2021-09-29 2024-05-21 平安银行股份有限公司 Face living body detection method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN105243386B (en) 2019-02-05

Similar Documents

Publication Publication Date Title
CN105243386A (en) Face living judgment method and system
CN109522793B (en) Method for detecting and identifying abnormal behaviors of multiple persons based on machine vision
CN104966070B (en) Biopsy method and device based on recognition of face
CN108549854B (en) A kind of human face in-vivo detection method
CN104063722B (en) A kind of detection of fusion HOG human body targets and the safety cap recognition methods of SVM classifier
CN104732601B (en) Automatic high-recognition-rate attendance checking device and method based on face recognition technology
KR100974293B1 (en) METHOD AND SYSTEM FOR AUTOMATED FACE DETECTION and Recognition
RU2431190C2 (en) Facial prominence recognition method and device
CN104123543B (en) A kind of eye movement recognition methods based on recognition of face
CN103902962B (en) One kind is blocked or the adaptive face identification method of light source and device
KR20190030151A (en) Apparatus, method and computer program for analyzing image
CN202257856U (en) Driver fatigue-driving monitoring device
CN109190475B (en) Face recognition network and pedestrian re-recognition network collaborative training method
CN105426827A (en) Living body verification method, device and system
CN101814137B (en) Driver fatigue monitor system based on infrared eye state identification
CN106384106A (en) Anti-fraud face recognition system based on 3D scanning
CN109298785A (en) A kind of man-machine joint control system and method for monitoring device
CN105335726B (en) Recognition of face confidence level acquisition methods and system
CN105138980A (en) Identify authentication method and system based on identity card information and face identification
CN102262727A (en) Method for monitoring face image quality at client acquisition terminal in real time
CN101201893A (en) Iris recognizing preprocessing method based on grey level information
CN101339603A (en) Method for selecting qualified iris image from video frequency stream
CN101251894A (en) Gait recognizing method and gait feature abstracting method based on infrared thermal imaging
CN104143086A (en) Application technology of portrait comparison to mobile terminal operating system
CN110555931A (en) Face detection and gate inhibition system device based on deep learning recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant