CN105243386B - Face living body judgment method and system - Google Patents

Info

Publication number
CN105243386B
CN105243386B (Application CN201410327039.4A)
Authority
CN
China
Prior art keywords
face
sight
eye
eye image
living body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410327039.4A
Other languages
Chinese (zh)
Other versions
CN105243386A (en)
Inventor
黄磊
蔡利君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hanwang Technology Co Ltd
Original Assignee
Hanwang Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hanwang Technology Co Ltd filed Critical Hanwang Technology Co Ltd
Priority to CN201410327039.4A priority Critical patent/CN105243386B/en
Publication of CN105243386A publication Critical patent/CN105243386A/en
Application granted granted Critical
Publication of CN105243386B publication Critical patent/CN105243386B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

One aspect of the present invention provides a face living body judgment method for distinguishing a real human face from a spoofed face, comprising: a step of obtaining a certain number of facial images; a step of obtaining an eye image of each facial image and features of the eye image; a step of performing gaze prediction according to the features of the obtained eye images and a gaze estimation model to obtain a corresponding number of gaze positions; a step of quantizing the corresponding number of gaze positions according to a gaze dictionary and establishing a gaze histogram; and a step of obtaining an information entropy from the gaze histogram and making a liveness judgment, wherein the face is judged to be a real human face if the entropy is greater than 0, and a spoofed face otherwise. The face living body judgment method provided by the invention requires neither additional equipment nor user cooperation, and, because gaze behaviour is uncertain and difficult for others to capture through a camera or similar device, it can effectively distinguish a real human face from a spoofed face.

Description

Face living body judgment method and system
Technical field
The present invention relates to the field of identity authentication technology, and more particularly to a face living body judgment method and system.
Background technique
Face recognition is an effective identity authentication technology, but as it has become widely deployed, various methods of impersonating a face to pass authentication have appeared. Performing authentication with a non-genuine, spoofed face is referred to as a "face attack". Common face attacks deceive the recognition system with photos, video playback, or 3D models in order to pass authentication. Photo and video playback are the most common attack forms, since the relevant data of a legitimate user can easily be obtained from mobile devices or surveillance cameras. Moreover, with the development of modern technology, acquiring and synthesizing a 3D face model is no longer difficult: for example, the service of ThatsMyFace.com can reconstruct a 3D face and produce a customized 3D model from one uploaded frontal photo and one side photo. For reasons of information security, therefore, adding a liveness detection function to face recognition systems has received increasing attention.
Analysis of common face attack forms shows that, compared with a real human face, a photo face is planar and suffers from the quality loss and blurring introduced by secondary acquisition; a video face exhibits phenomena such as LCD screen reflections; and the motion of a 3D model face is rigid. These are all weaknesses of current face attack forms. Accordingly, current liveness detection methods in face recognition fall broadly into three classes: motion-based methods, texture-based methods, and methods fusing the two.
Motion-based methods mainly analyze the motion tendency of an image frame sequence, for example estimating the motion of different facial regions by optical flow to distinguish a real face from a photo attack, or estimating by optical flow whether the object's motion is rigid to decide whether it is a living body. Besides unconscious head movement, other biological behaviours of the legitimate user, such as eye blinking, have also been used to decide liveness. Texture-based methods rely on the micro-texture changes, such as quality degradation and blurring, that secondary acquisition imposes on a spoofed face, and make the liveness judgment by finding single-frame features that saliently discriminate real faces from spoofed ones.
Although both kinds of methods can resist face attacks to some extent, many problems remain. Motion-based methods can resist flat photo attacks but are essentially ineffective against warped-photo and video attacks. Texture-based methods can effectively learn a discriminative model of real versus spoofed faces, but they lose the correlated information between image frames.
In addition, there are also liveness detection methods based on multiple modalities and on multiple spectra. Multimodal liveness detection combines the face with other biometric features, such as posture, voice, fingerprint, and hand shape, for identity authentication; in general, a multimodal biometric authentication system resists attacks better than a system based on a single feature. Multispectral liveness detection exploits the different reflectance of real and spoofed faces under near-infrared light. Although both approaches achieve good liveness detection performance, they need additional equipment, and multimodal systems even require user cooperation, which increases the user's burden.
Summary of the invention
In view of the above problems, the present invention provides a face living body judgment method for distinguishing a real human face from a spoofed face in face recognition, which requires neither additional equipment nor user cooperation and can effectively distinguish a real human face from a spoofed face.
One aspect of the present invention provides a face living body judgment method for distinguishing a real human face from a spoofed face in face recognition, comprising: a step of obtaining a certain number of facial images; a step of obtaining an eye image of each facial image and features of the eye image; a step of performing gaze prediction according to the features of the obtained eye images and a gaze estimation model to obtain a corresponding number of gaze positions; a step of quantizing the corresponding number of gaze positions according to a gaze dictionary and establishing a gaze histogram; and a step of obtaining an information entropy from the gaze histogram and making a liveness judgment, wherein the face is judged to be a real human face if the entropy is greater than 0, and a spoofed face otherwise.
The face living body judgment method provided by the invention requires neither additional equipment nor user cooperation, and, because gaze behaviour is uncertain and hard for others to capture through a camera or similar device, it can effectively distinguish a real human face from a spoofed face.
Another aspect of the present invention provides a living body judgment system, comprising: an image acquisition unit for capturing a certain number of facial images; a feature extraction unit for obtaining an eye image of each facial image and features of the eye image; a gaze estimation unit for performing gaze prediction according to the features of the obtained eye images and a gaze estimation model to obtain a corresponding number of gaze positions; a gaze statistics unit for quantizing the corresponding number of gaze positions according to a gaze dictionary and establishing a gaze histogram; and a living body judgment unit for obtaining an information entropy from the gaze histogram and making the liveness judgment, wherein the face is judged to be a real human face if the entropy is greater than 0, and a spoofed face otherwise.
The living body judgment system provided by the invention exploits the uncertainty of gaze behaviour and the difficulty of capturing it through a camera, and can effectively distinguish a real human face from a spoofed face.
Detailed description of the invention
Fig. 1 is a flow chart of the gaze estimation model establishment method of an embodiment of the present invention;
Fig. 2 shows the data collection system used in an embodiment of the present invention to establish the gaze estimation model;
Fig. 3 is a schematic diagram of obtaining eye images and features in an embodiment of the present invention;
Fig. 4 is a flow chart of the method of obtaining eye images and features in an embodiment of the present invention;
Fig. 5 is a flow chart of the face living body judgment method of an embodiment of the present invention;
Fig. 6 is a schematic diagram of gaze dictionary construction in an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of the living body judgment system of an embodiment of the present invention;
Fig. 8 is a detailed structural schematic diagram of the feature extraction unit of the living body judgment system shown in Fig. 7.
Specific embodiment
In order to enable those skilled in the art to better understand the technical solution of the present invention, the face living body judgment method and system provided by the invention are described in detail below with reference to the accompanying drawings and specific embodiments. In the drawings, identical or comparable elements are marked with identical labels. The following is only the preferred embodiment of the face living body judgment method and system of the invention; the invention is not limited to the following structures.
The face living body judgment method and system of the invention are used to distinguish a real human face from a spoofed face in face recognition. Because the process of attention shifting is uncertain, gaze behaviour is a kind of biological information that is difficult for others to obtain through devices such as surveillance cameras; at the same time, compared with a spoofed face, the gaze motion of a real human face has greater uncertainty. The invention therefore introduces gaze into the liveness judgment problem, so that real and spoofed faces can be distinguished effectively. The basic idea of the method and system is to obtain eye images and their features, perform gaze estimation from the eye image features and a gaze estimation model, then quantize and tally the predicted gaze points, and finally make the liveness judgment, that is, decide whether the recognized object is a real human face or a spoofed face.
To this end, the invention first needs to establish a suitable gaze estimation model with which changes of gaze can be detected well. Existing gaze estimation models are mainly of two kinds: appearance-based and feature-based. Feature-based methods need to extract local features of the eye (such as the iris contour, pupil, and glints) and establish a mapping between the features and the viewpoint; however, such methods generally require a high-definition camera, or even multiple light sources or multiple cameras. Appearance-based methods first locate the eye region and then directly establish a mapping between the eye image and screen coordinates, making full use of the gaze information in the eye. Because the face living body judgment method proposed by the invention operates under natural lighting without relying on additional equipment or light sources (in fact, only one USB camera with a resolution of 640 × 480 pixels), the present embodiment selects an appearance-based method to model the gaze direction. In addition, considering the micro-texture differences between real and spoofed faces, the invention first extracts local texture features from the eye image, and then establishes the mapping between the local texture feature vector and the viewpoint through a regression model. In the present embodiment, the gaze estimation model is established mainly by having the user successively fixate 9 viewpoints on a computer screen, thereby obtaining frontal image data of the user for 9 gaze directions, and establishing through a regression model the mapping between the user's eye image features and the viewpoints on the screen. The establishment and solution of the gaze estimation model of the invention are described below with reference to Figs. 1 to 4.
Fig. 1 is a flow chart of the gaze estimation model establishment method of an embodiment of the present invention; Fig. 2 shows the data collection system used in an embodiment of the invention to establish the gaze estimation model. As shown in Fig. 1, the method of establishing the gaze estimation model includes: step S1, data acquisition; step S2, extracting eye images and features; and step S3, model establishment and solution.
In order to establish the statistical model for gaze estimation, the relevant data are collected first. The data collection system of the present embodiment, shown in (a) of Fig. 2, consists of one 19-inch liquid crystal display (resolution 1440 × 900 pixels) and one USB camera with a resolution of 640 × 480 pixels. There are 9 fixed points, called viewpoints, on the display; their positions, in pixels, are shown in (b) of Fig. 2. In (b) of Fig. 2, the numbers 1, 2, 3, …, 9 label the viewpoints, and the values in the square brackets to the lower right, upper right, or lower left of each label give the position of the corresponding viewpoint on the computer screen; for example, the position of the first viewpoint is (88, 83).
In step S1, the acquisition subject sits about 50 cm to 60 cm from the computer screen and keeps the head as still as possible. During data acquisition, the subject is first asked, while keeping the head still, to fixate the 1st viewpoint (88, 83) and hold that gaze direction; the frontal images of the subject captured by the camera during this period are saved, and saving stops after 30 frames. The subject's gaze is then guided to the 2nd viewpoint (552, 83), and another 30 frontal frames are saved. This continues until the 270 image frames for all 9 viewpoints have been captured. In the same way, images of 50 subjects fixating the 9 viewpoints are acquired, 13,500 frontal images in total; images with closed eyes are removed manually, leaving 12,698 valid images.
It should be noted that, in data acquisition, 30 frames can be collected per viewpoint as described above, but any other appropriate number of images may be used; it is not limited to 30 frames. There is no fixed range for model establishment: with more samples the model can be more accurate, but it also becomes more complex. The present embodiment selects 30 frames as a compromise between accuracy and complexity.
After step S1 is completed, step S2 is performed, in which eye images and features are extracted. The extraction of eye images and features in the present embodiment is described below with reference to Figs. 3 and 4.
As shown in Fig. 3, extracting eye images and features in the present embodiment includes:
Step S20, face localization and detection of the inner and outer eye corner points, for example with the classical Viola-Jones method; taking the left eye as an example, the result is shown in (a) of Fig. 4.
Step S21, eye image alignment, i.e., cutting out and aligning the eye region according to the eye corner positions. The specific alignment method is as follows: an eye image standard template of size 64 × 32 is defined from the statistics of the collected face regions, with the positions of the inner and outer corners set to (54, 20) and (9, 20) respectively; the face region is then transformed in scale and rotation according to the positions of the eye corners to obtain the aligned eye image, shown in (b) of Fig. 4.
Further, taking the left eye as an example, the specific steps of the scale and rotation transformation are: 1) connect the inner and outer eye corners, and rotate this segment so that the inner and outer corners lie on the same horizontal line, applying the same rotation to the other pixels of the face region; 2) scale so that the left and right corners, now on the same horizontal line, coincide with the inner and outer corner positions defined on the standard template, applying the same scaling to the other pixels of the face region; 3) finally, crop the rotated and scaled face region to the size of the defined eye image standard template to obtain the aligned eye image.
It should be noted that the above eye image standard template is applicable to everyone: as long as the positions of the inner and outer corners of the left eye are located, the detected corners can be brought into agreement with the corner positions defined on the standard template by rotation and scaling, yielding the aligned eye image. Eye image alignment is an important preprocessing step in face recognition; it is performed before model establishment in the present embodiment because small head movements occur in the captured images.
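The two-corner alignment of steps 1) to 3) amounts to a similarity transform (rotation plus uniform scale plus translation) carrying the detected corner pair onto the template corners. A minimal numpy sketch under that reading; the function name and the sample corner coordinates are illustrative, only the template geometry (64 × 32 with corners at (9, 20) and (54, 20)) comes from the description:

```python
import numpy as np

# Left-eye corner positions fixed on the 64x32 standard template.
TPL_OUTER = np.array([9.0, 20.0])
TPL_INNER = np.array([54.0, 20.0])

def corner_alignment(outer, inner):
    """Return the 2x3 rotation+scale+translation matrix carrying the
    detected corner pair onto the template corner pair."""
    src_v = np.asarray(inner, float) - np.asarray(outer, float)
    dst_v = TPL_INNER - TPL_OUTER
    scale = np.linalg.norm(dst_v) / np.linalg.norm(src_v)
    angle = np.arctan2(dst_v[1], dst_v[0]) - np.arctan2(src_v[1], src_v[0])
    c, s = scale * np.cos(angle), scale * np.sin(angle)
    R = np.array([[c, -s], [s, c]])          # rotation combined with scale
    t = TPL_OUTER - R @ np.asarray(outer, float)
    return np.hstack([R, t[:, None]])

# Mapping the detected corners themselves must hit the template corners.
M = corner_alignment(outer=(100.0, 120.0), inner=(190.0, 120.0))
mapped_inner = M[:, :2] @ np.array([190.0, 120.0]) + M[:, 2]
```

The same 2 × 3 matrix would then be applied to every pixel of the face region (e.g. with an affine warp) before cropping the 64 × 32 window.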
Step S22, obtaining the features of the eye image. In the feature extraction part, in order to make full use of the micro-texture differences between real and spoofed faces, the eye image is further divided into r × c sub-regions; the present embodiment uses 4 × 2, as shown in (c) of Fig. 4. A Dual Histogram Local Binary Pattern (DH-LBP) feature is extracted from each sub-region (DH-LBP is an improvement of the Local Binary Pattern (LBP) histogram that greatly reduces the feature dimensionality while retaining the good discriminative power of LBP). The DH-LBP features of all sub-regions are concatenated to form the feature of the eye image, shown in (d) of Fig. 4, which has 128 dimensions.
It can be understood that, on the one hand, this eye image feature can be used to represent the user's gaze direction; on the other hand, because of problems such as printing artefacts and display reflections, there are obvious micro-texture differences between real and spoofed faces, so local texture features can distinguish real from spoofed faces well.
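A rough sketch of step S22. The internal details of the DH-LBP variant are not given in the description, so a plain 8-neighbour LBP with a coarse 16-bin histogram per sub-region is used here as a stand-in; it only reproduces the stated geometry (4 × 2 sub-regions, 8 × 16 = 128 dimensions):

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour LBP codes over the interior pixels of a grayscale
    patch. Stand-in only: not the DH-LBP variant named in the description."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
                  img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros(c.shape, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= (n >= c).astype(np.uint8) << np.uint8(bit)
    return codes

def eye_feature(eye_img, rows=4, cols=2, bins=16):
    """Split the aligned 64x32 eye image into rows x cols sub-regions and
    concatenate one coarse LBP histogram per region: 4*2*16 = 128 dims."""
    feats = []
    for band in np.array_split(eye_img, rows, axis=0):
        for sub in np.array_split(band, cols, axis=1):
            h, _ = np.histogram(lbp_codes(sub), bins=bins, range=(0, 256))
            feats.append(h / max(h.sum(), 1))     # per-region normalisation
    return np.concatenate(feats)

rng = np.random.default_rng(0)
feat = eye_feature(rng.integers(0, 256, size=(32, 64), dtype=np.uint8))
```

Computing the codes per sub-region (losing a one-pixel border each time) is a simplification; computing LBP once over the whole image and then splitting would also work.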
After step S2 is completed, step S3 is performed, in which the model is established and solved. The establishment and solution of the gaze estimation model are described in detail below.
Steps S1 and S2 yield a data set {(x_i, y_i)}_{i=1}^{N}, where x_i ∈ R^n (n = 128 in the present embodiment, determined by the dimension of the eye image feature) is the eye image feature of a sample, and y_i = (p_{x,i}, p_{y,i}) ∈ R^2 is the two-dimensional coordinate of the corresponding viewpoint. In the present embodiment, the mapping between the eye image features and the corresponding gaze coordinates is established by the following two-dimensional regression model:
y = w^T x + b,
w = (w_1, w_2) ∈ R^{n×2}, b = (b_1, b_2) ∈ R^2, (1)
where w and b are the model parameters to be solved.
The corresponding loss function is defined as
L(w, b) = Σ_{i=1}^{N} ||y_i − ŷ_i||^2, (2)
where ŷ_i = w^T x_i + b is the predicted value for sample i. In general, formula (1) can be solved by the least squares method. However, given the actual conditions of this problem, least squares has the following shortcomings: 1) gaze estimation under a non-high-definition camera is a complicated nonlinear problem, especially for 9 calibration points with a large span; 2) least squares is sensitive to points lying far from the fitted line; 3) least squares only minimizes the empirical risk, so its generalization ability is not strong. Based on these points, the present embodiment solves the problem with the more robust support vector regression (SVR, Support Vector Regression) method. In order to use SVR, formula (1) is reduced to two one-dimensional regression models:
p_x = w_1^T x + b_1, p_y = w_2^T x + b_2. (3)
That is, the x and y coordinates are regressed separately. To keep the fitting functions as flat as possible, formula (1) is further turned into the solution of the following two optimization problems:
min_{w_1, b_1} (1/2)||w_1||^2 subject to |p_{x,i} − w_1^T x_i − b_1| ≤ ε_1, i = 1, …, N, (4)
and
min_{w_2, b_2} (1/2)||w_2||^2 subject to |p_{y,i} − w_2^T x_i − b_2| ≤ ε_2, i = 1, …, N, (5)
where ε_1 and ε_2 are the maximum deviations tolerated between predicted and actual values. In the present embodiment, the libsvm toolbox is used to solve (4) and (5), giving the regression functions f_x and f_y; that is, for a test sample z, the x and y coordinate values of the gaze position are predicted as f_x(z) and f_y(z):
f_x(z) = Σ_{i=1}^{n_1} (α_i − α_i*) x_i^T z + b_1, f_y(z) = Σ_{i=1}^{n_2} (β_i − β_i*) x_i^T z + b_2, (6)
where the x_i are the support vectors of the respective problems and n_1, n_2 are the numbers of support vectors. Compared with linear regression, nonlinear regression can model the data much more effectively, and the kernel trick is the common way to make a linear model nonlinear. Defining a kernel function K, the solution of formula (1) becomes
f_x(z) = Σ_{i=1}^{n_1} (α_i − α_i*) K(x_i, z) + b_1, f_y(z) = Σ_{i=1}^{n_2} (β_i − β_i*) K(x_i, z) + b_2. (7)
The present embodiment solves the model with a radial basis (RBF) kernel; for test data z, the gaze position (f_x(z), f_y(z)) is predicted by formula (7).
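The kernelized predictors of formula (7) have the general form f(z) = Σ_i a_i K(x_i, z) + b. A dependency-free sketch of that structure follows; it substitutes RBF kernel ridge regression for libsvm's ε-SVR (coefficients come from a regularised linear solve rather than the SVR dual), so it is an approximation of the technique, not the patent's solver, and the data are synthetic:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.05):
    """Radial basis kernel matrix K[i, j] = exp(-gamma * ||A_i - B_j||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

class KernelRegressor:
    """f(z) = sum_i alpha_i K(x_i, z); alpha from a regularised solve."""
    def fit(self, X, y, lam=1e-3):
        self.X = X
        K = rbf_kernel(X, X)
        self.alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
        return self

    def predict(self, Z):
        return rbf_kernel(Z, self.X) @ self.alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))                     # toy eye-feature vectors
Y = X[:, :2] + 0.01 * rng.normal(size=(60, 2))   # toy 2-D viewpoint targets

f_x = KernelRegressor().fit(X, Y[:, 0])          # one regressor per screen
f_y = KernelRegressor().fit(X, Y[:, 1])          # axis, as in (4) and (5)
gaze = np.stack([f_x.predict(X[:5]), f_y.predict(X[:5])], axis=1)
```

Swapping in an ε-SVR solver (e.g. libsvm, as the description states) changes only how the coefficients are obtained; the prediction form stays the same.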
The gaze estimation model (7) established according to steps S1 to S3 emphasizes predicting changes of gaze rather than the accuracy of the estimate, and the gaze estimation model proposed by the present embodiment needs only one USB camera with a resolution of 640 × 480 pixels under natural light to detect gaze changes well.
It can be understood that, although the present embodiment uses nonlinear regression in step S3 to establish and solve the gaze estimation model, other regression methods, such as partial least squares regression, can also be used.
Further, the biggest difference between the gaze estimation model of the present embodiment and existing gaze estimation models lies in the different application setting, and hence the different emphasis. Existing gaze estimation models are mainly used in human-computer interaction and therefore emphasize the accuracy of gaze estimation; they need many calibration points or a high-definition camera, and some even need multiple cameras. For the application setting of the invention, liveness judgment, the gaze estimation model proposed by the present embodiment emphasizes gaze changes rather than estimation accuracy, and needs only one 640 × 480 pixel USB camera under natural light to detect gaze changes well.
As mentioned above, once the gaze estimation model has been established, it can be used to make the liveness judgment. The face living body judgment method of the invention is described below with reference to Figs. 5 and 6.
Fig. 5 is a flow chart of the face living body judgment method of an embodiment of the present invention; Fig. 6 is a schematic diagram of gaze dictionary construction in an embodiment of the invention. As shown in Fig. 5, the face living body judgment method of the present embodiment includes the following steps:
Step S100, obtaining facial images. In step S100, a certain number of facial images are obtained by an image acquisition unit such as a USB camera, for example 100 frames captured over 10 seconds.
Step S200, obtaining eye images and their features. In step S200, the eye image of each facial image and the features of the eye image are obtained, for example for each of the aforementioned 100 frames. The method of obtaining the eye images and their features is as described above and is not repeated here.
Step S300, gaze prediction: gaze prediction is performed according to the features of the obtained eye images and the gaze estimation model to obtain the corresponding number of gaze positions. Specifically, in step S300, the sequence of the user's gaze positions over a period of time is predicted by the gaze estimation model of formula (7). For example, for a certain user, suppose M frames are collected in the current period; by extracting eye images and features from the image sequence, the M gaze positions {(f_x(z_t), f_y(z_t))}, t = 1, …, M, are predicted, where z_t is the eye image feature of frame t.
Step S400, gaze quantization and statistics: the corresponding number of gaze positions are quantized according to the gaze dictionary, and the gaze histogram is established.
The bag-of-words model (Bag of Words, BOW) is typically used in information retrieval, where a document is represented by a histogram counting the frequency of each word. By extending the words of a document to the "visual words" of an image, the bag-of-words model is extended to the bag-of-visual-words model. What is counted here is the gaze direction, so the "visual word" here becomes the "gaze direction"; in the invention this is called the "gaze bag-of-words", and the entries of the gaze dictionary are called "gaze entries". As with the bag-of-words model, the construction of the gaze bag-of-words consists of 2 steps: gaze dictionary (codebook) construction and gaze histogram generation. In the invention, the gaze entries are obtained by dividing the user's range of visibility into a grid: suppose the range of visibility is of size m × n, in pixels; it is uniformly divided into a grid of r × c cells, and the set of (r+1) × (c+1) grid points then constitutes the gaze dictionary. For example, as shown in Fig. 6, when the face living body judgment method of the present embodiment is applied to a window of 640 × 480 pixels, the user's range of visibility is first determined (assumed to be a rectangular area of 600 × 400 pixels, the border area not being considered), the 600 × 400 area is then divided into 2 × 2 cells (the values of r and c are not limited to 2 here), and the 9 red points constitute the gaze dictionary. Preferably, the gaze dictionary of the present embodiment is constructed with the 9 viewpoints used when establishing the gaze estimation model as the gaze entries, i.e., {(88, 83), (552, 83), (1016, 83), (88, 440), (552, 440), (1016, 440), (88, 797), (552, 797), (1016, 797)}.
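Under the stated construction, the dictionary for an m × n visible range divided into r × c cells is simply the set of (r+1) × (c+1) grid points. A small sketch reproducing the Fig. 6 example; the origin placement is an assumption, since the description does not fix where the 600 × 400 area sits on the screen:

```python
import numpy as np

def build_gaze_dictionary(m, n, r, c, x0=0.0, y0=0.0):
    """Uniformly divide an m x n (width x height, pixels) visible range
    into an r x c grid of cells; the (r+1)*(c+1) grid points form the
    gaze dictionary. The origin (x0, y0) is an illustrative assumption."""
    xs = np.linspace(x0, x0 + m, c + 1)
    ys = np.linspace(y0, y0 + n, r + 1)
    return np.array([(x, y) for y in ys for x in xs])

# The Fig. 6 example: a 600 x 400 range divided 2 x 2 gives 9 entries.
entries = build_gaze_dictionary(600, 400, r=2, c=2)
```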
After the gaze dictionary has been constructed, the predicted sequence of gaze positions is quantized according to the gaze dictionary: for each predicted position, the nearest gaze entry is found (the present embodiment uses the Euclidean distance) and receives a vote; the votes of all entries are counted to form a histogram, which is normalized to give the gaze histogram.
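The quantization-and-voting step can be sketched directly with the nine preferred dictionary entries listed above; the example gaze track is made up for illustration:

```python
import numpy as np

# The nine preferred gaze entries (the calibration viewpoints, in pixels).
DICTIONARY = np.array([(88, 83), (552, 83), (1016, 83),
                       (88, 440), (552, 440), (1016, 440),
                       (88, 797), (552, 797), (1016, 797)], dtype=float)

def gaze_histogram(predicted_points):
    """Vote each predicted gaze point to its Euclidean-nearest dictionary
    entry and return the normalised 9-bin gaze histogram."""
    pts = np.asarray(predicted_points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - DICTIONARY[None, :, :], axis=2)
    votes = np.bincount(d.argmin(axis=1), minlength=len(DICTIONARY))
    return votes / votes.sum()

# Illustrative gaze track that dwells near two different viewpoints.
H = gaze_histogram([(90, 80), (95, 85), (560, 430), (550, 445)])
```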
Step S500, liveness judgment: the information entropy is obtained from the gaze histogram and the liveness judgment is made.
Specifically, for the normalized gaze histogram H = {p_1, …, p_9} (satisfying Σ_{i=1}^{9} p_i = 1 and p_i ≥ 0), the entropy is computed according to its definition:
Entropy = − Σ_{i=1}^{9} p_i log p_i. (8)
If l > 1 of the p_i are non-zero, the gaze directions have been quantized into l different gaze entries, and the corresponding entropy is greater than 0. If instead only a single p_j = 1 while all other p_i = 0, the gaze directions have been quantized into only one gaze entry, and the corresponding entropy is equal to 0. That is, under quantization, the entropy is greater than 0 when the gaze direction changes, and equal to 0 when it does not.
In summary, the threshold is set to 0 in the present embodiment, and the condition of the liveness judgment is: if the entropy obtained from formula (8) satisfies Entropy > 0, the face is judged to be a real human face; otherwise it is a spoofed face.
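The decision rule of formula (8) reduces to a few lines; the example histograms are illustrative:

```python
import numpy as np

def is_live(hist, eps=1e-12):
    """Formula (8): Entropy = -sum_i p_i log p_i over the normalised gaze
    histogram; a moving gaze spreads over several entries (entropy > 0),
    while a static spoof face collapses onto one entry (entropy == 0)."""
    p = np.asarray(hist, dtype=float)
    p = p[p > eps]                        # 0 * log(0) is taken as 0
    return bool(-np.sum(p * np.log(p)) > 0.0)

real = is_live([0.5, 0, 0, 0, 0.5, 0, 0, 0, 0])    # gaze on two entries
spoof = is_live([0, 0, 0, 1.0, 0, 0, 0, 0, 0])     # gaze on one entry
```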
Besides the face living body judgment method, the present invention also provides a living body judgment system for distinguishing a real human face from a spoofed face in face recognition. The living body judgment system of the invention is described below with reference to Figs. 7 and 8.
Fig. 7 is a schematic structural diagram of the living body judgment system of an embodiment of the present invention; Fig. 8 is a detailed structural schematic diagram of the feature extraction unit of the living body judgment system shown in Fig. 7.
As shown in Fig. 7, the living body judgment system of the present embodiment includes an image acquisition unit 100, a feature extraction unit 200, a gaze estimation unit 300, a gaze statistics unit 400, and a living body judgment unit 500.
The image acquisition unit 100, for example a 640 × 480 pixel camera, captures a certain number of facial images.
The feature extraction unit 200 obtains the eye image of each facial image and the features of the eye image. It includes a localization and detection module 201, an eye image alignment module 202, and a feature extraction module 203. The localization and detection module 201 performs face localization and inner/outer eye corner detection, for example with the classical Viola-Jones method; taking the left eye as an example, the result is shown in (a) of Fig. 4. The eye image alignment module 202 cuts out and aligns the eye region according to the eye corner positions to obtain the aligned eye image, specifically by: defining an eye image standard template from the statistics of the collected face images, setting the inner and outer corner positions on it, and transforming the face region in scale and rotation according to the corner positions to obtain the aligned eye image; the detailed process is as described above and is not repeated here. The feature extraction module 203 obtains the features of the eye image from the aligned eye image, specifically by: dividing the eye image into r × c sub-regions, extracting a Dual Histogram Local Binary Pattern feature from each sub-region, and concatenating the DH-LBP features of all sub-regions to form the feature of the eye image.
The sight estimation unit 300 is used to perform sight prediction according to the feature of the acquired eye image and the sight estimation model, obtaining a corresponding number of sight positions. That is, the sequence of the user's sight positions over a period of time is predicted according to the sight estimation model of formula (7). For example, for a certain user, assume that M frames of images are collected in the current period; by obtaining the eye image and its feature for each image in the sequence, M sight positions are predicted.
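Formula (7) of the sight estimation model is not reproduced in this excerpt. As a hedged stand-in, a regularised least-squares (ridge) mapping from the eye feature x_i ∈ R^n to the viewpoint coordinate y_i ∈ R^2 is a common choice for appearance-based gaze regression of this kind; the function names and the regularisation parameter lam are illustrative assumptions, not the patent's actual model.

```python
import numpy as np

def fit_sight_model(X, Y, lam=1e-3):
    """Regularised least-squares mapping from eye features to viewpoints.
    X: (N, n) eye-image features; Y: (N, 2) viewpoint coordinates."""
    Xb = np.hstack([X, np.ones((len(X), 1))])        # append a bias column
    A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])
    return np.linalg.solve(A, Xb.T @ Y)              # (n + 1, 2) weights

def predict_sight(W, X):
    """Predict one 2-D sight position per row of X."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return Xb @ W
```

Calling predict_sight on the M per-frame eye features yields the M predicted sight positions that the sight statistics unit then quantises.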
The sight statistics unit 400 quantizes the corresponding number of sight positions according to the sight dictionary and establishes a sight histogram. The processes of sight statistics and quantization are as described above and are not repeated here.
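The quantisation step amounts to vector quantisation of the predicted gaze points: each point is assigned to its nearest dictionary entry and the assignments are counted. A minimal sketch, assuming the sight dictionary is a set of K two-dimensional viewpoint centres (the patent's actual dictionary construction is not shown in this excerpt):

```python
import numpy as np

def sight_histogram(gaze_points, dictionary):
    """Vector-quantise predicted gaze points against the sight dictionary
    and return the normalised count per dictionary entry."""
    gaze = np.asarray(gaze_points, dtype=float)   # (M, 2) predicted positions
    dic = np.asarray(dictionary, dtype=float)     # (K, 2) dictionary centres
    # pairwise distances (M, K), then nearest-centre assignment
    dists = np.linalg.norm(gaze[:, None, :] - dic[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(dic)).astype(float)
    return hist / hist.sum()
```

The resulting histogram is what the living body judgment unit evaluates for entropy.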
The living body judgment unit 500 obtains the information entropy according to the sight histogram and performs living body judgment, wherein
if the entropy is greater than 0, a real face is determined; otherwise, a spoofing face is determined.
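The decision rule above reduces to computing the Shannon entropy of the normalised sight histogram and testing it against zero: entropy is exactly 0 only when every predicted sight position fell into a single dictionary bin, as a static photograph would produce. A minimal sketch (the base-2 logarithm and the function name are assumptions; the threshold at 0 is base-independent):

```python
import numpy as np

def is_live(sight_hist, eps=1e-12):
    """Shannon entropy of the normalised sight histogram. Entropy is 0
    exactly when all predicted sight positions land in one dictionary bin
    (as a static spoof would produce); entropy > 0 -> real face."""
    p = np.asarray(sight_hist, dtype=float)
    p = p[p > eps]                     # drop empty bins: 0 * log(0) := 0
    entropy = float(-np.sum(p * np.log2(p)))
    return entropy > 0.0, entropy
```

For instance, a histogram concentrated in one bin yields entropy 0 (spoof), while any spread across two or more bins yields positive entropy (live).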
The above embodiments are intended merely to illustrate the principle of the present invention and the illustrative implementations adopted; however, the present invention is not limited thereto. Those skilled in the art can make various changes and modifications without departing from the spirit and substance of the present invention. Such changes and modifications are also regarded as falling within the protection scope of the present invention.

Claims (12)

1. A face living body judgment method for distinguishing a real face from a spoofing face in face recognition, characterized in that it comprises:
a step of acquiring a certain number of face images;
a step of obtaining an eye image of each face image and a feature of the eye image;
a step of performing sight prediction according to the feature of the acquired eye image and a sight estimation model to obtain a corresponding number of sight positions;
a step of quantizing the corresponding number of sight positions according to a sight dictionary and establishing a sight histogram; and
a step of obtaining an information entropy according to the sight histogram and performing living body judgment, wherein
if the entropy is greater than 0, a real face is determined; otherwise, a spoofing face is determined.
2. The face living body judgment method according to claim 1, characterized in that the step of obtaining the eye image of each face image and the feature of the eye image comprises:
a step of performing face localization and detection of inner and outer eye corner points;
a step of cutting the eye region according to the eye corner positions and aligning it to obtain an aligned eye image; and
a step of obtaining the feature of the eye image from the aligned eye image.
3. The face living body judgment method according to claim 2, characterized in that the step of cutting the eye region according to the eye corner positions and aligning it to obtain the aligned eye image comprises:
a step of defining an eye image standard template from statistics of the collected face images and setting inner and outer eye corner positions; and
a step of subjecting the face region to scale and rotation transformations according to the eye corner positions to obtain the aligned eye image.
4. The face living body judgment method according to claim 2, characterized in that the step of obtaining the feature of the eye image from the aligned eye image is specifically:
dividing the eye image into r × c subregions, extracting a dual-histogram local binary pattern feature from each subregion, and concatenating the dual-histogram local binary pattern features of all subregions to form the feature of the eye image.
5. The face living body judgment method according to any one of claims 1-4, characterized by further comprising:
a step of establishing the sight estimation model, which includes:
a step of collecting data, the data being face images captured while the user successively observes a set number of viewpoints;
a step of obtaining the eye image of each face image and the feature of the eye image; and
a step of establishing and solving the sight estimation model according to the feature of the eye image and the coordinates of the corresponding viewpoint.
6. The face living body judgment method according to claim 5, characterized in that the sight estimation model is:
wherein xi ∈ Rn is the feature of the eye image and yi = (px,i, py,i) ∈ R2 is the two-dimensional coordinate of the corresponding viewpoint.
7. A living body judgment system for distinguishing a real face from a spoofing face in face recognition, characterized in that it comprises:
an image acquisition unit for acquiring a certain number of face images;
a feature extraction unit for obtaining an eye image of each face image and a feature of the eye image;
a sight estimation unit for performing sight prediction according to the feature of the acquired eye image and a sight estimation model to obtain a corresponding number of sight positions;
a sight statistics unit for quantizing the corresponding number of sight positions according to a sight dictionary and establishing a sight histogram; and
a living body judgment unit for obtaining an information entropy according to the sight histogram and performing living body judgment, wherein
if the entropy is greater than 0, a real face is determined; otherwise, a spoofing face is determined.
8. The living body judgment system according to claim 7, characterized in that the feature extraction unit comprises:
a localization and detection module for performing face localization and detection of inner and outer eye corner points;
an eye image alignment module for cutting the eye region according to the eye corner positions and aligning it to obtain an aligned eye image; and
a feature extraction module for obtaining the feature of the eye image from the aligned eye image.
9. The living body judgment system according to claim 8, characterized in that the eye image alignment module specifically performs the following operations:
defining an eye image standard template from statistics of the collected face images and setting inner and outer eye corner positions; and
subjecting the face region to scale and rotation transformations according to the eye corner positions to obtain the aligned eye image.
10. The living body judgment system according to claim 8, characterized in that the feature extraction module specifically performs the following operations:
dividing the eye image into r × c subregions, extracting a dual-histogram local binary pattern feature from each subregion, and concatenating the dual-histogram local binary pattern features of all subregions to form the feature of the eye image.
11. The living body judgment system according to claim 7, characterized in that the image acquisition unit is a camera of 640 × 480 pixels.
12. The living body judgment system according to any one of claims 7-11, characterized in that the sight estimation model is:
wherein xi ∈ Rn is the feature of the eye image and yi = (px,i, py,i) ∈ R2 is the two-dimensional coordinate of the corresponding viewpoint.
CN201410327039.4A 2014-07-10 2014-07-10 Face living body judgment method and system Active CN105243386B (en)

Publications (2)

Publication Number Publication Date
CN105243386A CN105243386A (en) 2016-01-13
CN105243386B true CN105243386B (en) 2019-02-05

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant