CN107918773A - Face liveness detection method and apparatus, and electronic device - Google Patents

Face liveness detection method and apparatus, and electronic device

Info

Publication number
CN107918773A
Authority
CN
China
Prior art keywords
image
region
feature
face
identified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711330803.3A
Other languages
Chinese (zh)
Other versions
CN107918773B (en)
Inventor
刘昌平
孙旭东
黄磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hanwang Technology Co Ltd
Original Assignee
Hanwang Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hanwang Technology Co Ltd filed Critical Hanwang Technology Co Ltd
Priority to CN201711330803.3A
Publication of CN107918773A
Application granted
Publication of CN107918773B
Active legal status (Current)
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/40 Spoof detection, e.g. liveness detection
    • G06V 40/45 Detection of the body part being alive

Abstract

The present invention provides a face liveness detection method, belonging to the field of biometric recognition, which solves the problem of the low recognition accuracy of prior-art face liveness detection methods. The method includes: obtaining the image features of the K image regions of each of the two images contained in an image pair of a face captured under different spectra; determining the region image feature to be identified of each image region of each image in the image pair; determining, from the region image features to be identified of the co-located image regions of the two images, the association feature of each co-located image region; and performing liveness detection on the face with a pre-trained face liveness detection model, based on the region image features to be identified and the association features of the K image regions. By combining the image features captured under different spectral conditions with the associations between the images, the method disclosed in the embodiments of the present invention further improves the accuracy of face liveness detection.

Description

Face liveness detection method and apparatus, and electronic device
Technical field
The present invention relates to the field of biometric recognition, and in particular to a face liveness detection method, an apparatus, and an electronic device.
Background technology
Biometric recognition technology is widely used in many areas of daily life. Among such technologies, face recognition is the most widely applied, owing to features such as convenient, contactless capture; it is used, for example, in security and access control. As the applications of face recognition have expanded, more and more methods of attacking it have appeared. Common attacks place media such as face photographs, videos and 3D mask models in front of the recognition device to simulate a real face. Since most prior-art attacks on face recognition use non-living media, performing liveness detection on the face to be recognized, so as to resist such attacks, is a problem in urgent need of a solution.
Prior-art face liveness detection methods fall broadly into three classes: methods based on texture features, methods based on motion features, and methods based on other features. Among these, the motion-based methods have low recognition accuracy when video is used as the attack medium, while the methods based on texture or other features are strongly affected by illumination, so their recognition accuracy is unstable.
In summary, prior-art face liveness detection methods are limited in the attack media they can handle, and their recognition accuracy is low.
Summary of the invention
The embodiments of the present invention provide a face liveness detection method and apparatus, so as to solve the problem of the low recognition accuracy of existing face liveness detection methods.
In a first aspect, an embodiment of the present invention provides a face liveness detection method, including:
obtaining the image features of the K image regions of each of the two images contained in an image pair of a face captured under different spectra, where K is a natural number greater than 0;
determining the region image feature to be identified of each image region of each image in the image pair;
determining, from the region image features to be identified of the co-located image regions of the two images in the image pair, the association feature of each co-located image region;
performing liveness detection on the face with a pre-trained face liveness detection model, based on the region image features to be identified and the association features of the K image regions.
In a second aspect, an embodiment of the present invention further provides a face liveness detection apparatus, including:
an image feature acquisition module, configured to obtain the image features of the K image regions of each of the two images contained in an image pair of a face captured under different spectra, where K is a natural number greater than 0;
a region-feature determination module, configured to determine the region image feature to be identified of each image region of each image in the image pair;
an association-feature determination module, configured to determine, from the region image features to be identified of the co-located image regions of the two images in the image pair, the association feature of each co-located image region;
a liveness detection module, configured to perform liveness detection on the face with a pre-trained face liveness detection model, based on the region image features to be identified and the association features of the K image regions.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the face liveness detection method described in the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the face liveness detection method described in the embodiments of the present invention.
In this way, the face liveness detection method disclosed in the embodiments of the present invention obtains the image features of the K image regions of each of the two images contained in an image pair of a face captured under different spectra; determines the region image feature to be identified of each image region of each image in the image pair; determines, from the region image features to be identified of the co-located image regions, the association feature of each co-located image region; and performs liveness detection on the face with a pre-trained face liveness detection model, based on the region image features to be identified and the association features of the K image regions, thereby solving the problem of the low recognition accuracy of existing face liveness detection methods. By combining the image features captured under different spectral conditions with the associations between the images, the disclosed method further improves the accuracy of liveness detection.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the drawings used in describing the embodiments are briefly introduced below. Evidently, the drawings described below illustrate only some embodiments of the present invention; those of ordinary skill in the art may obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of the face liveness detection method of Embodiment 1 of the present invention;
Fig. 2 is a flowchart of the face liveness detection method of Embodiment 2 of the present invention;
Fig. 3 is a schematic diagram of the image regions in Embodiment 2 of the present invention;
Fig. 4 is a first structural diagram of the face liveness detection apparatus of Embodiment 3 of the present invention;
Fig. 5 is a second structural diagram of the face liveness detection apparatus of Embodiment 3 of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments that those of ordinary skill in the art obtain from the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Embodiment 1:
This embodiment provides a face liveness detection method. As shown in Fig. 1, the method includes steps 10 to 13.
Step 10: obtain the image features of the K image regions of each of the two images contained in an image pair of a face captured under different spectra.
The K image regions are obtained by dividing each image evenly, where K is a natural number greater than 0.
In the embodiments of the present invention, the concrete scheme of the face liveness detection method is described in detail taking near-infrared light and visible light as the example of different spectral conditions. In specific implementations, those skilled in the art may also capture face images under other spectral conditions, according to the requirements of image capture and face recognition, and apply the face liveness detection method of the present invention to them. In specific implementations, the present application may also simultaneously capture face images of the face to be recognized under several different spectral conditions, and then arbitrarily select the face images under two of those spectral conditions to form the image pair of the face.
In this embodiment, when performing face liveness detection, two images of the face, one captured under visible light and one under near-infrared light, are first obtained to form one image pair; face liveness detection is then carried out on the basis of this image pair. To improve detection accuracy, the time interval between capturing the visible-light image and the near-infrared image should be as small as possible. For example, a visible-light camera and a near-infrared camera are mounted on the same capture device and image the face at the same time, yielding one visible-light face image and one near-infrared face image.
Next, the visible-light image and the near-infrared image of the obtained image pair are each divided into K image regions according to a preset rule. In specific implementations, the entire image may serve as a single image region, without any finer-grained division; alternatively, each image of the image pair may be divided, in left-to-right, top-to-bottom order, into multiple adjacent image regions of identical size. The present invention does not restrict the manner in which the K image regions are divided.
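The even left-to-right, top-to-bottom division described above can be sketched as follows. This is a minimal illustration under assumed conventions (a 4 x 4 grid over a single-channel NumPy array); the function name and grid size are not from the patent:

```python
import numpy as np

def split_into_regions(img, rows=4, cols=4):
    """Divide an H x W image into rows*cols equal, adjacent regions,
    ordered left-to-right, top-to-bottom (K = rows*cols)."""
    h, w = img.shape
    rh, rw = h // rows, w // cols
    return [img[r * rh:(r + 1) * rh, c * rw:(c + 1) * rw]
            for r in range(rows) for c in range(cols)]

# toy 8 x 8 "image": K = 16 regions of 2 x 2 pixels each
img = np.arange(64).reshape(8, 8)
regions = split_into_regions(img)
```

Each element of `regions` then corresponds to one image region i of the image, in the region order used throughout the description.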
Then, the image features of each image region of the visible-light image and of the near-infrared image are extracted. There are many methods for extracting the image features of an image region, such as LBP features or texture features; the present invention does not restrict the method used to extract the image features of the image regions.
Step 11: determine the region image feature to be identified of each image region of each image in the image pair. For each image region of each image in the image pair, its region image feature to be identified is determined separately. In specific implementations, the region image feature to be identified may be the image feature of that region. Preferably, the region image feature to be identified is the component of the region's image feature in a certain specified projection space.
Step 12: determine, from the region image features to be identified of the co-located image regions of the two images in the image pair, the association feature of each co-located image region.
Specifically, the association feature of a pair of co-located image regions of the two images of the image pair is the group of association features corresponding to the image regions at the same position in the two images. The association feature is used to represent the association between the co-located image regions of the two images of the image pair.
Specifically, after the region image features to be identified of the K image regions of each image of the image pair have been obtained, the association feature of each pair of co-located image regions of the visible-light image and the near-infrared image is determined from the region image features to be identified of the K image regions of the visible-light image and those of the K image regions of the near-infrared image. The feature to be identified of the image pair is then generated from the region image features to be identified of the visible-light image, the region image features to be identified of the near-infrared image, and the association features.
Taking K = 1 as an example, the whole visible-light image of the image pair serves as one image region and the whole near-infrared image serves as one image region. Denote the region image feature to be identified of the visible-light image V by FV, that of the near-infrared image N by FN, and the association feature of V and N by FR; the feature to be identified of the two images of the image pair can then be expressed as FVN = {FV, FN, FR}.
If K is greater than or equal to 2, i.e. each image of the image pair is divided into multiple image regions, image features are extracted separately for each image region of the visible-light image V and of the near-infrared image N. The region image feature to be identified of each image region is then determined from the image feature of that region. The association features of the co-located image regions of the visible-light image V and the near-infrared image N are further computed, and the region features to be identified and the association features of all image regions are arranged according to a preset rule, yielding the feature to be identified of the image pair formed by V and N.
Step 13: perform liveness detection on the face with a pre-trained face liveness detection model, based on the region image features to be identified and the association features of the K image regions.
Finally, the feature to be identified of the real-time image pair, composed of the region image features to be identified and the association features, is input into the pre-trained face liveness detection model for classification, which determines whether the face is a living face.
In specific implementations, a large number of image pairs, each composed of a visible-light image and a near-infrared image, are collected as training samples and given sample labels. The samples include image pairs of both living and non-living faces. The feature to be identified of every training sample is then extracted from its image pair, using the same feature-extraction method as at detection time, and used as the model input to train the face liveness detection model. In specific implementations, the face liveness detection model may be an SVM classifier, a neural network model, or another prior-art classification model; the present invention does not restrict this.
The face liveness detection method disclosed in the embodiments of the present invention obtains the image features of the K image regions of each of the two images contained in an image pair of a face captured under different spectra; determines the region image feature of each image region of each image of the image pair; determines, from the region image features of the co-located image regions, the association feature of each co-located image region; and finally performs liveness detection on the face with a pre-trained face liveness detection model, based on the region image features and the association features of the K image regions, thereby solving the problem of the low recognition accuracy of existing face liveness detection methods. By combining the image features captured under different spectral conditions with the associations between the images, the disclosed method further improves the accuracy of liveness detection.
Embodiment 2:
This embodiment provides a face liveness detection method. As shown in Fig. 2, the method includes steps 20 to 24.
In this embodiment, the image captured under near-infrared conditions is denoted N and the image captured under visible-light conditions is denoted V. The two images of the image pair are each divided, in left-to-right, top-to-bottom order, into 16 adjacent image regions of identical size, i.e. K equals 16, and the concrete technical scheme of the face liveness detection method is illustrated on this basis.
Step 20: train the face liveness detection model, based on the region image features to be identified and the association features extracted from a number of image pairs.
Each image pair contains two face images captured under different spectral conditions.
In specific implementations, the two images captured under visible-light conditions and under near-infrared conditions respectively are obtained as one image pair, forming one item of training data, and a sample label is set, where the sample label identifies the image pair as living or non-living. Following this method, a number of labelled items of training data, i.e. a number of image pairs, are obtained. Then, as shown in Fig. 3, the visible-light image 31 and the near-infrared image 32 of the image pair of every item of training data are evenly divided, by the same method, into 16 image regions each. In this embodiment, assume that M items of training data, i.e. M image pairs, are obtained; the training data set can be expressed as {(V_j, N_j, S)}, where S is the sample label, taking the value 0 or 1; 1 ≤ j ≤ M, with M and j positive integers; (V_j, N_j) denotes one image pair, V_j being the visible-light image and N_j the near-infrared image.
Then, the image features of the 16 image regions of every visible-light image V_j are extracted, expressed as FV_j = {FV_j1, FV_j2, ..., FV_ji}; the image features of the 16 image regions of every near-infrared image N_j are expressed as FN_j = {FN_j1, FN_j2, ..., FN_ji}, where i = 16.
Then, for every image pair, the region image features to be identified and the association feature of each pair of co-located image regions of its two images (such as 311 and 321 in Fig. 3) are computed. In specific implementations, the region image feature to be identified may be the image feature extracted from the co-located image region, in which case the association feature is the similarity between the region image features to be identified of the co-located image regions.
Taking the image pair (V_j, N_j) as an example, the association feature FR_j1 of FV_j1 and FN_j1, the association feature FR_j2 of FV_j2 and FN_j2, ..., and the association feature FR_ji of FV_ji and FN_ji are computed in turn. To improve the accuracy of face detection, the feature to be identified includes the region image features to be identified of the K image regions and the association features representing the associations between the co-located image regions of the two images of the image pair.
Finally, the region image features to be identified of the co-located image regions of the visible-light image, the region image features to be identified of the near-infrared image, and the association features of those image regions together form the feature to be identified of the two images of the image pair.
For example, for the image pair (V_j, N_j), the feature to be identified can be expressed as: FVN = {{FV_j1, FN_j1, FR_j1}, {FV_j2, FN_j2, FR_j2}, ..., {FV_ji, FN_ji, FR_ji}}.
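The per-region grouping of FVN above can be sketched as a simple flattening step; the function name and list-based layout are illustrative assumptions, not the patent's own code:

```python
def assemble_pair_feature(fv, fn, fr):
    """fv, fn: lists of K per-region feature vectors (visible-light /
    near-infrared); fr: list of K association values. Returns the flat
    pair feature, grouped per region as (FV_i, FN_i, FR_i)."""
    feat = []
    for v, n, r in zip(fv, fn, fr):
        feat.extend(list(v) + list(n) + [r])
    return feat

# two regions with 2-dimensional region features and one association value each
feat = assemble_pair_feature([[1, 2], [3, 4]], [[5, 6], [7, 8]], [0.9, 0.8])
```

The resulting flat vector is the form that would be fed to a classifier such as an SVM.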
For every item of training data, one string of features to be identified is thus obtained; based on the feature to be identified and the sample label of every item of training data, the face liveness detection model can be trained. Taking an SVM classifier as the face liveness detection model, for example, the SVM classifier is obtained by training on the features to be identified and the sample labels of the obtained training data.
In specific implementations, the image feature of an image region may be an LBP (local binary pattern) feature, a texture feature, or the like. In this embodiment, the extraction process of the image feature is illustrated by extracting the LBP feature of each image region. For each image region, the classic LBP feature with parameters (8, 1) is extracted: around each pixel, the LBP(8, 1) feature finds the 8 sampling points at a distance of 1 pixel from the centre pixel and arranges them clockwise; if the value of a sampling point is not lower than that of the centre pixel, the bit "1" is obtained, otherwise the bit "0"; each centre pixel thus yields a corresponding 8-bit binary string. Finally, by counting the frequency of each value over all pixels of the image region, a histogram is obtained, and this histogram serves as the feature vector representing the region.
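The LBP(8, 1) extraction just described can be sketched as follows. The clockwise starting point (top-left) and the "not lower than" comparison are assumed conventions, since the text does not fix them:

```python
import numpy as np

# 8 neighbours at radius 1, arranged clockwise starting from the top-left
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_histogram(region):
    """Classic LBP(8, 1) over one image region: each interior pixel is
    replaced by an 8-bit code (neighbour >= centre -> bit 1), and the
    region is summarised by the 256-bin histogram of those codes."""
    h, w = region.shape
    hist = np.zeros(256, dtype=int)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            centre = region[y, x]
            code = 0
            for dy, dx in OFFSETS:
                code = (code << 1) | int(region[y + dy, x + dx] >= centre)
            hist[code] += 1
    return hist

# constant region: every neighbour equals its centre, so every code is 255
flat = np.full((6, 6), 7)
hist = lbp_histogram(flat)
```

The 256-bin histogram is the per-region feature vector used in this embodiment.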
In specific implementations, since the visible-light image and the near-infrared image lie in different vector spaces, in order to improve the accuracy of the association judgement, it is preferred that the region image feature to be identified include the component of the image feature of each image region along a specified projection direction. The image features of the visible-light image and of the near-infrared image are each projected into an optimal vector space; the feature components in that optimal vector space are then determined as the region image features to be identified, and the association of each pair of image regions is determined from those region image features to be identified.
In specific implementations, first, for each image pair (V_j, N_j), the correlation coefficients of the image-feature components of the K image regions of each image along candidate projection directions are computed, and the projection directions that maximize the correlation coefficient are determined as the best projection directions w_V^i and w_N^i, i.e. the optimal vector space of image region i of the visible-light image V_j and that of image region i of the near-infrared image N_j. Then the component P_V^i of image region i of V_j along the projection direction w_V^i is obtained as the region image feature to be identified of image region i of V_j, and the component P_N^i of image region i of N_j along the projection direction w_N^i as the region image feature to be identified of image region i of N_j.
In specific implementations, the correlation between the feature vectors obtained under different spectra can be computed by canonical correlation analysis (CCA). Canonical correlation analysis is used to learn the best projection direction w_V^i of the i-th image region of the visible-light image V and the best projection direction w_N^i of the i-th image region of the near-infrared image N, so as to maximize the correlation coefficient ρ_i of the two projections (w_V^i)^T FV_i and (w_N^i)^T FN_i:

ρ_i = E[((w_V^i)^T FV_i) ((w_N^i)^T FN_i)] / sqrt( E[((w_V^i)^T FV_i)^2] · E[((w_N^i)^T FN_i)^2] )

where the superscript T denotes the transpose of a vector and E[·] denotes expectation. To simplify this equation further, the within-class covariance matrices C_VV and C_NN and the between-class covariance matrices C_NV and C_VN are introduced. Since all the feature vectors are extracted from relatively small sub-region images, a regularization parameter λ is introduced into the within-class covariance matrices to avoid overfitting, and the optimization objective above can be rewritten as:

ρ_i = ((w_V^i)^T C_VN w_N^i) / sqrt( ((w_V^i)^T (C_VV + λI) w_V^i) · ((w_N^i)^T (C_NN + λI) w_N^i) )

where the regularization parameter λ is set according to experimental data. The optimization objective can be solved by the regularized canonical correlation analysis algorithm, which is not repeated in the embodiments of the present invention.
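One standard way to solve the regularized objective, not spelled out in the text, is to whiten with the ridge-regularized covariances and take an SVD. The following is a sketch under assumed conventions (sample layout, λ value), not the patent's own solver:

```python
import numpy as np

def regularized_cca(X, Y, lam=0.1):
    """First canonical pair for samples X (n x p) and Y (n x q), with a
    ridge term lam on the within-set covariances (regularized CCA).
    Returns (w_x, w_y, rho)."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    Cxx = Xc.T @ Xc / n + lam * np.eye(Xc.shape[1])
    Cyy = Yc.T @ Yc / n + lam * np.eye(Yc.shape[1])
    Cxy = Xc.T @ Yc / n
    Lx, Ly = np.linalg.cholesky(Cxx), np.linalg.cholesky(Cyy)
    # whitened cross-covariance; its top singular value is the correlation
    M = np.linalg.solve(Lx, Cxy) @ np.linalg.inv(Ly).T
    U, s, Vt = np.linalg.svd(M)
    w_x = np.linalg.solve(Lx.T, U[:, 0])
    w_y = np.linalg.solve(Ly.T, Vt[0])
    return w_x, w_y, s[0]

rng = np.random.default_rng(0)
z = rng.normal(size=(200, 1))                     # shared latent signal
X = np.hstack([z, rng.normal(size=(200, 2))])
Y = np.hstack([z + 0.1 * rng.normal(size=(200, 1)), rng.normal(size=(200, 2))])
w_x, w_y, rho = regularized_cca(X, Y, lam=0.01)
```

With a strong shared component in the first column of each view, the learned directions recover a near-unit correlation.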
Once the two best projection directions w_V^i and w_N^i have been obtained, the association feature of the images captured under the different spectral conditions can be constructed. The association feature of each image region is determined from the cosine of the angle between the image-feature components, i.e. the region image features to be identified, of the co-located image regions of the two images. In specific implementations, the formula

Ψ_i = (P_V^i · P_N^i) / (||P_V^i|| ||P_N^i||)

can be used to compute the correlation Ψ_i between the projection vector P_V^i of the i-th image region of the visible-light image V and the projection vector P_N^i of the i-th image region of the near-infrared image N, which serves as the association feature of that image region of the two images captured under the different spectral conditions. In specific implementations, the projection vector of each image region along the specified projection direction may comprise a 1-dimensional feature component or a multi-dimensional feature component, determined by the projection-direction matrix.
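The association feature Ψ_i above is a cosine similarity between the two projection vectors and can be sketched as (function name assumed for illustration):

```python
import numpy as np

def association_feature(p_v, p_n):
    """Cosine of the angle between the projected region features of the
    visible-light and near-infrared images (the Psi_i of the text)."""
    p_v, p_n = np.asarray(p_v, float), np.asarray(p_n, float)
    return float(p_v @ p_n / (np.linalg.norm(p_v) * np.linalg.norm(p_n)))

same = association_feature([1.0, 2.0], [2.0, 4.0])   # parallel vectors
orth = association_feature([1.0, 0.0], [0.0, 3.0])   # orthogonal vectors
```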
The face image is divided into a number of smaller image regions in order to adapt better to changes in illumination and to mine the association information between the visible-light and near-infrared images; however, the blocking scheme makes the feature vector longer, which may cause overfitting. Moreover, some image regions at the edges may contain mostly non-face content, and some image regions may even lie entirely outside the facial contour; their contribution to the system is small and may even be negative. The features of such useless image regions should be removed, or their negative effect on the system reduced as far as possible. Therefore, in the model training stage, in order to further improve detection accuracy, a feature weight is also computed for each image region of the whole image, to measure the importance of the different regions.
In specific implementations, training the face liveness detection model based on the features to be identified extracted from a number of image pairs further includes: determining the feature weight corresponding to the region image feature to be identified and the association feature of each image region, so that the face liveness detection model performs liveness detection on the face based on the region image features to be identified and association features of the two images of the face's image pair, together with the corresponding feature weights.
Determining the feature weight corresponding to the region image feature to be identified and the association feature of each image region includes: for the image features extracted from the number of image pairs, setting aside the image features of a different image region each time and computing the maximized correlation coefficient of the image-feature projection vectors of the remaining K − 1 image regions; and determining, from the maximized correlation coefficient computed each time, the feature weight corresponding to the region image feature to be identified and the association feature of the image region set aside in that round.
Assume the several image pairs used for training the model are expressed as {(V_j, N_j)}, 1 ≤ j ≤ M, where M and j are positive integers; (V_j, N_j) denotes one image pair, V_j the visible-light image and N_j the near-infrared image. Each image is divided into 16 image regions, and after image features are extracted from each image region of each image respectively, the image features of the several image pairs are obtained as FP_M = {(FV_j, FN_j) | 1 ≤ j ≤ M}, where FV_j is the image feature of visible-light image j and FN_j is the image feature of near-infrared image j; the image feature of each image comprises the image features of its 16 image regions. The image features of all visible-light images are expressed as FV = {FV_j | 1 ≤ j ≤ M} and those of all near-infrared images as FN = {FN_j | 1 ≤ j ≤ M}, where K = 16. First, the image features FV_ji and FN_ji of the i-th image region are held out, and the remaining image features in FV and FN are composed into new feature vectors, expressed as FV^(−i) and FN^(−i). The best projection directions of these two feature sets can be computed by maximizing the correlation coefficient ρ^(−i):
ρ^(−i) = max over α^(−i), β^(−i) of corr(α^(−i)ᵀ FV^(−i), β^(−i)ᵀ FN^(−i)), where α^(−i) and β^(−i) are the projection directions of the vectors FV^(−i) and FN^(−i), respectively. The maximum correlation coefficient ρ^(−i) is solved using the same canonical correlation analysis (CCA) method described above.
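The best-projection computation described above is standard canonical correlation analysis. A minimal NumPy sketch follows; the function name `cca_first_pair`, the ridge term `reg`, and the whitening-plus-SVD solution route are illustrative choices, not details taken from the specification:

```python
import numpy as np

def cca_first_pair(X, Y, reg=1e-6):
    """First canonical correlation between row-sample matrices X (n, p)
    and Y (n, q).  Returns (rho, a, b): the maximized correlation and the
    projection directions alpha, beta.  `reg` is a small ridge term added
    for numerical stability."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = Xc.T @ Xc / n + reg * np.eye(Xc.shape[1])
    Syy = Yc.T @ Yc / n + reg * np.eye(Yc.shape[1])
    Sxy = Xc.T @ Yc / n
    # Whiten both views; the leading singular value of the whitened
    # cross-covariance is the first canonical correlation.
    Lx = np.linalg.cholesky(Sxx)
    Ly = np.linalg.cholesky(Syy)
    M = np.linalg.solve(Lx, Sxy) @ np.linalg.inv(Ly).T
    U, s, Vt = np.linalg.svd(M)
    a = np.linalg.solve(Lx.T, U[:, 0])
    b = np.linalg.solve(Ly.T, Vt[0, :])
    return s[0], a, b
```

For ρ^(−i), X and Y would hold the held-out feature sets FV^(−i) and FN^(−i), one row per training image pair.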
Afterwards, the feature weight c_i of image region i can be calculated from the maximized correlation coefficient ρ^(−i). With this scheme, the maximum correlation coefficient corresponding to holding out each image region can be obtained, and the feature weight of each image region can then be determined.
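The hold-one-region-out loop above can be sketched as follows. Because the specification's exact weight formula does not survive in this text, the conversion c_i = 1 − ρ^(−i) (normalized) is an illustrative placeholder only; the names `max_corr` and `region_weights` and the ridge term are likewise assumptions:

```python
import numpy as np

def max_corr(X, Y, reg=1e-6):
    """First canonical correlation between row-sample matrices X and Y."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    Sxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Lx, Ly = np.linalg.cholesky(Sxx), np.linalg.cholesky(Syy)
    M = np.linalg.solve(Lx, Xc.T @ Yc / n) @ np.linalg.inv(Ly).T
    return np.linalg.svd(M, compute_uv=False)[0]

def region_weights(FV, FN, K, Kp):
    """FV, FN: (M, K*Kp) stacked features for M visible/NIR training images,
    Kp columns per region.  For each region i, hold out its columns, compute
    rho_(-i) on the remaining K-1 regions, and turn the result into a weight.
    c_i = 1 - rho_(-i), normalized, is a placeholder for the patent's formula."""
    rhos = []
    for i in range(K):
        keep = [a for a in range(K * Kp) if not (i * Kp <= a < (i + 1) * Kp)]
        rhos.append(max_corr(FV[:, keep], FN[:, keep]))
    c = 1.0 - np.asarray(rhos)
    return c / c.sum()
```

Regions whose removal leaves the cross-spectral correlation almost unchanged end up with small weights under this placeholder, matching the stated goal of suppressing uninformative regions.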
Finally, the feature weight of each image region is set, in the trained model, as the feature weight corresponding to the region image feature to be identified and the association feature of the image region at the same position.
In specific implementation, taking the training of an SVM classifier with a radial basis function (RBF) kernel as an example, and assuming the input features of the model are f_x and f_y, the relevant parameters in the RBF kernel can be adjusted by the weight of each image region. For example, for the kernel function K(f_x, f_y) = exp(−d(f_x, f_y)/(2σ²)), the distance is computed as d(f_x, f_y) = (f_x − f_y)ᵀ Q (f_x − f_y), where the element in row a and column b of the diagonal square matrix Q is defined as Q_{a,b} = c_t when a = b and 0 otherwise, for a, b = 1, 2, …, K × K′. Here K is the number of image regions in each image, K′ is the feature dimension corresponding to each image region (K′ = 3 in this embodiment), and the subscript t of the matrix element c_t is t = ⌊(a − 1)/K′⌋ + 1, where ⌊·⌋ denotes rounding down.
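The region-weighted RBF distance can be sketched as below; the name `region_weighted_rbf`, the bandwidth σ, and the default values are illustrative assumptions, with Q realized implicitly by repeating each region weight over its K′ consecutive dimensions:

```python
import numpy as np

def region_weighted_rbf(fx, fy, c, Kp=3, sigma=1.0):
    """RBF kernel K(fx, fy) = exp(-d / (2 * sigma^2)) with the quadratic-form
    distance d = (fx - fy)^T Q (fx - fy), where Q is diagonal and
    Q[a, a] = c[t], t being the region that owns dimension a (each region
    owns Kp consecutive dimensions)."""
    d = np.asarray(fx, float) - np.asarray(fy, float)
    q = np.repeat(np.asarray(c, float), Kp)  # diagonal of Q, length K * Kp
    dist = np.sum(q * d * d)                 # (fx - fy)^T Q (fx - fy)
    return np.exp(-dist / (2.0 * sigma ** 2))
```

A zero weight for region t makes the kernel ignore that region's dimensions entirely, which is how uninformative regions are suppressed.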
Step 21: obtain the image features of the K image regions of each of the two images contained in the image pair of the face captured under different spectra.
For the specific scheme of capturing the image pair of the face under different spectra, refer to Embodiment 1; details are not repeated here.
Let the image captured under near-infrared light be denoted N_cur and the image captured under visible light V_cur. Each image is divided into regions by the same method used during model training, yielding the 16 image regions of each image. Then, by the same method used to extract the image features of the image regions during model training, the image features of the 16 image regions of the visible-light image V_cur are extracted, giving the image feature FV_cur = {FV_cur1, FV_cur2, …, FV_curi}; likewise, the image features of the 16 image regions of the near-infrared image N_cur are extracted, giving FN_cur = {FN_cur1, FN_cur2, …, FN_curi}, where i = 16.
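The 4x4 region split and per-region feature extraction can be sketched as below. The per-region descriptor used here (mean, standard deviation, gradient energy) is a stand-in assumption; the embodiment extracts whatever descriptor was used during training, only the equal-grid split being essential:

```python
import numpy as np

def split_into_regions(img, grid=4):
    """Split a 2-D grayscale image into grid x grid equal regions
    (16 regions for grid=4, as in this embodiment)."""
    h, w = img.shape
    bh, bw = h // grid, w // grid
    return [img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            for r in range(grid) for c in range(grid)]

def region_features(img, grid=4):
    """Stand-in per-region descriptor: mean, standard deviation, and
    mean gradient energy of each block.  Returns shape (grid*grid, 3)."""
    feats = []
    for blk in split_into_regions(img, grid):
        gy, gx = np.gradient(blk.astype(float))
        feats.append([blk.mean(), blk.std(), np.mean(gx ** 2 + gy ** 2)])
    return np.asarray(feats)
```

Applying `region_features` to V_cur and N_cur in the same region order keeps the i-th rows of both outputs aligned to the same face location, which the later per-region comparison relies on.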
Step 22: determine the region image feature to be identified of each image region of each image in the image pair.
In this embodiment, the region image feature to be identified includes: the component of the image feature of each image region in a specified projection direction.
In specific implementation, the correlation between the feature vectors obtained under different spectra, i.e., the correlation between the near-infrared image N_cur and the visible-light image V_cur, can first be computed by the canonical correlation analysis (CCA) method. First, the specified projection direction of each image region of the near-infrared image N_cur and of the visible-light image V_cur is determined respectively.
Preferably, the specified projection direction is the projection direction that maximizes the correlation coefficient of the image feature components of the image regions at the same position in the two images. For example, for the image feature FV_curi of the i-th image region of the visible-light image V_cur and the image feature FN_curi of the i-th image region of the near-infrared image N_cur, the best projection directions α_i and β_i of FV_curi and FN_curi are determined using the method for computing the best projection directions of the two images of a training sample during model training.
Then, the image feature component x_i of the i-th image region of the visible-light image corresponding to the best projection direction α_i, and the image feature component y_i of the i-th image region of the near-infrared image corresponding to the best projection direction β_i, are further determined. In this way, the image feature components of all image regions of the near-infrared image N_cur and the visible-light image V_cur in the specified projection directions, i.e., the best projection directions, can be obtained.
Step 23: according to the region image features to be identified of the image regions at the same position in each image of the image pair, determine the association feature of that same-position image region.
In specific implementation, determining the association feature of the same-position image region according to the region image features to be identified of the same-position image regions of each image in the image pair includes: taking the region image features to be identified corresponding to the image regions at the same position in each image of the image pair as one group of region image features to be identified, and determining, for each group, the association feature corresponding to that group of region image features to be identified.
Preferably, the association feature of each image region is determined according to the cosine of the angle between the region image features to be identified of the image regions at the same position in the two images. After the image feature component of each image region in the specified projection direction has been determined, the association feature Ψ_i of each image region is further computed as Ψ_i = (x_i · y_i)/(‖x_i‖ ‖y_i‖), i.e., the included-angle cosine of the two components.
Finally, for each image region i, the region image feature to be identified of that image region in the visible-light image (i.e., the image feature component x_i), the region image feature to be identified of that image region in the near-infrared image (i.e., the image feature component y_i), and the association feature component Ψ_i of the near-infrared and visible-light images based on that image region, are combined into one group of feature vectors, for example expressed as (x_i, y_i, Ψ_i), as one dimension of the feature to be identified of the image pair. Arranging the K groups of feature vectors corresponding to the K image regions of the two images of the face's image pair according to a preset rule yields the feature to be identified of the two images of the image pair.
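Assembling one region's feature group (x_i, y_i, Ψ_i) and concatenating the K groups can be sketched as below. Since the cosine of two scalar projections is degenerate, this sketch takes Ψ_i between the raw region features, which is one plausible reading of the embodiment; `region_feature_vector` and `image_pair_feature` are assumed names:

```python
import numpy as np

def region_feature_vector(fv, fn, alpha, beta):
    """One region's feature group (x_i, y_i, psi_i): the visible-light
    projection alpha . fv, the near-infrared projection beta . fn, and the
    included-angle cosine between the two region features."""
    fv, fn = np.asarray(fv, float), np.asarray(fn, float)
    xv = float(np.dot(alpha, fv))
    xn = float(np.dot(beta, fn))
    psi = float(np.dot(fv, fn) / (np.linalg.norm(fv) * np.linalg.norm(fn)))
    return np.array([xv, xn, psi])

def image_pair_feature(FV, FN, dirs):
    """Concatenate the K per-region groups in region order (the 'preset
    rule' here is simply region order); dirs holds (alpha_i, beta_i) pairs."""
    return np.concatenate([region_feature_vector(fv, fn, a, b)
                           for (fv, fn), (a, b) in zip(zip(FV, FN), dirs)])
```

With K = 16 regions and K′ = 3 values per region, the assembled vector has 48 dimensions, matching the dimension K × K′ used by the weighted kernel above.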
Step 24: based on the region image features to be identified and the association features of the K image regions, perform liveness detection on the face through the pre-trained face liveness detection model.
Finally, the feature to be identified formed from the region image features and association features of the K image regions is input to the pre-trained face liveness detection model for classification, determining whether the face is a live face.
In the face liveness detection method disclosed by the embodiments of the present invention, the image features of the K image regions of each of the two images contained in the image pair of the face captured under different spectra are obtained; the feature to be identified of the two images of the image pair is determined according to the image features of the K image regions of each image; and, based on the feature to be identified, liveness detection is performed on the face through a pre-trained face liveness detection model, solving the problem of low recognition accuracy of existing face liveness detection methods.
The method disclosed by the embodiments of the present invention performs liveness detection by combining the image features captured under different spectral conditions with the association between the images, further improving the accuracy of liveness detection. By dividing the image pair captured under different spectral conditions into multiple image regions, determining the association feature of each image region separately, and then composing the feature to be identified from the region image features to be identified and the association features of the image regions, the feature granularity is refined and the detection accuracy can be further improved. When training the face liveness detection model, the feature weight of each image region is further trained, which can reduce the influence, in the liveness detection process, of image regions that contain no facial image or contribute little or nothing to liveness detection, further improving the accuracy of liveness detection.
Embodiment 3:
Correspondingly, as shown in Fig. 4, the invention also discloses a face liveness detection device, the device including:
an image feature acquisition module 40, configured to obtain the image features of the K image regions of each of the two images contained in the image pair of the face captured under different spectra, where K is a natural number greater than 0;
a region-image-feature-to-be-identified determining module 41, configured to determine the region image feature to be identified of each image region of each image in the image pair;
an association feature determining module 42, configured to determine the association feature of the same-position image region according to the region image features to be identified of the image regions at the same position in each image of the image pair; and
a liveness detection module 43, configured to perform liveness detection on the face through a pre-trained face liveness detection model, based on the region image features to be identified and the association features of the K image regions.
In the face liveness detection device disclosed by the embodiments of the present invention, the image features of the K image regions of each of the two images contained in the image pair of the face captured under different spectra are obtained; then the region image feature to be identified of each image region of each image in the image pair is determined; the association feature of each same-position image region is determined according to the region image features to be identified of the image regions at the same position in each image; and finally, based on the region image features and association features of the K image regions, liveness detection is performed on the face through a pre-trained face liveness detection model, solving the problem of low recognition accuracy of existing face liveness detection methods. The device disclosed by the embodiments of the present invention performs liveness detection by combining the image features captured under different spectral conditions with the association between the images, further improving the accuracy of liveness detection.
Optionally, the region image feature to be identified includes:
the component of the image feature of each image region in a specified projection direction.
Optionally, the association feature of each image region is determined according to the cosine of the angle between the region image features to be identified of the image regions at the same position in the two images.
Optionally, the specified projection direction is the projection direction that maximizes the correlation coefficient of the image feature components of the image regions at the same position in the two images.
Since the visible-light and near-infrared images correspond to different vector spaces, projecting the image features of the visible-light image and the near-infrared image each into an optimal vector space, and then determining the association of the feature components in that optimal vector space, improves the accuracy of the association judgment.
Optionally, as shown in Fig. 5, the device further includes:
a model training module 44, configured to train the face liveness detection model based on the region image features to be identified and the association features extracted from several image pairs, where each image pair contains two facial images captured respectively under different spectral conditions.
Optionally, K is a natural number greater than or equal to 2, and the model training module 44 is further configured to:
determine the feature weight corresponding to the region image feature to be identified and the association feature of each image region, so that the face liveness detection model performs liveness detection on the face based on the region image features to be identified and the association features of the two images of the face's image pair, together with the corresponding feature weights.
By dividing the image pair captured under different spectral conditions into multiple image regions, determining the association feature of each image region separately, and then composing the feature to be identified from the image features and association features of the image regions, the feature granularity is refined and the detection accuracy can be further improved.
Optionally, determining the feature weight corresponding to the region image feature to be identified and the association feature of each image region includes:
for the image features extracted from the several image pairs, retaining the image features of a different single image region each time, and computing the maximized correlation coefficient of the projection vectors of the image features of the remaining K−1 image regions; and
according to the maximized correlation coefficient computed each time, determining the feature weight corresponding to the region image feature to be identified and the association feature of the image region retained that time.
When training the face liveness detection model, further training the feature weight of each image region can reduce the influence, during liveness detection, of image regions that contain no facial image or contribute little or nothing to liveness detection, further improving the accuracy of liveness detection.
Correspondingly, an embodiment of the present invention also discloses an electronic device, the electronic device including a memory, a processor, and a computer program stored on the memory and runnable on the processor, where the processor, when executing the computer program, implements the face liveness detection methods described in Embodiments 1 and 2 of the present invention. The electronic device may be a mobile phone, a PAD, a tablet computer, a face recognition machine, or the like.
Correspondingly, an embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the steps of the face liveness detection methods described in Embodiments 1 and 2 of the present invention.
The device embodiments of the present invention correspond to the method embodiments; for the specific implementation of each module and unit in the device embodiments, refer to the method embodiments; details are not repeated here.
Those of ordinary skill in the art may realize that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical solution. A skilled professional may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of the present invention.
Those of ordinary skill in the art can clearly understand that, in the embodiments provided in this application, the units described as separate components may or may not be physically separate; that is, they may be located in one place or distributed over multiple network units. In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
The above are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can be conceived without creative effort by those familiar with the technical field, within the technical scope disclosed by the present invention, shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the scope of the claims.

Claims (16)

  1. A face liveness detection method, characterized in that the method includes:
    obtaining the image features of the K image regions of each of the two images contained in an image pair of a face captured under different spectra, where K is a natural number greater than 0;
    determining the region image feature to be identified of each image region of each image in the image pair;
    determining, according to the region image features to be identified of the image regions at the same position in each image of the image pair, the association feature of the same-position image region; and
    performing liveness detection on the face through a pre-trained face liveness detection model, based on the region image features to be identified and the association features of the K image regions.
  2. The method according to claim 1, characterized in that the region image feature to be identified includes:
    the component of the image feature of each image region in a specified projection direction.
  3. The method according to claim 2, characterized in that the association feature of each image region is determined according to the cosine of the angle between the region image features to be identified of the image regions at the same position in the two images.
  4. The method according to claim 2, characterized in that the specified projection direction is the projection direction that maximizes the correlation coefficient of the image feature components of the image regions at the same position in the two images.
  5. The method according to any one of claims 1 to 4, characterized in that, before the step of performing liveness detection on the face through a pre-trained face liveness detection model based on the region image features to be identified and the association features of the K image regions, the method further includes:
    training the face liveness detection model based on the region image features to be identified and the association features extracted from several image pairs, where each image pair contains two facial images captured respectively under different spectral conditions.
  6. The method according to claim 5, characterized in that K is a natural number greater than or equal to 2, and the step of training the face liveness detection model based on the region image features to be identified and the association features extracted from the several image pairs includes:
    determining the feature weight corresponding to the region image feature to be identified and the association feature of each image region, so that the face liveness detection model performs liveness detection on the face based on the region image features to be identified and the association features of the two images of the face's image pair, together with the corresponding feature weights.
  7. The method according to claim 6, characterized in that the step of determining the feature weight corresponding to the region image feature to be identified and the association feature of each image region includes:
    for the image features extracted from the several image pairs, retaining the image features of a different single image region each time, and computing the maximized correlation coefficient of the projection vectors of the image features of the remaining K−1 image regions; and
    according to the maximized correlation coefficient computed each time, determining the feature weight corresponding to the region image feature to be identified and the association feature of the image region retained that time.
  8. A face liveness detection device, characterized by including:
    an image feature acquisition module, configured to obtain the image features of the K image regions of each of the two images contained in an image pair of a face captured under different spectra, where K is a natural number greater than 0;
    a region-image-feature-to-be-identified determining module, configured to determine the region image feature to be identified of each image region of each image in the image pair;
    an association feature determining module, configured to determine, according to the region image features to be identified of the image regions at the same position in each image of the image pair, the association feature of the same-position image region; and
    a liveness detection module, configured to perform liveness detection on the face through a pre-trained face liveness detection model, based on the region image features to be identified and the association features of the K image regions.
  9. The device according to claim 8, characterized in that the region image feature to be identified includes:
    the component of the image feature of each image region in a specified projection direction.
  10. The device according to claim 9, characterized in that the association feature of each image region is determined according to the cosine of the angle between the region image features to be identified of the image regions at the same position in the two images.
  11. The device according to claim 9, characterized in that the specified projection direction is the projection direction that maximizes the correlation coefficient of the image feature components of the image regions at the same position in the two images.
  12. The device according to any one of claims 8 to 11, characterized in that the device further includes:
    a model training module, configured to train the face liveness detection model based on the region image features to be identified and the association features extracted from several image pairs, where each image pair contains two facial images captured respectively under different spectral conditions.
  13. The device according to claim 12, characterized in that K is a natural number greater than or equal to 2, and the model training module is further configured to:
    determine the feature weight corresponding to the region image feature to be identified and the association feature of each image region, so that the face liveness detection model performs liveness detection on the face based on the region image features to be identified and the association features of the two images of the face's image pair, together with the corresponding feature weights.
  14. The device according to claim 13, characterized in that determining the feature weight corresponding to the region image feature to be identified and the association feature includes:
    for the image features extracted from the several image pairs, retaining the image features of a different single image region each time, and computing the maximized correlation coefficient of the projection vectors of the image features of the remaining K−1 image regions; and
    according to the maximized correlation coefficient computed each time, determining the feature weight corresponding to the region image feature to be identified and the association feature of the image region retained that time.
  15. An electronic device, including a memory, a processor, and a computer program stored on the memory and runnable on the processor, characterized in that the processor, when executing the computer program, implements the face liveness detection method according to any one of claims 1 to 7.
  16. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the steps of the face liveness detection method according to any one of claims 1 to 7.
CN201711330803.3A 2017-12-13 2017-12-13 Face living body detection method and device and electronic equipment Active CN107918773B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711330803.3A CN107918773B (en) 2017-12-13 2017-12-13 Face living body detection method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN107918773A true CN107918773A (en) 2018-04-17
CN107918773B CN107918773B (en) 2021-06-04

Family

ID=61893196

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711330803.3A Active CN107918773B (en) 2017-12-13 2017-12-13 Face living body detection method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN107918773B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040002928A1 (en) * 2002-06-27 2004-01-01 Industrial Technology Research Institute Pattern recognition method for reducing classification errors
CN101964056A (en) * 2010-10-26 2011-02-02 徐勇 Bimodal face authentication method with living body detection function and system
US20130342703A1 (en) * 2012-06-25 2013-12-26 PSP Security Co., Ltd. System and Method for Identifying Human Face
CN104573672A (en) * 2015-01-29 2015-04-29 厦门理工学院 Discriminative embedding face recognition method on basis of neighbor preserving
US20170017831A1 (en) * 2015-07-13 2017-01-19 The Johns Hopkins University Face detection, augmentation, spatial cueing and clutter reduction for the visually impaired
CN106778607A (en) * 2016-12-15 2017-05-31 国政通科技股份有限公司 A kind of people based on recognition of face and identity card homogeneity authentication device and method
CN107358180A (en) * 2017-06-28 2017-11-17 江苏爱朋医疗科技股份有限公司 A kind of pain Assessment method of human face expression
CN107358181A (en) * 2017-06-28 2017-11-17 重庆中科云丛科技有限公司 The infrared visible image capturing head device and method of monocular judged for face live body


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Sun Lin: "Research on Liveness Detection Technology in Face Recognition", China Doctoral Dissertations Full-text Database, Information Science and Technology *
Wang Yueyang: "Face Liveness Detection Based on Multispectral Imaging", China Masters' Theses Full-text Database, Information Science and Technology *
Wang Ying: "Research on Image-based Face Recognition Technology", China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108549886A (en) * 2018-06-29 2018-09-18 汉王科技股份有限公司 A kind of human face in-vivo detection method and device
WO2020000908A1 (en) * 2018-06-29 2020-01-02 汉王科技股份有限公司 Method and device for face liveness detection
CN109147116A (en) * 2018-07-25 2019-01-04 深圳市飞瑞斯科技有限公司 The method that smart lock and control smart lock are opened
CN111488756A (en) * 2019-01-25 2020-08-04 杭州海康威视数字技术股份有限公司 Face recognition-based living body detection method, electronic device, and storage medium
CN111488756B (en) * 2019-01-25 2023-10-03 杭州海康威视数字技术股份有限公司 Face recognition-based living body detection method, electronic device, and storage medium
US11830230B2 (en) 2019-01-25 2023-11-28 Hangzhou Hikvision Digital Technology Co., Ltd. Living body detection method based on facial recognition, and electronic device and storage medium
CN110633691A (en) * 2019-09-25 2019-12-31 北京紫睛科技有限公司 Binocular liveness detection method based on visible-light and near-infrared cameras
CN111191519A (en) * 2019-12-09 2020-05-22 同济大学 Living body detection method for user access of mobile power supply device
CN111191519B (en) * 2019-12-09 2023-11-24 同济大学 Living body detection method for user access of mobile power supply device

Also Published As

Publication number Publication date
CN107918773B (en) 2021-06-04

Similar Documents

Publication Publication Date Title
CN108038476B (en) A facial expression recognition feature extraction method based on edge detection and SIFT
Zhou et al. Salient region detection via integrating diffusion-based compactness and local contrast
Lu et al. Learning optimal seeds for diffusion-based salient object detection
CN106372581B (en) Method for constructing and training face recognition feature extraction network
Wong et al. Saliency-enhanced image aesthetics class prediction
CN107918773A (en) A face liveness detection method, apparatus, and electronic device
US9025864B2 (en) Image clustering using a personal clothing model
Satta et al. Fast person re-identification based on dissimilarity representations
CN104915673B (en) An object classification method and system based on the visual bag-of-words model
CN109154978A (en) System and method for detecting plant disease
US8861873B2 (en) Image clustering using a personal clothing model
JP6112801B2 (en) Image recognition apparatus and image recognition method
Casanova et al. Texture analysis using fractal descriptors estimated by the mutual interference of color channels
Hu et al. Classification of melanoma based on feature similarity measurement for codebook learning in the bag-of-features model
Wu et al. Natural scene text detection by multi-scale adaptive color clustering and non-text filtering
Han et al. High-order statistics of microtexton for HEp-2 staining pattern classification
Agbo-Ajala et al. A lightweight convolutional neural network for real and apparent age estimation in unconstrained face images
Chandaliya et al. Child face age progression and regression using self-attention multi-scale patch gan
CN104680190B (en) Object detection method and device
Jingade et al. DOG-ADTCP: A new feature descriptor for protection of face identification system
Tokarczyk et al. Beyond hand-crafted features in remote sensing
JP7077046B2 (en) Information processing device, subject identification method and computer program
Rotem et al. Combining region and edge cues for image segmentation in a probabilistic gaussian mixture framework
CN105844299B (en) An image classification method based on the bag-of-words model
Yoon et al. An accurate and real-time multi-view face detector using orfs and doubly domain-partitioning classifier

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant