CN109858375A - Living body faces detection method, terminal and computer readable storage medium - Google Patents
- Publication number: CN109858375A
- Application number: CN201811652890.9A
- Authority
- CN
- China
- Prior art keywords
- living body
- test
- image
- threshold value
- face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a living body face detection method, a terminal, and a computer-readable storage medium. The living body face detection method includes: capturing a person image of a person object from a video to be detected based on face detection technology, and comparing the person image with a reference face feature; if the comparison between the person image and the reference face feature passes, inputting the person image into a deep residual network model used for living body detection; and, based on the deep residual network model, judging whether the person image is a living body. The present invention requires no cooperative response actions from the user: the user only needs to walk past the camera of the living body face detection device as usual. The detection process is greatly simplified, improving the user experience in security checks and walk-through verification.
Description
Technical field
The present invention relates to the technical field of face recognition, and more particularly to a living body face detection method, a terminal, and a computer-readable storage medium.
Background technique
With the development of face recognition technology, it is used in more and more scenarios to perform identity authentication. However, because facial images are easy to obtain, a face recognition system cannot judge whether a facial image comes from the user in person or from a photo or video of the user, so face recognition technology carries security risks in application. It is therefore necessary to perform living body detection during face recognition to discover impersonation. At present, the more widely used living body detection methods are based on human-computer interaction, prompting the user to perform a series of random facial actions such as blinking, opening the mouth, or nodding. However, such methods reduce friendliness to the user: the living body detection depends on external cooperation, the detection process is cumbersome, and the user experience suffers.
The above content is only intended to assist in understanding the technical solution of the present invention, and does not constitute an admission that the above content is prior art.
Summary of the invention
The main purpose of the present invention is to provide a living body face detection method, a terminal, and a computer-readable storage medium, intended to solve the technical problem that existing living body detection depends on external cooperation and its detection process is unduly cumbersome.
To achieve the above object, the present invention provides a living body face detection method, which includes:
capturing a person image of a person object from a video to be detected based on face detection technology, and comparing the person image with a reference face feature;
if the comparison between the person image and the reference face feature passes, inputting the person image into a deep residual network model used for living body detection;
based on the deep residual network model, judging whether the person image is a living body.
Optionally, the step of capturing a person image of a person object from a video to be detected based on face detection technology and comparing the person image with a reference face feature includes:
photographing the person object with a camera to obtain the video to be detected;
obtaining video frame images in the video to be detected that contain face features matching a preset face model, and extracting the video frame images whose appearance times fall within a preset duration of one another as the person images of the person object;
comparing the face region image of each person image with the reference face feature of the person object; if the similarities of all the person images to the reference face feature are greater than or equal to a preset similarity threshold, determining that the comparison between the person images and the reference face feature passes;
if there exists a person image whose similarity to the reference face feature is less than the preset similarity threshold, determining that the comparison between the person images and the reference face feature fails.
Optionally, the step of comparing the face region image of the person image with the reference face feature of the person object includes:
obtaining the histogram-of-oriented-gradients (HOG) feature of the person image, and detecting the face region image in the person image according to the HOG feature and a linear classifier;
obtaining, according to the key point positions of the preset face model and a support vector regression algorithm, the key point positions of the face region image and the affine camera mapping the three-dimensional model to the two-dimensional image, and taking the key point positions of the face region image as the current key point positions;
performing triangulation on the current key point positions to obtain the triangular face corresponding to each current key point position; performing an affine transformation on each triangular face according to the affine camera to obtain the frontalized key point position of each current key point position; and adjusting the orientation of the face region image according to the frontalized key point positions to obtain a frontal face image;
performing image enhancement on the face region image to form a face-enhanced image;
obtaining the gray value of each pixel in the face-enhanced image; determining the DCP code corresponding to each pixel according to its gray value; extracting the DCP code of each key point position, calculating the statistical histogram of the DCP codes in each key point region, and taking these statistical histograms as the face feature;
comparing the face feature with the reference face feature of the person object, and taking the similarity between the face feature and the reference face feature as the similarity between the person image and the reference face feature.
Optionally, before the step of capturing a person image of a person object from a video to be detected based on face detection technology, the method includes:
collecting a predetermined number of sample facial images, where the predetermined number of sample facial images contains equal quantities of positive samples containing real faces and negative samples containing photographed faces;
scaling the positive and negative samples to a preset size and subtracting the mean, to obtain effective positive samples and effective negative samples respectively;
randomly selecting a first ratio, a second ratio, and a third ratio of the effective positive and negative samples as the training set, the validation set, and the test set respectively;
training multiple pre-selected candidate deep residual network models with the training set, and, during the training on the training set, verifying the training effect of each candidate deep residual network model with the validation set;
taking the candidate deep residual network model with the best training effect as the deep residual network model finally used;
testing the deep residual network model with the test set to obtain the living body probability threshold.
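The random split into training, validation, and test sets by the first, second, and third ratios can be sketched as follows (an illustrative Python sketch; the 70/15/15 ratios, the seed, and the helper names are assumptions for illustration, not fixed by the disclosure):

```python
import random

def split_dataset(samples, ratios=(0.7, 0.15, 0.15), seed=42):
    """Randomly split samples into training, validation, and test sets
    according to the first, second, and third ratios."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = round(n * ratios[0])
    n_val = round(n * ratios[1])
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

# Toy sample list: (image_id, label) with 1 = real face, 0 = photo face
samples = [(i, i % 2) for i in range(100)]
train, val, test = split_dataset(samples)
print(len(train), len(val), len(test))  # 70 15 15
```

In practice the candidate models would then be trained on `train`, compared on `val`, and only the winning model evaluated on `test` to pick the living body probability threshold.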
Optionally, the step of collecting a predetermined number of sample facial images includes:
collecting online and offline images containing faces to generate a facial image set;
choosing from the facial image set the sub-sample facial images corresponding to each preset feature condition, all the sub-sample facial images together forming the sample facial images, where the preset feature conditions include different picture sizes, person positions in the frame, photographing postures, facial expressions, and illumination intensities.
Optionally, the step of testing the deep residual network model with the test set to obtain the living body probability threshold includes:
inputting the positive and negative samples in the test set into the deep residual network model to obtain the test living body probability of each positive sample and each negative sample;
choosing test probability thresholds one by one within a preset threshold interval, and obtaining, according to each test probability threshold and the test living body probabilities, the living body detection accuracy of the deep residual network model under the different test probability thresholds;
taking the test probability threshold corresponding to the highest living body detection accuracy as the final living body probability threshold of the deep residual network model.
Optionally, the step of choosing test probability thresholds one by one within the preset threshold interval and obtaining, according to each test probability threshold and the test living body probabilities, the living body detection accuracy of the deep residual network model under the different test probability thresholds includes:
choosing test probability thresholds one by one within the preset threshold interval, and obtaining the true-living count of positive samples whose test living body probability is greater than or equal to the test probability threshold, the false-living count of positive samples whose test living body probability is less than the test probability threshold, the true-non-living count of negative samples whose test living body probability is less than the test probability threshold, and the false-non-living count of negative samples whose test living body probability is greater than or equal to the test probability threshold, until the test probability thresholds have traversed the preset threshold interval;
taking, for each test probability threshold, the ratio of the sum of the true-living count and the true-non-living count to the sum of the true-living, false-living, true-non-living, and false-non-living counts as the living body detection accuracy of the deep residual network model under that threshold.
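The accuracy computation and threshold sweep described above can be sketched in Python (the toy probabilities and function names are assumptions for illustration only):

```python
def liveness_accuracy(pos_probs, neg_probs, threshold):
    """Accuracy per the claim: a positive counts as 'true living' when its
    probability >= threshold, a negative as 'true non-living' when its
    probability < threshold; accuracy = (true living + true non-living) /
    (all positives + all negatives)."""
    true_living = sum(p >= threshold for p in pos_probs)
    true_nonliving = sum(p < threshold for p in neg_probs)
    total = len(pos_probs) + len(neg_probs)
    return (true_living + true_nonliving) / total

# Toy test living body probabilities from a hypothetical model
pos = [0.9, 0.8, 0.75, 0.6, 0.3]   # real faces (positive samples)
neg = [0.1, 0.2, 0.4, 0.55, 0.85]  # photo faces (negative samples)

# Sweep candidate thresholds over a preset interval; keep the most accurate
thresholds = [i / 20 for i in range(1, 20)]
best = max(thresholds, key=lambda t: liveness_accuracy(pos, neg, t))
print(best, liveness_accuracy(pos, neg, best))  # 0.6 0.8
```

The chosen `best` is the final living body probability threshold of the model, corresponding to the peak of the curve in Fig. 4.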
Optionally, the step of testing the deep residual network model with the test set to obtain the living body probability threshold includes:
inputting the positive and negative samples in the test set into the deep residual network model to obtain the test living body probability of each positive sample and each negative sample;
choosing test probability thresholds one by one within the preset threshold interval, and obtaining the true-living count of positive samples whose test living body probability is greater than or equal to the test probability threshold, the false-living count of positive samples whose test living body probability is less than the test probability threshold, the true-non-living count of negative samples whose test living body probability is less than the test probability threshold, and the false-non-living count of negative samples whose test living body probability is greater than or equal to the test probability threshold, until the test probability thresholds have traversed the preset threshold interval;
obtaining, for each test probability threshold, the false rejection rate as the ratio of the false-living count to the number of positive samples, and the false acceptance rate as the ratio of the false-non-living count to the number of negative samples;
receiving an externally input living body detection requirement, and taking the test probability threshold corresponding to the false rejection rate and false acceptance rate that fit the living body detection requirement as the final living body probability threshold of the deep residual network model.
The present invention also provides a living body face detection terminal, which includes a memory, a processor, and computer-readable instructions stored on the memory and executable on the processor, where the computer-readable instructions, when executed by the processor, implement the steps of the above living body face detection method.
The present invention also provides a computer-readable storage medium on which computer-readable instructions are stored, where the computer-readable instructions, when executed by a processor, implement the steps of the above living body face detection method.
In the present invention, after it is determined that the comparison between the person image and the reference face feature passes, the person image is input into a deep residual network model used for living body detection, and whether the person image is a living body is judged based on the deep residual network model. There is no need for human-computer interaction prompting the user to perform a series of random facial actions; that is, no external cooperation from the user is required and the user performs no cooperative response action. The user only needs to walk past the camera of the living body face detection device as usual. The detection process is greatly simplified, improving the user experience in security checks and walk-through verification.
Brief description of the drawings
Fig. 1 is a schematic structural diagram of an embodiment of the terminal of the present invention;
Fig. 2 is a flow diagram of an embodiment of the living body face detection method of the present invention;
Fig. 3 is a schematic diagram of a DCP coding sample of the present invention;
Fig. 4 is a plot of living body detection accuracy against test probability threshold in the present invention.
The realization of the objects, functional characteristics, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Specific embodiment
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit the present invention.
As shown in Fig. 1, Fig. 1 is a schematic structural diagram of the terminal in the hardware running environment involved in the embodiments of the present invention.
The living body face detection method, terminal, and computer-readable storage medium provided by the present invention relate to the field of face recognition technology and involve the application of deep learning algorithms. The terminal may be a terminal device with a display function, such as a smart television, tablet computer, or smartphone.
As shown in Fig. 1, the living body face detection terminal may include: a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002, where the communication bus 1002 is used to realize connection and communication among these components. The user interface 1003 may include an input unit such as a keyboard, and optionally may also include standard wired and wireless interfaces. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The memory 1005 may be a high-speed RAM memory, or a stable non-volatile memory such as a magnetic disk memory. The memory 1005 may optionally also be a storage device independent of the aforementioned processor 1001.
Those skilled in the art will understand that the structure of the living body face detection terminal shown in Fig. 1 does not constitute a limitation on the terminal, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
As shown in Fig. 1, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and computer-readable instructions for implementing the living body face detection method.
In the living body face detection terminal shown in Fig. 1, the network interface 1004 is mainly used to connect to a background server (such as a cloud server) for data communication with the background server; the user interface 1003 is mainly used to connect to a client (such as a display terminal for showing the living body recognition result) for data communication with the client; and the processor 1001 may be used to call the computer-readable instructions stored in the memory 1005 and perform the following operations:
capturing a person image of a person object from a video to be detected based on face detection technology, and comparing the person image with a reference face feature;
if the comparison between the person image and the reference face feature passes, inputting the person image into a deep residual network model used for living body detection;
based on the deep residual network model, judging whether the person image is a living body.
Based on the above living body face detection terminal, a living body face detection method is provided. Referring to Fig. 2, the living body face detection method includes:
Step S10: capturing a person image of a person object from a video to be detected based on face detection technology, and comparing the person image with a reference face feature;
The video to be detected may be a video recorded of the person object on which living body face detection is to be performed; the person object may be a living person, or a picture or video containing a person. When capturing person images from the video to be detected based on face detection technology, the face detection technology is primarily used to judge whether an image contains a face, and the captured person images are the video screenshot images containing a face. Face detection can be performed in various ways; all that is needed is the ability to detect whether an image contains a face.
For example, grayscale transformation, filtering, and other image processing are performed on the image of the person object collected by the camera of the living body face detection device, to obtain a high-quality grayscale image; Haar-Like wavelet feature values are then quickly computed from the grayscale image using an integral image and applied to a discretely trained AdaBoost (Adaptive Boosting) classifier to judge whether the collected image contains a face; the collected images containing a face are taken as the person images.
The Haar-Like feature, often simply called the Haar feature, is a feature descriptor commonly used in the field of computer vision and can be used to describe faces. After the Haar-Like wavelet feature values are obtained, face detection is carried out with the discretely trained AdaBoost classifier. A classifier here refers to an algorithm that classifies regions as face or non-face; on the basis of the AdaBoost algorithm, face detection is performed using Haar-Like wavelet features and the integral image method. It is worth noting that the face detection methods of the present invention are not limited to those mentioned in the embodiments; other existing methods that can realize face detection are also possible.
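The integral-image trick that makes Haar-Like feature values fast to compute can be sketched as follows (a pure-Python illustration of the feature computation only, not the patent's trained AdaBoost cascade; the toy patch and function names are assumptions):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w*h rectangle with top-left (x, y) in O(1): 4 lookups."""
    a = ii[y + h - 1][x + w - 1]
    b = ii[y - 1][x + w - 1] if y > 0 else 0
    c = ii[y + h - 1][x - 1] if x > 0 else 0
    d = ii[y - 1][x - 1] if x > 0 and y > 0 else 0
    return a - b - c + d

def haar_two_rect_vertical(ii, x, y, w, h):
    """Two-rectangle Haar-Like feature: upper half minus lower half --
    the kind of bright/dark contrast an eye or brow region produces."""
    half = h // 2
    return rect_sum(ii, x, y, w, half) - rect_sum(ii, x, y + half, w, half)

# Toy 4x4 grayscale patch: bright top rows, dark bottom rows
img = [[200, 200, 200, 200],
       [200, 200, 200, 200],
       [50, 50, 50, 50],
       [50, 50, 50, 50]]
ii = integral_image(img)
print(haar_two_rect_vertical(ii, 0, 0, 4, 4))  # 1600 - 400 = 1200
```

An AdaBoost cascade evaluates thousands of such feature values per window; because each rectangle sum costs only four lookups regardless of size, the whole frame can be scanned quickly.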
Step S20: if the comparison between the person image and the reference face feature passes, inputting the person image into the deep residual network model used for living body detection;
The person image is compared with the reference face feature as follows: the face region image in the person image is determined; three-dimensional calibration and orientation adjustment are performed on the face region image according to the key point position distribution of the preset face model, to obtain a frontal face image; image enhancement is performed on the frontal face image to form a face-enhanced image; the DCP codes of all pixels in the face-enhanced image are extracted, and the statistical histograms of the DCP codes in the key point regions are calculated to constitute the face feature of the person image; the face feature is compared with the reference face features of the facial images in a face database, and the similarity is calculated. When the similarity is greater than a preset similarity threshold, the comparison between the person image and the reference face feature is considered to pass; otherwise, it is considered to fail.
Step S30: based on the deep residual network model, judging whether the person image is a living body.
After it is determined that the comparison between the person image and the reference face feature passes, the person image is input into the deep residual network model used for living body detection, and whether the person image is a living body is judged based on the deep residual network model.
The deep residual network model is a trained neural network used to identify whether the person image is a living face image, i.e., to distinguish whether the face in the person image comes from a real face or from a photo or video, so as to prevent an illegitimate user from cheating with a photo or video of a legitimate user.
Ordinarily, when training with a general neural network model, increasing the depth should improve the accuracy of the network, but it can also cause overfitting. The problem with increasing depth is that the weight-update signal, obtained at the end of the network model by comparing predictions with ground truth, becomes very faint by the time it reaches the earlier layers. This essentially means the earlier layers of the network model are hardly learning at all. This is known as the vanishing gradient problem.
Another problem is that optimization must be performed in a huge parameter space, and blindly adding layers leads to larger training error. The deep residual network (ResNet) was designed to overcome this problem that, as the network deepens, learning slows and the accuracy cannot be effectively improved.
In this embodiment, compared with using a general network, training with a ResNet performs well even when the network is very deep, with fewer errors. Turning a general multi-layer network into a ResNet amounts to adding skip connections, i.e., letting the information of a certain layer skip one or several layers and pass to a deeper part of the neural network. The deep residual network model used by the present invention may be ResNet18.
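The skip connection idea can be illustrated with a minimal residual block (a NumPy sketch of the y = F(x) + x structure only; the random weights stand in for learned convolutions and are assumptions, not the patent's ResNet18):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class ResidualBlock:
    """y = relu(W2 @ relu(W1 @ x) + x): the skip connection adds the input
    back, so the block only has to learn the residual F(x) = y - x."""
    def __init__(self, dim):
        # Small random weights stand in for learned layers
        self.w1 = rng.normal(0, 0.01, (dim, dim))
        self.w2 = rng.normal(0, 0.01, (dim, dim))

    def forward(self, x):
        return relu(self.w2 @ relu(self.w1 @ x) + x)

x = np.ones(8)
block = ResidualBlock(8)
y = block.forward(x)

# With near-zero weights the block is close to the identity mapping, which
# is why stacking many such blocks does not wash the signal out: gradients
# always have the unattenuated skip path back to the early layers.
print(np.abs(y - x).max())
```

A ResNet18-style liveness classifier would stack such blocks (as convolutions) and end in a two-way output giving the living body probability.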
In this embodiment, after it is determined that the comparison between the person image and the reference face feature passes, the person image is input into the deep residual network model used for living body detection, and whether the person image is a living body is judged based on the deep residual network model. There is no need for human-computer interaction prompting the user to perform a series of random facial actions; that is, no external cooperation from the user is required and the user performs no cooperative response action. The user only needs to walk past the camera of the living body face detection device as usual. The detection process is greatly simplified, improving the user experience in security checks and walk-through verification.
Optionally, in another embodiment of the living body face detection method of the present invention, step S10 includes:
Step S11: photographing the person object with a camera to obtain the video to be detected;
When the user (i.e., a certain person object) passes the living body face detection terminal to which the living body face detection method is applied, the camera of the living body face detection terminal photographs the user, and the video collected by the camera is obtained as the video to be detected.
Step S12: obtaining video frame images in the video to be detected that contain face features matching the preset face model, and extracting the video frame images whose appearance times fall within a preset duration of one another as the person images of the person object;
The face features of the preset face model include the facial layout features of the general human face, namely the position distribution of the eyes, eyebrows, nose, lips, and ears: in a frontal face, from the crown downward, there are in turn the two parallel eyebrows, the two parallel eyes, the vertically extending nose, the laterally extending lips, and the two ears spaced on either side of the head. Video frame images containing face features matching the preset face model are obtained from the video to be detected, and those whose appearance times fall within the preset duration of one another are extracted as the person images of the person object. For example, after the camera collects several seconds of video, face detection technology is used to capture N (N > 1) video frame images containing face features matching the preset face model, such that the time interval between any two of the N video frame images falls within the preset duration. Face features are extracted and computed for each of the N video frame images and compared with the features of the reference face. If all comparisons pass, this shows that the currently authenticated person object is the person in question, excluding the possibility that someone else was mixed into the verification process. If any comparison fails, the authentication of the current person object is directly considered not to have passed. Once all comparisons have passed, one of the N video frame images is randomly selected for living body detection.
Step S13: comparing the face region image of each person image with the reference face feature of the person object; if the similarities of all the person images to the reference face feature are greater than or equal to the preset similarity threshold, determining that the comparison between the person images and the reference face feature passes;
Step S14: if there exists a person image whose similarity to the reference face feature is less than the preset similarity threshold, determining that the comparison between the person images and the reference face feature fails.
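The all-frames-must-pass decision of steps S13 and S14 can be sketched as follows (illustrative Python; the threshold value and per-frame similarities are assumptions):

```python
def compare_frames(similarities, threshold=0.8):
    """All N extracted frames must match the reference face: the comparison
    passes only when every per-frame similarity is >= the preset similarity
    threshold, ruling out a second person slipping into some frames during
    verification."""
    return all(s >= threshold for s in similarities)

# Toy per-frame similarities against the reference face feature
print(compare_frames([0.91, 0.88, 0.95]))  # True: every frame passes
print(compare_frames([0.91, 0.42, 0.95]))  # False: one frame fails
```

Only after this returns True is one frame randomly chosen and handed to the deep residual network model for living body detection.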
Optionally, in step S13, the step of comparing the face region image of the person image with the reference face feature of the person object includes:
Step 131: obtaining the histogram-of-oriented-gradients feature of the person image, and detecting the face region image in the person image according to the HOG feature of the person image and a linear classifier;
A face generally contains eyebrows, eyes, and lips extending horizontally, a nose bridge extending vertically, and obliquely extending cheek contours. The person image is analyzed to obtain the regions of the eyebrows, eyes, lips, nose bridge, and cheek contours, and the histogram of oriented gradients is computed to obtain the HOG feature of the person image. The HOG feature of the person image is then examined with a linear classifier to determine the eyebrow, eye, lip, nose bridge, and cheek contour regions marked by the histogram of oriented gradients, thereby detecting the face region image in the person image and eliminating the influence of the background in the person image on the identity recognition of the face region image.
Step 132: obtaining, according to the key point positions of the preset face model and a support vector regression algorithm, the key point positions of the face region image and the affine camera mapping the three-dimensional model to the two-dimensional image, and taking the key point positions of the face region image as the current key point positions;
Step 133: performing triangulation on the current key point positions to obtain the triangular face corresponding to each current key point position; performing an affine transformation on each triangular face according to the affine camera to obtain the frontalized key point position of each current key point position; and adjusting the orientation of the face region image according to the frontalized key point positions to obtain the frontal face image;
The affine transformation vector of each triangular face is calculated from the affine camera, and each triangular face is then affine-transformed according to that vector to obtain the frontalized key point position of each current key point position.
Through this three-dimensional calibration and orientation adjustment of the face region image, the influence on identity recognition of the person's posture in the person image (including a turned, raised, or lowered face) is eliminated.
Step 134: performing image enhancement on the frontal face image to form the face-enhanced image;
Difference-of-Gaussian filtering is used to perform image enhancement on the face region image, forming the face-enhanced image and reducing the influence of illumination changes in the face region image on identity recognition.
Step 135: obtaining the gray value of each pixel in the face-enhanced image; determining the DCP code corresponding to each pixel according to its gray value; extracting the DCP code of each key point position, calculating the statistical histogram of the DCP codes in each key point region, and taking the statistical histograms of the key point region DCP codes as the face feature;
DCP (Dual-Cross Patterns) is a mode for describing the texture characteristics of an image, and the DCP code can quantize the texture of the image. Referring to Fig. 3, the DCP code of each pixel, in each of the eight sampling directions i, is:
DCP_i = S(I_Ai - I_O) × 2 + S(I_Bi - I_Ai), i = 0, 1, ..., 7
where:
S(x) = 1 if x ≥ 0, and S(x) = 0 otherwise,
and I_Ai, I_Bi, and I_O are respectively the gray values of the points Ai, Bi, and O in Fig. 3. According to the above two formulas, the DCP codes of all pixels in the face-enhanced image are extracted; the DCP codes of the key point positions are then extracted, and the statistical histograms of the DCP codes in the key point regions are calculated to constitute the face feature of the person image.
After the DCP code of each pixel is obtained, a DCP statistical histogram of the face in the person image is generated from these codes. Optionally, DCP statistical histograms of the key facial regions (the two eyebrows, the eyes, the mouth, and the nose) are generated from the DCP codes in each key-point region, and these DCP statistical histograms are used as the face feature for matching against other person images, so that the influence of facial expression on recognition is effectively eliminated.
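A rough sketch of the per-pixel coding and the key-point-region histogram follows (this is not the patent's exact implementation; the radii, direction order, and region handling are illustrative assumptions):

```python
import numpy as np

def dcp_codes(gray, r_in=1, r_ex=2):
    """Per-pixel Dual-Cross Patterns.  For each interior pixel O, sample a
    near point A_i (radius r_in) and a far point B_i (radius r_ex) in 8
    directions and encode S(I_Ai - I_O)*2 + S(I_Bi - I_Ai) per direction,
    with S(x) = 1 if x >= 0 else 0.  The 4 horizontal/vertical directions
    and the 4 diagonal directions each yield an 8-bit 'cross' code."""
    s = lambda d: 1 if d >= 0 else 0
    dirs = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]
    h, w = gray.shape
    cross_hv = np.zeros((h, w), dtype=np.int32)    # directions 0, 2, 4, 6
    cross_diag = np.zeros((h, w), dtype=np.int32)  # directions 1, 3, 5, 7
    for y in range(r_ex, h - r_ex):
        for x in range(r_ex, w - r_ex):
            o = float(gray[y, x])
            bits = []
            for dy, dx in dirs:
                a = float(gray[y + dy * r_in, x + dx * r_in])
                b = float(gray[y + dy * r_ex, x + dx * r_ex])
                bits.append(s(a - o) * 2 + s(b - a))  # 2-bit code per direction
            cross_hv[y, x] = sum(v << (2 * j) for j, v in enumerate(bits[0::2]))
            cross_diag[y, x] = sum(v << (2 * j) for j, v in enumerate(bits[1::2]))
    return cross_hv, cross_diag

def region_histogram(codes, y0, y1, x0, x1):
    # Normalized 256-bin histogram of the codes inside one key-point region;
    # concatenating such histograms over all key points gives the face feature.
    hist, _ = np.histogram(codes[y0:y1, x0:x1], bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)
```

Because each histogram is pooled over a region rather than kept pixel-by-pixel, small local deformations from expression changes shift counts between nearby bins instead of breaking the match outright.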
Step 136: the face feature is compared with the reference face feature of the person object, and the similarity between the face feature and the reference face feature is taken as the similarity between the person image and the reference face feature.
The face database contains many stored face images, and each face image in the database has a corresponding reference face feature; this reference face feature may be the DCP statistical histogram of the corresponding face (i.e. the face of the person object). The face feature of the person in the image to be identified is compared with the reference face features of the face images in the database, and the similarity between the face feature and the reference face feature is taken as the similarity between the person image and the reference face feature.
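The patent does not specify the similarity measure; one plausible choice for comparing two DCP statistical histograms is histogram intersection, sketched here with illustrative names:

```python
import numpy as np

def histogram_similarity(feat, ref):
    """Histogram-intersection similarity between the probe image's DCP
    statistic histogram and a stored reference feature; 1.0 means the
    two normalized histograms are identical, 0.0 means fully disjoint."""
    feat = np.asarray(feat, dtype=float)
    ref = np.asarray(ref, dtype=float)
    feat = feat / max(feat.sum(), 1e-12)  # guard against empty histograms
    ref = ref / max(ref.sum(), 1e-12)
    return float(np.minimum(feat, ref).sum())
```

The resulting score can be compared directly against the preset similarity threshold used in the comparison step.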
In this embodiment, a person image to be identified is acquired and the face region image in it is determined, eliminating the influence of the background in the person image on identity recognition. Three-dimensional calibration and orientation adjustment are then performed on the face region image according to the key-point distribution of the preset face model to obtain a frontal face image, eliminating the influence of the person's posture on recognition. Image enhancement is applied to the frontal face image to form a face-enhanced image, reducing the influence of illumination variation on recognition. The DCP codes of all pixels in the face-enhanced image are extracted, and the statistical histograms of the DCP codes in the key-point regions are computed to constitute the face feature of the person image, effectively eliminating the influence of facial expression on recognition. Finally, the face feature is compared with the reference face features of the face images in the face database to identify the person in the image to be identified. Eliminating these extraneous factors improves both the range of applicable scenes and the recognition accuracy.
Further, in another embodiment of the living-body face detection method of the present invention, before step S10 of intercepting the person image of the person object from the video to be detected based on face detection technology, the method includes:
Step S41: collecting a predetermined number of sample face images, wherein the sample face images contain equal numbers of positive samples containing real faces and negative samples containing photo faces.
The predetermined number should be of a large order of magnitude; for example, with a predetermined number of 400,000, 400,000 sample face images are collected in two classes: 200,000 real-face images (200,000 positive samples) and 200,000 face-photo images (200,000 negative samples).
Specifically, the step of collecting the predetermined number of sample face images includes:
collecting online and offline images containing faces to generate a face image set;
selecting from the face image set sub-sample face images that meet each preset feature condition, all the sub-sample face images together forming the sample face images, wherein the preset feature conditions include different image sizes, person positions in the frame, photographing postures, facial expressions, and illumination intensities.
When collecting face-photo images and real-face images, pictures taken under different shooting conditions are used, including real-face images and face photos of any size, position, posture, orientation, facial expression, and illumination condition. Sample face images can be searched for and downloaded online, or shot and reproduced offline.
Step S42: scaling the positive and negative samples to a preset size and subtracting the mean to obtain effective positive samples and effective negative samples; randomly selecting a first ratio, a second ratio, and a third ratio of the effective positive and negative samples as the training set, the validation set, and the test set respectively.
All positive and negative samples can be scaled to a preset size (e.g. 640*640 pixels) and mean-subtracted to obtain the processed effective positive and negative samples. From all effective positive and negative samples, a first ratio (e.g. 60%) is randomly selected as the training set, a second ratio (e.g. 20%) as the validation set, and a third ratio (e.g. 20%) as the test set. For example, with 100 positive and 100 negative samples, 60 positive and 60 negative samples are randomly selected as the training set, 20 of each as the validation set, and 20 of each as the test set.
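The 60/20/20 split described above can be sketched in pure Python (function and variable names are illustrative; the patent only specifies the ratios and the balanced classes):

```python
import random

def stratified_split(items, ratios=(0.6, 0.2, 0.2), seed=0):
    """Shuffle one class and cut it into training / validation / test
    parts by the given ratios."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    a = int(ratios[0] * n)
    b = a + int(ratios[1] * n)
    return items[:a], items[a:b], items[b:]

# Splitting each class separately keeps every subset balanced 1:1,
# matching the example of 100 positive and 100 negative samples.
positives = [("pos", i) for i in range(100)]
negatives = [("neg", i) for i in range(100)]
train_p, val_p, test_p = stratified_split(positives)
train_n, val_n, test_n = stratified_split(negatives)
train, val, test = train_p + train_n, val_p + val_n, test_p + test_n
```

Splitting per class (rather than shuffling the pooled set once) guarantees the 1:1 positive/negative balance in each subset, which keeps the validation and test accuracies comparable.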
Step S43: training multiple pre-selected candidate deep residual network models with the training set, and verifying the training effect of each candidate model with the validation set during training.
Step S44: taking the candidate deep residual network model with the best training effect as the final deep residual network model, and testing the deep residual network model with the test set to obtain the living-body probability threshold.
A deep residual network model is a layered network of parameters, which are essentially weight parameters; data input to the model is processed layer by layer through these weights to produce an output. After the candidate deep residual network models are selected, the training set is fed into each candidate model one by one for training. The goal of training is to minimize the difference between the model output (the prediction: living body or non-living body) and the actual value (e.g. positive sample = living body, value 1; negative sample = non-living body, value 0); the method used is back-propagation.
After each candidate model has trained for a preset number of epochs (e.g. 10), the validation set is fed in and run once to observe the output of each candidate deep residual network model (i.e. whether the predictions converge toward the actual values), so that problems with the model or its parameters can be found in time and the parameters or model adjusted, without waiting for training to finish. In this way, after the validation set has been run through each candidate model, the candidate whose output converges best (i.e. whose predictions converge fastest toward the actual values) is taken as the final deep residual network model. Training of the selected deep residual network model then continues on the training set, with continual parameter adjustment, to obtain the optimal model. Finally, the test set is used to test the deep residual network model and obtain the living-body probability threshold.
Specifically, in step S44 the step of testing the deep residual network model with the test set to obtain the living-body probability threshold includes:
Step S441: inputting the positive and negative samples in the test set into the deep residual network model to obtain a test living-body probability for each positive sample and each negative sample;
Step S442: selecting test probability thresholds one by one in a preset threshold interval, and obtaining the living-body detection accuracy of the deep residual network model under the different test probability thresholds from the test probability threshold and the test living-body probabilities;
Step S443: taking the test probability threshold corresponding to the highest living-body detection accuracy as the final living-body probability threshold of the deep residual network model.
All the positive and negative samples in the test set are run once through the trained deep residual network model to obtain a test living-body probability p for each sample, and all the test living-body probabilities are sorted in ascending order. The preset threshold interval may be the interval over which the test living-body probabilities are distributed, or simply the numerical interval from 0 to 1. Test probability thresholds are selected one by one in the preset threshold interval; each test probability threshold yields one set of living-body detection results for the positive and negative samples in the test set, consisting of error results (the test value based on the threshold differs from the actual value) and correct results (the test value equals the actual value). From the numbers of error results and correct results under each test probability threshold, the living-body detection accuracy of each test probability threshold is obtained, i.e. correct results / (error results + correct results). Finally, the test probability threshold corresponding to the highest living-body detection accuracy can be taken as the final living-body probability threshold of the deep residual network model.
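The threshold sweep of steps S441–S443 can be sketched as follows (the function and variable names are illustrative, not from the patent):

```python
def best_living_threshold(pos_probs, neg_probs, candidates):
    """Sweep candidate thresholds M; a sample is judged living when its
    test living-body probability p >= M.  Accuracy for one M is
    (TP + TN) / (TP + TN + FN + FP); return the M with the best accuracy."""
    total = len(pos_probs) + len(neg_probs)
    best_m, best_acc = None, -1.0
    for m in candidates:
        tp = sum(1 for p in pos_probs if p >= m)  # living judged living
        tn = sum(1 for p in neg_probs if p < m)   # non-living judged non-living
        acc = (tp + tn) / total
        if acc > best_acc:
            best_m, best_acc = m, acc
    return best_m, best_acc
```

With perfectly separated scores the best accuracy reaches 1.0; when several thresholds tie, this sketch keeps the first (lowest) candidate.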
Specifically, step S442 includes:
selecting test probability thresholds one by one in the preset threshold interval, and for each threshold obtaining the true-living count (positive samples whose test living-body probability is greater than or equal to the test probability threshold), the false-living count (positive samples whose test living-body probability is less than the threshold), the true-non-living count (negative samples whose test living-body probability is less than the threshold), and the false-non-living count (negative samples whose test living-body probability is greater than or equal to the threshold), until the test probability threshold has traversed the preset threshold interval;
taking, under each test probability threshold, the ratio of the sum of the true-living and true-non-living counts to the sum of the four counts (true-living, false-living, true-non-living, and false-non-living) as the living-body detection accuracy of the deep residual network model under that threshold.
To aid understanding of this embodiment, a specific example is given below:
A test probability threshold M is chosen from the preset threshold interval: a positive or negative sample with p < M is judged non-living, and one with p ≥ M is judged living. Among the photos judged non-living, those that are actually living are false non-living photos (FN), i.e. positive samples whose test living-body probability is less than the test probability threshold; this kind of error is a false rejection. Among the photos judged living, those that are actually non-living are false living photos (FP), i.e. negative samples whose test living-body probability is greater than or equal to the test probability threshold; this kind of error is a false acceptance. In addition, TP denotes the positive samples whose test living-body probability is greater than or equal to the test probability threshold, and TN the negative samples whose test living-body probability is less than it. Referring to the classification-result confusion matrix in Table 1 below, the living-body detection accuracy T and the living-body detection error rate R of each test probability threshold M are calculated as:
living-body detection accuracy T = (TP + TN)/(TP + TN + FN + FP),
living-body detection error rate R = (FN + FP)/(TP + TN + FN + FP).
Table 1
 | judged living (p ≥ M) | judged non-living (p < M)
---|---|---
positive sample (living) | TP | FN
negative sample (non-living) | FP | TN
The value of M is adjusted continuously within [0, 1] and an M-T curve is drawn; referring to Fig. 4, the vertical axis in Fig. 4 is the living-body detection accuracy and the horizontal axis is the test probability threshold M.
In addition, in another embodiment of the living-body face detection method of the present invention, the step of testing the deep residual network model with the test set to obtain the living-body probability threshold includes:
inputting the positive and negative samples in the test set into the deep residual network model to obtain a test living-body probability for each positive sample and each negative sample;
selecting test probability thresholds one by one in a preset threshold interval, and for each threshold obtaining the true-living count (positive samples whose test living-body probability is greater than or equal to the threshold), the false-living count (positive samples whose probability is less than the threshold), the true-non-living count (negative samples whose probability is less than the threshold), and the false-non-living count (negative samples whose probability is greater than or equal to the threshold), until the test probability threshold has traversed the preset threshold interval;
obtaining, under the different test probability thresholds, the false rejection rate (the ratio of the false-living count to the number of positive samples) and the false acceptance rate (the ratio of the false-non-living count to the number of negative samples);
receiving an externally input living-body detection requirement, and taking the test probability threshold corresponding to the false rejection rate and false acceptance rate matching that requirement as the final living-body probability threshold of the deep residual network model.
To aid understanding of this embodiment, a specific example continuing from Table 1 above is given below:
At each test probability threshold M, the false rejection rate (the ratio of the false-living count to the number of positive samples) and the false acceptance rate (the ratio of the false-non-living count to the number of negative samples) are calculated, where the false rejection rate equals FN/(TP + FN) and the false acceptance rate equals FP/(FP + TN). That is, each test probability threshold corresponds to one pair of false rejection rate and false acceptance rate.
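Under the same FN/FP definitions as Table 1, the two rates at a threshold M follow directly from the confusion counts (an illustrative sketch, not the patent's implementation):

```python
def frr_far(pos_probs, neg_probs, m):
    """FRR = FN / (TP + FN): fraction of living samples judged non-living.
    FAR = FP / (FP + TN): fraction of non-living samples judged living."""
    fn = sum(1 for p in pos_probs if p < m)    # living rejected
    tp = len(pos_probs) - fn
    fp = sum(1 for p in neg_probs if p >= m)   # non-living accepted
    tn = len(neg_probs) - fp
    return fn / (tp + fn), fp / (fp + tn)
```

Raising M lowers FAR at the cost of FRR and vice versa, which is exactly the trade-off the scene-dependent threshold selection below exploits.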
In the application scenarios of a living-body face detection terminal, different scenes correspond to different living-body detection requirements. For example, at an international summit or in airport security, where security requirements are very high, the tolerance for false rejections is high and the tolerance for false acceptances is very low: it is preferable to reject a living body than to detect a non-living body as living. In such a scene, the test probability threshold corresponding to the combination of a high false rejection rate and a low false acceptance rate may be selected as the final living-body probability threshold of the deep residual network model.
As another example, in places with relatively lower security requirements such as bus stations and shopping malls, the tolerance for false rejections is relatively low and the tolerance for false acceptances is relatively high; that is, to guarantee detection efficiency, non-living samples being detected as living is relatively more acceptable. In such a scene, the test probability threshold corresponding to the combination of a relatively low false rejection rate and a relatively high false acceptance rate may be selected as the final living-body probability threshold of the deep residual network model.
The present invention also provides a living-body face detection terminal. The living-body face detection terminal includes a memory, a processor, and computer-readable instructions stored on the memory and executable on the processor; when executed by the processor, the computer-readable instructions implement the steps of the living-body face detection method described above.
The present invention also provides a computer-readable storage medium. Computer-readable instructions are stored on the computer-readable storage medium; when executed by a processor, the computer-readable instructions implement the steps of the living-body face detection method described above.
It should be noted that, herein, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or system that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or system. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or system that includes that element.
The serial numbers of the above embodiments of the invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part of it that contributes over the prior art, can be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, or optical disc) as described above, including several instructions for causing a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, or the like) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit its scope; any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.
Claims (10)
1. A living-body face detection method, characterized in that the living-body face detection method comprises:
intercepting a person image of a person object from a video to be detected based on face detection technology, and comparing the person image with a reference face feature;
if the comparison between the person image and the reference face feature passes, inputting the person image into a deep residual network model for living-body detection;
judging, based on the deep residual network model, whether the person image is a living body.
2. The living-body face detection method of claim 1, characterized in that the step of intercepting the person image of the person object from the video to be detected based on face detection technology and comparing the person image with the reference face feature comprises:
photographing the person object with a camera to obtain the video to be detected;
obtaining, from the video to be detected, video frame images containing a face feature that matches a preset face model, and extracting the video frame images within a preset duration of the current moment as the person images of the person object;
comparing the face region image of each person image with the reference face feature of the person object; if the similarities of all the person images to the reference face feature are greater than or equal to a preset similarity threshold, determining that the comparison between the person images and the reference face feature passes;
if there exists a person image whose similarity to the reference face feature is less than the preset similarity threshold, determining that the comparison between the person images and the reference face feature does not pass.
3. The living-body face detection method of claim 2, characterized in that the step of comparing the face region image of the person image with the reference face feature of the person object comprises:
obtaining the histogram-of-oriented-gradients feature of the person image, and detecting the face region image in the person image according to the histogram-of-oriented-gradients feature and a linear classifier;
obtaining, according to the key-point positions of a preset face model and a support-vector-regression algorithm, the key-point positions of the face region image and an affine camera from a three-dimensional model to the two-dimensional image, taking the key-point positions of the face region image as the current key-point positions;
performing triangulation on the current key-point positions to obtain the triangular facet corresponding to each current key-point position; performing an affine transformation on each triangular facet according to the affine camera to obtain the frontal key-point position of each current key-point position; adjusting the orientation of the face region image according to the frontal key-point positions to obtain a frontal face image;
performing image enhancement on the face region image to form a face-enhanced image;
obtaining the gray value of each pixel in the face-enhanced image; determining the DCP code corresponding to each pixel according to its gray value; extracting the DCP codes at each key-point position, computing the statistical histogram of the DCP codes in each key-point region, and taking the statistical histogram of the key-point-region DCP codes as the face feature;
comparing the face feature with the reference face feature of the person object, and taking the similarity between the face feature and the reference face feature as the similarity between the person image and the reference face feature.
4. The living-body face detection method of claim 2, characterized in that, before the step of intercepting the person image of the person object from the video to be detected based on face detection technology, the method comprises:
collecting a predetermined number of sample face images, wherein the sample face images contain equal numbers of positive samples containing real faces and negative samples containing photo faces;
scaling the positive and negative samples to a preset size and subtracting the mean to obtain effective positive samples and effective negative samples;
randomly selecting a first ratio, a second ratio, and a third ratio of the effective positive and negative samples as a training set, a validation set, and a test set respectively;
training multiple pre-selected candidate deep residual network models with the training set, and verifying the training effect of each candidate deep residual network model with the validation set during training;
taking the candidate deep residual network model with the best training effect as the final deep residual network model;
testing the deep residual network model with the test set to obtain the living-body probability threshold.
5. The living-body face detection method of claim 4, characterized in that the step of collecting the predetermined number of sample face images comprises:
collecting online and offline images containing faces to generate a face image set;
selecting from the face image set sub-sample face images that meet each preset feature condition, all the sub-sample face images together forming the sample face images, wherein the preset feature conditions include different image sizes, person positions in the frame, photographing postures, facial expressions, and illumination intensities.
6. The living-body face detection method of claim 4, characterized in that the step of testing the deep residual network model with the test set to obtain the living-body probability threshold comprises:
inputting the positive and negative samples in the test set into the deep residual network model to obtain a test living-body probability for each positive sample and each negative sample;
selecting test probability thresholds one by one in a preset threshold interval, and obtaining the living-body detection accuracy of the deep residual network model under the different test probability thresholds from the test probability threshold and the test living-body probabilities;
taking the test probability threshold corresponding to the highest living-body detection accuracy as the final living-body probability threshold of the deep residual network model.
7. The living-body face detection method of claim 6, characterized in that the step of selecting test probability thresholds one by one in the preset threshold interval and obtaining the living-body detection accuracy of the deep residual network model under the different test probability thresholds comprises:
selecting test probability thresholds one by one in the preset threshold interval, and for each threshold obtaining the true-living count of positive samples whose test living-body probability is greater than or equal to the test probability threshold, the false-living count of positive samples whose test living-body probability is less than the test probability threshold, the true-non-living count of negative samples whose test living-body probability is less than the test probability threshold, and the false-non-living count of negative samples whose test living-body probability is greater than or equal to the test probability threshold, until the test probability threshold has traversed the preset threshold interval;
taking, under each test probability threshold, the ratio of the sum of the true-living and true-non-living counts to the sum of the four counts (true-living, false-living, true-non-living, and false-non-living) as the living-body detection accuracy of the deep residual network model under that threshold.
8. The living-body face detection method of claim 4, characterized in that the step of testing the deep residual network model with the test set to obtain the living-body probability threshold comprises:
inputting the positive and negative samples in the test set into the deep residual network model to obtain a test living-body probability for each positive sample and each negative sample;
selecting test probability thresholds one by one in a preset threshold interval, and for each threshold obtaining the true-living count of positive samples whose test living-body probability is greater than or equal to the test probability threshold, the false-living count of positive samples whose test living-body probability is less than the test probability threshold, the true-non-living count of negative samples whose test living-body probability is less than the test probability threshold, and the false-non-living count of negative samples whose test living-body probability is greater than or equal to the test probability threshold, until the test probability threshold has traversed the preset threshold interval;
obtaining, under the different test probability thresholds, the false rejection rate being the ratio of the false-living count to the number of positive samples, and the false acceptance rate being the ratio of the false-non-living count to the number of negative samples;
receiving an externally input living-body detection requirement, and taking the test probability threshold corresponding to the false rejection rate and false acceptance rate matching the living-body detection requirement as the final living-body probability threshold of the deep residual network model.
9. A living-body face detection terminal, characterized in that the living-body face detection terminal comprises a memory, a processor, and computer-readable instructions stored on the memory and executable on the processor; when executed by the processor, the computer-readable instructions implement the steps of the living-body face detection method of any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that computer-readable instructions are stored on the computer-readable storage medium; when executed by a processor, the computer-readable instructions implement the steps of the living-body face detection method of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811652890.9A CN109858375B (en) | 2018-12-29 | 2018-12-29 | Living body face detection method, terminal and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109858375A true CN109858375A (en) | 2019-06-07 |
CN109858375B CN109858375B (en) | 2023-09-26 |
Family
ID=66893627
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811652890.9A Active CN109858375B (en) | 2018-12-29 | 2018-12-29 | Living body face detection method, terminal and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109858375B (en) |
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103678984A (en) * | 2013-12-20 | 2014-03-26 | 湖北微模式科技发展有限公司 | Method for achieving user authentication by utilizing camera |
CN104361326A (en) * | 2014-11-18 | 2015-02-18 | 新开普电子股份有限公司 | Method for distinguishing living human face |
WO2016197298A1 (en) * | 2015-06-08 | 2016-12-15 | 北京旷视科技有限公司 | Living body detection method, living body detection system and computer program product |
CN106295672A (en) * | 2015-06-12 | 2017-01-04 | 中国移动(深圳)有限公司 | A kind of face identification method and device |
CN105426815A (en) * | 2015-10-29 | 2016-03-23 | 北京汉王智远科技有限公司 | Living body detection method and device |
CN106446779A (en) * | 2016-08-29 | 2017-02-22 | 深圳市软数科技有限公司 | Method and apparatus for identifying identity |
CN106570489A (en) * | 2016-11-10 | 2017-04-19 | 腾讯科技(深圳)有限公司 | Living body determination method and apparatus, and identity authentication method and device |
CN106682578A (en) * | 2016-11-21 | 2017-05-17 | 北京交通大学 | Human face recognition method based on blink detection |
CN107066942A (en) * | 2017-03-03 | 2017-08-18 | 上海斐讯数据通信技术有限公司 | A kind of living body faces recognition methods and system |
CN107247947A (en) * | 2017-07-07 | 2017-10-13 | 北京智慧眼科技股份有限公司 | Face character recognition methods and device |
CN107844784A (en) * | 2017-12-08 | 2018-03-27 | 广东美的智能机器人有限公司 | Face identification method, device, computer equipment and readable storage medium storing program for executing |
CN108108676A (en) * | 2017-12-12 | 2018-06-01 | 北京小米移动软件有限公司 | Face identification method, convolutional neural networks generation method and device |
CN108596082A (en) * | 2018-04-20 | 2018-09-28 | 重庆邮电大学 | Human face in-vivo detection method based on image diffusion velocity model and color character |
Non-Patent Citations (2)
Title |
---|
Yi Feng et al.: "Research on a Pedestrian Face Recognition Algorithm Based on Deep Residual Networks", 《电脑知识与技术》 (Computer Knowledge and Technology), vol. 14, no. 23, pages 233-235 * |
Yi Feng et al.: "Research on a Pedestrian Face Recognition Algorithm Based on Deep Residual Networks", 《电脑知识与技术》 (Computer Knowledge and Technology), no. 23, 15 August 2018 (2018-08-15), pages 233-235 * |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112118410B (en) * | 2019-06-20 | 2022-04-01 | 腾讯科技(深圳)有限公司 | Service processing method, device, terminal and storage medium |
CN112118410A (en) * | 2019-06-20 | 2020-12-22 | 腾讯科技(深圳)有限公司 | Service processing method, device, terminal and storage medium |
CN110363111A (en) * | 2019-06-27 | 2019-10-22 | 平安科技(深圳)有限公司 | Human face in-vivo detection method, device and storage medium based on lens distortions principle |
CN110363111B (en) * | 2019-06-27 | 2023-08-25 | 平安科技(深圳)有限公司 | Face living body detection method, device and storage medium based on lens distortion principle |
CN110430419A (en) * | 2019-07-12 | 2019-11-08 | 北京大学 | A kind of multiple views naked eye three-dimensional image composition method anti-aliasing based on super-resolution |
CN110430419B (en) * | 2019-07-12 | 2021-06-04 | 北京大学 | Multi-view naked eye three-dimensional image synthesis method based on super-resolution anti-aliasing |
CN113128258A (en) * | 2019-12-30 | 2021-07-16 | 杭州海康威视数字技术股份有限公司 | Living body detection method, living body detection device, electronic apparatus, and storage medium |
CN113128258B (en) * | 2019-12-30 | 2022-10-04 | 杭州海康威视数字技术股份有限公司 | Living body detection method, living body detection device, electronic apparatus, and storage medium |
CN112115831A (en) * | 2020-09-10 | 2020-12-22 | 深圳印像数据科技有限公司 | Living body detection image preprocessing method |
CN112115831B (en) * | 2020-09-10 | 2024-03-15 | 深圳印像数据科技有限公司 | Living body detection image preprocessing method |
CN112507798A (en) * | 2020-11-12 | 2021-03-16 | 上海优扬新媒信息技术有限公司 | Living body detection method, electronic device, and storage medium |
CN112507798B (en) * | 2020-11-12 | 2024-02-23 | 度小满科技(北京)有限公司 | Living body detection method, electronic device and storage medium |
CN112528909A (en) * | 2020-12-18 | 2021-03-19 | 平安银行股份有限公司 | Living body detection method, living body detection device, electronic apparatus, and computer-readable storage medium |
CN112528909B (en) * | 2020-12-18 | 2024-05-21 | 平安银行股份有限公司 | Living body detection method, living body detection device, electronic equipment and computer readable storage medium |
WO2022205643A1 (en) * | 2021-03-31 | 2022-10-06 | 上海商汤智能科技有限公司 | Living body detection method and apparatus, and device and computer storage medium |
CN113095180A (en) * | 2021-03-31 | 2021-07-09 | 上海商汤智能科技有限公司 | Living body detection method and device, living body detection equipment and computer storage medium |
CN113095180B (en) * | 2021-03-31 | 2024-06-11 | 上海商汤智能科技有限公司 | Living body detection method and device, equipment and computer storage medium |
CN113869219A (en) * | 2021-09-29 | 2021-12-31 | 平安银行股份有限公司 | Face living body detection method, device, equipment and storage medium |
CN113869219B (en) * | 2021-09-29 | 2024-05-21 | 平安银行股份有限公司 | Face living body detection method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109858375B (en) | 2023-09-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109858375A (en) | Living body faces detection method, terminal and computer readable storage medium | |
KR102596897B1 (en) | Method of motion vector and feature vector based fake face detection and apparatus for the same | |
WO2020151489A1 (en) | Living body detection method based on facial recognition, and electronic device and storage medium | |
KR102147052B1 (en) | Emotional recognition system and method based on face images | |
CN106557726B (en) | Face identity authentication system with silent type living body detection and method thereof | |
CN108829900B (en) | Face image retrieval method and device based on deep learning and terminal | |
JP5010905B2 (en) | Face recognition device | |
US12056954B2 (en) | System and method for selecting images for facial recognition processing | |
CN104123543B (en) | A kind of eye movement recognition methods based on recognition of face | |
JP5629803B2 (en) | Image processing apparatus, imaging apparatus, and image processing method | |
CN109840565A (en) | A kind of blink detection method based on eye contour feature point aspect ratio | |
JP4743823B2 (en) | Image processing apparatus, imaging apparatus, and image processing method | |
US7925093B2 (en) | Image recognition apparatus | |
CN108124486A (en) | Face living body detection method based on cloud, electronic device and program product | |
CN109325462B (en) | Face recognition living body detection method and device based on iris | |
WO2016084072A1 (en) | Anti-spoofing system and methods useful in conjunction therewith | |
CN106295522A (en) | A kind of two-stage anti-fraud detection method based on multi-orientation Face and environmental information | |
CN106056064A (en) | Face recognition method and face recognition device | |
CN111008971B (en) | Aesthetic quality evaluation method of group photo image and real-time shooting guidance system | |
US10915739B2 (en) | Face recognition device, face recognition method, and computer readable storage medium | |
CN110543848B (en) | Driver action recognition method and device based on three-dimensional convolutional neural network | |
CN111222433A (en) | Automatic face auditing method, system, equipment and readable storage medium | |
CN113450369A (en) | Classroom analysis system and method based on face recognition technology | |
Willoughby et al. | DrunkSelfie: intoxication detection from smartphone facial images | |
Hadiprakoso | Face anti-spoofing method with blinking eye and hsv texture analysis |
Legal Events
Date | Code | Title | Description
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
2023-08-28 | TA01 | Transfer of patent application right | Address after: Building 3, 102, Zhongliang Chuangzhi Factory, Zone 67, Xingdong Community, Xin'an Street, Bao'an District, Shenzhen City, Guangdong Province, 518000. Applicant after: Jiantu Chuangzhi (Shenzhen) Technology Co.,Ltd. Address before: 518100 Room 1401, Building 1, Building 1, COFCO Chuangzhi Factory Area, Zone 67, Xin'an Street, Bao'an District, Shenzhen City, Guangdong Province. Applicant before: SHENZHEN RUNSDATA TECHNOLOGY CO.,LTD. |
| GR01 | Patent grant | |