CN110363111A - Human face in-vivo detection method, device and storage medium based on lens distortions principle - Google Patents
Publication number: CN110363111A · Application: CN201910567529.4A · Authority: CN (China) · Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V40/166 — Human faces: detection, localisation, or normalisation using acquisition arrangements
- G06V40/168 — Human faces: feature extraction; face representation
- G06V40/172 — Human faces: classification, e.g. identification
- G06V40/45 — Spoof detection, e.g. liveness detection; detection of the body part being alive
- Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The present invention proposes a face liveness detection method based on the lens distortion principle. The method comprises: capturing one long-range face image and one short-range face image of the user under test with a camera; performing facial keypoint detection on the captured long-range and short-range face images through the Dlib library to obtain a multidimensional feature vector for the two face images; and classifying that multidimensional feature vector with a liveness detection classifier to output the liveness detection result. The present invention also proposes an electronic device and a computer-readable storage medium. By combining the distortion characteristics of optical lenses with a neural network, the invention realizes liveness detection that is applicable to a wide range of scenes, is not affected by lighting and similar factors, and has strong generalization ability.
Description
Technical field
The present invention relates to the field of biometric identification, and in particular to a face liveness detection method, device, and computer-readable storage medium based on the lens distortion principle.
Background art
Nowadays, with the rapid development of face detection and face verification technology, more and more face-unlock applications appear in daily life and work, so ensuring the security of face verification has attracted growing attention. Liveness verification is used to judge whether the subject in front of the lens is a real person or a photo or video, thereby securing face verification.
In traditional liveness verification algorithms, the differences between live and non-live subjects (such as color texture, non-rigid motion deformation, material, and image or video quality) are captured with hand-designed filters that extract these features from images; positive and negative samples are then fed into an SVM for training to judge whether the target is live or non-live. Traditional algorithms, however, have the following drawbacks. First, filter design is complex: engineers must tune parameters manually, and only through repeated trials can the final filter be produced. Second, the generalization ability of traditional learning algorithms is poor, because video capture is subject to many environmental variations (such as lighting, occlusion, angle, highlights, and shadow), and a traditional algorithm cannot produce a suitable filter for every situation. Traditional algorithms therefore cannot be applied broadly across scenes.
Currently, more and more research applies deep learning networks to liveness detection. A CNN model can extract image features and analyze images further, learning automatically and sparing engineers manual debugging; moreover, a deep learning network can observe features that humans may not be able to observe, and the whole network, as a black box, learns from data on its own. Current mainstream approaches select a suitable classification network such as VGG16, ResNet, or DenseNet, shuffle the mixed positive and negative samples, feed them into the neural network for training, and thereby obtain a result for inference. However, deep learning approaches still have loopholes and can be defeated by shaking a video or a photo.
Therefore, a face liveness detection method with higher security is needed.
Summary of the invention
The present invention provides a face liveness detection method based on the lens distortion principle, an electronic device, and a computer-readable storage medium. Its main purpose is to realize liveness detection by combining the distortion characteristics of optical lenses with a neural network, so that detection is not affected by lighting and similar factors, applies to a wide range of scenes, and has strong generalization ability.
To achieve the above object, the present invention provides a face liveness detection method based on the lens distortion principle, the method comprising:
S110: capturing one long-range face image and one short-range face image of the user under test with a camera;
S120: performing facial keypoint detection on the captured long-range and short-range face images of the user under test through the Dlib library, and obtaining the multidimensional feature vector of the two face images;
S130: classifying the multidimensional feature vector with a liveness detection classifier to output the liveness detection result.
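The three steps above can be sketched as a minimal pipeline. Every helper name here is a hypothetical stand-in rather than the patent's implementation: a real system would capture images with the camera, run Dlib keypoint detection, and load a trained classifier.

```python
def capture_images():
    """S110: one long-range and one short-range face image (stubbed)."""
    return "far_image", "near_image"

def extract_feature_vector(far_image, near_image):
    """S120: keypoint detection -> 29-dim far/near distance-ratio vector (stubbed)."""
    return [1.0] * 29

def classify(features):
    """S130: trained liveness classifier (stubbed here as a mean threshold)."""
    return sum(features) / len(features) > 0.5

def detect_liveness():
    far_img, near_img = capture_images()
    return classify(extract_feature_vector(far_img, near_img))
```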
Preferably, before step S130, the method further comprises obtaining the liveness detection classifier through a training step, wherein the training step comprises:
S210: obtaining a positive training sample and a negative training sample for each legitimate user, wherein the positive training sample comprises long-range and short-range face images of the legitimate user's live face, and the negative training sample comprises long-range and short-range face images of a non-live representation of the legitimate user;
S220: training a neural network on the positive and negative training samples to obtain the liveness detection classifier.
Preferably, the training step further comprises:
S310: obtaining one long-range and one short-range face image of the legitimate user's live face;
S320: performing facial keypoint detection on the captured long-range and short-range face images through the Dlib library, and selecting multiple groups of keypoints from each;
S330: from the keypoints extracted at long range, obtaining the distance between the keypoints of each group at long range; from the keypoints extracted at short range, obtaining the distance between the keypoints of each group at short range;
S340: from the obtained long-range and short-range per-group distances, constructing a multidimensional feature vector;
S350: feeding the multidimensional feature vector into the liveness detection classifier for neural network training.
Preferably, the neural network has a five-layer structure, the fifth layer being an output layer that outputs the detection result.
Preferably, the output layer performs binary classification using a sigmoid function to determine whether the user under test is live or non-live.
In addition, to achieve the above object, the present invention also provides an electronic device comprising a memory, a processor, and a camera, the memory storing a face liveness detection program which, when executed by the processor, implements the following steps:
S110: capturing one long-range face image and one short-range face image of the user under test with a camera;
S120: performing facial keypoint detection on the captured long-range and short-range face images of the user under test through the Dlib library, and obtaining the multidimensional feature vector of the two face images;
S130: classifying the multidimensional feature vector with a liveness detection classifier to output the liveness detection result.
Preferably, before step S130, the method further comprises obtaining the liveness detection classifier through a training step, wherein the training step comprises:
S210: obtaining a positive training sample and a negative training sample for each legitimate user, wherein the positive training sample comprises long-range and short-range face images of the legitimate user's live face, and the negative training sample comprises long-range and short-range face images of a non-live representation of the legitimate user;
S220: training a neural network on the positive and negative training samples to obtain the liveness detection classifier.
Preferably, the training step further comprises:
S310: obtaining one long-range and one short-range face image of the legitimate user's live face;
S320: performing facial keypoint detection on the captured long-range and short-range face images through the Dlib library, and selecting multiple groups of keypoints from each;
S330: from the keypoints extracted at long range, obtaining the distance between the keypoints of each group at long range; from the keypoints extracted at short range, obtaining the distance between the keypoints of each group at short range;
S340: from the obtained long-range and short-range per-group distances, constructing a multidimensional feature vector;
S350: feeding the multidimensional feature vector into the liveness detection classifier for neural network training.
Preferably, the neural network has a five-layer structure, the fifth layer being an output layer that performs binary classification using a sigmoid function, determining whether the user under test is live or non-live.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium storing a face liveness detection program based on the lens distortion principle which, when executed by a processor, implements the steps of the face liveness detection method based on the lens distortion principle described above.
With the face liveness detection method based on the lens distortion principle, the electronic device, and the computer-readable storage medium proposed by the present invention, a large number of positive and negative samples are fed into a neural network for training: the long-range and short-range face samples of each legitimate user's live face form one class of training sample, while negative samples (long-range and short-range face samples of non-live representations, i.e. photos or videos) form the other class. The neural network learns the difference in keypoint distance ratios between live and non-live subjects and thus infers correctly. By combining the distortion characteristics of optical lenses with a neural network, the invention realizes liveness detection that applies to a wide range of scenes and is not affected by lighting and similar factors: as long as a face can be detected normally, the distances between keypoints can be computed, and no other easily disturbed features are considered, so the generalization ability is stronger.
Brief description of the drawings
Fig. 1 is a flowchart of a preferred embodiment of the face liveness detection method based on the lens distortion principle of the present invention;
Fig. 2 is a flowchart of a preferred embodiment of the training method of the predetermined liveness detection classifier of the present invention;
Fig. 3 is a flowchart of a preferred embodiment of the method for training the neural network with positive training samples according to the present invention;
Fig. 4 is a schematic diagram of the application environment of the face liveness detection method based on the lens distortion principle according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of facial keypoint detection according to the present invention;
Fig. 6 is a schematic diagram of the fully connected layers of an embodiment of the present invention extracting the 29-dimensional feature vector;
Fig. 7 is a schematic diagram of the structure of the neural network of an embodiment of the present invention.
The objects, functions, and advantages of the present invention will be further described with reference to the accompanying drawings and embodiments.
Detailed description of the embodiments
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The present invention provides a face liveness detection method based on the lens distortion principle. Referring to Fig. 1, which shows the flowchart of a preferred embodiment of this method: the method may be executed by a device, and the device may be implemented in software and/or hardware.
In the present embodiment, the face liveness detection method based on the lens distortion principle comprises steps S110 to S130.
Step S110: capture one long-range face image and one short-range face image of the user under test with a camera.
When the camera captures a real-time image, it sends that image to the processor. After receiving the real-time image, the processor first obtains the picture size and creates a grayscale image of the same size; the acquired color image is converted into a grayscale image while a block of memory is allocated. Histogram equalization is applied to the grayscale image to reduce its information content and speed up detection; a trained detector library is then loaded to detect the face in the picture, returning an object containing face information, from which the face position data are obtained and recorded. Finally, the face region is extracted and saved, completing one round of real-time face image extraction.
It should be noted that processing of the real-time image also includes preprocessing operations on sample images such as scaling, cropping, flipping, and warping.
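The grayscale conversion and histogram equalization steps described above can be sketched in pure Python on a nested-list image. This is a minimal illustration, not the patent's implementation: a production pipeline would more likely use OpenCV's `cv2.cvtColor` and `cv2.equalizeHist`, and the BT.601 luma weights are an assumption, since the description does not specify the conversion formula.

```python
def to_gray(rgb_image):
    """Convert an RGB image (rows of (r, g, b) tuples) to grayscale
    using the ITU-R BT.601 luma weights (an assumed choice)."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in row]
            for row in rgb_image]

def equalize(gray_image, levels=256):
    """Classic histogram equalization: remap intensities through the
    cumulative distribution so the histogram spreads over the full range."""
    pixels = [p for row in gray_image for p in row]
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for c in cdf]
    return [[lut[p] for p in row] for row in gray_image]
```

For example, `equalize([[50, 50], [200, 200]])` stretches the two intensity levels to the extremes 0 and 255, which is the "accelerated detection" contrast boost the description refers to.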
Step S120: perform facial keypoint detection on the captured long-range and short-range face images of the user under test through the Dlib library, and obtain the multidimensional feature vector of the two face images; in a specific implementation, the multidimensional feature vector is a 29-dimensional feature vector.
Next, the facial keypoints are extracted from the real-time images with the Dlib library, and the change in the distances between facial keypoints across the two face-to-lens distances is measured.
The facial keypoints are extracted using Dlib, a relatively mature face detection library: an open-source C++ toolkit of machine learning algorithms with both C++ and Python interfaces, with which face detection and keypoint detection can be performed conveniently. Using Dlib, the face location is obtained together with the positions of 68 keypoints, and each keypoint can be labeled. Keypoint detection is performed on the picture with a gradient-boosted decision tree facial keypoint detector, obtaining the facial keypoint location information for the picture; this detector is trained in advance on the basis of Dlib.
Keypoint extraction first relies on a preset template picture, which contains information such as the skin tone and eyebrows of the target person together with the target facial keypoint location information; the keypoint location information consists of the target coordinate values of the target person's eye frame, nose frame, mouth frame, and face frame.
With the Dlib library, all 68 facial keypoints can be extracted; the present invention, however, selects only 58 of them. Referring to Fig. 5, which shows the 68 facial keypoints: removing the 10 boxed keypoints (keypoints 28, 29, 30, 31, 34, 52, 63, 67, 9, and 58) leaves the 58 keypoints selected by the present invention. These 58 keypoints are the keypoints lying off the facial midline; they are distributed in bilateral symmetry about the midline, so each keypoint has a symmetric counterpart. That is, the 58 keypoints form 29 groups, each group containing two mutually symmetric left/right keypoints; illustratively, keypoint 7 and keypoint 11 in Fig. 5 form one group. The 29 groups of keypoints thus yield a 29-dimensional feature vector.
In the present invention, only these 58 keypoints are used: from the change in the distance between the two symmetric keypoints of each group when the lens is at long range versus short range, it can be determined whether the user is a photo or a video. The long-range and short-range photos of the user's face both cover the following regions: facial contour, eyebrows, nose, eyes, and mouth. Here, long range may mean the face is 50-70 cm from the lens, and short range may mean the face is 30-50 cm from the lens.
Step S130: classify the 29-dimensional feature vector with the liveness detection classifier and output the liveness detection result.
When the input picture is non-live, facial keypoints can still be detected; but when the attacking object approaches the screen, a photo or a video is two-dimensional, so the distances between the symmetric keypoints of each group change uniformly and their ratios remain constant. Illustratively, the ratio between the nose-keypoint distance and the eye-keypoint distance stays unchanged. A large number of positive and negative samples are therefore fed into a neural network for training, and the network learns the difference in keypoint distance ratios between live and non-live subjects, thereby inferring correctly.
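The geometric intuition above can be checked with a toy pinhole-camera model. This is a deliberate simplification (real lens distortion is more complex than pure perspective, and the coordinates below are made up for illustration): a flat photo moved between the two capture distances scales every keypoint distance by the same factor, so all far/near ratios coincide, whereas a face with depth variation does not.

```python
def project(point3d, focal=1.0):
    """Pinhole projection of a 3-D point (x, y, z) onto the image plane."""
    x, y, z = point3d
    return (focal * x / z, focal * y / z)

def pair_distance(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def ratio(points, cam_far, cam_near, pair):
    """far/near projected distance for one symmetric keypoint pair."""
    i, j = pair
    def at(dist):
        proj = [project((x, y, z + dist)) for x, y, z in points]
        return pair_distance(proj[i], proj[j])
    return at(cam_far) / at(cam_near)

# Flat "photo": all landmarks at the same depth (z offset 0).
photo = [(-3, 0, 0), (3, 0, 0), (-1, -1, 0), (1, -1, 0)]
# "Live" face: the nose-like pair (indices 2, 3) protrudes toward the camera.
face = [(-3, 0, 0), (3, 0, 0), (-1, -1, -2), (1, -1, -2)]

for pts, label in [(photo, "photo"), (face, "face")]:
    r_eyes = ratio(pts, cam_far=60, cam_near=40, pair=(0, 1))
    r_nose = ratio(pts, cam_far=60, cam_near=40, pair=(2, 3))
    # the photo's two ratios match exactly; the face's differ
    print(f"{label}: eye-pair ratio={r_eyes:.4f}, nose-pair ratio={r_nose:.4f}")
```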
Referring to Fig. 2, which shows the flowchart of a preferred embodiment of the training method of the predetermined liveness detection classifier.
Before step S130, the method further comprises obtaining the liveness detection classifier through a training step, wherein the training step comprises steps S210-S220.
S210: obtain a positive training sample and a negative training sample for each legitimate user, wherein the positive training sample comprises long-range and short-range face images of the legitimate user's live face, and the negative training sample comprises long-range and short-range face images of a non-live representation of the legitimate user.
S220: train a neural network on the positive and negative training samples to obtain the liveness detection classifier.
That is, the method of the present invention feeds a large number of positive and negative samples into a neural network for training: the long-range and short-range face samples of each legitimate user's live face form one class of training sample, while negative samples (non-live representations, i.e. photos or videos, at long and short range) form the other class. The neural network learns the difference in keypoint distance ratios between live and non-live subjects and thus infers correctly.
Referring to Fig. 3, which shows the flowchart of a preferred embodiment of the method for training the neural network with positive training samples. The training step for positive samples further comprises steps S310-S350.
S310: obtain one long-range and one short-range face image of the legitimate user's live face.
S320: perform facial keypoint detection on the captured long-range and short-range face images through the Dlib library, and select the 29 groups of keypoints from each.
S330: from the 29 groups of keypoints extracted at long range, obtain the distance between the keypoints of each group at long range; from the 29 groups extracted at short range, obtain the distance between the keypoints of each group at short range.
S340: from the obtained long-range and short-range per-group distances, construct a 29-dimensional feature vector.
S350: feed the 29-dimensional feature vector into the liveness detection classifier for neural network training.
As an example, take keypoint 7 and keypoint 11: the distance between this group of keypoints extracted from the long-range face image is denoted L_far, and the distance between the same group extracted from the short-range face image is denoted L_near; the feature value is then L_far / L_near. There are 29 such groups in all, giving the 29-dimensional feature vector.
The 29-dimensional feature vectors of the positive and negative samples are therefore fed into the neural network for training; the network learns the difference in keypoint distance ratios between live and non-live subjects and thus judges correctly whether the user under test is live or non-live.
Referring to Fig. 6, which illustrates how the fully connected layers extract the 29-dimensional feature vector. Since there are only 29 input values, the input dimensionality is small; feature extraction therefore uses fully connected layers, in which every unit of one layer is connected to every unit of the next, preserving as far as possible each keypoint's features at long and short range. The neural network automatically learns the distortion-induced changes between keypoints: distortion is precisely the previously mentioned effect whereby the distance between the nose keypoints and the distances between the other facial keypoints change with the distance to the lens.
Referring to Fig. 7, which shows the structure of the neural network of the present invention. The network uses 5 layers in total and takes the 29-dimensional keypoint data as input: the first layer uses 64 neurons, the second 128, the third 256, and the fourth 512; the last layer outputs the classification result, performing binary classification with a sigmoid function to determine whether the user under test is live or non-live.
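A forward pass through the 29-64-128-256-512-1 architecture can be sketched as follows. The hidden-layer activation (ReLU), the absence of bias terms, and the random initialization are assumptions for illustration only: the description fixes just the layer widths and the sigmoid output.

```python
import math
import random

LAYER_SIZES = [29, 64, 128, 256, 512, 1]  # widths from the description of Fig. 7

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def init_layers(sizes, seed=0):
    """One random weight matrix per consecutive pair of layer widths."""
    rng = random.Random(seed)
    return [[[rng.uniform(-0.1, 0.1) for _ in range(n_in)]
             for _ in range(n_out)]
            for n_in, n_out in zip(sizes, sizes[1:])]

def forward(x, layers):
    """Forward pass; the sigmoid on the last layer yields a liveness score."""
    for i, weights in enumerate(layers):
        x = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
        if i == len(layers) - 1:
            x = [sigmoid(v) for v in x]
        else:
            x = [max(0.0, v) for v in x]  # assumed ReLU hidden activation
    return x[0]
```

With `p = forward([0.66] * 29, init_layers(LAYER_SIZES))`, the output lies in (0, 1) and can be thresholded at 0.5 to decide live versus non-live.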
It should be noted that the sigmoid function is f(x) = 1 / (1 + e^(-x)). It is the nonlinear activation function of a neuron in the neural network, whose role is to introduce nonlinearity; many specific nonlinear forms are possible. The advantage of the sigmoid is that its output range is bounded, so data do not diverge easily during propagation; the corresponding disadvantage is that its gradient becomes very small when it saturates. A further advantage of the sigmoid is that its output range is (0, 1), so it can serve as the output layer, with the output interpreted as a probability.
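The saturation behaviour mentioned above is easy to verify numerically: the sigmoid's derivative is f'(x) = f(x)(1 - f(x)), which peaks at 0.25 at x = 0 and becomes vanishingly small for large |x|.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    """Derivative of the sigmoid: f'(x) = f(x) * (1 - f(x))."""
    s = sigmoid(x)
    return s * (1.0 - s)

print(sigmoid(0.0))        # 0.5: bounded in (0, 1), usable as a probability
print(sigmoid_grad(0.0))   # 0.25: the maximum gradient
print(sigmoid_grad(10.0))  # ~4.5e-05: near-zero gradient when saturated
```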
The present invention provides a face liveness detection method based on the lens distortion principle, applied to an electronic device 4. Referring to Fig. 4, which shows the application environment of the preferred embodiment of the face liveness detection method based on the lens distortion principle of the present invention.
In the present embodiment, the electronic device 4 may be a terminal device with computing capability, such as a server, smartphone, tablet computer, portable computer, or desktop computer.
The electronic device 4 comprises a processor 42, a memory 41, a camera 43, a network interface 44, and a communication bus 45.
The memory 41 includes at least one type of readable storage medium, which may be a non-volatile storage medium such as flash memory, a hard disk, a multimedia card, or a card-type memory. In some embodiments, the readable storage medium may be an internal storage unit of the electronic device 4, such as its hard disk. In other embodiments, the readable storage medium may also be an external memory of the electronic device 4, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the electronic device 4.
In the present embodiment, the readable storage medium of the memory 41 is generally used to store the face liveness detection program 40 installed on the electronic device 4, the liveness detection classifier, and so on. The memory 41 may also be used to temporarily store data that have been or will be output.
In some embodiments, the processor 42 may be a central processing unit (CPU), microprocessor, or other data processing chip, used to run the program code or process the data stored in the memory 41, for example to execute the face liveness detection program 40.
The camera 43 may be part of the electronic device 4 or independent of it. In some embodiments, the electronic device 4 is a terminal device with a built-in camera, such as a smartphone, tablet computer, or portable computer, and the camera 43 is that device's camera. In other embodiments, the electronic device 4 may be a server, with the camera 43 independent of it and connected to it over a network: for example, the camera 43 is installed at a particular place, such as an office or a monitored area, captures real-time images of targets entering that place, and transmits the captured real-time images to the processor 42 over the network.
The network interface 44 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface), generally used to establish communication connections between the electronic device 4 and other electronic equipment.
The communication bus 45 realizes the connections and communication between these components.
Fig. 4 shows only the electronic device 4 with components 41-45, but it should be understood that not all of the illustrated components need be implemented; more or fewer components may be implemented instead.
In a specific embodiment of the present invention, the electronic device 4 may also include a user interface, which may comprise an input unit such as a keyboard, a speech input device with speech recognition such as a microphone, and a speech output device such as speakers or headphones; optionally, the user interface may also include a standard wired interface and a wireless interface.
In addition, the electronic device 4 may include a display, which may also be called a display screen or display unit. In some embodiments it may be an LED display, a liquid crystal display, a touch liquid crystal display, an organic light-emitting diode (OLED) touch device, or the like. The display is used to show the information processed in the electronic device 4 and to present a visual user interface.
In addition, the electronic device 4 further includes a touch sensor. The region it provides for the user's touch operations is called the touch area. The touch sensor described here may be a resistive touch sensor, a capacitive touch sensor, or the like; moreover, it includes not only contact touch sensors but possibly also proximity touch sensors, and it may be a single sensor or multiple sensors arranged, for example, in an array.
The area of the display of the electronic device 4 may be the same as or different from the area of the touch sensor. Optionally, the display and the touch sensor are stacked to form a touch display screen, on which the device detects user-triggered touch operations.
Optionally, the electronic device 4 may also include radio frequency (RF) circuits, sensors, audio circuits, and so on, which are not detailed here.
In the device embodiment shown in Fig. 4, the memory 41, which is a kind of computer storage medium, may include an operating system and a face in-vivo detection program 40. When the processor 42 executes the face in-vivo detection program 40 stored in the memory 41, the following steps are implemented:
S110: acquiring one long-distance face image and one short-distance face image of the user under test with the camera device;
S120: performing face key-point detection on the acquired long-distance and short-distance face images of the user under test with the Dlib library, obtaining a multi-dimensional feature vector from the two face images;
S130: performing classification on the multi-dimensional feature vector with the in-vivo detection classifier, and outputting the in-vivo detection result.
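Read as code, steps S110-S130 amount to the following wiring (a minimal sketch; `detect_liveness`, `get_landmarks`, and the consecutive-pair distance feature are illustrative assumptions, not the patent's actual API):

```python
def detect_liveness(far_image, near_image, get_landmarks, classifier):
    """Run steps S110-S130 on one long-distance / short-distance pair.

    get_landmarks: callable mapping an image to a list of (x, y) key
                   points (the embodiment uses the Dlib landmark detector).
    classifier:    callable mapping a feature vector to 1 (living) or 0.
    """
    # S120: face key-point detection on both images
    far_pts = get_landmarks(far_image)
    near_pts = get_landmarks(near_image)

    def pair_dists(pts):
        # Distances between consecutive key points (a stand-in for the
        # patent's unspecified key-point groups).
        return [((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
                for a, b in zip(pts, pts[1:])]

    # One multi-dimensional feature vector built from both images
    features = pair_dists(far_pts) + pair_dists(near_pts)

    # S130: classification by the in-vivo detection classifier
    return classifier(features)
```

In practice `get_landmarks` would wrap Dlib's face detector and shape predictor, and `classifier` would be the trained neural network described below.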
In the electronic device proposed by the above embodiment, a large number of long-distance and short-distance images of the living and non-living faces of legitimate users are fed into a neural network for training, so that the network can learn the difference in key-point far/near distance ratios between living bodies and non-living bodies and thereby make a correct inference.
In other embodiments, the face in-vivo detection program 40 may also be divided into one or more modules, and the one or more modules are stored in the memory 41 and executed by the processor 42 to implement the present invention. A module in the sense of the present invention is a series of computer program instruction segments capable of completing a specific function.
The face in-vivo detection program 40 may be divided into an acquisition module, a key-point identification module, and an in-vivo detection module. The functions or operation steps implemented by these modules are similar to those described above and are not detailed here. Illustratively: the acquisition module obtains the real-time image captured by the camera device 43 and extracts a real-time face image from it using a face recognition algorithm; the key-point identification module extracts 29 groups of face key points from the real-time face image using the Dlib library and forms a 29-dimensional feature vector; the in-vivo detection module compares the obtained feature vector against the in-vivo detection classifier to infer whether the user under test is a living body.
In addition, an embodiment of the present invention further proposes a computer-readable storage medium. The computer-readable storage medium contains a face in-vivo detection program, and when the face in-vivo detection program is executed by a processor, the following operations are implemented:
S110: acquiring one long-distance face image and one short-distance face image of the user under test with the camera device;
S120: performing face key-point detection on the acquired long-distance and short-distance face images of the user under test with the Dlib library, obtaining a multi-dimensional feature vector from the two face images;
S130: performing classification on the multi-dimensional feature vector with the in-vivo detection classifier, and outputting the in-vivo detection result.
In this embodiment of the present invention, before step S130 the method further includes obtaining the in-vivo detection classifier through a training step, wherein the training step includes:
S210: obtaining positive and negative training samples for each legitimate user, where a positive training sample consists of the long-distance and short-distance face images of the legitimate user's living body, and a negative training sample consists of the long-distance and short-distance face images of the legitimate user's non-living body;
S220: performing neural-network training on the obtained positive and negative training samples to obtain the in-vivo detection classifier.
The training step further includes:
S310: obtaining one long-distance face image and one short-distance face image of the legitimate user's living body;
S320: performing face key-point detection on the acquired long-distance and short-distance face images with the Dlib library, and selecting multiple groups of key points from each;
S330: using the groups of key points extracted at long distance to obtain the distance between the key points of each group at long distance, and using the groups of key points extracted at short distance to obtain the distance between the key points of each group at short distance;
S340: combining the obtained long-distance and short-distance per-group key-point distances into one multi-dimensional feature vector;
S350: inputting the multi-dimensional feature vector into the in-vivo detection classifier for neural-network training.
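Steps S330-S340, read literally, compute one distance per key-point group in each image and concatenate the results; a minimal sketch follows (the specific key-point groups are not given in the text, so the index pairs below are hypothetical):

```python
# Hypothetical key-point groups: each group is a pair of landmark indices.
GROUPS = [(0, 1), (0, 2), (1, 2)]

def group_distances(landmarks, groups=GROUPS):
    """S330: the distance between the two key points of every group."""
    out = []
    for i, j in groups:
        (x1, y1), (x2, y2) = landmarks[i], landmarks[j]
        out.append(((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5)
    return out

def feature_vector(far_landmarks, near_landmarks):
    """S340: concatenate the long-distance and short-distance per-group
    distances into one multi-dimensional feature vector."""
    return group_distances(far_landmarks) + group_distances(near_landmarks)
```

The resulting vector (S350) is what the neural network is trained on.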
The neural network used in this embodiment has a five-layer structure: the first layer uses 64 neurons, the second layer uses 256 neurons, the third layer uses 512 neurons, and the fourth layer uses 128 neurons; the fifth layer is the output layer, which outputs the detection result.
In this embodiment of the present invention, the output layer performs two-class classification using a sigmoid function to determine whether the user under test is a living body or a non-living body.
The specific embodiments of the computer-readable storage medium of the present invention are substantially the same as those of the above face in-vivo detection method based on the lens distortion principle and of the electronic device, and are not described in detail here.
It should be noted that in this document the terms "include" and "comprise", and any variants thereof, are intended to cover non-exclusive inclusion, so that a process, device, article, or method that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, device, article, or method. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, device, article, or method that includes the element.
The serial numbers of the above embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments. Through the description of the above embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform; they can of course also be implemented by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the present invention. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the protection scope of the present invention.
Claims (10)
1. A face in-vivo detection method based on the lens distortion principle, applied to an electronic device, characterized in that the method includes:
S110: acquiring one long-distance face image and one short-distance face image of the user under test with a camera device;
S120: performing face key-point detection on the acquired long-distance and short-distance face images of the user under test with the Dlib library, obtaining a multi-dimensional feature vector from the two face images;
S130: performing classification on the multi-dimensional feature vector with an in-vivo detection classifier, and outputting the in-vivo detection result.
2. The face in-vivo detection method based on the lens distortion principle according to claim 1, characterized in that, before step S130, the method further includes obtaining the in-vivo detection classifier through a training step, wherein the training step includes:
S210: obtaining positive and negative training samples for each legitimate user, wherein a positive training sample consists of the long-distance and short-distance face images of the legitimate user's living body, and a negative training sample consists of the long-distance and short-distance face images of the legitimate user's non-living body;
S220: performing neural-network training on the positive and negative training samples to obtain the in-vivo detection classifier.
3. The face in-vivo detection method based on the lens distortion principle according to claim 2, characterized in that the training step further includes:
S310: obtaining one long-distance face image and one short-distance face image of the legitimate user's living body;
S320: performing face key-point detection on the acquired long-distance and short-distance face images with the Dlib library, and selecting multiple groups of key points from each;
S330: using the groups of key points extracted at long distance to obtain the distance between the key points of each group at long distance, and using the groups of key points extracted at short distance to obtain the distance between the key points of each group at short distance;
S340: combining the obtained long-distance and short-distance per-group key-point distances into one multi-dimensional feature vector;
S350: inputting the multi-dimensional feature vector into the in-vivo detection classifier for neural-network training.
4. The face in-vivo detection method based on the lens distortion principle according to claim 2, characterized in that the neural network has a five-layer structure, the fifth layer being an output layer that outputs the detection result.
5. The face in-vivo detection method based on the lens distortion principle according to claim 4, characterized in that the output layer performs two-class classification using a sigmoid function to determine whether the user under test is a living body or a non-living body.
6. An electronic device, characterized in that the electronic device includes a memory, a processor, and a camera device, the memory contains a face in-vivo detection program, and when the face in-vivo detection program is executed by the processor, the following steps are implemented:
S110: acquiring one long-distance face image and one short-distance face image of the user under test with the camera device;
S120: performing face key-point detection on the acquired long-distance and short-distance face images of the user under test with the Dlib library, obtaining a multi-dimensional feature vector from the two face images;
S130: performing classification on the multi-dimensional feature vector with an in-vivo detection classifier, and outputting the in-vivo detection result.
7. The electronic device according to claim 6, characterized in that, before step S130, the method further includes obtaining the in-vivo detection classifier through a training step, wherein the training step includes:
S210: obtaining positive and negative training samples for each legitimate user, wherein a positive training sample consists of the long-distance and short-distance face images of the legitimate user's living body, and a negative training sample consists of the long-distance and short-distance face images of the legitimate user's non-living body;
S230: performing neural-network training on the positive and negative training samples to obtain the in-vivo detection classifier.
8. The electronic device according to claim 7, characterized in that the training step further includes:
S310: obtaining one long-distance face image and one short-distance face image of the legitimate user's living body;
S320: performing face key-point detection on the acquired long-distance and short-distance face images with the Dlib library, and selecting multiple groups of key points from each;
S330: using the groups of key points extracted at long distance to obtain the distance between the key points of each group at long distance, and using the groups of key points extracted at short distance to obtain the distance between the key points of each group at short distance;
S340: combining the obtained long-distance and short-distance per-group key-point distances into one multi-dimensional feature vector;
S350: inputting the multi-dimensional feature vector into the in-vivo detection classifier for neural-network training.
9. The electronic device according to claim 7, characterized in that the neural network has a five-layer structure, the fifth layer being an output layer that performs two-class classification using a sigmoid function to determine whether the user under test is a living body or a non-living body.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium contains a face in-vivo detection program based on the lens distortion principle, and when the face in-vivo detection program is executed by a processor, the steps of the face in-vivo detection method based on the lens distortion principle according to any one of claims 1 to 5 are implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910567529.4A CN110363111B (en) | 2019-06-27 | 2019-06-27 | Face living body detection method, device and storage medium based on lens distortion principle |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110363111A true CN110363111A (en) | 2019-10-22 |
CN110363111B CN110363111B (en) | 2023-08-25 |
Family
ID=68215807
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910567529.4A Active CN110363111B (en) | 2019-06-27 | 2019-06-27 | Face living body detection method, device and storage medium based on lens distortion principle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110363111B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100177968A1 (en) * | 2009-01-12 | 2010-07-15 | Fry Peter T | Detection of animate or inanimate objects |
CN105023010A (en) * | 2015-08-17 | 2015-11-04 | 中国科学院半导体研究所 | Face living body detection method and system |
CN106372629A (en) * | 2016-11-08 | 2017-02-01 | 汉王科技股份有限公司 | Living body detection method and device |
CN107590430A (en) * | 2017-07-26 | 2018-01-16 | 百度在线网络技术(北京)有限公司 | Biopsy method, device, equipment and storage medium |
CN109858375A (en) * | 2018-12-29 | 2019-06-07 | 深圳市软数科技有限公司 | Living body faces detection method, terminal and computer readable storage medium |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111091112A (en) * | 2019-12-30 | 2020-05-01 | 支付宝实验室(新加坡)有限公司 | Living body detection method and device |
WO2021135639A1 (en) * | 2019-12-30 | 2021-07-08 | 支付宝实验室(新加坡)有限公司 | Living body detection method and apparatus |
CN111091112B (en) * | 2019-12-30 | 2021-10-15 | 支付宝实验室(新加坡)有限公司 | Living body detection method and device |
CN112699811A (en) * | 2020-12-31 | 2021-04-23 | 中国联合网络通信集团有限公司 | Living body detection method, apparatus, device, storage medium, and program product |
CN112699811B (en) * | 2020-12-31 | 2023-11-03 | 中国联合网络通信集团有限公司 | Living body detection method, living body detection device, living body detection apparatus, living body detection storage medium, and program product |
CN114743253A (en) * | 2022-06-13 | 2022-07-12 | 四川迪晟新达类脑智能技术有限公司 | Living body detection method and system based on distance characteristics of key points of adjacent faces |
CN114743253B (en) * | 2022-06-13 | 2022-08-09 | 四川迪晟新达类脑智能技术有限公司 | Living body detection method and system based on distance characteristics of key points of adjacent faces |
Also Published As
Publication number | Publication date |
---|---|
CN110363111B (en) | 2023-08-25 |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |