CN108537152A - Method and apparatus for detecting live body - Google Patents
- Publication number: CN108537152A
- Application number: CN201810259543.3A
- Authority
- CN
- China
- Prior art keywords
- image
- initial
- facial image
- detected
- living body
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
An embodiment of the present application discloses a method and apparatus for detecting a living body. One specific implementation of the method includes: obtaining a face image to be detected; inputting the face image to be detected into a pre-trained feature extraction model to obtain image features corresponding to the face image, where the feature extraction model is used to extract features of face images; inputting the obtained image features into the generator of a pre-trained generative adversarial network to obtain a generated image corresponding to the face image to be detected, where the generative adversarial network includes a generator and a discriminator, and the feature extraction model and the generative adversarial network are trained on a set of live face images; and generating a liveness detection result corresponding to the face image to be detected based on the similarity between the face image to be detected and the obtained generated image. This embodiment improves the user's comfort during liveness detection and increases detection speed.
Description
Technical field
The embodiments of the present application relate to the field of computer technology, and in particular to a method and apparatus for detecting a living body.
Background technology
In liveness detection, a video of the user performing required actions (for example, nodding, shaking the head, looking up, lowering the head, or blinking) is typically recorded first, and the recorded video is then analyzed to produce a liveness detection result. However, performing specified actions is inconvenient for the user, and analyzing the video takes a relatively long time.
Summary of the invention
The embodiments of the present application propose a method and apparatus for detecting a living body.
In a first aspect, an embodiment of the present application provides a method for detecting a living body, the method including: obtaining a face image to be detected; inputting the face image to be detected into a pre-trained feature extraction model to obtain image features corresponding to the face image, where the feature extraction model is used to extract features of face images; inputting the obtained image features into the generator of a pre-trained generative adversarial network to obtain a generated image corresponding to the face image to be detected, where the generative adversarial network includes a generator and a discriminator, and the feature extraction model and the generative adversarial network are trained on a set of live face images; and generating a liveness detection result corresponding to the face image to be detected based on the similarity between the face image to be detected and the obtained generated image.
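The first-aspect method can be sketched as a short pipeline. This is a minimal illustration under stated assumptions: `extract_features` and `generate_image` are hypothetical stand-ins for the pre-trained feature extraction model and GAN generator, and cosine similarity with a threshold of 0.9 is an illustrative choice, not one fixed by the patent.

```python
# Sketch of the claimed pipeline with stand-in models (not the patent's models).

def extract_features(image):
    # Placeholder: a real system would run the pre-trained CNN here.
    return [sum(row) / len(row) for row in image]

def generate_image(features):
    # Placeholder: a real system would run the trained GAN generator here.
    return [[f] * 3 for f in features]

def flatten(image):
    return [p for row in image for p in row]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def detect_liveness(image, threshold=0.9):
    # Steps of the first aspect: extract features, regenerate an image,
    # then decide based on similarity between input and generated image.
    features = extract_features(image)
    generated = generate_image(features)
    sim = cosine_similarity(flatten(image), flatten(generated))
    return sim > threshold
```

A live face image that the generator reconstructs well yields a high similarity and is judged live; a poorly reconstructed input falls below the threshold.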
In some embodiments, generating the liveness detection result corresponding to the face image to be detected based on the similarity between the face image to be detected and the obtained generated image includes: determining whether the similarity between the face image to be detected and the obtained generated image exceeds a preset similarity threshold; and, in response to determining that it does, determining that the face in the face image to be detected is a live face.
In some embodiments, generating the liveness detection result corresponding to the face image to be detected based on the similarity between the face image to be detected and the obtained generated image further includes: in response to determining that the similarity does not exceed the threshold, determining that the face in the face image to be detected is not a live face.
In some embodiments, the feature extraction model and the generative adversarial network are trained through the following training steps: obtaining a set of live face images; and, for each live face image in the set, performing the following parameter-adjustment steps: inputting the live face image into an initial feature extraction model to obtain image features corresponding to the live face image; inputting the obtained image features into an initial generator to obtain a generated face image; adjusting the parameters of the initial feature extraction model and the initial generator based on the similarity between the obtained generated face image and the live face image; inputting the obtained generated face image and the live face image separately into an initial discriminator to obtain a first discrimination result and a second discrimination result, where the first and second discrimination results are respectively used to characterize whether the obtained generated face image and the live face image are real face images; and adjusting the parameters of the initial feature extraction model, the initial generator, and the initial discriminator based on a first difference and a second difference, where the first difference is the difference between the first discrimination result and a discrimination result indicating that the image input to the initial discriminator is not a real face image, and the second difference is the difference between the second discrimination result and a discrimination result indicating that the image input to the initial discriminator is a real face image.
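The parameter-adjustment steps above use two kinds of training signal: a reconstruction similarity between the live face image and the generated face image, and the two discriminator differences. A minimal sketch follows, under the assumptions (illustrative, not fixed by the patent) that similarity is measured as mean squared error and the discriminator outputs a probability that its input is real:

```python
import math

def reconstruction_loss(real, generated):
    # Similarity term: mean squared error between the live face image and
    # the generated face image (lower = more similar). Drives the feature
    # extraction model and generator toward faithful reconstruction.
    return sum((r - g) ** 2 for r, g in zip(real, generated)) / len(real)

def discriminator_losses(d_on_generated, d_on_real):
    # "First difference": pushes the discriminator's score on the generated
    # image toward the "not a real face image" label (probability 0).
    # "Second difference": pushes its score on the live face image toward
    # the "real face image" label (probability 1).
    first = -math.log(1.0 - d_on_generated + 1e-12)
    second = -math.log(d_on_real + 1e-12)
    return first, second
```

In a full system these scalars would feed a gradient-based update of the initial feature extraction model, generator, and discriminator parameters.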
In some embodiments, before the parameter-adjustment steps are performed for the live face images in the set, the training steps further include: determining the model structure information of the initial feature extraction model, the network structure information of the initial generator, and the network structure information of the initial discriminator, and initializing the model parameters of the initial feature extraction model, the network parameters of the initial generator, and the network parameters of the initial discriminator.
In some embodiments, the feature extraction model is a convolutional neural network.
In a second aspect, an embodiment of the present application provides an apparatus for detecting a living body, the apparatus including: an obtaining unit configured to obtain a face image to be detected; a feature extraction unit configured to input the face image to be detected into a pre-trained feature extraction model to obtain image features corresponding to the face image, where the feature extraction model is used to extract features of face images; an image generation unit configured to input the obtained image features into the generator of a pre-trained generative adversarial network to obtain a generated image corresponding to the face image to be detected, where the generative adversarial network includes a generator and a discriminator, and the feature extraction model and the generative adversarial network are trained on a set of live face images; and a liveness detection unit configured to generate a liveness detection result corresponding to the face image to be detected based on the similarity between the face image to be detected and the obtained generated image.
In some embodiments, the liveness detection unit includes: a first determining module configured to determine whether the similarity between the face image to be detected and the obtained generated image exceeds a preset similarity threshold; and a second determining module configured to determine, in response to determining that it does, that the face in the face image to be detected is a live face.
In some embodiments, the liveness detection unit further includes: a third determining module configured to determine, in response to determining that the similarity does not exceed the threshold, that the face in the face image to be detected is not a live face.
In some embodiments, the feature extraction model and the generative adversarial network are trained through the following training steps: obtaining a set of live face images; and, for each live face image in the set, performing the following parameter-adjustment steps: inputting the live face image into an initial feature extraction model to obtain image features corresponding to the live face image; inputting the obtained image features into an initial generator to obtain a generated face image; adjusting the parameters of the initial feature extraction model and the initial generator based on the similarity between the obtained generated face image and the live face image; inputting the obtained generated face image and the live face image separately into an initial discriminator to obtain a first discrimination result and a second discrimination result, where the first and second discrimination results are respectively used to characterize whether the obtained generated face image and the live face image are real face images; and adjusting the parameters of the initial feature extraction model, the initial generator, and the initial discriminator based on a first difference and a second difference, where the first difference is the difference between the first discrimination result and a discrimination result indicating that the image input to the initial discriminator is not a real face image, and the second difference is the difference between the second discrimination result and a discrimination result indicating that the image input to the initial discriminator is a real face image.
In some embodiments, before the parameter-adjustment steps are performed for the live face images in the set, the training steps further include: determining the model structure information of the initial feature extraction model, the network structure information of the initial generator, and the network structure information of the initial discriminator, and initializing the model parameters of the initial feature extraction model, the network parameters of the initial generator, and the network parameters of the initial discriminator.
In some embodiments, the feature extraction model is a convolutional neural network.
In a third aspect, an embodiment of the present application provides an electronic device including: one or more processors; and a storage device for storing one or more programs, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the method described in any implementation of the first aspect.
The method and apparatus for detecting a living body provided by the embodiments of the present application extract image features from a face image to be detected, input the obtained image features into the generator of a pre-trained generative adversarial network to obtain a generated image corresponding to the face image, and then determine whether the face in the face image to be detected is a live face based on the similarity between the face image to be detected and the obtained generated image. Here, only a face image of the user needs to be captured to perform liveness detection; no video of the user performing required actions needs to be acquired. This improves the user's comfort during liveness detection, and, compared with video-based liveness detection methods, the liveness detection method provided by the embodiments of the present application analyzes only a face image, which increases detection speed.
Description of the drawings
Other features, objects, and advantages of the present application will become more apparent from the following detailed description of non-limiting embodiments, read with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture to which the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the method for detecting a living body according to the present application;
Fig. 3 is a flowchart of one embodiment of the training steps for training the feature extraction model and the generative adversarial network according to the present application;
Fig. 4 is a flowchart of another embodiment of the method for detecting a living body according to the present application;
Fig. 5 is a structural schematic diagram of one embodiment of the apparatus for detecting a living body according to the present application;
Fig. 6 is a structural schematic diagram of a computer system suitable for implementing the electronic device of the embodiments of the present application.
Detailed description of the embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the related invention and do not limit it. It should also be noted that, for ease of description, only the parts related to the invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with one another. The present application is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method or apparatus for detecting a living body of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired links, wireless communication links, or fiber-optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 over the network 104 to receive or send messages. Various communication client applications, such as image acquisition applications, image processing applications, liveness detection applications, and search applications, may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be hardware or software. When they are hardware, they may be various electronic devices with a display screen, including but not limited to smartphones, tablet computers, laptop computers, and desktop computers. When they are software, they may be installed in the electronic devices listed above and may be implemented as multiple pieces of software or software modules (for example, to provide an image acquisition or liveness detection service) or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server providing various services, for example, a detection server that performs liveness detection on face images to be detected uploaded by the terminal devices 101, 102, 103. The detection server may analyze and otherwise process received data such as the face image to be detected and feed the processing result (for example, the liveness detection result) back to the terminal device.
It should be noted that the method for detecting a living body provided by the embodiments of the present application is generally performed by the server 105; accordingly, the apparatus for detecting a living body is generally disposed in the server 105.
It should be pointed out that the server 105 may also directly store face images to be detected locally and extract them directly for liveness detection; in this case, the exemplary system architecture 100 may omit the terminal devices 101, 102, 103 and the network 104.
It should also be noted that a liveness detection application may be installed on the terminal devices 101, 102, 103, which may then perform liveness detection on the face image to be detected themselves. In this case, the method for detecting a living body may also be performed by the terminal devices 101, 102, 103, and the apparatus for detecting a living body may accordingly be disposed in the terminal devices 101, 102, 103; the exemplary system architecture 100 may then omit the server 105 and the network 104.
It should be noted that the server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of multiple servers or as a single server. When it is software, it may be implemented as multiple pieces of software or software modules (for example, to provide a liveness detection service) or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative; any number of terminal devices, networks, and servers may be provided as required by the implementation.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for detecting a living body according to the present application is shown. The method for detecting a living body includes the following steps:
Step 201: obtain a face image to be detected.
In the present embodiment, the executing body of the method for detecting a living body (such as the server shown in Fig. 1) may obtain a face image to be detected.
Here, the face image to be detected may be uploaded to the executing body (such as the server shown in Fig. 1), through a wired or wireless connection, by a terminal device in communication with it (such as the terminal devices 101, 102, 103 shown in Fig. 1). In this case, the terminal device (for example, a mobile phone) in communication with the executing body may be equipped with a camera; the terminal device may control the camera to capture a face image of the user and send the captured face image to the executing body, which may then use the image received from the terminal device as the face image to be detected. It should be pointed out that the wireless connection may include, but is not limited to, 3G/4G connections, WiFi connections, Bluetooth connections, WiMAX connections, ZigBee connections, UWB (ultra-wideband) connections, and other wireless connections now known or developed in the future.
The face image to be detected may also be stored locally on the executing body. For example, when the executing body is a terminal device, the terminal device may be equipped with a camera; it may control the camera to capture a face image of the user, store the captured face image locally, and use the locally stored face image as the face image to be detected.
Step 202: input the face image to be detected into a pre-trained feature extraction model to obtain image features corresponding to the face image to be detected.
In the present embodiment, the executing body (such as the server shown in Fig. 1) may input the face image to be detected obtained in step 201 into a pre-trained feature extraction model to obtain image features corresponding to the face image to be detected.
Here, the pre-trained feature extraction model may be any of various models for extracting image features. The image features may likewise be of various kinds, including but not limited to color features, texture features, two-dimensional shape features, two-dimensional spatial relationship features, three-dimensional shape features, three-dimensional relationship features, face-contour features, and the shape, position, and proportion features of the facial parts.
In some optional implementations of the present embodiment, the feature extraction model may be a convolutional neural network (CNN). Here, a convolutional neural network may include at least one convolutional layer and at least one pooling layer, where the convolutional layers may be used to extract image features and the pooling layers may be used to down-sample the input information. In practice, a convolutional neural network is a feed-forward neural network whose artificial neurons can respond to surrounding units within part of the coverage area, and it performs excellently for image processing; image features can therefore be extracted with a convolutional neural network, and these features may be the basic elements of the image (such as color, lines, and texture). Here, the image features corresponding to the face image to be detected characterize the features in the face image while also reducing its dimensionality, which reduces the later computation load. In practice, the convolutional neural network may also include an activation-function layer, which uses a nonlinear activation function (such as a ReLU (rectified linear unit) function or a sigmoid function) to perform nonlinear computation on the input.
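The three layer types named above (convolution, pooling, activation) can be illustrated with a small pure-Python sketch: single channel, no padding, stride 1, 2x2 max pooling. This is a didactic toy under those stated assumptions; a real feature extraction model would use an optimized deep learning framework.

```python
def conv2d(image, kernel):
    # Convolutional layer: slide the kernel over the image (valid padding,
    # stride 1) and sum the elementwise products at each position.
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(w - kw + 1)]
            for i in range(h - kh + 1)]

def relu(fmap):
    # Activation layer: ReLU zeroes out negative responses.
    return [[max(0.0, v) for v in row] for row in fmap]

def max_pool(fmap, size=2):
    # Pooling layer: down-sample by taking the maximum of each size x size
    # region, compressing the data passed to later layers.
    return [[max(fmap[i + di][j + dj]
                 for di in range(size) for dj in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]
```

Stacking these three operations (plus learned kernel weights) gives the conv-pool-activation blocks described above.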
Step 203: input the obtained image features into the generator of a pre-trained generative adversarial network to obtain a generated image corresponding to the face image to be detected.
In the present embodiment, the executing body of the method for detecting a living body may input the image features obtained in step 202 into the generator of a pre-trained generative adversarial network to obtain a generated image corresponding to the face image to be detected. Here, the generative adversarial network may include a generator and a discriminator, where the generator is used to generate an image from image features, and the discriminator is used to determine whether the image input to it is a generated image or a real image.
Here, the feature extraction model used in step 202 and the generative adversarial network used in step 203 may be trained on a set of live face images, where each live face image in the set is an image obtained by photographing a live face. The feature extraction model and the generative adversarial network therefore learn the features of live face images rather than the features of non-live face images.
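Why this supports liveness detection can be shown with a toy stand-in (an assumption for illustration, not the patent's model): a generator trained only on live-face data reconstructs inputs lying on the live-face manifold well and off-manifold (spoof) inputs poorly. Here the "learned manifold" is a single stored direction, and the hypothetical `PROTOTYPE` vector plays the role of what training would learn.

```python
# Hypothetical "learned" live-face direction (unit vector, for illustration).
PROTOTYPE = [0.8, 0.6, 0.0]

def project_onto_prototype(x):
    # Toy "generator": reconstructs only the component of the input that
    # lies along the learned live-face direction.
    scale = sum(a * b for a, b in zip(x, PROTOTYPE))
    return [scale * p for p in PROTOTYPE]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

live = [0.8, 0.6, 0.0]   # lies along the learned direction: reconstructed well
spoof = [0.0, 0.0, 1.0]  # off the learned manifold: reconstructed poorly
```

The live input reconstructs with near-zero error while the spoof input does not, which is exactly the similarity gap the detection step exploits.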
In some optional implementations of the present embodiment, the feature extraction model and the generative adversarial network may be obtained through training steps. Referring to Fig. 3, a flow 300 of one embodiment of the training steps for training the feature extraction model and the generative adversarial network according to the present application is shown; the training steps may include the following steps 301 to 303:
Step 301: obtain a set of live face images.
Here, the executing body of the training steps may be the same as or different from the executing body of the method for detecting a living body. If they are the same, the executing body of the training steps may, after training, store the model structure information of the trained feature extraction model and the values of its model parameters locally. If they are different, the executing body of the training steps may, after training, send the model structure information of the trained feature extraction model and the values of its model parameters to the executing body of the method for detecting a living body.
Here, the executing body of the training steps may obtain the set of live face images locally or from other electronic devices connected to it over a network, where each live face image is an image obtained by photographing a live face.
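Handing the trained model over between executing bodies amounts to serializing its structure information and parameter values. A minimal JSON-based sketch (an illustrative format; real systems would use a framework-specific checkpoint format):

```python
import json

def serialize_model(structure, params):
    # Bundle model structure information and parameter values into one
    # payload that can be stored locally or sent to the detection executor.
    return json.dumps({"structure": structure, "params": params})

def deserialize_model(blob):
    # Recover the structure information and parameter values on the
    # receiving side.
    data = json.loads(blob)
    return data["structure"], data["params"]
```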
Step 302: for each live face image in the set of live face images, perform parameter-adjustment steps, where the parameter-adjustment steps may include the following sub-steps 3021 to 3025:
Sub-step 3021: input the live face image into an initial feature extraction model to obtain image features corresponding to the live face image.
Here, the executing body of the training steps may input the live face image into the initial feature extraction model to obtain image features corresponding to the live face image. The initial feature extraction model may be a model predetermined for extracting face-image features in order to train the feature extraction model; it may be an untrained feature extraction model or a feature extraction model whose training has not been completed.
Optionally, before performing step 302, the executing body of the training steps may perform the following first initialization operation:
First, the model structure information of the initial feature extraction model may be determined. It should be understood that, since the initial feature extraction model may be any of various types of model for extracting image features, the model structure information to be determined differs by type. Optionally, the initial feature extraction model may be a convolutional neural network. Since a convolutional neural network is a multilayer neural network in which each layer consists of multiple two-dimensional planes and each plane consists of multiple independent neurons, it is necessary to determine which layers a CNN-type initial feature extraction model includes (for example, convolutional layers, pooling layers, and activation-function layers), the connection order between the layers, and which parameters each layer includes (for example, weights, bias terms, and convolution strides). Among these, the convolutional layers are used to extract image features. For each convolutional layer, it can be determined how many convolution kernels it has, the size of each kernel, the weight of each neuron in each kernel, the bias term corresponding to each kernel, the stride between two adjacent convolutions, whether padding is needed, how many pixels to pad, and the padding value (usually 0). The pooling layers are used to down-sample the input information to compress the amounts of data and parameters and reduce overfitting; for each pooling layer, its pooling method can be determined (for example, taking the regional average or the regional maximum). The activation-function layers are used to perform nonlinear computation on the input; a specific activation function can be determined for each, for example ReLU and its various variants, the sigmoid function, the tanh (hyperbolic tangent) function, or the Maxout function. In practice, a convolutional neural network is a feed-forward neural network whose artificial neurons can respond to surrounding units within part of the coverage area, and it performs excellently for image processing; image features, which may be the basic elements of the image (such as color, lines, and texture), can therefore be extracted with a convolutional neural network.
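The activation functions named above can be written down directly from their standard definitions (Maxout is omitted here since it operates over multiple linear inputs rather than a single scalar):

```python
import math

def relu(x):
    # ReLU: passes positive inputs unchanged, zeroes negative ones.
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    # A common ReLU variant: small nonzero slope for negative inputs.
    return x if x > 0 else alpha * x

def sigmoid(x):
    # Squashes the input into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Hyperbolic tangent: squashes the input into (-1, 1).
    return math.tanh(x)
```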
Optionally, the initial feature extraction model may also be another model for extracting face-image features, such as an active shape model (ASM), a principal component analysis (PCA) model, an independent component analysis (ICA) model, a linear discriminant analysis (LDA) model, or a local feature analysis (LFA) model. Correspondingly, the model structure information to be determined differs for different feature extraction models.
Then, the model parameters of the initial feature extraction model may be initialized. In practice, each model parameter of the initial feature extraction model may be initialized with distinct small random numbers. The "small random numbers" ensure that the model does not enter a saturated state because of overly large weights, which would cause training to fail, and "distinct" ensures that the model can learn normally.
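The "distinct small random numbers" initialization can be sketched as follows; the scale of 0.01 and the layer sizes are illustrative assumptions, not values prescribed by this application.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(n_in, n_out, scale=0.01):
    """Initialize a weight matrix with distinct small random numbers and a
    zero bias vector, so activations start far from saturation."""
    weights = scale * rng.standard_normal((n_in, n_out))
    bias = np.zeros(n_out)
    return weights, bias

w, b = init_params(64, 32)
print(float(np.abs(w).max()))  # small: no weight is large enough to saturate
```

Because the weights are drawn from a continuous distribution, they are distinct with probability one, which is what allows different neurons to learn different features.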
In practice, since the concrete form of the feature extraction model differs, the image feature obtained corresponding to the living body face image may take the form of a feature vector or of a feature map.
Sub-step 3022: input the obtained image feature into the initial generator to obtain a generated facial image.

In this embodiment, the execution body of the first training step may input the image feature obtained in sub-step 3021 into the initial generator to obtain a generated facial image. Here, the initial generator is the generator in an initial generative adversarial network. The initial generative adversarial network may be a generative adversarial network (GAN, Generative Adversarial Networks) that includes a generator and a discriminator, where the generator and the discriminator in the initial generative adversarial network are respectively the initial generator and the initial discriminator. The initial generator is used to generate an image according to an image feature, and the initial discriminator is used to determine whether an input image is a generated image or a real image. In practice, the initial generator and the initial discriminator may be various neural network models.
In some optional implementations of this embodiment, the execution body of the training step may perform the following second initialization operation before performing step 302:

First, the network structure information of the initial generative adversarial network may be determined.

Here, since the initial generative adversarial network includes the initial generator and the initial discriminator, the execution body of the training step may determine the network structure information of the initial generator and determine the network structure information of the initial discriminator.

It can be understood that, since the initial generator and the initial discriminator may be various neural networks, it may be determined for each of them which kind of neural network it is, how many layers of neurons it includes, how many neurons each layer has, the connection order between the neurons of each layer, which parameters each layer of neurons includes, the type of activation function corresponding to each layer of neurons, and so on. It will be appreciated that, for different neural network types, the network structure information that needs to be determined also differs.

Then, the network parameters of the initial generator and of the initial discriminator in the initial generative adversarial network may be initialized. In practice, each network parameter of the initial generator and of the initial discriminator may be initialized with distinct small random numbers. The "small random numbers" ensure that the network does not enter a saturated state because of overly large weights, which would cause training to fail, and "distinct" ensures that the network can learn normally.
Sub-step 3023: adjust the parameters of the initial feature extraction model and the initial generator based on the similarity between the obtained generated facial image and the living body face image.

In this embodiment, the execution body of the training step may adjust the parameters of the initial feature extraction model and the initial generator based on the similarity between the generated facial image obtained in sub-step 3022 and the living body face image. In practice, an objective function may be set with the goal of maximizing the similarity between the obtained generated facial image and the living body face image; then a preset optimization algorithm may be used to adjust the parameters of the initial feature extraction model and the initial generator so as to optimize the objective function, and the parameter adjustment step is ended when a first preset training termination condition is satisfied. For example, the first preset training termination condition may include, but is not limited to: the training time exceeding a preset duration, the number of executions of the parameter adjustment step exceeding a preset number, or the similarity between the obtained generated facial image and the living body face image exceeding a preset similarity threshold.

Here, the preset optimization algorithm may include, but is not limited to, gradient descent (Gradient Descent), Newton's method, quasi-Newton methods (Quasi-Newton Methods), the conjugate gradient method (Conjugate Gradient), heuristic optimization methods, and various other optimization algorithms known at present or developed in the future. The similarity between two images may be calculated using various methods, for example, histogram matching, image similarity calculation methods based on matrix decomposition in mathematics (for example, singular value decomposition and non-negative matrix factorization), feature-point-based methods, and the like.
Sub-step 3024: input the obtained generated facial image and the living body face image respectively into the initial discriminator to obtain a first discrimination result and a second discrimination result.

Here, the execution body of the training step may input the generated image obtained in sub-step 3022 and the living body face image respectively into the initial discriminator to obtain a first discrimination result and a second discrimination result. The initial discriminator is used to characterize the correspondence between a facial image and a discrimination result characterizing whether the input facial image is a real facial image. The first discrimination result is the discrimination result output by the initial discriminator for the generated facial image obtained in sub-step 3022 and input into the initial discriminator, and is used to characterize whether the generated facial image obtained in sub-step 3022 is a real facial image. The second discrimination result is the discrimination result output by the initial discriminator for the living body face image input into the initial discriminator, and is used to characterize whether the living body face image is a real facial image. Here, the discrimination result output by the initial discriminator may take various forms. For example, the discrimination result may be a yes-discrimination result characterizing that the facial image is a real facial image (for example, the number 1 or the vector (1, 0)), or a no-discrimination result characterizing that the facial image is not a real facial image (that is, a generated facial image) (for example, the number 0 or the vector (0, 1)). As another example, the discrimination result may also include a probability characterizing that the facial image is a real facial image and/or a probability characterizing that the facial image is not a real facial image (that is, a generated facial image). For example, the discrimination result may be a vector including a first probability and a second probability, where the first probability characterizes the probability that the facial image is a real facial image, and the second probability characterizes the probability that the facial image is not a real facial image (that is, a generated facial image).
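The probability-vector form of the discrimination result described above can be sketched with a softmax over two scores. The raw scores, and the network that would produce them, are hypothetical here; only the conversion into the (first probability, second probability) vector is shown.

```python
import numpy as np

def discrimination_result(real_score, fake_score):
    """Turn two raw discriminator scores into the probability vector
    (P(real facial image), P(not real)) via a numerically stable softmax."""
    scores = np.array([real_score, fake_score], dtype=float)
    exp = np.exp(scores - scores.max())  # subtract max to avoid overflow
    p = exp / exp.sum()
    return p[0], p[1]

p_real, p_fake = discrimination_result(2.0, -1.0)
print(p_real > p_fake)  # True: this input is judged closer to a real image
```

The single-number form (1 or 0) mentioned in the text is recovered by thresholding the first probability.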
Sub-step 3025: adjust the parameters of the initial feature extraction model, the initial generator and the initial discriminator based on a first difference and a second difference.

Here, the execution body of the training step may first calculate the first difference and the second difference according to a preset loss function (for example, the L1 norm or the L2 norm). Here, the first difference is the difference between the first discrimination result obtained in sub-step 3024 and a no-discrimination result characterizing that the image input into the initial discriminator is not a real facial image, and the second difference is the difference between the second discrimination result and a yes-discrimination result characterizing that the image input into the initial discriminator is a real facial image. It can be understood that, when the form of the discrimination result output by the initial discriminator differs, the specific loss function may also differ.

Then, the execution body of the first training step may adjust the parameters of the initial feature extraction model, the initial generator and the initial discriminator based on the calculated first difference and second difference, and end the parameter adjustment step when a second preset training termination condition is satisfied. For example, the second preset training termination condition may include, but is not limited to: the training time exceeding a preset duration, the number of executions of the parameter adjustment step exceeding a preset number, or the calculated difference between the obtained first probability and second probability being less than a first preset difference threshold.

Here, various implementations may be used to adjust the parameters of the initial feature extraction model, the initial generator and the initial discriminator based on the calculated first difference and second difference. For example, the BP (Back Propagation) algorithm or SGD (Stochastic Gradient Descent) may be used to adjust the model parameters of the initial feature extraction model, the initial generator and the initial discriminator.
In this way, by repeatedly optimizing the initial feature extraction model, the initial generator and the initial discriminator in step 302, it can be achieved that after a facial image is input into the initial feature extraction model to obtain an image feature, and the obtained image feature is input into the initial generator, the resulting generated image is more similar to the facial image input into the initial feature extraction model; that is, the initial feature extraction model and the initial generator have learned the features in the living body face images.
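The reconstruction objective that step 302 optimizes can be sketched minimally as follows, with the feature extraction model and the generator each reduced to a single hypothetical linear layer, the similarity taken as negative mean squared error, and the gradients written out by hand. Real training would backpropagate through full networks and also involve the discriminator losses of sub-steps 3024 and 3025.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_features = 16, 4

# Initial feature extraction model and initial generator: one linear layer each,
# initialized with distinct small random numbers as described above.
W_extract = 0.01 * rng.standard_normal((n_features, n_pixels))
W_generate = 0.01 * rng.standard_normal((n_pixels, n_features))

faces = rng.standard_normal((32, n_pixels))  # stand-ins for living body face images

def loss(W_e, W_g, x):
    """Mean squared reconstruction error (the negative of the similarity)."""
    return float(np.mean((x @ W_e.T @ W_g.T - x) ** 2))

lr = 0.05
before = loss(W_extract, W_generate, faces)
for _ in range(200):  # the parameter adjustment step, repeated
    features = faces @ W_extract.T        # sub-step 3021: extract features
    generated = features @ W_generate.T   # sub-step 3022: generate images
    err = generated - faces               # reconstruction error
    grad_g = 2 * err.T @ features / faces.shape[0]
    grad_e = 2 * (err @ W_generate).T @ faces / faces.shape[0]
    W_generate -= lr * grad_g             # sub-step 3023: adjust both models
    W_extract -= lr * grad_e
after = loss(W_extract, W_generate, faces)
print(after < before)  # True: generated images grow closer to the inputs
```

The point of the sketch is only the direction of optimization: the feature extractor and the generator are adjusted jointly so that extract-then-generate reproduces the input ever more closely.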
Step 303: determine the initial feature extraction model as the feature extraction model, and determine the initial generator and the initial discriminator respectively as the generator and the discriminator in the generative adversarial network.

In this embodiment, the execution body of the training step may determine the initial feature extraction model obtained after training in step 302 as the feature extraction model, and determine the initial generator and the initial discriminator respectively as the generator and the discriminator in the generative adversarial network. Thus, using the training step described in step 301 to step 303, the feature extraction model and the generative adversarial network may be trained in advance.
Step 204: generate a liveness detection result corresponding to the facial image to be detected based on the similarity between the facial image to be detected and the obtained generated image.

Since the feature extraction model and the generative adversarial network are trained based on a living body face image collection, they can only learn the features of living body face images, not the features of non-living-body facial images. Therefore, if a facial image to be detected obtained by photographing a living body face is input into the feature extraction model and the generator in the generative adversarial network, the generated image will be more similar to the facial image to be detected. Conversely, if a facial image to be detected obtained by photographing a non-living-body face is input into the feature extraction model and the generator in the generative adversarial network, the generated image will not be very similar to the facial image to be detected. That is, the higher the similarity between the facial image to be detected and the obtained generated image, the greater the possibility that the face in the facial image to be detected is a living body face; conversely, the lower the similarity between the facial image to be detected and the obtained generated image, the smaller the possibility that the face in the facial image to be detected is a living body face.

Based on the above understanding, in this embodiment, the execution body may generate a liveness detection result corresponding to the facial image to be detected based on the similarity between the facial image to be detected and the obtained generated image, where the liveness detection result is used to characterize whether the face in the facial image is a living body face. For example, the liveness detection result may be a detection result identifier characterizing that the face in the facial image is a living body face (for example, the number 1 or the vector (1, 0)), or a detection result identifier characterizing that the face in the facial image is not a living body face (for example, the number 0 or the vector (0, 1)). As another example, the liveness detection result may also include the probability that the face in the facial image is a living body face and/or the probability that the face in the facial image is a non-living-body face. For example, the liveness detection result may be a vector including a third probability and a fourth probability, where the third probability characterizes the probability that the face in the facial image is a living body face, and the fourth probability characterizes the probability that the face in the facial image is a non-living-body face.
In some optional implementations of this embodiment, step 204 may be performed as follows:

First, the similarity between the facial image to be detected and the obtained generated image may be calculated. Here, various methods may be used to calculate the similarity between two images, for example, histogram matching, image similarity calculation methods based on matrix decomposition in mathematics (for example, singular value decomposition and non-negative matrix factorization), feature-point-based methods, and the like.

Then, the probability value that the facial image to be detected is a living body face may be determined according to the calculated similarity. For example, when the calculated similarity is a numerical value between 0 and 1 (which may be a decimal or a percentage), the calculated similarity may be directly determined as the probability value that the facial image to be detected is a living body face. As another example, when the calculated similarity is not a numerical value between 0 and 1, the calculated similarity may be normalized to a numerical value between 0 and 1 (for example, by dividing the calculated similarity by a preset value, or by using a Sigmoid function), and the normalized value may be determined as the probability value that the facial image to be detected is a living body face.

Finally, the calculated probability value is used as the liveness detection result corresponding to the facial image to be detected.
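The Sigmoid option for normalization mentioned above can be sketched as follows; the plain logistic function is one of the choices the text allows, assumed here for illustration.

```python
import math

def to_probability(similarity):
    """Map an unbounded similarity score to a probability value in (0, 1)
    using the logistic (Sigmoid) function."""
    return 1.0 / (1.0 + math.exp(-similarity))

print(to_probability(0.0))  # 0.5: a neutral score maps to an even probability
print(to_probability(4.0) > to_probability(-4.0))  # True: monotone in similarity
```

Division by a preset value, the other normalization the text mentions, would instead assume a known upper bound on the similarity.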
In some optional implementations of this embodiment, step 204 may alternatively be performed as follows:

First, the similarity between the facial image to be detected and the obtained generated image may be calculated.

Secondly, the probability value that the facial image to be detected is a living body face may be determined according to the calculated similarity.

Again, in response to determining that the determined probability value is greater than a preset probability value threshold, it may be determined that the facial image to be detected is a living body face.

Finally, in response to determining that the determined probability value is not greater than the preset probability value threshold, it may be determined that the facial image to be detected is not a living body face.
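The two branches above amount to a single comparison. The threshold value of 0.5 below is an illustrative assumption; the application leaves the preset probability value threshold unspecified.

```python
def is_living_body(probability, threshold=0.5):
    """Return True only when the liveness probability is strictly greater than
    the threshold; equality falls to the not-a-living-body branch, matching
    the "not greater than" wording above."""
    return probability > threshold

print(is_living_body(0.8))  # True
print(is_living_body(0.5))  # False: exactly at the threshold is not greater
```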
The method for detecting a living body provided by the above embodiment of the present application extracts the image feature of the facial image to be detected, then inputs the obtained image feature into the generator in the generative adversarial network trained in advance to obtain a generated image corresponding to the facial image to be detected, and determines whether the face in the facial image to be detected is a living body face based on the similarity between the facial image to be detected and the obtained generated image. Here, liveness detection can be realized by acquiring only a facial image of the user, without acquiring a video of the user performing specified actions, thereby improving the user's comfort during liveness detection. Moreover, compared with video-based liveness detection methods, the liveness detection method provided by the embodiments of the present application analyzes only a facial image, which increases the speed of liveness detection.
With further reference to Fig. 4, a flow 400 of another embodiment of the method for detecting a living body is illustrated. The flow 400 of the method for detecting a living body includes the following steps:

Step 401: acquire a facial image to be detected.

Step 402: input the facial image to be detected into a feature extraction model trained in advance to obtain an image feature corresponding to the facial image to be detected.

Step 403: input the obtained image feature into the generator in a generative adversarial network trained in advance to obtain a generated image corresponding to the facial image to be detected.

In this embodiment, the concrete operations of step 401, step 402 and step 403 are substantially identical to the operations of step 201, step 202 and step 203 of the embodiment shown in Fig. 2, and details are not described here again.
Step 404: determine whether the similarity between the facial image to be detected and the obtained generated image is greater than a preset similarity threshold.

In this embodiment, the execution body of the method for detecting a living body (for example, the server shown in Fig. 1) may first calculate the similarity between the facial image to be detected and the obtained generated image. Then, the execution body may determine whether the calculated similarity is greater than the preset similarity threshold; if it is determined to be greater, go to step 405, and if it is determined to be not greater, go to step 406.

Here, the preset similarity threshold may be set manually by a technician, or may be obtained through statistical analysis of a large amount of sample data. For example, the statistical analysis may proceed as follows:

First, a sample facial image collection is acquired, where the sample facial image collection includes a living body face image subset and a non-living-body facial image subset.
Secondly, for each sample facial image in the sample facial image collection, a similarity calculation step is executed. Here, the similarity calculation step may include:

In the first step, the sample facial image is input into the feature extraction model trained in advance to obtain an image feature corresponding to the sample facial image.

In the second step, the obtained image feature is input into the generator in the generative adversarial network trained in advance to obtain a generated image corresponding to the sample facial image.

In the third step, the similarity between the obtained generated image and the sample facial image is calculated.

Again, the minimum of the similarities between each living body face image in the living body face image subset and its corresponding generated image is determined.

Then, the maximum of the similarities between each non-living-body facial image in the non-living-body facial image subset and its corresponding generated image is determined.

Finally, the average of the determined minimum and the determined maximum is used as the preset similarity threshold.
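The statistical procedure above (minimum over the living subset, maximum over the non-living subset, then their average) can be sketched directly. The similarity values below are made-up stand-ins for the per-sample scores that the three-step similarity calculation would actually produce.

```python
# Hypothetical per-sample similarities between each sample facial image and
# its generated image, as computed by the similarity calculation step.
living_similarities = [0.91, 0.87, 0.95, 0.89]      # living body face image subset
non_living_similarities = [0.42, 0.55, 0.61, 0.48]  # non-living-body facial image subset

lo = min(living_similarities)        # worst case for genuine living body faces
hi = max(non_living_similarities)    # best case for non-living-body faces

# The preset similarity threshold sits halfway between the two extremes.
preset_similarity_threshold = (lo + hi) / 2
print(preset_similarity_threshold)  # 0.74
```

Placing the threshold between the lowest living score and the highest non-living score separates the two subsets whenever, as in this toy data, their similarity ranges do not overlap.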
Step 405: determine that the face in the facial image to be detected is a living body face.

In this embodiment, the execution body may determine, in the case where it is determined in step 404 that the similarity between the facial image to be detected and the obtained generated image is greater than the preset similarity threshold, that the face in the facial image to be detected is a living body face.

Step 406: determine that the face in the facial image to be detected is not a living body face.

In this embodiment, the execution body may determine, in the case where it is determined in step 404 that the similarity between the facial image to be detected and the obtained generated image is not greater than the preset similarity threshold, that the face in the facial image to be detected is not a living body face.

It can be seen from Fig. 4 that, compared with the embodiment corresponding to Fig. 2, the flow 400 of the method for detecting a living body in this embodiment highlights the step of directly determining whether the face in the facial image to be detected is a living body face according to the similarity between the facial image to be detected and the obtained generated image. Thus, the scheme described in this embodiment can reduce the computational complexity and further increase the speed of liveness detection.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for detecting a living body. The apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be specifically applied to various electronic devices.

As shown in Fig. 5, the apparatus 500 for detecting a living body of this embodiment includes: an acquiring unit 501, a feature extraction unit 502, an image generation unit 503 and a liveness detection unit 504. The acquiring unit 501 is configured to acquire a facial image to be detected. The feature extraction unit 502 is configured to input the facial image to be detected into a feature extraction model trained in advance to obtain an image feature corresponding to the facial image to be detected, where the feature extraction model is used to extract features of facial images. The image generation unit 503 is configured to input the obtained image feature into the generator in a generative adversarial network trained in advance to obtain a generated image corresponding to the facial image to be detected, where the generative adversarial network includes a generator and a discriminator, and the feature extraction model and the generative adversarial network are trained based on a living body face image collection. The liveness detection unit 504 is configured to generate a liveness detection result corresponding to the facial image to be detected based on the similarity between the facial image to be detected and the obtained generated image.

In this embodiment, for the specific processing of the acquiring unit 501, the feature extraction unit 502, the image generation unit 503 and the liveness detection unit 504 of the apparatus 500 for detecting a living body, and the technical effects brought about thereby, reference may be made respectively to the related descriptions of step 201, step 202, step 203 and step 204 in the embodiment corresponding to Fig. 2, and details are not described here again.

In some optional implementations of this embodiment, the liveness detection unit 504 may include: a first determining module 5041 configured to determine whether the similarity between the facial image to be detected and the obtained generated image is greater than a preset similarity threshold; and a second determining module 5042 configured to determine, in response to determining that the similarity is greater, that the face in the facial image to be detected is a living body face.

In some optional implementations of this embodiment, the liveness detection unit 504 may further include: a third determining module 5043 configured to determine, in response to determining that the similarity is not greater, that the face in the facial image to be detected is not a living body face.
In some optional implementations of this embodiment, the feature extraction model and the generative adversarial network may be trained through the following training step: acquiring a living body face image collection; and for each living body face image in the living body face image collection, executing the following parameter adjustment step: inputting the living body face image into an initial feature extraction model to obtain an image feature corresponding to the living body face image; inputting the obtained image feature into an initial generator to obtain a generated facial image; adjusting the parameters of the initial feature extraction model and the initial generator based on the similarity between the obtained generated facial image and the living body face image; inputting the obtained generated facial image and the living body face image respectively into an initial discriminator to obtain a first discrimination result and a second discrimination result, where the first discrimination result and the second discrimination result are respectively used to characterize whether the obtained generated facial image and the living body face image are real facial images; and adjusting the parameters of the initial feature extraction model, the initial generator and the initial discriminator based on a first difference and a second difference, where the first difference is the difference between the first discrimination result and a no-discrimination result characterizing that the image input into the initial discriminator is not a real facial image, and the second difference is the difference between the second discrimination result and a yes-discrimination result characterizing that the image input into the initial discriminator is a real facial image.

In some optional implementations of this embodiment, before the parameter adjustment step is executed for the living body face images in the living body face image collection, the training step may further include: determining the model structure information of the initial feature extraction model, the network structure information of the initial generator and the network structure information of the initial discriminator, and initializing the model parameters of the initial feature extraction model, the network parameters of the initial generator and the network parameters of the initial discriminator.

In some optional implementations of this embodiment, the feature extraction model may be a convolutional neural network.

It should be noted that, for the implementation details and technical effects of the units in the apparatus for detecting a living body provided by the embodiments of the present application, reference may be made to the descriptions of other embodiments in the present application, and details are not described here again.
Referring now to Fig. 6, a structural schematic diagram of a computer system 600 suitable for implementing the electronic device of the embodiments of the present application is shown. The electronic device shown in Fig. 6 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present application.

As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU, Central Processing Unit) 601, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM, Read Only Memory) 602 or a program loaded from a storage portion 606 into a random access memory (RAM, Random Access Memory) 603. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, the ROM 602 and the RAM 603 are connected to each other through a bus 604. An input/output (I/O, Input/Output) interface 605 is also connected to the bus 604.

The following components are connected to the I/O interface 605: a storage portion 606 including a hard disk and the like; and a communication portion 607 including a network interface card such as a LAN (Local Area Network) card and a modem. The communication portion 607 performs communication processes via a network such as the Internet. A driver 608 is also connected to the I/O interface 605 as needed. A removable medium 609, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the driver 608 as needed, so that a computer program read therefrom is installed into the storage portion 606 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product, which comprises a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 607, and/or installed from the removable medium 609. When the computer program is executed by the central processing unit (CPU) 601, the above-mentioned functions defined in the methods of the present application are executed. It should be noted that the computer-readable medium described in the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more conductors, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium containing or storing a program, which may be used by or in combination with an instruction execution system, apparatus or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, in which computer-readable program code is carried. The propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and can send, propagate or transmit a program used by or in combination with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted with any suitable medium, including but not limited to: wireless, electric wire, optical cable, RF, etc., or any suitable combination of the above.
Computer program code for carrying out the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In scenarios involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, where the module, program segment, or portion of code contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks therein, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented in software or in hardware. The described units may also be provided in a processor; for example, a processor may be described as including an acquiring unit, a feature extraction unit, an image generation unit, and a living body detection unit. The names of these units do not, in some cases, limit the units themselves; for example, the acquiring unit may also be described as "a unit for obtaining a face image to be detected".
As another aspect, the present application also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: obtain a face image to be detected; input the face image to be detected into a pre-trained feature extraction model to obtain an image feature corresponding to the face image to be detected, where the feature extraction model is used to extract features of face images; input the obtained image feature into the generator of a pre-trained generative adversarial network to obtain a generated image corresponding to the face image to be detected, where the generative adversarial network includes a generator and a discriminator, and the feature extraction model and the generative adversarial network are trained based on a set of living body face images; and generate, based on a similarity between the face image to be detected and the obtained generated image, a living body detection result corresponding to the face image to be detected.
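The detection flow described above can be sketched in a few lines of numpy. This is a minimal illustrative toy, not the patent's implementation: the pre-trained feature extraction model and generator are stood in for by fixed random linear maps over flattened images, cosine similarity is used as one possible similarity measure, and the names (`E`, `G`, `detect_liveness`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the pre-trained models: a feature extraction model E and a
# GAN generator G, here just fixed random linear maps over flattened images.
E = rng.standard_normal((64, 256))   # 256-dim flattened image -> 64-dim image feature
G = rng.standard_normal((256, 64))   # 64-dim image feature -> 256-dim generated image

def detect_liveness(image, threshold=0.5):
    """Living body detection result for one face image to be detected."""
    feature = E @ image                  # image feature of the input face image
    generated = G @ feature              # generated image from the generator
    # Similarity between the face image to be detected and the generated image
    # (cosine similarity, one possible measure).
    sim = float(image @ generated /
                (np.linalg.norm(image) * np.linalg.norm(generated)))
    return sim > threshold               # True: judged to be a living body face

print(detect_liveness(rng.standard_normal(256)))
```

The intuition behind the scheme is that a generative adversarial network trained only on living body face images should reconstruct a genuine live face with high similarity, while a spoofed face should reconstruct poorly and fall below the preset similarity threshold.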
The above description is only a preferred embodiment of the present application and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, but also covers, without departing from the above inventive concept, other technical solutions formed by any combination of the above technical features or their equivalents, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.
Claims (14)
1. A method for detecting a living body, comprising:
obtaining a face image to be detected;
inputting the face image to be detected into a pre-trained feature extraction model to obtain an image feature corresponding to the face image to be detected, wherein the feature extraction model is used to extract features of face images;
inputting the obtained image feature into a generator of a pre-trained generative adversarial network to obtain a generated image corresponding to the face image to be detected, wherein the generative adversarial network comprises a generator and a discriminator, and the feature extraction model and the generative adversarial network are trained based on a set of living body face images; and
generating, based on a similarity between the face image to be detected and the obtained generated image, a living body detection result corresponding to the face image to be detected.
2. The method according to claim 1, wherein the generating, based on the similarity between the face image to be detected and the obtained generated image, a living body detection result corresponding to the face image to be detected comprises:
determining whether the similarity between the face image to be detected and the obtained generated image is greater than a preset similarity threshold; and
in response to determining that it is greater, determining that the face in the face image to be detected is a living body face.
3. The method according to claim 2, wherein the generating, based on the similarity between the face image to be detected and the obtained generated image, a living body detection result corresponding to the face image to be detected further comprises:
in response to determining that it is not greater, determining that the face in the face image to be detected is not a living body face.
4. The method according to claim 1, wherein the feature extraction model and the generative adversarial network are trained through the following training steps:
obtaining the set of living body face images;
for a living body face image in the set of living body face images, performing the following parameter adjustment steps: inputting the living body face image into an initial feature extraction model to obtain an image feature corresponding to the living body face image; inputting the obtained image feature into an initial generator to obtain a generated face image; adjusting parameters of the initial feature extraction model and the initial generator based on a similarity between the obtained generated face image and the living body face image; inputting the obtained generated face image and the living body face image respectively into an initial discriminator to obtain a first discrimination result and a second discrimination result, wherein the first discrimination result and the second discrimination result are respectively used to characterize whether the obtained generated face image and the living body face image are real face images; adjusting parameters of the initial feature extraction model, the initial generator, and the initial discriminator based on a first difference and a second difference, wherein the first difference is a difference between the first discrimination result and a discrimination result characterizing that the image input into the initial discriminator is not a real face image, and the second difference is a difference between the second discrimination result and a discrimination result characterizing that the image input into the initial discriminator is a real face image; and
determining the initial feature extraction model as the feature extraction model, and respectively determining the initial generator and the initial discriminator as the generator and the discriminator in the generative adversarial network.
5. The method according to claim 4, wherein, before performing the parameter adjustment steps for a living body face image in the set of living body face images, the training steps further comprise:
determining model structure information of the initial feature extraction model, network structure information of the initial generator, and network structure information of the initial discriminator, and initializing model parameters of the initial feature extraction model, network parameters of the initial generator, and network parameters of the initial discriminator.
6. The method according to any one of claims 1-5, wherein the feature extraction model is a convolutional neural network.
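The training steps recited in claim 4 can be sketched as a toy numpy training loop. This is an illustrative sketch under stated assumptions, not the patent's implementation: the initial feature extraction model and initial generator are small linear maps, the initial discriminator is a logistic regression, squared reconstruction error stands in for the similarity, the first and second differences are realized as standard logistic-loss gradients, and every size, weight, and name is a hypothetical choice.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 4              # toy image dimension and image-feature dimension
lr, adv_w = 0.005, 0.1   # learning rate; weight of the adversarial term

# Initial feature extraction model E, initial generator G, and initial
# discriminator (w, b), all initialized with small random parameters.
E = 0.1 * rng.standard_normal((k, n))
G = 0.1 * rng.standard_normal((n, k))
w, b = 0.1 * rng.standard_normal(n), 0.0

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Stand-in "set of living body face images": random flattened images.
live_faces = [rng.standard_normal(n) for _ in range(4)]

loss_history = []
for step in range(200):
    x = live_faces[step % len(live_faces)]
    # Image feature of the living body face image, then the generated face image.
    f = E @ x
    g = G @ f
    r = g - x                              # reconstruction residual
    loss_history.append(float(r @ r))
    # Gradients that adjust E and G based on the similarity (here: squared
    # error) between the generated face image and the living body face image.
    dG = 2.0 * np.outer(r, f)
    dE = 2.0 * np.outer(G.T @ r, x)
    # First/second discrimination results: the discriminator's scores for the
    # generated face image and the living body face image.
    p_gen, p_real = sigmoid(w @ g + b), sigmoid(w @ x + b)
    # First difference pushes p_gen toward "not real" (0); second difference
    # pushes p_real toward "real" (1); both are logistic-loss gradients.
    dw = (p_gen - 0.0) * g + (p_real - 1.0) * x
    db = (p_gen - 0.0) + (p_real - 1.0)
    # E and G also receive an adversarial gradient that pushes the generated
    # image toward being judged real by the discriminator.
    dg_adv = (p_gen - 1.0) * w
    dG += adv_w * np.outer(dg_adv, f)
    dE += adv_w * np.outer(G.T @ dg_adv, x)
    # Apply all parameter adjustments.
    E -= lr * dE
    G -= lr * dG
    w -= lr * dw
    b -= lr * db

print(loss_history[0], loss_history[-1])
```

After training, `E` and `G` would be kept as the feature extraction model and the generator, and `(w, b)` as the discriminator, matching the final determining step of the training procedure.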
7. An apparatus for detecting a living body, comprising:
an acquiring unit, configured to obtain a face image to be detected;
a feature extraction unit, configured to input the face image to be detected into a pre-trained feature extraction model to obtain an image feature corresponding to the face image to be detected, wherein the feature extraction model is used to extract features of face images;
an image generation unit, configured to input the obtained image feature into a generator of a pre-trained generative adversarial network to obtain a generated image corresponding to the face image to be detected, wherein the generative adversarial network comprises a generator and a discriminator, and the feature extraction model and the generative adversarial network are trained based on a set of living body face images; and
a living body detection unit, configured to generate, based on a similarity between the face image to be detected and the obtained generated image, a living body detection result corresponding to the face image to be detected.
8. The apparatus according to claim 7, wherein the living body detection unit comprises:
a first determining module, configured to determine whether the similarity between the face image to be detected and the obtained generated image is greater than a preset similarity threshold; and
a second determining module, configured to determine, in response to determining that it is greater, that the face in the face image to be detected is a living body face.
9. The apparatus according to claim 8, wherein the living body detection unit further comprises:
a third determining module, configured to determine, in response to determining that it is not greater, that the face in the face image to be detected is not a living body face.
10. The apparatus according to claim 7, wherein the feature extraction model and the generative adversarial network are trained through the following training steps:
obtaining the set of living body face images;
for a living body face image in the set of living body face images, performing the following parameter adjustment steps: inputting the living body face image into an initial feature extraction model to obtain an image feature corresponding to the living body face image; inputting the obtained image feature into an initial generator to obtain a generated face image; adjusting parameters of the initial feature extraction model and the initial generator based on a similarity between the obtained generated face image and the living body face image; inputting the obtained generated face image and the living body face image respectively into an initial discriminator to obtain a first discrimination result and a second discrimination result, wherein the first discrimination result and the second discrimination result are respectively used to characterize whether the obtained generated face image and the living body face image are real face images; adjusting parameters of the initial feature extraction model, the initial generator, and the initial discriminator based on a first difference and a second difference, wherein the first difference is a difference between the first discrimination result and a discrimination result characterizing that the image input into the initial discriminator is not a real face image, and the second difference is a difference between the second discrimination result and a discrimination result characterizing that the image input into the initial discriminator is a real face image; and
determining the initial feature extraction model as the feature extraction model, and respectively determining the initial generator and the initial discriminator as the generator and the discriminator in the generative adversarial network.
11. The apparatus according to claim 10, wherein, before performing the parameter adjustment steps for a living body face image in the set of living body face images, the training steps further comprise:
determining model structure information of the initial feature extraction model, network structure information of the initial generator, and network structure information of the initial discriminator, and initializing model parameters of the initial feature extraction model, network parameters of the initial generator, and network parameters of the initial discriminator.
12. The apparatus according to any one of claims 7-11, wherein the feature extraction model is a convolutional neural network.
13. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method according to any one of claims 1-6.
14. A computer-readable medium on which a computer program is stored, wherein, when the program is executed by a processor, the method according to any one of claims 1-6 is implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810259543.3A CN108537152B (en) | 2018-03-27 | 2018-03-27 | Method and apparatus for detecting living body |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108537152A true CN108537152A (en) | 2018-09-14 |
CN108537152B CN108537152B (en) | 2022-01-25 |
Family
ID=63483752
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810259543.3A Active CN108537152B (en) | 2018-03-27 | 2018-03-27 | Method and apparatus for detecting living body |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108537152B (en) |
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109151443A (en) * | 2018-10-15 | 2019-01-04 | Oppo广东移动通信有限公司 | High degree of comfort three-dimensional video-frequency generation method, system and terminal device |
CN109635770A (en) * | 2018-12-20 | 2019-04-16 | 上海瑾盛通信科技有限公司 | Biopsy method, device, storage medium and electronic equipment |
CN109800730A (en) * | 2019-01-30 | 2019-05-24 | 北京字节跳动网络技术有限公司 | The method and apparatus for generating model for generating head portrait |
CN110059624A (en) * | 2019-04-18 | 2019-07-26 | 北京字节跳动网络技术有限公司 | Method and apparatus for detecting living body |
CN110298295A (en) * | 2019-06-26 | 2019-10-01 | 中国海洋大学 | Mobile terminal on-line study measure of supervision based on recognition of face |
CN110490076A (en) * | 2019-07-18 | 2019-11-22 | 平安科技(深圳)有限公司 | Biopsy method, device, computer equipment and storage medium |
CN110599487A (en) * | 2019-09-23 | 2019-12-20 | 北京海益同展信息科技有限公司 | Article detection method, apparatus and storage medium |
CN110941986A (en) * | 2019-10-10 | 2020-03-31 | 平安科技(深圳)有限公司 | Training method and device of living body detection model, computer equipment and storage medium |
CN111008294A (en) * | 2018-10-08 | 2020-04-14 | 阿里巴巴集团控股有限公司 | Traffic image processing and image retrieval method and device |
CN111080626A (en) * | 2019-12-19 | 2020-04-28 | 联想(北京)有限公司 | Detection method and electronic equipment |
CN111145455A (en) * | 2018-11-06 | 2020-05-12 | 天地融科技股份有限公司 | Method and system for detecting face risk in surveillance video |
CN111260545A (en) * | 2020-01-20 | 2020-06-09 | 北京百度网讯科技有限公司 | Method and device for generating image |
CN111275784A (en) * | 2020-01-20 | 2020-06-12 | 北京百度网讯科技有限公司 | Method and device for generating image |
CN111291730A (en) * | 2020-03-27 | 2020-06-16 | 深圳阜时科技有限公司 | Face anti-counterfeiting detection method, server and storage medium |
CN111507262A (en) * | 2020-04-17 | 2020-08-07 | 北京百度网讯科技有限公司 | Method and apparatus for detecting living body |
CN111539903A (en) * | 2020-04-16 | 2020-08-14 | 北京百度网讯科技有限公司 | Method and device for training face image synthesis model |
CN111553202A (en) * | 2020-04-08 | 2020-08-18 | 浙江大华技术股份有限公司 | Training method, detection method and device of neural network for detecting living body |
WO2020199577A1 (en) * | 2019-03-29 | 2020-10-08 | 北京市商汤科技开发有限公司 | Method and device for living body detection, equipment, and storage medium |
CN111754596A (en) * | 2020-06-19 | 2020-10-09 | 北京灵汐科技有限公司 | Editing model generation method, editing model generation device, editing method, editing device, editing equipment and editing medium |
CN112330526A (en) * | 2019-08-05 | 2021-02-05 | 深圳Tcl新技术有限公司 | Training method of face conversion model, storage medium and terminal equipment |
WO2021046773A1 (en) * | 2019-09-11 | 2021-03-18 | 深圳市汇顶科技股份有限公司 | Facial anti-counterfeiting detection method and apparatus, chip, electronic device and computer-readable medium |
CN112633113A (en) * | 2020-12-17 | 2021-04-09 | 厦门大学 | Cross-camera human face living body detection method and system |
CN113033305A (en) * | 2021-02-21 | 2021-06-25 | 云南联合视觉科技有限公司 | Living body detection method, living body detection device, terminal equipment and storage medium |
CN110473137B (en) * | 2019-04-24 | 2021-09-14 | 华为技术有限公司 | Image processing method and device |
CN113516107A (en) * | 2021-09-09 | 2021-10-19 | 浙江大华技术股份有限公司 | Image detection method |
CN113689527A (en) * | 2020-05-15 | 2021-11-23 | 武汉Tcl集团工业研究院有限公司 | Training method of face conversion model and face image conversion method |
CN114445918A (en) * | 2022-02-21 | 2022-05-06 | 支付宝(杭州)信息技术有限公司 | Living body detection method, device and equipment |
CN116070695A (en) * | 2023-04-03 | 2023-05-05 | 中国科学技术大学 | Training method of image detection model, image detection method and electronic equipment |
WO2023109551A1 (en) * | 2021-12-15 | 2023-06-22 | 腾讯科技(深圳)有限公司 | Living body detection method and apparatus, and computer device |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8457367B1 (en) * | 2012-06-26 | 2013-06-04 | Google Inc. | Facial recognition |
CN105956572A (en) * | 2016-05-15 | 2016-09-21 | 北京工业大学 | In vivo face detection method based on convolutional neural network |
CN106203305A (en) * | 2016-06-30 | 2016-12-07 | 北京旷视科技有限公司 | Human face in-vivo detection method and device |
CN106997380A (en) * | 2017-03-21 | 2017-08-01 | 北京工业大学 | Imaging spectrum safe retrieving method based on DCGAN depth networks |
CN107066942A (en) * | 2017-03-03 | 2017-08-18 | 上海斐讯数据通信技术有限公司 | A kind of living body faces recognition methods and system |
US20170286788A1 (en) * | 2016-04-01 | 2017-10-05 | Beijing Kuangshi Technology Co., Ltd. | Liveness detection method, liveness detection system, and computer program product |
US20170345146A1 (en) * | 2016-05-30 | 2017-11-30 | Beijing Kuangshi Technology Co., Ltd. | Liveness detection method and liveness detection system |
CN107423701A (en) * | 2017-07-17 | 2017-12-01 | 北京智慧眼科技股份有限公司 | The non-supervisory feature learning method and device of face based on production confrontation network |
CN107563355A (en) * | 2017-09-28 | 2018-01-09 | 哈尔滨工程大学 | Hyperspectral abnormity detection method based on generation confrontation network |
CN107563283A (en) * | 2017-07-26 | 2018-01-09 | 百度在线网络技术(北京)有限公司 | Method, apparatus, equipment and the storage medium of generation attack sample |
CN107766820A (en) * | 2017-10-20 | 2018-03-06 | 北京小米移动软件有限公司 | Image classification method and device |
Non-Patent Citations (2)
Title |
---|
Jianwei Yang et al.: "Learn Convolutional Neural Network for Face Anti-Spoofing", arXiv *
Thomas Schlegl et al.: "Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery", arXiv *
Cited By (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111008294B (en) * | 2018-10-08 | 2023-06-20 | 阿里巴巴集团控股有限公司 | Traffic image processing and image retrieval method and device |
CN111008294A (en) * | 2018-10-08 | 2020-04-14 | 阿里巴巴集团控股有限公司 | Traffic image processing and image retrieval method and device |
CN109151443A (en) * | 2018-10-15 | 2019-01-04 | Oppo广东移动通信有限公司 | High degree of comfort three-dimensional video-frequency generation method, system and terminal device |
CN111145455A (en) * | 2018-11-06 | 2020-05-12 | 天地融科技股份有限公司 | Method and system for detecting face risk in surveillance video |
CN109635770A (en) * | 2018-12-20 | 2019-04-16 | 上海瑾盛通信科技有限公司 | Biopsy method, device, storage medium and electronic equipment |
CN109800730A (en) * | 2019-01-30 | 2019-05-24 | 北京字节跳动网络技术有限公司 | The method and apparatus for generating model for generating head portrait |
WO2020199577A1 (en) * | 2019-03-29 | 2020-10-08 | 北京市商汤科技开发有限公司 | Method and device for living body detection, equipment, and storage medium |
CN111753595A (en) * | 2019-03-29 | 2020-10-09 | 北京市商汤科技开发有限公司 | Living body detection method and apparatus, device, and storage medium |
JP2021519962A (en) * | 2019-03-29 | 2021-08-12 | ベイジン センスタイム テクノロジー デベロップメント カンパニー, リミテッド | Biological detection methods and devices, equipment and storage media |
JP7013077B2 (en) | 2019-03-29 | 2022-01-31 | ベイジン・センスタイム・テクノロジー・デベロップメント・カンパニー・リミテッド | Biological detection methods and devices, equipment and storage media |
CN110059624A (en) * | 2019-04-18 | 2019-07-26 | 北京字节跳动网络技术有限公司 | Method and apparatus for detecting living body |
CN110473137B (en) * | 2019-04-24 | 2021-09-14 | 华为技术有限公司 | Image processing method and device |
CN110298295A (en) * | 2019-06-26 | 2019-10-01 | 中国海洋大学 | Mobile terminal on-line study measure of supervision based on recognition of face |
CN110490076B (en) * | 2019-07-18 | 2024-03-01 | 平安科技(深圳)有限公司 | Living body detection method, living body detection device, computer equipment and storage medium |
CN110490076A (en) * | 2019-07-18 | 2019-11-22 | 平安科技(深圳)有限公司 | Biopsy method, device, computer equipment and storage medium |
CN112330526A (en) * | 2019-08-05 | 2021-02-05 | 深圳Tcl新技术有限公司 | Training method of face conversion model, storage medium and terminal equipment |
CN112330526B (en) * | 2019-08-05 | 2024-02-09 | 深圳Tcl新技术有限公司 | Training method of face conversion model, storage medium and terminal equipment |
WO2021046773A1 (en) * | 2019-09-11 | 2021-03-18 | 深圳市汇顶科技股份有限公司 | Facial anti-counterfeiting detection method and apparatus, chip, electronic device and computer-readable medium |
CN110599487A (en) * | 2019-09-23 | 2019-12-20 | 北京海益同展信息科技有限公司 | Article detection method, apparatus and storage medium |
CN110941986A (en) * | 2019-10-10 | 2020-03-31 | 平安科技(深圳)有限公司 | Training method and device of living body detection model, computer equipment and storage medium |
CN110941986B (en) * | 2019-10-10 | 2023-08-01 | 平安科技(深圳)有限公司 | Living body detection model training method, living body detection model training device, computer equipment and storage medium |
CN111080626A (en) * | 2019-12-19 | 2020-04-28 | 联想(北京)有限公司 | Detection method and electronic equipment |
CN111275784A (en) * | 2020-01-20 | 2020-06-12 | 北京百度网讯科技有限公司 | Method and device for generating image |
CN111260545A (en) * | 2020-01-20 | 2020-06-09 | 北京百度网讯科技有限公司 | Method and device for generating image |
CN111291730A (en) * | 2020-03-27 | 2020-06-16 | 深圳阜时科技有限公司 | Face anti-counterfeiting detection method, server and storage medium |
CN111553202A (en) * | 2020-04-08 | 2020-08-18 | 浙江大华技术股份有限公司 | Training method, detection method and device of neural network for detecting living body |
CN111553202B (en) * | 2020-04-08 | 2023-05-16 | 浙江大华技术股份有限公司 | Training method, detection method and device for neural network for living body detection |
CN111539903B (en) * | 2020-04-16 | 2023-04-07 | 北京百度网讯科技有限公司 | Method and device for training face image synthesis model |
CN111539903A (en) * | 2020-04-16 | 2020-08-14 | 北京百度网讯科技有限公司 | Method and device for training face image synthesis model |
CN111507262A (en) * | 2020-04-17 | 2020-08-07 | 北京百度网讯科技有限公司 | Method and apparatus for detecting living body |
CN111507262B (en) * | 2020-04-17 | 2023-12-08 | 北京百度网讯科技有限公司 | Method and apparatus for detecting living body |
CN113689527B (en) * | 2020-05-15 | 2024-02-20 | 武汉Tcl集团工业研究院有限公司 | Training method of face conversion model and face image conversion method |
CN113689527A (en) * | 2020-05-15 | 2021-11-23 | 武汉Tcl集团工业研究院有限公司 | Training method of face conversion model and face image conversion method |
CN111754596A (en) * | 2020-06-19 | 2020-10-09 | 北京灵汐科技有限公司 | Editing model generation method, editing model generation device, editing method, editing device, editing equipment and editing medium |
CN111754596B (en) * | 2020-06-19 | 2023-09-19 | 北京灵汐科技有限公司 | Editing model generation method, device, equipment and medium for editing face image |
CN112633113A (en) * | 2020-12-17 | 2021-04-09 | 厦门大学 | Cross-camera human face living body detection method and system |
CN113033305A (en) * | 2021-02-21 | 2021-06-25 | 云南联合视觉科技有限公司 | Living body detection method, living body detection device, terminal equipment and storage medium |
CN113516107A (en) * | 2021-09-09 | 2021-10-19 | 浙江大华技术股份有限公司 | Image detection method |
WO2023109551A1 (en) * | 2021-12-15 | 2023-06-22 | 腾讯科技(深圳)有限公司 | Living body detection method and apparatus, and computer device |
CN114445918A (en) * | 2022-02-21 | 2022-05-06 | 支付宝(杭州)信息技术有限公司 | Living body detection method, device and equipment |
CN116070695A (en) * | 2023-04-03 | 2023-05-05 | 中国科学技术大学 | Training method of image detection model, image detection method and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN108537152B (en) | 2022-01-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108537152A (en) | Method and apparatus for detecting live body | |
CN108416324A (en) | Method and apparatus for detecting live body | |
CN108961369B (en) | Method and device for generating 3D animation | |
CN109214343B (en) | Method and device for generating face key point detection model | |
US10496898B2 (en) | State detection using machine-learning model trained on simulated image data | |
CN108776786A (en) | Method and apparatus for generating user's truth identification model | |
CN109255830A (en) | Three-dimensional facial reconstruction method and device | |
CN107644209A (en) | Method for detecting human face and device | |
CN108898186A (en) | Method and apparatus for extracting image | |
CN110298319B (en) | Image synthesis method and device | |
CN108446651A (en) | Face identification method and device | |
CN108416323A (en) | The method and apparatus of face for identification | |
CN109191514A (en) | Method and apparatus for generating depth detection model | |
CN108197618A (en) | For generating the method and apparatus of Face datection model | |
CN108509892A (en) | Method and apparatus for generating near-infrared image | |
CN109086719A (en) | Method and apparatus for output data | |
CN108363995A (en) | Method and apparatus for generating data | |
CN108388889B (en) | Method and device for analyzing face image | |
CN108511066A (en) | information generating method and device | |
CN108491823A (en) | Method and apparatus for generating eye recognition model | |
CN108446650A (en) | The method and apparatus of face for identification | |
CN114821734A (en) | Method and device for driving expression of virtual character | |
CN108510454A (en) | Method and apparatus for generating depth image | |
CN108510466A (en) | Method and apparatus for verifying face | |
CN108985228A (en) | Information generating method and device applied to terminal device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||