CN113989871A - Living body detection model training method and living body detection method

Info

Publication number
CN113989871A
Authority
CN
China
Prior art keywords
image
living body
body detection
feature
module
Legal status
Pending
Application number
CN202110858764.4A
Other languages
Chinese (zh)
Inventor
张劲风
郑新莹
高通
Current Assignee
Orbbec Inc
Original Assignee
Orbbec Inc
Application filed by Orbbec Inc
Priority to CN202110858764.4A
Publication of CN113989871A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/25 - Fusion techniques
    • G06F 18/253 - Fusion techniques of extracted features
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks

Abstract

The application relates to the technical field of biometric recognition, and in particular to a living body detection model training method and a living body detection method. The living body detection model training method comprises: constructing a living body detection initial model comprising at least three convolution modules, a feature fusion module and a pooling module; acquiring a face image and obtaining at least three feature images as the face image passes through the convolution modules in sequence; fusing the at least three feature images with the feature fusion module to obtain a fourth feature image; and pooling the fourth feature image with the pooling module, performing supervised training on the living body detection initial model using the fourth feature image and the pooled fourth feature image, and iteratively updating the initial model to obtain a trained living body detection model. Embodiments of the application achieve higher inference speed, adopt a lightweight model that can be deployed on low-compute chips, and can obtain a living body detection model with higher accuracy.

Description

Living body detection model training method and living body detection method
Technical Field
The application relates to the technical field of biometric recognition, and in particular to a living body detection model training method and a living body detection method.
Background
As face recognition technology has matured, its commercial applications have become increasingly wide, for example in financial transactions, access control systems and mobile terminals. However, a face is easy to forge by means of photos, videos, models or masks, so impersonating a legitimate user's face is a serious threat to the security of face recognition and authentication systems. To prevent malicious persons from forging or stealing other persons' biometric features for identity authentication, a biometric system needs a living body (liveness) detection function.
Current face recognition liveness detection technology has begun to adopt neural network models. However, existing methods either fail to capture fine-grained information in the face data, so high-precision resin masks, 3D head models and the like are handled poorly and may be misjudged as real persons; or they effectively block prosthesis attacks but reject some real persons, resulting in poor user experience; or a model with good attack resistance and a high real-person pass rate cannot be deployed on low-compute terminals or chips because the model is large and inference is slow.
Disclosure of Invention
In view of this, embodiments of the present application provide a living body detection model training method and a living body detection method, which can solve at least one of the technical problems in the related art.
In a first aspect, an embodiment of the present application provides a method for training a living body detection model, including:
constructing a living body detection initial model, wherein the living body detection initial model comprises at least three convolution modules, a feature fusion module and a pooling module;
acquiring a face image, and obtaining at least three feature images as the face image passes through each convolution module in sequence;
fusing the at least three feature images by using the feature fusion module to obtain a fourth feature image;
pooling the fourth feature image by using the pooling module, performing supervision training on the living body detection initial model by using the fourth feature image and the pooled fourth feature image, and iteratively updating the living body detection initial model to obtain a trained living body detection model.
As an implementation manner of the first aspect, the acquiring a face image and obtaining at least three feature images as the face image passes through each convolution module in sequence includes:
acquiring a face image, and performing up-sampling or down-sampling on the face image to obtain a target image;
and performing feature learning on the target image by utilizing the at least three convolution modules of the living body detection initial model to obtain at least three feature images.
As an implementation manner of the first aspect, the pooling the fourth feature image by the pooling module, performing supervised training on the living body detection initial model using the fourth feature image and the pooled fourth feature image, and iteratively updating the living body detection initial model to obtain a trained living body detection model includes:
performing average pooling on the fourth feature image by using the pooling module to obtain a fifth feature image and a sixth feature image;
and calculating the loss between each of the fourth, fifth and sixth feature images and its corresponding preset supervision image according to a loss function, and iteratively updating the weight parameters of the living body detection initial model according to the losses to obtain a trained living body detection model.
As an implementation manner of the first aspect, the loss function is a mean square error loss function, and the fourth, fifth and sixth feature images have different sizes.
In a second aspect, an embodiment of the present application provides a method for detecting a living body, including:
acquiring an image to be detected of a target object's face;
inputting the image to be detected into a trained living body detection model obtained by the living body detection model training method according to the first aspect or any implementation manner of the first aspect, and outputting a characteristic image;
and binarizing the feature image, computing the mean value of the binarized feature image, and determining whether the target object is a living body according to the mean value.
As an implementation manner of the second aspect, the inputting the image to be detected into the trained living body detection model obtained by the living body detection model training method according to the first aspect or any implementation manner of the first aspect, and outputting a feature image includes:
if the image to be detected meets a preset quality condition, inputting the image to be detected into the trained living body detection model obtained by the living body detection model training method according to the first aspect or any implementation manner of the first aspect, and outputting a feature image.
As an implementation manner of the second aspect, the image to be detected includes at least one of an infrared image, a color image and a depth image of the target object's face;
the inputting the image to be detected into the trained living body detection model obtained by the living body detection model training method according to the first aspect or any implementation manner of the first aspect, and outputting a feature image includes:
inputting the image to be detected into a trained living body detection model corresponding to the image to be detected, obtained by the living body detection model training method according to the first aspect or any implementation manner of the first aspect, and outputting a feature image.
In a third aspect, an embodiment of the present application provides a training apparatus for a living body detection model, including:
the living body detection system comprises a construction module, a data processing module and a data processing module, wherein the construction module is used for constructing a living body detection initial model which comprises at least three convolution modules, a feature fusion module and a pooling module;
the acquisition module is used for acquiring a face image;
the convolution execution module is used for obtaining at least three feature images as the face image passes through each convolution module in sequence;
the feature fusion execution module is used for fusing the at least three feature images by using the feature fusion module to obtain a fourth feature image;
and the pooling execution module is used for pooling the fourth feature image by using the pooling module, performing supervision training on the living body detection initial model by using the fourth feature image and the pooled fourth feature image, and iteratively updating the living body detection initial model to obtain a trained living body detection model.
In a fourth aspect, an embodiment of the present application provides a door lock system based on living body detection, including:
the camera module is used for acquiring an image to be detected of a target object;
the image acquisition module to be detected is used for acquiring the image to be detected of the face of the target object;
a feature image acquisition module, configured to input the image to be detected into a trained living body detection model obtained by using the living body detection model training method according to the first aspect or any implementation manner of the first aspect, and output a feature image;
and the judging module is used for binarizing the feature image, computing the mean value of the binarized feature image, and determining whether the target object is a living body according to the mean value, so as to perform identity authentication.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: a memory, a processor and a computer program stored in the memory and executable on the processor, the processor, when executing the computer program, implementing the living body detection model training method according to the first aspect or any implementation manner of the first aspect, or implementing the living body detection method according to the second aspect or any implementation manner of the second aspect.
In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the living body detection model training method according to the first aspect or any implementation manner of the first aspect, or implements the living body detection method according to the second aspect or any implementation manner of the second aspect.
In a seventh aspect, an embodiment of the present application provides a computer program product which, when run on an electronic device, causes the electronic device to execute the living body detection model training method according to the first aspect or any implementation manner of the first aspect, or execute the living body detection method according to the second aspect or any implementation manner of the second aspect.
The embodiments of the application use the complete face image instead of image blocks, so inference is faster; a lightweight model is used, which can be deployed on low-compute chips; and by using a custom backbone network structure and multiple supervision images, features of the face data can be extracted more effectively and the differences between a high-precision prosthesis and a real person can be captured, so as to obtain a living body detection model with higher accuracy.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic flow chart illustrating an implementation of a method for training a living body detection model according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an initial model of a living body test provided by an embodiment of the present application;
fig. 3 is a schematic flowchart illustrating an implementation of step S120 in a training method for a living body detection model according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a supervised training sample image for use in a method for training a living body detection model according to an embodiment of the present application;
FIG. 5 is a schematic diagram of samples and corresponding labels used in a method for training a living body detection model according to an embodiment of the present application;
FIG. 6 is a schematic flow chart illustrating an implementation of a method for detecting a living body according to an embodiment of the present application;
FIG. 7 is a schematic flow chart illustrating an implementation of a method for detecting a living body according to another embodiment of the present application;
FIG. 8 is a schematic flow chart illustrating an implementation of a method for detecting a living body according to another embodiment of the present application;
FIG. 9 is a schematic flow chart illustrating an implementation of a method for detecting a living body according to another embodiment of the present application;
FIG. 10 is a schematic structural diagram of an apparatus for training a living body detection model according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of a door lock system based on liveness detection according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
The term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Further, in the description of the present application, "a plurality" means two or more. The terms "first" and "second," etc. are used merely to distinguish one description from another, and are not to be construed as indicating or implying relative importance.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Fig. 1 is a schematic flow chart of an implementation of a living body detection model training method according to an embodiment of the present application, where the living body detection model training method in this embodiment can be executed by an electronic device. Electronic devices include, but are not limited to, computers, tablets, servers, cell phones, cameras, wearable devices, or the like. The server includes but is not limited to a stand-alone server or a cloud server, etc. As shown in fig. 1, the living body detection model training method may include steps S110 to S140.
S110, constructing a living body detection initial model, wherein the living body detection initial model comprises at least three convolution modules, a feature fusion module and a pooling module.
In step S110, a living body detection initial model is constructed as a model to be trained. The living body detection initial model comprises a set of weight parameters to be learned. The living body detection initial model comprises at least three convolution modules, a feature fusion module and a pooling module.
In some embodiments, the initial living body detection model is shown in fig. 2, and the initial living body detection model includes three convolution modules, a feature fusion module and a pooling module, wherein the three convolution modules are a first convolution module, a second convolution module and a third convolution module, respectively.
In some embodiments, the convolution kernels of the convolution module, the feature fusion module, and the pooling module in the initial model of liveness detection are all 3 × 3.
S120, acquiring a face image, and obtaining at least three feature images as the face image passes through each convolution module in sequence.
The face image is a face image sample and comprises a positive sample and a negative sample. The face image includes, but is not limited to, one or more of a face infrared image, a face color image and a face depth image.
In some embodiments, as shown in fig. 3, step S120 includes the following steps S121 to S122.
And S121, acquiring a face image, and performing up-sampling or down-sampling on the face image to obtain a target image.
Up-sampling (also called image interpolation or image enlargement) mainly aims to enlarge the face image so that the enlarged face image, i.e., the target image, matches the input image size of the living body detection initial model. Down-sampling (subsampling) mainly aims to reduce the face image so that the reduced face image, i.e., the target image, matches the input image size of the living body detection model.
In some embodiments, the input image size of the living body detection initial model is preferably 112 × 112.
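As a non-limiting illustration, this resizing step can be sketched in Python as follows; the use of OpenCV and bilinear interpolation is an assumption, and only the 112 × 112 target size comes from the text.

    import cv2

    def to_target_size(face_img, size: int = 112):
        # Up-sample or down-sample a face crop so it matches the model's
        # assumed 112 x 112 input size; cv2.resize interpolates either way.
        return cv2.resize(face_img, (size, size), interpolation=cv2.INTER_LINEAR)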
And S122, performing feature learning on the target image by using at least three convolution modules of the living body detection initial model to obtain at least three feature images.
In some embodiments, the at least three convolution modules each include convolution layers with a 3 × 3 convolution kernel to perform feature learning on the target image and output a feature image. The feature images output by the convolution modules may be the same size.
As a non-limiting example, the living body detection initial model includes three convolution modules, and the feature images output by the first, second and third convolution modules are each 15 × 15. The three convolution modules can be regarded as one branch of 1 convolution layer with a 3 × 3 kernel, one branch of 2 such layers and one branch of 3 such layers, connected in parallel, and are used to acquire different feature levels of the input target image.
And S130, fusing at least three characteristic images by using a characteristic fusion module to obtain a fourth characteristic image.
The feature fusion module includes a fusion (concat) operation, which fuses and connects the effective features in the feature images acquired by the convolution modules to obtain a fourth feature image with only 1 channel.
As a non-limiting example, the feature images output by the first, second and third convolution modules are each 15 × 15, and the feature fusion module fuses these three feature images to obtain a fourth feature image that is also 15 × 15.
S140, performing average pooling on the fourth feature image by using the pooling module, performing supervised training on the living body detection initial model using the fourth feature image and the average-pooled fourth feature images, and iteratively updating the living body detection initial model to obtain a trained living body detection model.
The pooling module performs average pooling on the fourth feature image to obtain a fifth feature image and a sixth feature image, i.e., the average-pooled fourth feature image at two scales.
In some embodiments, the pooling module includes two convolution layers with 3 × 3 kernels. After the fourth feature image passes through the pooling module, fifth and sixth feature images of two different sizes are output. The pooling module reduces the dimensions of the feature image, which reduces the number of parameters while increasing the receptive field of each layer of the convolution structure and preserving the salient features of the feature image.
As a non-limiting example, the 15 × 15 fourth feature image is subjected to two convolution operations by the pooling module and downsampled to obtain 8 × 8 and 5 × 5 feature images, i.e., the fifth and sixth feature images.
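To make the structure concrete, the following is a minimal PyTorch sketch of one possible living body detection initial model with the sizes described above (112 × 112 input; 15 × 15, 8 × 8 and 5 × 5 outputs). The channel width, the ReLU activations, the adaptive pooling used to reach 15 × 15 and the exact strides and padding of the pooling module are assumptions, as the text does not specify them.

    import torch
    import torch.nn as nn

    def conv_module(num_convs: int, in_ch: int = 1, ch: int = 8) -> nn.Sequential:
        # One convolution module: num_convs 3x3 conv layers, then a reduction
        # to 15x15; adaptive pooling stands in for unspecified strides.
        layers = []
        for i in range(num_convs):
            layers += [nn.Conv2d(in_ch if i == 0 else ch, ch, 3, padding=1), nn.ReLU()]
        layers.append(nn.AdaptiveAvgPool2d(15))
        return nn.Sequential(*layers)

    class LivenessInitialModel(nn.Module):
        def __init__(self):
            super().__init__()
            # Three parallel convolution modules with 1, 2 and 3 conv layers.
            self.m1, self.m2, self.m3 = conv_module(1), conv_module(2), conv_module(3)
            # Feature fusion: concat along channels, then a 3x3 conv to 1 channel.
            self.fuse = nn.Conv2d(3 * 8, 1, 3, padding=1)
            # Pooling module: two 3x3 convs downsampling 15x15 -> 8x8 -> 5x5.
            self.pool1 = nn.Conv2d(1, 1, 3, stride=2, padding=1)  # 15 -> 8
            self.pool2 = nn.Conv2d(1, 1, 3, stride=2, padding=2)  # 8 -> 5

        def forward(self, x):  # x: (N, 1, 112, 112), e.g. an infrared face crop
            f = torch.cat([self.m1(x), self.m2(x), self.m3(x)], dim=1)
            f4 = self.fuse(f)    # fourth feature image, (N, 1, 15, 15)
            f5 = self.pool1(f4)  # fifth feature image,  (N, 1, 8, 8)
            f6 = self.pool2(f5)  # sixth feature image,  (N, 1, 5, 5)
            return f4, f5, f6

At inference time only the fourth feature image would be used, consistent with the trained model described below retaining the convolution and fusion modules but not the pooling module.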
In some embodiments, performing supervised training on the living body detection initial model using the fourth feature image and the average-pooled fourth feature images, and iteratively updating the model to obtain a trained living body detection model, includes: calculating the loss between each of the fourth, fifth and sixth feature images and its corresponding preset supervision image according to a loss function, and iteratively updating the weight parameters of the living body detection initial model according to the loss to obtain a trained living body detection model.
As an implementation manner, the fourth, fifth and sixth feature images are binarized, a Mean Square Error (MSE) loss is calculated between each pixel of each binarized image and the corresponding pixel of its preset supervision binary image, and the weight parameters of the living body detection initial model are updated according to the loss calculation result, so as to obtain a model with high discrimination accuracy.
As a non-limiting example, the mean square error between each of the 15 × 15 fourth feature image, the 8 × 8 fifth feature image and the 5 × 5 sixth feature image and a preset supervision image of the same size is first calculated. As shown in fig. 4, taking training of a living body detection model on infrared face images as an example, the living body detection initial model outputs a 15 × 15 fourth feature image, which corresponds to a 15 × 15 preset supervision image; average pooling of the 15 × 15 fourth feature image yields the 8 × 8 fifth and 5 × 5 sixth feature images, which correspond to 8 × 8 and 5 × 5 preset supervision images, respectively, the latter two providing auxiliary supervision. Using feature maps of three sizes (15 × 15, 8 × 8 and 5 × 5) for supervision gives the model higher accuracy.
As shown in fig. 5, taking training of the living body detection initial model on infrared face images as an example, conventional training of such a model uses a two-class label, e.g., 1 for a living body (positive sample) and 0 for a prosthesis (negative sample); in the present application, a 15 × 15 tensor, such as a 15 × 15 binary image, a pseudo depth map or a binarized face-mask image, is used as the label instead. Extensive experiments show that the living body detection model trained with supervision from the binarized face-mask image in fig. 5 has the highest accuracy; therefore, in the present application, the preset supervision image preferably adopts the binarized face-mask image.
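As a non-limiting sketch of how such labels might be built (an assumption; the text only states that a binarized face-mask tensor is used as the label), one could downscale a face-mask image to the three supervision sizes and binarize it:

    import cv2
    import numpy as np

    def make_supervision_maps(face_mask: np.ndarray, is_live: bool):
        # Build 15x15 / 8x8 / 5x5 supervision binary images from a face mask
        # (1 inside the face region, 0 elsewhere). Using an all-zero target
        # for prosthesis (negative) samples is an assumption.
        maps = []
        for s in (15, 8, 5):
            if is_live:
                m = cv2.resize(face_mask.astype(np.float32), (s, s),
                               interpolation=cv2.INTER_AREA)
                m = (m > 0.5).astype(np.float32)
            else:
                m = np.zeros((s, s), np.float32)
            maps.append(m)
        return maps  # targets for the fourth, fifth and sixth feature images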
Specifically, the 15 × 15 fourth feature image is binarized and a first mean square error is calculated between the binarized fourth feature image and a preset 15 × 15 supervision binary image; the 8 × 8 fifth feature image is binarized and a second mean square error is calculated between the binarized fifth feature image and a preset 8 × 8 supervision binary image; and the 5 × 5 sixth feature image is binarized and a third mean square error is calculated between the binarized sixth feature image and a preset 5 × 5 supervision binary image.
For example, the first, second and third mean square errors may each be calculated as:

MSE = (1/n) * Σ_{i=1}^{n} (y_i - ŷ_i)²

where y_i denotes the pixel value of the i-th pixel in the binarized fourth, fifth or sixth feature image, ŷ_i denotes the pixel value of the i-th pixel in the same-size preset supervision binary image corresponding to that feature image (obtained using a complex model such as ResNet-152), and n denotes the number of pixels.
Then, the loss is calculated from the first, second and third mean square errors, and the weight parameters of the living body detection initial model are updated according to the loss. Specifically, the three calculated mean square errors may be added with weights, or added directly, to obtain the loss.
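A minimal sketch of this multi-scale loss is shown below. One liberty is taken: a sigmoid stands in for the hard binarization of the model outputs so the loss remains differentiable, which is an implementation assumption rather than something the text specifies.

    import torch
    import torch.nn.functional as F

    def multi_scale_mse_loss(outputs, targets, weights=(1.0, 1.0, 1.0)):
        # outputs: (f4, f5, f6) from the model; targets: the same-size preset
        # supervision binary images. Equal weights are an assumption; the
        # text allows weighted or direct addition of the three MSE terms.
        losses = [F.mse_loss(torch.sigmoid(f), t) for f, t in zip(outputs, targets)]
        return sum(w * l for w, l in zip(weights, losses))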
The embodiments of the application use the complete face image instead of image blocks, so inference is faster. A lightweight model is used; after quantization the model is only 143 KB and can be deployed on a low-compute chip, such as the WQ5007 IoT 3D face recognition SoC, which supports at most 32 MB of Double Data Rate Synchronous Dynamic Random Access Memory (DDR DRAM). By using a custom backbone network structure and multiple supervision images, features of the face data can be extracted effectively, and the differences between a high-precision prosthesis and a real person can be captured, so as to obtain a living body detection model with higher accuracy.
A trained living body detection model can be obtained with the above living body detection model training method. It specifically includes the at least three convolution modules and the feature fusion module, and no longer includes the pooling module of the living body detection initial model.
In some embodiments, the face images used in training the living body detection model are face infrared images, face color images and face depth images; in this case, by the living body detection model training method of the embodiments of the present application, a living body detection model for determining whether a face image is of a living body can be obtained. For living body detection methods using this trained model, refer to the embodiments shown in fig. 6 and fig. 7 below.
In other embodiments, the face image during training of the living body detection model is a face infrared image, and at this time, by using the living body detection model training method according to the embodiments of the present application, a living body detection model for determining whether the face infrared image is a living body can be obtained, which may be referred to as an infrared living body detection model.
In other embodiments, the face image in training the living body detection model is a face color image, and in this case, by using the living body detection model training method according to the embodiments of the present application, a living body detection model for determining whether the face color image is a living body can be obtained, which may be referred to as a color living body detection model.
In other embodiments, the face image in training the living body detection model is a face depth image, and at this time, by using the living body detection model training method of the embodiments of the present application, a living body detection model for determining whether the face depth image is a living body can be obtained, which may be referred to as a deep living body detection model.
It should be noted that the infrared, color and depth living body detection models have the same architecture, i.e., that of the initial model, but their specific weight parameters are obtained through training, and the values of the weight parameters may differ.
When a single-modality image, i.e., a face infrared image, a face color image or a face depth image, is used to train the living body detection model, a higher-accuracy living body detection result for that modality can be obtained, and combining the living body detection models of different modalities yields an even more accurate result. For living body detection methods using the trained infrared, color and depth living body detection models, refer to the embodiments shown in fig. 8 and fig. 9 below.
An embodiment of the present application further provides a living body detection method, which can be executed by an electronic device on which a trained living body detection model is deployed in advance. Electronic devices include, but are not limited to, computers, tablets, servers, cell phones, cameras, wearable devices and the like; servers include, but are not limited to, stand-alone servers, cloud servers and the like. The trained living body detection model obtained by the living body detection model training method is used to perform living body detection on the face image of the target object.
Fig. 6 is a schematic flow chart of an implementation of a living body detection method according to an embodiment of the present application. As shown in fig. 6, the living body detection method may include steps S610 to S630.
S610, acquiring the image to be detected of the face of the target object.
The image to be detected can be an infrared image, a color image or a depth image.
S620, inputting the image to be detected into the trained living body detection model and outputting a feature image.
The feature image may be similar to the fourth feature image, and is not described herein again.
As a non-limiting example, the feature image output by the liveness detection model may be 15 × 15.
S630, binarizing the output feature image, computing the mean value of the binarized feature image, and determining whether the target object is a living body according to the mean value.

The output feature image is binarized, normalizing the pixel value of each pixel in the feature image to 0 or 1.
In some embodiments, determining whether the target object is a living body according to the mean includes: and comparing the average value with a preset threshold value, and judging whether the target object is a living body according to a comparison result.
As a non-limiting example, the preset threshold may be any value in the interval [0.5, 0.8], such as 0.5, 0.6 or 0.8, preferably 0.5. For example, if the mean value of the binarized feature image is greater than or equal to 0.5, the target object is determined to be a living body; if the mean value is less than 0.5, the target object is determined not to be a living body, i.e., a prosthesis. Alternatively, if the mean value is greater than 0.5, the target object is determined to be a living body, and if it is less than or equal to 0.5, a prosthesis.
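A minimal sketch of this decision rule follows; squashing the model's raw output with a sigmoid before the hard 0/1 binarization is an implementation assumption.

    import torch

    @torch.no_grad()
    def is_live(feature_map: torch.Tensor, threshold: float = 0.5) -> bool:
        # Binarize the output feature image, take its mean, and compare the
        # mean with the preset threshold (0.5 is the preferred value).
        binary = (torch.sigmoid(feature_map) > 0.5).float()
        return binary.mean().item() >= threshold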
Fig. 7 is a schematic flow chart of an implementation of a method for detecting a living body according to another embodiment of the present application, which further optimizes the embodiment shown in fig. 6. As shown in fig. 7, the living body detecting method may include steps S710 to S730. It should be understood that the same points between the embodiment shown in FIG. 7 and the embodiment shown in FIG. 6 are not repeated here, and please refer to the foregoing description.
S710, acquiring an image to be detected of the target object's face; if the image to be detected meets a preset quality condition, proceeding to step S720, otherwise re-acquiring an image of the target object's face until the image to be detected meets the preset quality condition.

In this embodiment, determining whether the acquired image to be detected meets the preset quality condition includes, but is not limited to: determining whether the head pose is reasonable; determining whether the face is occluded (which can be judged by face contour edge detection on the depth image or the infrared image); and determining whether the illumination is normal (which can be judged from the pixel values of the infrared image). If any check fails, the subsequent steps are not performed, i.e., the image is not input into the trained living body detection model to output a feature image; the image must be re-acquired until a newly acquired image meets the preset quality condition, after which the step of inputting the image to be detected into the trained living body detection model to output a feature image, and the subsequent steps, are performed.
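An illustrative quality gate is sketched below. The text names the three checks but gives no thresholds, so every numeric bound here is an assumption, and head pose and occlusion are taken as precomputed inputs.

    import numpy as np

    def passes_quality_check(ir_img: np.ndarray, yaw_deg: float,
                             pitch_deg: float, occluded: bool) -> bool:
        # Head pose reasonable, face not occluded, illumination normal
        # (judged here from infrared pixel values); all bounds are assumed.
        pose_ok = abs(yaw_deg) < 30 and abs(pitch_deg) < 30
        light_ok = 30 < ir_img.mean() < 220
        return pose_ok and light_ok and not occluded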
S720, inputting the image to be detected into the trained living body detection model and outputting a feature image.
The feature image may be similar to the fourth feature image, and is not described herein again.
As a non-limiting example, the feature image output by the liveness detection model may be 15 × 15.
S730, binarizing the output feature image, computing the mean value of the binarized feature image, and determining whether the target object is a living body according to the mean value.

In this embodiment, the quality of the image to be detected is screened, and the subsequent steps are performed only on images meeting the preset quality condition, which can further improve the accuracy of the detection result.
Fig. 8 is a schematic flow chart illustrating an implementation of a method for detecting a living body according to another embodiment of the present application, and as shown in fig. 8, the method for detecting a living body may include steps S810 to S830. It should be understood that the same points between the embodiment shown in FIG. 8 and the embodiment shown in FIG. 6 are not repeated here, and please refer to the foregoing description.
S810, acquiring at least one of an infrared image, a color image and a depth image of the target object's face as the image to be detected.

S820, inputting the image to be detected into the living body detection model corresponding to the image to be detected and outputting a feature image.

S830, binarizing the output feature image, computing the mean value of the binarized feature image, and determining whether the target object is a living body according to the mean value.

In this embodiment, the image to be detected may be at least one of an infrared image, a color image and a depth image, and depending on which images are acquired, the corresponding living body detection model is used for detection, which further improves the accuracy of living body detection.
As an implementation manner, the image to be detected is specifically an infrared image, a color image or a depth image, and the corresponding living body detection model is the infrared, color or depth living body detection model; the living body detection model corresponding to the image to be detected is used to perform living body detection on it.

As another implementation manner, the image to be detected includes an infrared image and a depth image, and the infrared and depth living body detection models corresponding to the image to be detected are used for living body detection.

In this case, the living body detection method shown in fig. 8 specifically includes: acquiring an infrared image and a depth image of the target object's face; inputting the infrared image into the infrared living body detection model to output an infrared feature image, and inputting the depth image into the depth living body detection model to output a depth feature image; and binarizing the output infrared and depth feature images, computing a first mean value of the binarized infrared feature image and a second mean value of the binarized depth feature image, and determining whether the target object is a living body according to the first and second mean values.
As a non-limiting example, determining whether the target object is a living body according to the first mean value and the second mean value includes: and if the first average value and the second average value are both larger than or equal to a preset threshold value, determining that the target object is a living body.
As another implementation manner, the image to be detected specifically includes a color image and a depth image, and the color and depth living body detection models corresponding to the image to be detected are used for living body detection.

In this case, the living body detection method shown in fig. 8 specifically includes: acquiring a depth image and a color image of the target object's face; inputting the color image into the color living body detection model to output a color feature image, and inputting the depth image into the depth living body detection model to output a depth feature image; and binarizing the output color and depth feature images, computing a third mean value of the binarized color feature image and a fourth mean value of the binarized depth feature image, and determining whether the target object is a living body according to the third and fourth mean values.
As a non-limiting example, determining whether the target object is a living body according to the third mean value and the fourth mean value includes: and if the third mean value and the fourth mean value are both larger than or equal to a preset threshold value, determining that the target object is a living body.
As another implementation manner, the image to be detected specifically includes a color image, an infrared image and a depth image, and the color, infrared and depth living body detection models are correspondingly used to detect it.

In this case, the living body detection method shown in fig. 8 specifically includes: acquiring a color image, an infrared image and a depth image of the target object's face; inputting the color image into the color living body detection model to output a color feature image, inputting the infrared image into the infrared living body detection model to output an infrared feature image, and inputting the depth image into the depth living body detection model to output a depth feature image; and binarizing the three output feature images, computing a fifth mean value of the binarized color feature image, a sixth mean value of the binarized infrared feature image and a seventh mean value of the binarized depth feature image, and determining whether the target object is a living body according to the fifth, sixth and seventh mean values.

As a non-limiting example, determining whether the target object is a living body according to the fifth, sixth and seventh mean values includes: if the fifth, sixth and seventh mean values are all greater than or equal to a preset threshold, the target object is determined to be a living body.
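The per-modality decisions above share one pattern, sketched below: each trained single-modality model outputs a feature image, and the target is judged a living body only if every binarized mean reaches the preset threshold. The sigmoid before thresholding is an implementation assumption.

    import torch

    @torch.no_grad()
    def multimodal_is_live(models, images, threshold: float = 0.5) -> bool:
        # models/images: matched sequences, e.g. the (color, infrared, depth)
        # models paired with face images of the corresponding modality.
        for model, img in zip(models, images):
            fmap = model(img)  # output feature image, e.g. 15x15
            binary = (torch.sigmoid(fmap) > 0.5).float()
            if binary.mean().item() < threshold:
                return False
        return True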
Fig. 9 is a schematic flow chart of an implementation of a method for detecting a living body according to another embodiment of the present application, which further optimizes the embodiment shown in fig. 8. As shown in fig. 9, the living body detecting method may include steps S910 to S930. It should be understood that the embodiment shown in fig. 9 is the same as the embodiment shown in fig. 7 and 8, and the description thereof is omitted here for brevity.
S910, acquiring at least one of an infrared image, a color image and a depth image of the target object's face as the image to be detected; if all the images to be detected meet the preset quality condition, proceeding to step S920, otherwise re-acquiring images of the target object's face until the images to be detected meet the preset quality condition.
S920, inputting the image to be detected into a living body detection model corresponding to the image to be detected and outputting a characteristic image.
S930, binarizing the output feature image, computing the mean value of the binarized feature image, and determining whether the target object is a living body according to the mean value.
In the embodiments shown in fig. 8 and 9, living body detection is performed by three single-modality living body detection models trained with the whole-face living body detection model training method: the infrared, color and depth living body detection models. The depth living body detection model rules out attacks such as videos, paper photos, and photos that are bent or have cut-out holes; the infrared or color living body detection model prevents attacks such as 3D head models, masks and facial masks. Only a target that passes living body detection by two or three of the models is considered a real living person, which greatly improves the accuracy of the living body detection algorithm.

In tests, the depth living body detection model provided by the embodiments of the application has a real-person pass rate above 99.9% and blocks more than 99.5% of paper-photo or cut-out paper attacks. The infrared and color living body detection models provided by the embodiments of the application have a real-person pass rate above 99.9% and block more than 95% of high-precision 3D head model or resin mask attacks.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
An embodiment of the application also provides a training device for the living body detection model. The details of the parts of the training apparatus for the living body detection model not described in detail are described in the embodiments of the training method for the living body detection model.
Referring to fig. 10, fig. 10 is a schematic block diagram of an apparatus for training a living body detection model according to an embodiment of the present application. The living body detection model training device includes: a building module 101, an obtaining module 102, a convolution execution module 103, a feature fusion execution module 104 and a pooling execution module 105.
The construction module 101 is used for constructing a living body detection initial model, the living body detection initial model comprising at least three convolution modules, a feature fusion module and a pooling module;
an obtaining module 102, configured to obtain a face image;
a convolution executing module 103, configured to obtain at least three feature images as the face image passes through each convolution module in sequence;
a feature fusion executing module 104, configured to fuse the at least three feature images by using the feature fusion module to obtain a fourth feature image;
and the pooling execution module 105 is configured to pool the fourth feature image by using the pooling module, perform supervised training on the living body detection initial model by using the fourth feature image and the pooled fourth feature image, and iteratively update the living body detection initial model to obtain a trained living body detection model.
In some embodiments, the obtaining module 102 is specifically configured to:
the method comprises the steps of obtaining a face image, and carrying out up-sampling or down-sampling on the face image to obtain a target image.
The convolution executing module 103 is specifically configured to:
and performing feature learning on the target image by utilizing the at least three convolution modules of the living body detection initial model to obtain at least three feature images.
In some embodiments, the pooling execution module 105 is specifically configured to:
performing average pooling on the fourth feature image by using the pooling module to obtain a fifth feature image and a sixth feature image;
and calculating the loss between each of the fourth, fifth and sixth feature images and its corresponding preset supervision image according to a loss function, and iteratively updating the weight parameters of the living body detection initial model according to the losses to obtain a trained living body detection model.
In some embodiments, the loss function is a mean square error loss function.
In some embodiments, the fourth feature image, the fifth feature image, and the sixth feature image have different sizes.
An embodiment of the present application further provides a living body detection apparatus. Details of the living body detection apparatus not described here are described in the foregoing embodiments of the living body detection method.
Referring to fig. 11, fig. 11 is a schematic block diagram of a door lock system based on liveness detection according to an embodiment of the present application. The door lock system includes: a camera module 110, a to-be-detected-image acquisition module 111, a feature image obtaining module 112 and a judging module 113, wherein:
the camera module is used for acquiring an image to be detected of a target object;
the image acquisition module to be detected 111 is used for acquiring an image to be detected of the face of the target object;
the feature image obtaining module 112 is configured to input the image to be detected into the trained living body detection model obtained with the living body detection model training method, and output a feature image;

and the judging module 113 is configured to binarize the feature image, compute the mean value of the binarized feature image, and determine whether the target object is a living body according to the mean value, so as to perform identity authentication.
Further, if the target object is determined to be a living body, the face information of the target object is recognized and compared with the registered face information preset on the cloud; if they are consistent, the door lock is opened, otherwise the door lock is not opened.
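A sketch of this flow under stated assumptions: "recognize" and "registered_ids" are hypothetical placeholders for the face recognition step and the cloud-side registry, neither of which is specified in the text.

    import torch

    @torch.no_grad()
    def try_unlock(image, liveness_model, recognize, registered_ids) -> bool:
        # Liveness first, then identity comparison; unlock only on a match.
        fmap = liveness_model(image)  # output feature image
        live = (torch.sigmoid(fmap) > 0.5).float().mean().item() >= 0.5
        if not live:
            return False  # prosthesis: keep the door locked
        return recognize(image) in registered_ids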
It should be noted that the camera module 110 may be at least one or more of a depth camera, an infrared camera, a color camera, and the like, which is not limited herein.
In some embodiments, the feature image obtaining module 112 is specifically configured to:
and if the image to be detected meets the preset quality condition, inputting the image to be detected into the trained living body detection model obtained with the living body detection model training method of the foregoing embodiments, and outputting a feature image.
In some embodiments, the image to be detected comprises at least one of an infrared image, a color image, and a depth image of the face of the target object.
In these embodiments, the feature image obtaining module 112 is specifically configured to:
inputting the image to be detected into the trained living body detection model corresponding to the image to be detected, obtained with the living body detection model training method of the foregoing embodiments, and outputting a feature image; or,

if the image to be detected meets the preset quality condition, inputting the image to be detected into the trained living body detection model corresponding to the image to be detected, obtained with the living body detection model training method of the foregoing embodiments, and outputting a feature image.
As an implementation manner, the image to be detected is specifically an infrared image, a color image or a depth image of the target object's face, and the corresponding living body detection model is the infrared, color or depth living body detection model; the living body detection model corresponding to the image to be detected is used to perform living body detection on it.

As another implementation manner, the image to be detected is specifically an infrared image and a depth image of the target object's face, and the infrared and depth living body detection models corresponding to the image to be detected are used for living body detection.
In this implementation, the feature image obtaining module 112 is specifically configured to:
acquiring an infrared image and a depth image of the target object's face; and inputting the infrared image into the infrared living body detection model to output an infrared feature image, and inputting the depth image into the depth living body detection model to output a depth feature image.

The judging module 113 is specifically configured to:

binarize the output infrared and depth feature images, compute a first mean value of the binarized infrared feature image and a second mean value of the binarized depth feature image, and determine whether the target object is a living body according to the first and second mean values.
As another implementation manner, the image to be detected specifically includes a color image and a depth image, and the color and depth living body detection models corresponding to the image to be detected are used for living body detection.

In this implementation, the feature image obtaining module 112 is specifically configured to:

input the color image into the color living body detection model to output a color feature image, and input the depth image into the depth living body detection model to output a depth feature image.

The judging module 113 is specifically configured to:

binarize the output color and depth feature images, compute a third mean value of the binarized color feature image and a fourth mean value of the binarized depth feature image, and determine whether the target object is a living body according to the third and fourth mean values.
As another implementation, the image to be detected includes, in particular, a color image, an infrared image, and a depth image. And correspondingly detecting the image to be detected by adopting a color biopsy model, an infrared biopsy model and a depth biopsy model.
In this implementation, the feature image obtaining module 112 is specifically configured to:
inputting the color image into a color living body detection model to output a color characteristic image, inputting the infrared image into an infrared living body detection model to output an infrared characteristic image, and inputting the depth image into a depth living body detection model to output a depth characteristic image.
The determining module 113 is specifically configured to:
binarize the output color feature image, the output infrared feature image, and the output depth feature image respectively, compute a fifth mean value of the binarized color feature image, a sixth mean value of the binarized infrared feature image, and a seventh mean value of the binarized depth feature image, and judge whether the target object is a living body according to the fifth mean value, the sixth mean value, and the seventh mean value.
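Both the color-and-depth variant above and this three-modality variant follow the same binarize-and-average pattern. As a hedged generalization under the same assumptions (per-modality binarization and a rule requiring every modality to pass a threshold), a single helper can cover any number of modalities:

import numpy as np

def is_live_multimodal(feature_maps,
                       bin_threshold: float = 0.5,
                       decision_threshold: float = 0.5) -> bool:
    """Assumed fusion rule: every modality's binarized-mean score must pass."""
    return all(
        float((f >= bin_threshold).astype(np.float32).mean()) >= decision_threshold
        for f in feature_maps
    )

# e.g. is_live_multimodal([color_feature, ir_feature, depth_feature])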
An embodiment of the present application further provides an electronic device. As shown in fig. 12, the electronic device may include one or more processors 120 (only one is shown in fig. 12), a memory 121, and a computer program 122 stored in the memory 121 and executable on the one or more processors 120, for example, a program for living body detection model training and/or a program for living body detection. When executing the computer program 122, the one or more processors 120 implement the steps in the living body detection model training method and/or living body detection method embodiments; alternatively, when executing the computer program 122, the one or more processors 120 may implement the functions of the modules/units in the living body detection model training device and/or living body detection device embodiments, which is not limited herein.
Those skilled in the art will appreciate that fig. 12 is merely an example of an electronic device and does not constitute a limitation on the electronic device. The electronic device may include more or fewer components than shown, may combine certain components, or may have different components; for example, the electronic device may also include input and output devices, network access devices, buses, and the like.
In one embodiment, the processor 120 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In one embodiment, the memory 121 may be an internal storage unit of the electronic device, such as a hard disk or an internal memory of the electronic device. The memory 121 may also be an external storage device of the electronic device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the electronic device. Further, the memory 121 may include both an internal storage unit and an external storage device of the electronic device. The memory 121 is used to store the computer program and other programs and data required by the electronic device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the division of the above functional units and modules is illustrated by example. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from one another and are not intended to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program, and the computer program, when executed by a processor, implements the steps in the living body detection model training method and/or living body detection method embodiments.
An embodiment of the present application provides a computer program product, which, when run on an electronic device, enables the electronic device to implement the steps in the living body detection model training method and/or living body detection method embodiments.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other ways. For example, the apparatus/electronic device embodiments described above are merely illustrative. For example, the division of the modules or units is merely a logical function division, and there may be other division manners in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated modules/units are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the methods in the above embodiments may be implemented by a computer program instructing relevant hardware; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (11)

1. A method for training a living body detection model, comprising:
constructing a living body detection initial model, wherein the living body detection initial model comprises at least three convolution modules, a feature fusion module and a pooling module;
acquiring a face image, and respectively acquiring at least three feature images when the face image sequentially passes through each convolution module;
fusing the at least three feature images by using the feature fusion module to obtain a fourth feature image; and
pooling the fourth feature image by using the pooling module, performing supervised training on the living body detection initial model by using the fourth feature image and the pooled fourth feature image, and iteratively updating the living body detection initial model to obtain a trained living body detection model.
2. The living body detection model training method according to claim 1, wherein the acquiring a face image and respectively acquiring at least three feature images when the face image sequentially passes through each convolution module comprises:
acquiring a face image, and performing up-sampling or down-sampling on the face image to obtain a target image;
performing feature learning on the target image by using the at least three convolution modules of the living body detection initial model to obtain the at least three feature images.
3. The living body detection model training method according to claim 1 or 2, wherein the pooling the fourth feature image by using the pooling module, performing supervised training on the living body detection initial model by using the fourth feature image and the pooled fourth feature image, and iteratively updating the living body detection initial model to obtain the trained living body detection model comprises:
performing average pooling on the fourth feature image by using the pooling module to obtain a fifth feature image and a sixth feature image; and
calculating, according to a loss function, losses between the fourth feature image, the fifth feature image, and the sixth feature image and the corresponding preset supervision images, and iteratively updating the weight parameters of the living body detection initial model according to the losses to obtain the trained living body detection model.
4. The living body detection model training method according to claim 3, wherein the loss function is a mean-square-error loss function, and the fourth feature image, the fifth feature image, and the sixth feature image have different sizes.
5. A living body detection method, comprising:
acquiring an image to be detected of the face of a target object;
inputting the image to be detected into a trained living body detection model obtained by the living body detection model training method according to any one of claims 1 to 4, and outputting a feature image; and
binarizing the feature image, computing the mean value of the binarized feature image, and judging whether the target object is a living body according to the mean value.
6. The living body detection method according to claim 5, wherein the inputting the image to be detected into the trained living body detection model obtained by the living body detection model training method according to any one of claims 1 to 4 and outputting a feature image comprises:
if it is determined that the image to be detected meets a preset quality condition, inputting the image to be detected into the trained living body detection model obtained by the living body detection model training method according to any one of claims 1 to 4, and outputting the feature image.
7. The living body detection method according to claim 5 or 6, wherein the image to be detected comprises at least one of an infrared image, a color image, and a depth image of the face of the target object; and
the inputting the image to be detected into the trained living body detection model obtained by the living body detection model training method according to any one of claims 1 to 4 and outputting a feature image comprises:
inputting the image to be detected into the trained living body detection model that corresponds to the image to be detected and is obtained by the living body detection model training method according to any one of claims 1 to 4, and outputting the feature image.
8. A living body detection model training device, comprising:
a construction module, used for constructing a living body detection initial model, wherein the living body detection initial model comprises at least three convolution modules, a feature fusion module, and a pooling module;
the acquisition module is used for acquiring a face image;
the convolution execution module is used for respectively obtaining at least three characteristic images when the face image sequentially passes through each convolution module;
the feature fusion execution module is used for fusing the at least three feature images by using the feature fusion module to obtain a fourth feature image;
and the pooling execution module is used for pooling the fourth feature image by using the pooling module, performing supervision training on the living body detection initial model by using the fourth feature image and the pooled fourth feature image, and iteratively updating the living body detection initial model to obtain a trained living body detection model.
9. A door lock system based on living body detection, comprising:
a camera module, used for capturing an image to be detected of a target object;
an image-to-be-detected acquisition module, used for acquiring the image to be detected of the face of the target object;
a feature image acquisition module, used for inputting the image to be detected into a trained living body detection model obtained by the living body detection model training method according to any one of claims 1 to 4, and outputting a feature image; and
a judging module, used for binarizing the feature image, computing the mean value of the binarized feature image, and judging, according to the mean value, whether the target object is a living body, so as to perform identity authentication.
10. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the living body detection model training method according to any one of claims 1 to 4, or implements the living body detection method according to any one of claims 5 to 7.
11. A computer storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the living body detection model training method according to any one of claims 1 to 4, or implements the living body detection method according to any one of claims 5 to 7.
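For orientation only, the following PyTorch sketch mirrors the training method recited in claims 1 to 4 above: three convolution modules each yield a feature image, a fusion module produces the fourth feature image, average pooling yields the fifth and sixth feature images at different sizes, and a mean-square-error loss is computed against preset supervision images. The channel widths, strides, the concatenate-plus-1x1-convolution fusion, the sigmoid output, and the nearest-neighbor resizing of the supervision image are all assumptions not fixed by the claims.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LivenessInitialModel(nn.Module):
    """Sketch of the initial model: three convolution modules, a feature
    fusion module, and a pooling module (widths/strides are assumptions)."""
    def __init__(self, in_ch: int = 3):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(in_ch, 16, 3, stride=2, padding=1),
                                   nn.BatchNorm2d(16), nn.ReLU())
        self.conv2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1),
                                   nn.BatchNorm2d(32), nn.ReLU())
        self.conv3 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1),
                                   nn.BatchNorm2d(64), nn.ReLU())
        # Feature fusion module: concatenate the three feature images at a
        # common resolution, then mix them with a 1x1 convolution.
        self.fuse = nn.Conv2d(16 + 32 + 64, 1, kernel_size=1)

    def forward(self, x):
        f1 = self.conv1(x)                      # first feature image
        f2 = self.conv2(f1)                     # second feature image
        f3 = self.conv3(f2)                     # third feature image
        size = f3.shape[-2:]
        fused = torch.cat([F.interpolate(f1, size=size),
                           F.interpolate(f2, size=size),
                           f3], dim=1)
        f4 = torch.sigmoid(self.fuse(fused))    # fourth feature image
        f5 = F.avg_pool2d(f4, kernel_size=2)    # fifth feature image
        f6 = F.avg_pool2d(f5, kernel_size=2)    # sixth feature image
        return f4, f5, f6

def supervision_loss(outputs, supervision):
    """Mean-square-error loss between each output feature image and the
    preset supervision image resized to the matching scale."""
    total = 0.0
    for out in outputs:
        target = F.interpolate(supervision, size=out.shape[-2:])
        total = total + F.mse_loss(out, target)
    return total

# Sketch of one training step (optimizer and data are placeholders):
#   model = LivenessInitialModel()
#   f4, f5, f6 = model(face_batch)                            # (N, 3, H, W)
#   loss = supervision_loss((f4, f5, f6), supervision_maps)   # (N, 1, H', W')
#   loss.backward(); optimizer.step()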
CN202110858764.4A 2021-07-28 2021-07-28 Living body detection model training method and living body detection method Pending CN113989871A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination