CN115205939B - Training method and device for human face living body detection model, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115205939B
CN115205939B
Authority
CN
China
Prior art keywords
image
living body
body detection
face
infrared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210834544.2A
Other languages
Chinese (zh)
Other versions
CN115205939A (en)
Inventor
王珂尧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202210834544.2A priority Critical patent/CN115205939B/en
Publication of CN115205939A publication Critical patent/CN115205939A/en
Application granted granted Critical
Publication of CN115205939B publication Critical patent/CN115205939B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The disclosure provides a training method and apparatus for a face living body detection model, an electronic device, and a storage medium, relating to the technical field of artificial intelligence, and in particular to deep learning, image processing, and computer vision. The specific implementation scheme is as follows: obtain a sample image pair; input the visible light sample image into a pre-trained visible light face living body detection model; input the near-infrared sample image into a near-infrared face living body detection model to be trained; and adjust parameters of the near-infrared face living body detection model based on the visible light image features, the near-infrared face living body detection result, and the face living body true value to obtain a trained near-infrared face living body detection model. This realizes training of the face living body detection model and improves the generalization and accuracy of face living body detection performed with the model.

Description

Training method and device for human face living body detection model, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence, and in particular to the technical fields of deep learning, image processing, and computer vision.
Background
Face living body detection distinguishes whether an image was shot of a real person. It is a basic component of a face recognition system and ensures the system's security. Using deep learning to train a face living body detection model and then performing detection with the trained model is the mainstream approach in the current face recognition field.
Disclosure of Invention
The disclosure provides a face living body detection model training method and apparatus, an electronic device, and a storage medium.
According to an aspect of the present disclosure, there is provided a face living body detection model training method, including:
obtaining a sample image pair, wherein the sample image pair comprises a visible light sample image and a near infrared sample image of the same human face object;
inputting the visible light sample image into a pre-trained visible light face living body detection model, and extracting visible light image features of the visible light sample image through a first feature extraction network of the visible light face living body detection model;
inputting the near-infrared sample image into a near-infrared human face living body detection model to be trained for living body detection, obtaining a near-infrared human face living body detection result, and extracting near-infrared image features of the near-infrared sample image through a second feature extraction network of the near-infrared human face living body detection model;
acquiring a face living body true value of the near-infrared sample image;
and adjusting parameters of the near-infrared human face living body detection model based on the visible light image characteristics, the near-infrared human face living body detection result and the human face living body true value to obtain a trained near-infrared human face living body detection model.
According to another aspect of the present disclosure, there is provided a face in-vivo detection method including:
acquiring a near infrared face image to be identified;
performing feature extraction on the near-infrared face image to be identified by using a second feature extraction network in a pre-trained near-infrared face living body detection model to obtain near-infrared face image features;
classifying the near-infrared face image features by using a classifier network of the near-infrared face living body detection model to obtain a face living body detection result of a near-infrared face image to be identified;
the near-infrared face living body detection model is obtained by training with any one of the face living body detection model training methods described above.
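The detection steps above (features from the second feature extraction network passed through a classifier network) can be sketched in miniature. Everything here is a hypothetical stand-in for the patent's unspecified classifier network: the softmax head, the `classify_liveness` helper, and the toy weights are all made up for illustration.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classify_liveness(features, weights, bias):
    """Toy classifier head: linear layer + softmax over {spoof, live}.

    `features` stands in for the vector produced by the second feature
    extraction network; `weights`/`bias` stand in for the trained
    classifier network's parameters.
    """
    logits = features @ weights + bias
    probs = softmax(logits)
    label = "live" if probs[1] > probs[0] else "spoof"
    return label, probs

# Toy usage with made-up parameters.
feat = np.array([0.2, -1.3, 0.7, 0.1])
W = np.array([[ 0.5, -0.5],
              [-0.1,  0.1],
              [ 0.3, -0.3],
              [ 0.2, -0.2]])
b = np.array([0.0, 0.0])
label, probs = classify_liveness(feat, W, b)
```

In a real system the classifier network would be a trained layer of the near-infrared face living body detection model; the shape of the computation (features in, class probabilities out) is what the sketch shows.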
According to another aspect of the present disclosure, there is provided a face living body detection model training apparatus, including:
The image acquisition module is used for acquiring a sample image pair, wherein the sample image pair comprises a visible light sample image and a near infrared sample image of the same face object;
the first image feature extraction module is used for inputting the visible light sample image into a pre-trained visible light face living body detection model and extracting visible light image features of the visible light sample image through a first feature extraction network of the visible light face living body detection model;
the second image feature extraction module is used for inputting the near-infrared sample image into a near-infrared face living body detection model to be trained for living body detection, obtaining a near-infrared face living body detection result, and extracting near-infrared image features of the near-infrared sample image through a second feature extraction network of the near-infrared face living body detection model;
the truth value acquisition module is used for acquiring the human face living truth value of the near infrared sample image;
and the parameter adjustment module is used for adjusting parameters of the near-infrared face living body detection model based on the visible light image characteristics, the near-infrared face living body detection result and the face living body true value to obtain a trained near-infrared face living body detection model.
According to another aspect of the present disclosure, there is provided a face living body detection apparatus including:
the image acquisition module is used for acquiring a near infrared face image to be identified;
the image feature extraction module is used for extracting features of the near-infrared face image to be identified by utilizing a second feature extraction network in the pre-trained near-infrared face living body detection model to obtain near-infrared face image features;
the result output module is used for classifying the near-infrared face image features by utilizing a classifier network of the near-infrared face living body detection model to obtain a face living body detection result of the near-infrared face image to be identified;
the near-infrared face living body detection model is obtained by training with any one of the face living body detection model training apparatuses described above.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the face living body detection model training method or the face living body detection method of any one of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the face living body detection model training method or the face living body detection method of any one of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the face living body detection model training method or the face living body detection method of any one of the present disclosure.
According to the embodiment of the disclosure, the training of the human face living body detection model is realized, and the human face living body detection can be realized by using the human face living body detection model.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1a is a schematic illustration of a face living body detection model training method according to the present disclosure;
FIG. 1b is a schematic illustration of a method of acquiring a sample image pair according to the present disclosure;
FIG. 1c is a schematic illustration of the training process of a face living body detection model training method according to the present disclosure;
FIG. 2 is a schematic illustration of a face living body detection method according to the present disclosure;
FIG. 3 is a schematic diagram of a face living body detection model training apparatus according to the present disclosure;
FIG. 4 is a schematic diagram of a face living body detection apparatus according to the present disclosure;
FIG. 5 is a block diagram of an electronic device used to implement the face living body detection model training method or the face living body detection method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Face living body detection refers to a computer judging whether a detected face image was captured from a living face or is a forged face attack image, that is, whether the image was shot of a real person, thereby ensuring the security of a face recognition system. Face attack images include various photo printing attacks (color prints, black-and-white prints, infrared prints, etc.), screen replay attacks (mobile phone, tablet, and computer screens, etc.), and high-fidelity 3D attacks (masks of various materials, head models, head coverings, etc.).
In the related art, face living body detection algorithms fall into two main categories: traditional methods based on hand-crafted face feature extraction and classification, and deep learning methods using neural networks. Traditional methods extract face features with a manually designed feature extractor and then classify those features with a traditional classifier to reach a living body judgment. Deep learning methods include living body discrimination by convolutional neural networks, living body discrimination based on Long Short-Term Memory (LSTM) networks, and the like; such methods use neural networks for both face feature extraction and classification. Deep learning can extract more stable face features and delivers a notable performance improvement over traditional methods.
However, when the face pose varies too much or illumination differs greatly, robustness is poor and the recognition effect is not ideal. For example, in some application scenarios, a face living body detection algorithm based only on visible light images is sensitive to lighting and generalizes poorly to planar attacks such as photos and videos, which degrades practical performance. Near-infrared modality data in real scenes, however, is far scarcer than visible light modality data, so a model trained on it alone has limited generalization to complex and diverse attack modes and samples.
To solve at least one of the above problems, an embodiment of the present disclosure provides a training method for a face living body detection model: obtaining a sample image pair, wherein the sample image pair comprises a visible light sample image and a near infrared sample image of the same human face object; inputting the visible light sample image into a pre-trained visible light face living body detection model, and extracting visible light image features of the visible light sample image through a first feature extraction network of the visible light face living body detection model; inputting the near-infrared sample image into a near-infrared human face living body detection model to be trained for living body detection, obtaining a near-infrared human face living body detection result, and extracting near-infrared image features of the near-infrared sample image through a second feature extraction network of the near-infrared human face living body detection model; acquiring a human face living body true value of the near infrared sample image; and adjusting parameters of the near-infrared human face living body detection model based on the visible light image characteristics, the near-infrared human face living body detection result and the human face living body true value to obtain a trained near-infrared human face living body detection model.
In the embodiments of the disclosure, a visible light face living body detection model obtained by training on visible light face images serves as the pre-trained reference for training the near-infrared face living body detection model, so that the image features of the near-infrared model approach those of the visible light model. This greatly improves the accuracy and generalization of the near-infrared face living body detection model. Furthermore, compared with the conventional approach of training a near-infrared face living body detection model with a large number of samples, training with a small number of samples becomes feasible, which reduces the time and effort of model training and improves training efficiency.
The face living body detection model training method and the face living body detection method of the disclosure can be applied in the face detection field. For example, they can be applied to deep learning neural network models for face living body detection to help improve performance, in application scenarios such as attendance checking, access control, security, and financial payment in the face recognition field. This helps improve the effect and user experience of applications based on face living body detection technology and benefits further adoption of business projects.
The training method of the human face living body detection model provided by the disclosure is described in detail below through specific embodiments.
The training method for the human face living body detection model provided by the embodiment of the disclosure can be realized through electronic equipment, and specifically, the electronic equipment can be a smart phone, a personal computer or a server.
Referring to fig. 1a, fig. 1a is a flow chart of a training method for a human face living body detection model according to an embodiment of the disclosure, including the following steps:
s101, a sample image pair is obtained, wherein the sample image pair comprises a visible light sample image and a near infrared sample image of the same face object.
The face object may be a living face or a non-living face, but the visible light sample image and the near-infrared sample image in the same sample image pair contain the same face object. In one example, an image of the face object is acquired through a single-lens camera, and a light-splitting module forms the sample image pair by imaging onto a visible light sensor and an infrared light sensor respectively.
In a possible implementation manner, referring to fig. 1b, fig. 1b is a schematic flow chart of a method for acquiring a sample image pair according to an embodiment of the disclosure, where the method includes the following steps:
S110, acquiring an original visible light image and an original near infrared image of the same face object.
The same face object can be shot simultaneously by a visible light camera and a near-infrared camera to obtain its original visible light image and original near-infrared image; alternatively, both images of the same face object can be acquired simultaneously by a binocular camera.
S120, acquiring a first face key point of the original visible light image and a second face key point of the original near infrared image.
The number of face key points can be defined as required; for example, to obtain a more accurate near-infrared face living body detection model, a larger number of key points can be set, which is not limited here. In one example, a face is defined to contain 72 key points (x1, y1), …, (x72, y72).
Before key point detection, the original visible light image and the original near-infrared image can be preprocessed to obtain a visible light image and a near-infrared image containing a face. A preset visible light face detection model then detects the face in the visible light image, and a preset near-infrared face detection model detects the face in the near-infrared image, yielding the face position regions. According to the detected face position regions, face key points are detected with a preset visible light face key point detection model and a preset near-infrared face key point detection model to obtain the coordinate values of the face key points; in one example, 72 face key point coordinates are obtained, namely (x1, y1), …, (x72, y72). The preset visible light and near-infrared face detection models and face key point detection models mentioned above may be existing models in the art.
And S130, based on the first face key points and the second face key points, respectively aligning the face area in the original visible light image and the face area in the original near infrared image to obtain an aligned visible light image and an aligned near infrared image.
When aligning the face region in the original visible light image and the face region in the original near-infrared image, the region containing only the face can be cropped out via an affine transformation and adjusted to a preset size, and the face key point coordinates are remapped to new coordinates according to the affine transformation matrix. In one example, the face-only region is cropped by affine transformation and scaled to size 224x224.
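The alignment step can be illustrated with a minimal NumPy sketch: fit an affine matrix from a few key point correspondences, then remap all key point coordinates with it. The anchor points and target crop coordinates below are made up for illustration; a real pipeline would typically use a library such as OpenCV for warping the image pixels themselves.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine matrix A such that dst ≈ A @ [x, y, 1]."""
    n = len(src_pts)
    src_h = np.hstack([src_pts, np.ones((n, 1))])        # (n, 3) homogeneous
    A, *_ = np.linalg.lstsq(src_h, dst_pts, rcond=None)  # (3, 2)
    return A.T                                           # (2, 3)

def remap_keypoints(kpts, affine):
    """Apply a 2x3 affine matrix to (n, 2) keypoint coordinates."""
    kpts_h = np.hstack([kpts, np.ones((len(kpts), 1))])
    return kpts_h @ affine.T

# Hypothetical anchors (e.g. eyes and mouth) mapped into a 224x224 crop.
src = np.array([[120.0, 150.0], [180.0, 150.0], [150.0, 210.0]])
dst = np.array([[ 80.0,  90.0], [144.0,  90.0], [112.0, 160.0]])
M = fit_affine(src, dst)
new_kpts = remap_keypoints(src, M)  # anchors land on their target coordinates
```

Three non-collinear correspondences determine the affine exactly; with all 72 key points, the least-squares fit gives the best overall alignment.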
And S140, respectively carrying out image normalization processing and random data enhancement processing on the aligned visible light image and the aligned near infrared image to obtain the visible light sample image and the near infrared sample image.
The normalization process may be performed pixel by pixel on the aligned visible light image and the aligned near-infrared image; in one example, 128 is subtracted from each pixel value and the result is divided by 256, so that every pixel value lies in [-0.5, 0.5].
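A minimal sketch of this normalization rule (subtract 128, then divide by 256), assuming 8-bit input pixels:

```python
import numpy as np

def normalize_image(img):
    """Map uint8 pixel values in [0, 255] into roughly [-0.5, 0.5]
    via the (pixel - 128) / 256 rule described above."""
    return (img.astype(np.float32) - 128.0) / 256.0

img = np.array([[0, 128, 255]], dtype=np.uint8)
norm = normalize_image(img)  # → [[-0.5, 0.0, 0.49609375]]
```

Note the upper bound is 127/256 ≈ 0.496 rather than exactly 0.5, since the maximum 8-bit value is 255.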
After the aligned visible light image and the aligned near-infrared image have each been normalized, random data enhancement can be applied to the normalized images to obtain the visible light sample image and the near-infrared sample image; common enhancement methods such as random flipping, random scaling, and color perturbation may be used. Alternatively, the normalized images may be used directly as the visible light sample image and the near-infrared sample image.
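The random enhancement step can be sketched as follows; the flip probability, scale range, and jitter range below are illustrative choices, not values specified by the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Toy random-augmentation sketch: horizontal flip, mild intensity
    scaling, and additive jitter, as stand-ins for the random flip /
    random scale / color perturbation mentioned above."""
    out = img.copy()
    if rng.random() < 0.5:                  # random horizontal flip
        out = out[:, ::-1]
    out = out * rng.uniform(0.9, 1.1)       # random intensity scale
    out = out + rng.uniform(-0.05, 0.05)    # brightness/color jitter
    return np.clip(out, -0.5, 0.5)          # stay in normalized range

img = np.zeros((224, 224), dtype=np.float32)  # a normalized image
aug = augment(img)
```

The key property, preserved here, is that augmentation changes appearance without changing the image's live/spoof label.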
In the embodiments of the disclosure, obtaining the visible light sample image and the near-infrared sample image through the above steps preserves the features of the original face images as much as possible. Normalization simplifies the pixel values, which facilitates the data processing of later steps, makes the face living body detection model easier to optimize, and improves training efficiency. Random enhancement keeps the sample images as diverse as possible and closer to real conditions, improving the generalization of the face living body detection model.
S102, inputting the visible light sample image into a pre-trained visible light face living body detection model, and extracting visible light image features of the visible light sample image through a first feature extraction network of the visible light face living body detection model.
The feature extraction network of the pre-trained visible light face living body detection model can use MobileNet as the backbone of the RGB convolutional neural network. MobileNet is a lightweight neural network with an efficient architecture that runs fast at inference time; two hyperparameters allow directly building models that are very small, low latency, and easy to fit the requirements of embedded devices. Other neural networks capable of feature extraction may also be used, without limitation.
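MobileNet's efficiency comes from replacing standard convolutions with depthwise-separable ones, and its two hyperparameters are a width multiplier (thins channels) and a resolution multiplier (shrinks feature maps). A back-of-envelope multiply-accumulate count shows the saving; the layer sizes below are arbitrary examples:

```python
def conv_macs(k, c_in, c_out, h, w):
    """Multiply-accumulates of a standard k x k convolution."""
    return k * k * c_in * c_out * h * w

def dw_separable_macs(k, c_in, c_out, h, w):
    """Depthwise k x k conv + 1x1 pointwise conv (MobileNet block)."""
    depthwise = k * k * c_in * h * w
    pointwise = c_in * c_out * h * w
    return depthwise + pointwise

def mobilenet_block_macs(k, c_in, c_out, h, w, alpha=1.0, rho=1.0):
    """Apply the width multiplier alpha (thins channels) and the
    resolution multiplier rho (shrinks the feature map)."""
    return dw_separable_macs(k, int(alpha * c_in), int(alpha * c_out),
                             int(rho * h), int(rho * w))

full = conv_macs(3, 64, 128, 56, 56)
sep = dw_separable_macs(3, 64, 128, 56, 56)
ratio = sep / full  # ≈ 1/c_out + 1/k², roughly an 8-9x reduction here
```

This cost structure is why the backbone suits embedded face recognition hardware.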
The pre-trained visible light face living body detection model can be obtained by training on visible light face images marked with labels, where a label indicates whether the visible light face image was captured from a living face or a non-living face. For the process of training the visible light face living body detection model with visible light face images, reference may be made to existing training processes in the art; the disclosure is not specifically limited in this respect.
And S103, inputting the near-infrared sample image into a near-infrared face living body detection model to be trained for living body detection, obtaining a near-infrared face living body detection result, and extracting near-infrared image features of the near-infrared sample image through a second feature extraction network of the near-infrared face living body detection model.
In one possible implementation, the first feature extraction network is identical in structure to the second feature extraction network.
In the embodiment of the disclosure, the near infrared face living body detection model to be trained utilizes the same feature extraction network as the pre-trained visible light face living body detection model, so that the near infrared image features of the near infrared sample images extracted by the near infrared face living body detection model can be kept consistent with the visible light image features of the visible light sample images extracted by the visible light face living body detection model to the greatest extent, and the accuracy of the near infrared face living body detection model is improved.
And S104, acquiring a human face living body true value of the near infrared sample image.
In one possible embodiment, the face living body true value is any one of the following: the visible light face living body detection result of the visible light sample image output by the visible light face living body detection model, or the label of the sample image pair.
The labels corresponding to the sample image pairs are pre-annotated and may include a ground truth value for the near-infrared sample image and a ground truth value for the visible light sample image. In one example, the ground truth of a near-infrared sample image is true, meaning the image was captured from a living face; in another example, it is false, meaning the image was not captured from a living face.
In the embodiments of the disclosure, using either the visible light face living body detection result output by the visible light face living body detection model or the label corresponding to the sample image pair as the face living body true value of the near-infrared sample image ensures the accuracy of that true value, and thereby the accuracy of the final near-infrared face living body detection model.
And S105, adjusting parameters of the near-infrared face living body detection model based on the visible light image features, the near-infrared face living body detection result and the face living body true value to obtain a trained near-infrared face living body detection model.
In adjusting the parameters of the near-infrared face living body detection model, the model loss can be calculated from the difference between the visible light image features and the near-infrared image features together with the difference between the near-infrared face living body detection result and the face living body true value, and the parameters adjusted according to this model loss.
When the near-infrared face living body detection model is trained, other sample image pairs can be selected to continuously train the near-infrared face living body detection model until the preset training ending condition is met, and the trained near-infrared face living body detection model is obtained.
In the embodiments of the disclosure, the near-infrared face living body detection model to be trained learns continually from the visible light face living body detection model obtained by training on visible light face images, so its image features approach those of the visible light model, greatly improving the accuracy and generalization of the near-infrared face living body detection model.
In a possible implementation manner, the adjusting the parameters of the near-infrared face living body detection model based on the visible light image feature, the near-infrared face living body detection result and the face living body true value includes:
step one, obtaining image feature loss according to the difference between the visible light image features and the near infrared image features.
In one example, the image feature loss may be obtained from the difference between the last feature layer of the near-infrared face living body detection model and that of the visible light face living body detection model.
And step two, obtaining result loss according to the difference between the near infrared human face living body detection result and the human face living body true value.
In one example, the prediction result loss may be obtained by computing a cross entropy loss for the near-infrared face living body detection model.
And thirdly, obtaining the model loss of the near infrared human face living body detection model according to the image characteristic loss and the result loss.
In one example, the model loss is the sum of the image feature loss and the prediction result loss.
In one example, the image feature loss Loss1 is calculated at the last feature layer; meanwhile, the cross entropy Loss2 is calculated for the near-infrared face living body detection model as the prediction result loss; finally, the optimization loss of the near-infrared face living body detection model, Loss = Loss1 + Loss2, is taken as its model loss.
And step four, adjusting the parameters according to the model loss.
In the embodiment of the disclosure, parameters of the near-infrared face living body detection model are adjusted based on model loss, so that image features of the near-infrared face living body detection model are enabled to be continuously close to those of the visible light face living body detection model, a near-infrared face living body detection result is enabled to be continuously close to a true value, and accuracy and robustness of the near-infrared face living body detection model are improved.
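The loss combination described in steps one through four can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the disclosure only says the feature loss is taken over the last feature layer, so a mean-squared-error form is assumed here, and a two-class softmax cross entropy is assumed for the prediction result loss; the function names are illustrative, not from the source.

```python
import numpy as np

def feature_loss(vis_feat, nir_feat):
    """Loss1: difference between the last feature layers of the visible-light
    and near-infrared models (an assumed mean-squared-error form)."""
    return float(np.mean((np.asarray(vis_feat) - np.asarray(nir_feat)) ** 2))

def cross_entropy_loss(logits, label):
    """Loss2: cross entropy between the NIR model's live/spoof logits and the
    face liveness ground truth (label 0 = spoof, 1 = live, by assumption)."""
    exp = np.exp(logits - np.max(logits))      # numerically stable softmax
    probs = exp / exp.sum()
    return float(-np.log(probs[label] + 1e-12))

def model_loss(vis_feat, nir_feat, logits, label):
    """Total optimisation target: Loss = Loss1 + Loss2."""
    return feature_loss(vis_feat, nir_feat) + cross_entropy_loss(logits, label)
```

Because Loss1 pulls the near-infrared features toward the (frozen) visible-light features while Loss2 supervises the liveness prediction, minimizing their sum realizes the feature-distillation behavior described above.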
The face living body detection model training method provided by the present disclosure is described below with a specific example, see fig. 1c:
1. A series of original visible light face images and original near infrared face images containing faces are obtained, where the original visible light face image and the original near infrared face image of the same face object form a group, and a plurality of such image pairs are collected.
2. Image preprocessing is carried out on each image in the series to obtain visible light images and near infrared images containing a face; a preset visible light face detection model detects the face in the visible light face image, and a preset near infrared face detection model detects the face in the near infrared face image, yielding the position area of the face.
3. According to the detected face position area, face key points are detected through a preset visible light face key point detection model and a preset near infrared face key point detection model to obtain the key point coordinate values of the face. The face region in the visible light face image and the face region in the near infrared face image are then aligned, yielding an aligned visible light image and an aligned near infrared image.
In one example, a face can be defined to contain 72 key points (x1, y1)…(x72, y72).
In one example, an existing visible light face key point detection model and near-infrared face key point detection model are called with a visible light face image and a near infrared face image containing a face as input, yielding 72 face key point coordinates (x1, y1)…(x72, y72).
In one example, after the face key point coordinates are obtained, the target faces in the visible light face image and the near infrared face image are aligned according to those coordinates; at the same time, only the face area is cropped out via affine transformation and resized to a uniform 224x224, and the face key point coordinates are remapped to new coordinates according to the affine transformation matrix.
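The key point remapping through the affine transformation matrix can be sketched as follows. `remap_keypoints` is a hypothetical helper, and the 2x3 matrix is assumed to be the same one used to warp and crop the image to 224x224.

```python
import numpy as np

def remap_keypoints(keypoints, affine_matrix):
    """Apply a 2x3 affine matrix (the same matrix assumed to warp the face
    region to 224x224) to an (N, 2) array of face key point coordinates."""
    keypoints = np.asarray(keypoints, dtype=float)
    # Homogeneous coordinates: append a 1 to each (x, y) pair.
    pts = np.hstack([keypoints, np.ones((len(keypoints), 1))])
    # Each new point is M @ [x, y, 1]; done for all points at once.
    return pts @ np.asarray(affine_matrix, dtype=float).T
```

Applying the identical matrix to both the pixels and the key points keeps the annotations consistent with the cropped, resized face image.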
4. Image normalization and random data enhancement are performed on the aligned visible light image and the aligned near infrared image, respectively, yielding a visible light sample image and a near infrared sample image.
In one example, the normalization is performed on each pixel of the image in turn; for example, 128 is subtracted from the pixel value of each pixel and the result is divided by 256, so that each pixel value lies in [-0.5, 0.5].
In one example, the normalized image is subjected to random data enhancement using common data augmentation methods such as random flipping, random scaling, and color perturbation.
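A minimal sketch of the normalization and random enhancement just described. `random_augment` is illustrative only: it covers horizontal flipping and a mild brightness perturbation, while random scaling is omitted.

```python
import numpy as np

def normalize(image):
    """Map uint8 pixel values into [-0.5, 0.5] by subtracting 128
    and dividing by 256, as in the example above."""
    return (image.astype(np.float32) - 128.0) / 256.0

def random_augment(image, rng):
    """Hypothetical sketch of the random enhancements mentioned
    (flip + mild color/brightness perturbation; scaling omitted)."""
    if rng.random() < 0.5:
        image = image[:, ::-1]                 # random horizontal flip
    image = image * rng.uniform(0.9, 1.1)      # mild brightness perturbation
    return np.clip(image, -0.5, 0.5)           # keep the normalized range
```

Normalizing both modalities identically keeps the visible-light and near-infrared features on a comparable scale before the feature loss is computed.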
5. The visible light sample image is input into the trained visible light face living body detection model, and the near infrared sample image is input into the near-infrared face living body detection model to be trained.
In one example, both the visible light face living body detection model and the near-infrared face living body detection model use MobileNet as the backbone of the convolutional neural network.
6. The image feature loss Loss1 is computed at the last feature layer of the visible light face living body detection model and the near-infrared face living body detection model.
7. The prediction result loss Loss2 is computed at the fully connected layer of the near-infrared face living body detection model.
In one example, cross entropy Loss2 is calculated at the fully connected layer of the near-infrared face living body detection model as the prediction result loss.
8. The final optimization loss of the near-infrared face living body detection model is Loss = Loss1 + Loss2; this optimization loss is the model loss of the near-infrared face living body detection model.
9. The parameters of the near-infrared face living body detection model are adjusted according to the model loss.
10. Other sample image pairs are selected to continue training the near-infrared face living body detection model until the preset training end condition is met, yielding the trained near-infrared face living body detection model.
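Steps 5 through 10 can be condensed into a single training iteration, sketched below. `vis_model`, `nir_model`, and `optimizer` are hypothetical stand-ins for the MobileNet-based networks and their trainer; the mean-squared-error form of Loss1 is an assumption, since the disclosure only states that the loss is taken over the last feature layer.

```python
import numpy as np

def train_step(vis_model, nir_model, optimizer, vis_image, nir_image, label):
    """One iteration over a sample image pair (steps 5-10 above)."""
    vis_feat = vis_model.extract_features(vis_image)   # frozen visible-light teacher
    nir_feat = nir_model.extract_features(nir_image)   # NIR model under training
    logits = nir_model.classify(nir_feat)              # fully connected layer output

    # Loss1: last-feature-layer image feature loss (assumed L2 form).
    loss1 = float(np.mean((np.asarray(vis_feat) - np.asarray(nir_feat)) ** 2))
    # Loss2: cross entropy against the face liveness truth value.
    exp = np.exp(logits - np.max(logits))
    probs = exp / exp.sum()
    loss2 = float(-np.log(probs[label] + 1e-12))

    optimizer.step(nir_model, loss1 + loss2)           # optimize Loss = Loss1 + Loss2
    return loss1 + loss2
```

In an outer loop this would be called repeatedly on fresh sample image pairs until the preset training end condition is met.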
The training method of the face living body detection model can accelerate the convergence of network training and improve the generalization and accuracy of the face living body detection algorithm when the model is used in real scenes, and it improves the defending effect of the algorithm against unknown attack sample modes.
In practical deployment, the visible light face living body detection model, the near-infrared face living body detection model, or a dual-mode model formed by the two can be used according to the requirements of the scene.
The embodiment of the disclosure also provides a face living body detection method, referring to fig. 2, fig. 2 is a schematic flow chart of the face living body detection method provided by the embodiment of the disclosure, which includes the following steps:
s201, acquiring a near infrared face image to be recognized;
The near infrared face image contains a face. It may be an image of a real face captured by electronic equipment, or one of various attack samples: photo printing attacks (including color-printed, black-and-white-printed, and infrared-printed photos), screen replay attacks (mobile phone screens, tablet screens, computer screens, etc.), or high-fidelity 3D attacks (masks, head models, head coverings, and the like of various materials).
S202, performing feature extraction on the near-infrared face image to be identified by using a second feature extraction network in the pre-trained near-infrared face living body detection model to obtain near-infrared face image features.
And S203, classifying the near-infrared face image features by using a classifier network of the near-infrared face living body detection model to obtain a face living body detection result of the near-infrared face image to be identified.
The near infrared human face living body detection model is obtained by training any one of the human face living body detection model training methods provided by the embodiment of the disclosure.
In the embodiment of the disclosure, the face living body detection result is produced by the near-infrared face living body detection model obtained with the face living body detection model training method provided by the embodiments of the disclosure, so the accuracy and generalization of face living body detection in real-scene use are greatly improved, as is the defending effect against face attack images of unknown attack modes.
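The inference flow of steps S201-S203 can be sketched as follows; `nir_model` is a hypothetical stand-in for the trained near-infrared model, exposing its second feature extraction network and classifier network, and the label convention (index 1 = live) is an assumption.

```python
import numpy as np

def detect_liveness(nir_model, nir_face_image):
    """S202-S203: extract features with the second feature extraction
    network, then classify with the classifier network."""
    features = nir_model.extract_features(nir_face_image)
    logits = nir_model.classify(features)
    # Assumed convention: class index 1 means a live face.
    return "live" if int(np.argmax(logits)) == 1 else "spoof"
```

A preprocessing step mirroring the training pipeline (face detection, alignment, normalization) would precede this call in practice.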
The human face living body detection model training method and the human face living body detection method can improve human face living body judgment performance, can be applied to the human face identification field, and can be particularly applied to various application scenes such as attendance checking, entrance guard, security protection, financial payment and the like in the human face identification field.
Based on the same concept as the face living body detection model training method, the embodiment of the disclosure also provides a face living body detection model training device, referring to fig. 3, fig. 3 is a schematic diagram of the face living body detection model training device provided by the embodiment of the disclosure, and the device includes:
an image acquisition module 31, configured to acquire a sample image pair, where the sample image pair includes a visible light sample image and a near infrared sample image of the same face object;
A first image feature extraction module 32, configured to input the visible light sample image into a pre-trained visible light face living body detection model, and extract visible light image features of the visible light sample image through a first feature extraction network of the visible light face living body detection model;
the second image feature extraction module 33 is configured to input the near-infrared sample image into a near-infrared face living body detection model to be trained to perform living body detection, obtain a near-infrared face living body detection result, and extract near-infrared image features of the near-infrared sample image through a second feature extraction network of the near-infrared face living body detection model;
a truth value acquisition module 34, configured to acquire a face living truth value of the near infrared sample image;
and the parameter adjustment module 35 is configured to adjust parameters of the near-infrared face living body detection model based on the visible light image feature, the near-infrared face living body detection result and the face living body true value, so as to obtain a trained near-infrared face living body detection model.
In one possible implementation, the first feature extraction network is identical in structure to the second feature extraction network.
In one possible implementation, the image acquisition module includes:
the original image acquisition sub-module is used for acquiring an original visible light image and an original near infrared image of the same face object;
the key point acquisition sub-module is used for acquiring a first face key point of the original visible light image and a second face key point of the original near infrared image;
the image alignment sub-module is used for respectively aligning the face area in the original visible light image and the face area in the original near infrared image based on the first face key point and the second face key point to obtain an aligned visible light image and an aligned near infrared image;
and the image normalization sub-module is used for respectively carrying out image normalization processing and random data enhancement processing on the aligned visible light image and the aligned near infrared image to obtain the visible light sample image and the near infrared sample image.
In one possible embodiment, the human face in-vivo truth value includes any one of the following:
and the visible light human face living body detection result of the visible light sample image output by the visible light human face living body detection model and the label of the sample image pair.
In one possible implementation manner, the parameter adjustment module includes:
the image feature loss determining module is used for obtaining image feature loss according to the difference between the visible light image features and the near infrared image features;
the prediction result loss determining module is used for obtaining result loss according to the difference between the near infrared human face living body detection result and the human face living body true value;
the model loss determining module is used for obtaining the model loss of the near infrared human face living body detection model according to the image characteristic loss and the result loss;
and the adjusting module is used for adjusting the parameters according to the model loss.
Based on the same concept as the face living body detection method, the embodiment of the disclosure also provides a face living body detection device, referring to fig. 4, fig. 4 is a schematic diagram of the face living body detection device provided by the embodiment of the disclosure, where the device includes:
an image acquisition module 41, configured to acquire a near infrared face image to be identified;
the image feature extraction module 42 is configured to perform feature extraction on the near-infrared face image to be identified by using a second feature extraction network in the pre-trained near-infrared face living body detection model, so as to obtain near-infrared face image features;
The result output module 43 is configured to classify the features of the near-infrared face image by using a classifier network of the near-infrared face living body detection model, so as to obtain a face living body detection result of the near-infrared face image to be identified;
the near infrared human face living body detection model is obtained by training any one of the human face living body detection model training devices provided by the embodiment of the disclosure.
In the technical scheme of the disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of the user's personal information comply with the relevant laws and regulations and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Wherein, electronic equipment includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any of the face living body detection model training methods or face living body detection methods of the present disclosure.
A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform any of the face living body detection model training methods or face living body detection methods of the present disclosure.
A computer program product comprising a computer program which, when executed by a processor, implements any of the face living body detection model training methods or face living body detection methods of the present disclosure.
Fig. 5 illustrates a schematic block diagram of an example electronic device 500 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the apparatus 500 includes a computing unit 501 that can perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 502 or a computer program loaded from a storage unit 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the device 500 can also be stored. The computing unit 501, ROM 502, and RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Various components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, etc.; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508 such as a magnetic disk, an optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 501 performs the respective methods and processes described above, for example, the face living body detection model training method or the face living body detection method. For example, in some embodiments, the face living body detection model training method or the face living body detection method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the face living body detection model training method or the face living body detection method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the face living body detection model training method or the face living body detection method in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs, the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (12)

1. A human face living body detection model training method comprises the following steps:
obtaining a sample image pair, wherein the sample image pair comprises a visible light sample image and a near infrared sample image of the same human face object;
inputting the visible light sample image into a pre-trained visible light face living body detection model, and extracting visible light image features of the visible light sample image through a first feature extraction network of the visible light face living body detection model;
Inputting the near-infrared sample image into a near-infrared human face living body detection model to be trained for living body detection, obtaining a near-infrared human face living body detection result, and extracting near-infrared image features of the near-infrared sample image through a second feature extraction network of the near-infrared human face living body detection model;
acquiring a human face living body true value of the near infrared sample image;
based on the visible light image features, the near infrared face living body detection results and the face living body true values, adjusting parameters of the near infrared face living body detection model to obtain a trained near infrared face living body detection model;
the adjusting parameters of the near-infrared face living body detection model based on the visible light image features, the near-infrared face living body detection result and the face living body true value to obtain a trained near-infrared face living body detection model comprises the following steps:
acquiring image feature loss according to the difference between the visible light image features and the near infrared image features;
obtaining result loss according to the difference between the near infrared human face living body detection result and the human face living body true value;
Obtaining the model loss of the near infrared human face living body detection model according to the image characteristic loss and the result loss; the model loss is the sum of the image characteristic loss and the result loss;
and adjusting the parameters according to the model loss.
2. The method of claim 1, wherein the first feature extraction network is structurally identical to the second feature extraction network.
3. The method of claim 1, wherein the acquiring a sample image pair comprises:
acquiring an original visible light image and an original near infrared image of the same face object;
acquiring a first face key point of the original visible light image and a second face key point of the original near infrared image;
based on the first face key point and the second face key point, respectively aligning a face region in the original visible light image and a face region in the original near infrared image to obtain an aligned visible light image and an aligned near infrared image;
and respectively carrying out image normalization processing and random data enhancement processing on the aligned visible light image and the aligned near infrared image to obtain the visible light sample image and the near infrared sample image.
4. The method of claim 1, wherein the face in-vivo truth value comprises any of: and the visible light human face living body detection result of the visible light sample image output by the visible light human face living body detection model and the label of the sample image pair.
5. A face in-vivo detection method, comprising:
acquiring a near infrared face image to be identified;
performing feature extraction on the near-infrared face image to be identified by using a second feature extraction network in a pre-trained near-infrared face living body detection model to obtain near-infrared face image features;
classifying the near-infrared face image features by using a classifier network of the near-infrared face living body detection model to obtain a face living body detection result of a near-infrared face image to be identified;
the near infrared human face living body detection model is trained by the method according to any one of claims 1-4.
6. A human face living body detection model training device, comprising:
the image acquisition module is used for acquiring a sample image pair, wherein the sample image pair comprises a visible light sample image and a near infrared sample image of the same face object;
The first image feature extraction module is used for inputting the visible light sample image into a pre-trained visible light face living body detection model and extracting visible light image features of the visible light sample image through a first feature extraction network of the visible light face living body detection model;
the second image feature extraction module is used for inputting the near-infrared sample image into a near-infrared face living body detection model to be trained for living body detection, obtaining a near-infrared face living body detection result, and extracting near-infrared image features of the near-infrared sample image through a second feature extraction network of the near-infrared face living body detection model;
the truth value acquisition module is used for acquiring the human face living truth value of the near infrared sample image;
the parameter adjustment module is used for adjusting parameters of the near-infrared face living body detection model based on the visible light image characteristics, the near-infrared face living body detection result and the face living body true value to obtain a trained near-infrared face living body detection model;
the parameter adjustment module comprises:
the image feature loss determining module is used for obtaining image feature loss according to the difference between the visible light image features and the near infrared image features;
The prediction result loss determining module is used for obtaining result loss according to the difference between the near infrared human face living body detection result and the human face living body true value;
the model loss determining module is used for obtaining the model loss of the near infrared human face living body detection model according to the image characteristic loss and the result loss; the model loss is the sum of the image characteristic loss and the result loss;
and the adjusting module is used for adjusting the parameters according to the model loss.
7. The apparatus of claim 6, wherein the first feature extraction network is structurally identical to the second feature extraction network.
8. The apparatus of claim 6, wherein the image acquisition module comprises:
the original image acquisition sub-module is used for acquiring an original visible light image and an original near infrared image of the same face object;
the key point acquisition sub-module is used for acquiring a first face key point of the original visible light image and a second face key point of the original near infrared image;
the image alignment sub-module is used for respectively aligning the face area in the original visible light image and the face area in the original near infrared image based on the first face key point and the second face key point to obtain an aligned visible light image and an aligned near infrared image;
and the image normalization sub-module is used for respectively carrying out image normalization processing and random data enhancement processing on the aligned visible light image and the aligned near infrared image to obtain the visible light sample image and the near infrared sample image.
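The preprocessing chain of claim 8 (align on keypoints, normalize, randomly augment) might look roughly like this in NumPy. The bounding-box crop, the [-1, 1] normalization, and the flip augmentation are all illustrative stand-ins; the patent does not fix any particular alignment warp, normalization range, or augmentation set.

```python
import numpy as np

_rng = np.random.default_rng(0)

def crop_face(img, keypoints, margin=8):
    # Minimal "alignment": crop the keypoint bounding box plus a margin.
    # Real pipelines usually warp with a similarity transform estimated
    # from the keypoints rather than a plain crop.
    xs, ys = keypoints[:, 0], keypoints[:, 1]
    x0, x1 = max(int(xs.min()) - margin, 0), min(int(xs.max()) + margin, img.shape[1])
    y0, y1 = max(int(ys.min()) - margin, 0), min(int(ys.max()) + margin, img.shape[0])
    return img[y0:y1, x0:x1]

def normalize(img):
    # Map uint8 pixels [0, 255] into [-1, 1] (assumed normalization).
    return img.astype(np.float32) / 127.5 - 1.0

def random_augment(img):
    # A horizontal flip with probability 0.5 stands in for the claimed
    # "random data enhancement processing".
    return img[:, ::-1] if _rng.random() < 0.5 else img

def preprocess_pair(vis_aligned, nir_aligned):
    # Both modalities pass through the same normalize + augment steps,
    # yielding the visible light sample image and near infrared sample image.
    return random_augment(normalize(vis_aligned)), random_augment(normalize(nir_aligned))
```

Aligning both modalities to the same face geometry before training is what makes the per-pixel feature comparison in the loss meaningful.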
9. The apparatus of claim 6, wherein the face liveness truth value comprises any one of:
the visible light human face living body detection result of the visible light sample image output by a visible light human face living body detection model; and the label of the sample image pair.
10. A face living body detection apparatus comprising:
the image acquisition module is used for acquiring a near infrared face image to be identified;
the image feature extraction module is used for extracting features of the near-infrared face image to be identified by utilizing a second feature extraction network in the pre-trained near-infrared face living body detection model to obtain near-infrared face image features;
the result output module is used for classifying the near-infrared face image features by utilizing a classifier network of the near-infrared face living body detection model to obtain a face living body detection result of the near-infrared face image to be identified;
the near infrared human face living body detection model is obtained by training the device according to any one of claims 6-9.
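The inference path of claim 10 (second feature extraction network, then classifier network) can be illustrated with a toy stand-in. The one-layer "network" and two-way softmax below are hypothetical, as the patent does not fix any architecture; only the extract-then-classify flow comes from the claim.

```python
import numpy as np

def extract_features(nir_image, weights):
    # Stand-in for the second feature extraction network: one linear
    # layer + ReLU over the flattened NIR image (hypothetical).
    x = nir_image.reshape(-1)
    return np.maximum(weights @ x, 0.0)

def classify(features, w, b):
    # Classifier network: logits -> softmax over {spoof, live}.
    logits = w @ features + b
    e = np.exp(logits - logits.max())
    probs = e / e.sum()
    return {"live_probability": float(probs[1]), "is_live": bool(probs[1] > 0.5)}
```

At deployment only the near-infrared branch runs; the visible-light network from training is no longer needed, which is the practical payoff of the distillation-style training above.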
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
12. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-5.
CN202210834544.2A 2022-07-14 2022-07-14 Training method and device for human face living body detection model, electronic equipment and storage medium Active CN115205939B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210834544.2A CN115205939B (en) 2022-07-14 2022-07-14 Training method and device for human face living body detection model, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN115205939A (en) 2022-10-18
CN115205939B (en) 2023-07-25

Family

ID=83582450

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210834544.2A Active CN115205939B (en) 2022-07-14 2022-07-14 Training method and device for human face living body detection model, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115205939B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921100A (en) * 2018-07-04 2018-11-30 武汉高德智感科技有限公司 A face recognition method and system based on fusing visible light and infrared images
CN109101871A (en) * 2018-08-07 2018-12-28 北京华捷艾米科技有限公司 A living body detection device based on depth and near-infrared information, detection method, and application thereof
CN110633691A (en) * 2019-09-25 2019-12-31 北京紫睛科技有限公司 Binocular in-vivo detection method based on visible light and near-infrared camera
CN112288663A (en) * 2020-09-24 2021-01-29 山东师范大学 Infrared and visible light image fusion method and system
CN113453618A (en) * 2018-12-18 2021-09-28 日本电气株式会社 Image processing apparatus, image processing method, and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10956714B2 (en) * 2018-05-18 2021-03-23 Beijing Sensetime Technology Development Co., Ltd Method and apparatus for detecting living body, electronic device, and storage medium
CN110084110B (en) * 2019-03-19 2020-12-08 西安电子科技大学 Near-infrared face image recognition method and device, electronic equipment and storage medium
CN112241716B (en) * 2020-10-23 2023-06-20 北京百度网讯科技有限公司 Training sample generation method and device
CN112613471B (en) * 2020-12-31 2023-08-01 中移(杭州)信息技术有限公司 Face living body detection method, device and computer readable storage medium
CN112733757A (en) * 2021-01-15 2021-04-30 深圳市海清视讯科技有限公司 Living body face recognition method based on color image and near-infrared image
CN113128481A (en) * 2021-05-19 2021-07-16 济南博观智能科技有限公司 Face living body detection method, device, equipment and storage medium
CN113343826B (en) * 2021-05-31 2024-02-13 北京百度网讯科技有限公司 Training method of human face living body detection model, human face living body detection method and human face living body detection device
CN113435408A (en) * 2021-07-21 2021-09-24 北京百度网讯科技有限公司 Face living body detection method and device, electronic equipment and storage medium


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Face anti-spoofing with generated near-infrared images;Fangling Jiang,et al;Multimedia Tools and Applications;21299-21323 *
Research on liveness detection technology in face recognition applications; Wan Genxun, et al; China Security Protection Technology and Application (No. 06); 59-63 *
A live face detection method based on dual cameras; Zhang He, et al; Software (No. 07); 57-62 *
A fast recognition algorithm for fusing visible light and near-infrared face images based on sparse representation; Zhao Yingnan, et al; Computer Science (No. 06); 57-62 *

Also Published As

Publication number Publication date
CN115205939A (en) 2022-10-18

Similar Documents

Publication Publication Date Title
CN113343826B (en) Training method of human face living body detection model, human face living body detection method and human face living body detection device
CN113205057B (en) Face living body detection method, device, equipment and storage medium
CN113221771B (en) Living body face recognition method, device, apparatus, storage medium and program product
WO2022188315A1 (en) Video detection method and apparatus, electronic device, and storage medium
EP4080470A2 (en) Method and apparatus for detecting living face
CN112561879A (en) Ambiguity evaluation model training method, image ambiguity evaluation method and device
CN113221767B (en) Method for training living body face recognition model and recognizing living body face and related device
CN113435408A (en) Face living body detection method and device, electronic equipment and storage medium
CN113569708A (en) Living body recognition method, living body recognition device, electronic apparatus, and storage medium
CN115273184B (en) Training method and device for human face living body detection model
CN115116111B (en) Anti-disturbance human face living body detection model training method and device and electronic equipment
CN114445898B (en) Face living body detection method, device, equipment, storage medium and program product
CN113255512B (en) Method, apparatus, device and storage medium for living body identification
CN115205939B (en) Training method and device for human face living body detection model, electronic equipment and storage medium
CN114140320B (en) Image migration method and training method and device of image migration model
CN116052288A (en) Living body detection model training method, living body detection device and electronic equipment
CN114067394A (en) Face living body detection method and device, electronic equipment and storage medium
CN113569707A (en) Living body detection method, living body detection device, electronic apparatus, and storage medium
CN113869253A (en) Living body detection method, living body training device, electronic apparatus, and medium
CN113221766A (en) Method for training living body face recognition model and method for recognizing living body face and related device
CN113033372A (en) Vehicle damage assessment method and device, electronic equipment and computer readable storage medium
CN113705620B (en) Training method and device for image display model, electronic equipment and storage medium
CN115578797B (en) Model training method, image recognition device and electronic equipment
CN116704620A (en) Living body detection method, living body detection device, electronic equipment and storage medium
CN115359574A (en) Human face living body detection and corresponding model training method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant