CN113128481A - Face living body detection method, device, equipment and storage medium - Google Patents

Face living body detection method, device, equipment and storage medium Download PDF

Info

Publication number
CN113128481A
CN113128481A (application number CN202110547658.4A)
Authority
CN
China
Prior art keywords
image
living body
infrared
visible light
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110547658.4A
Other languages
Chinese (zh)
Inventor
王哲
焦任直
梁潇
王薷泉
韩泽
崔玉君
谢会斌
李聪廷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinan Boguan Intelligent Technology Co Ltd
Original Assignee
Jinan Boguan Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinan Boguan Intelligent Technology Co Ltd filed Critical Jinan Boguan Intelligent Technology Co Ltd
Priority to CN202110547658.4A
Publication of CN113128481A
Legal status: Pending

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Abstract

The application discloses a face living body detection method, device, equipment, and storage medium. The method comprises: acquiring an image to be detected, the image to be detected comprising a visible light face image and a corresponding near-infrared face image; predicting whether the visible light face image is a living body face image by using a visible light living body detection model created in advance based on a convolutional neural network; and if so, performing living body detection on the near-infrared face image by using a near-infrared living body detection model created in advance based on a convolutional neural network, and judging from the detection result whether the image to be detected contains a living body face target. By exploiting the different reflection characteristics of the visible light and near-infrared spectral bands, the visible light living body detection model first filters out scenes in which near-infrared discrimination is weak, assisting the near-infrared living body detection model and improving both the accuracy and the efficiency of face living body detection.

Description

Face living body detection method, device, equipment and storage medium
Technical Field
The invention relates to the field of face detection, and in particular to a face living body detection method, device, equipment, and storage medium.
Background
Currently, with the rapid development of face technologies in fields such as security and finance, face recognition terminals and identity verification systems are increasingly widely deployed. Living body detection is a key step in face recognition, yet it is easily attacked by face spoofing such as printed photos, mobile phone videos, and 3D masks, which compromises security. The difference in absorption and reflection intensity between a real human face and a non-living carrier is stronger in the Near-Infrared (NIR) spectral band than in the Visible Light (VIS) band, so performing living body detection on images output by a near-infrared camera has become a development direction. However, while the near-infrared image discriminates screen attacks well, it discriminates high-definition color paper prints poorly, so infrared printed pictures are difficult to distinguish under the NIR camera and the detection accuracy is low. Moreover, a binocular face recognition terminal that performs living body detection on the VIS image and the NIR image separately incurs extra time, slowing the face recognition process and reducing its real-time performance.
In one prior-art approach, the visible light image and the near-infrared image are first input into a VIS-and-NIR-based anti-photo-attack face living body detection model for a joint living body judgment, and the two images are then input into a VIS-and-NIR-based anti-screen-attack face living body detection model for another joint judgment. This approach exploits the respective face image characteristics of VIS and NIR against photo attacks and screen attacks, but it is limited by the diversity of face attack modes: it runs VIS and NIR face detection separately and then combines the two detection results for the living body judgment, which reduces both the capability and the efficiency of face living body detection. In another prior-art approach, the visible light face and the near-infrared face are fused, and living body judgments are made separately on the visible light face image, the near-infrared face image, and the fused image. This method, however, requires three rounds of living body detection, so the living body judgment is time-consuming; it also requires fusing the VIS and NIR images and extracting color features, which is ill-suited to the inherent difference in color-space feature expression between RGB and grayscale images and reduces the capability of face living body detection.
Disclosure of Invention
In view of the above, the present invention provides a method, an apparatus, a device and a medium for detecting a living human face, which can improve the accuracy and efficiency of detecting a living human face. The specific scheme is as follows:
in a first aspect, the present application discloses a face live detection method, including:
acquiring an image to be detected; the image to be detected comprises a visible light face image and a near-infrared face image corresponding to the visible light face image;
predicting whether the visible light face image is a living body face image or not by utilizing a visible light living body detection model which is created in advance based on a convolutional neural network;
and if so, performing living body detection on the near-infrared human face image by using a near-infrared living body detection model which is created in advance based on the convolutional neural network, and judging whether the image to be detected contains a living body human face target or not according to a detection result.
Optionally, the acquiring an image to be detected, where the image to be detected comprises a visible light face image and a near-infrared face image corresponding to the visible light face image, comprises:
acquiring a visible light image by using a visible light camera, and acquiring a near-infrared image corresponding to the visible light image by using a near-infrared camera;
and extracting the face region in the visible light image to obtain the visible light face image, and extracting the face region in the near-infrared image to obtain the near-infrared face image.
Optionally, the process of creating the visible light living body detection model includes:
obtaining a sample image and adding a corresponding label to the sample image to obtain a first training sample set; the sample image comprises a visible light characteristic cutting dummy image, a visible light real person image and a near-infrared printing dummy image;
constructing a first network to be trained on the basis of a convolutional neural network; the first network to be trained comprises a convolutional layer, a full connection layer and a Softmax classification layer;
and training the first network to be trained by utilizing the first training sample set to obtain the visible light living body detection model.
Optionally, the performing living body detection on the near-infrared face image by using a near-infrared living body detection model created in advance based on a convolutional neural network, and determining whether the image to be detected contains a living body face target according to a detection result, including:
performing target feature extraction on the near-infrared human face image by using the near-infrared living body detection model; the target features comprise depth features and infrared classification features;
and judging, according to the class probability corresponding to the depth feature, the class probability corresponding to the infrared classification feature, and a preset probability judgment rule, whether the image to be detected contains the living body face target.
Optionally, after predicting whether the visible light face image is a living body face image, the method further includes:
if the visible light face image is predicted not to be the living body face image through the visible light living body detection model, judging that the living body face target does not exist in the image to be detected;
after the living body detection is carried out on the near-infrared human face image by utilizing a near-infrared living body detection model which is created in advance based on a convolutional neural network, the method further comprises the following steps:
and if the near-infrared living body detection model detects that the near-infrared face image is not the living body face image, judging that the living body face target does not exist in the image to be detected.
Optionally, the process of creating the near-infrared living body detection model includes:
acquiring a near-infrared face living body image and a near-infrared non-face living body image, and adding corresponding depth labels to the near-infrared face living body image and the near-infrared non-face living body image to obtain a second training sample set;
building a network based on the convolutional neural network, the Euclidean distance loss function and the Softmax loss function to obtain a second network to be trained; wherein the Euclidean distance loss function is used for calculating a depth loss value, and the Softmax loss function is used for calculating a classification loss value;
training the second network to be trained by using the second training sample set to obtain the near-infrared living body detection model; and the loss value in the training process is the weighted sum total value of the depth loss value and the classification loss value.
In a second aspect, the present application discloses a human face living body detection device, including:
the image acquisition module is used for acquiring an image to be detected; the image to be detected comprises a visible light face image and a near-infrared face image corresponding to the visible light face image;
the visible light living body detection module is used for predicting whether the visible light face image is a living body face image or not by utilizing a visible light living body detection model which is created in advance based on a convolutional neural network;
and the near-infrared living body detection module is used for carrying out living body detection on the near-infrared human face image by using a near-infrared living body detection model which is created in advance based on a convolutional neural network if the prediction result of the visible light living body detection module is positive, and judging whether the image to be detected contains a living body human face target or not according to the detection result.
Optionally, the near-infrared living body detection module includes:
the characteristic extraction unit is used for extracting target characteristics of the near-infrared human face image by using the near-infrared living body detection model; the target features comprise depth features and infrared classification features;
and the living body judging unit is used for judging whether the image to be detected contains a living body face target or not according to the probability of the depth feature and the probability of the infrared classification feature and a preset probability judging rule.
In a third aspect, the present application discloses an electronic device, comprising:
a memory for storing a computer program;
and the processor is used for executing the computer program to realize the human face living body detection method.
In a fourth aspect, the present application discloses a computer-readable storage medium for storing a computer program; wherein the computer program, when executed by a processor, implements the aforementioned face living body detection method.
In the application, an image to be detected is first obtained, the image to be detected comprising a visible light face image and a corresponding near-infrared face image. A visible light living body detection model created in advance based on a convolutional neural network then predicts whether the visible light face image is a living body face image. If so, a near-infrared living body detection model created in advance based on a convolutional neural network performs living body detection on the near-infrared face image, and whether the image to be detected contains a living body face target is judged from the detection result. In this way, the visible light living body detection model first predicts whether the visible light face image has living body characteristics; at this stage, images that near-infrared detection finds hard to distinguish, such as near-infrared weak scenes like infrared photos and feature-cropped faces, can be effectively filtered out using the detection characteristics of the visible light image. The near-infrared living body detection model then comprehensively analyzes whether the near-infrared face target has living body characteristics, handling detection scenes such as screen attacks and 3D faces that are difficult under visible light, and effectively filtering near-infrared non-living-body features. By combining the advantages of the visible light and near-infrared imaging mechanisms, the visible light detection model assists the near-infrared detection model, improving both the accuracy and the real-time performance of living body detection.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only embodiments of the present invention; those skilled in the art can obtain other drawings from the provided drawings without creative effort.
Fig. 1 is a flowchart of a human face in-vivo detection method provided by the present application;
FIG. 2 is a flow chart illustrating the training of the visible light living body detection model and the near-infrared living body detection model provided herein;
FIG. 3 is a near-infrared living body feature pixel distribution histogram provided herein;
FIG. 4 is a near-infrared non-living feature pixel distribution histogram provided herein;
FIG. 5 is an exemplary illustration of an in vivo training image and corresponding depth label provided herein;
FIG. 6 is an exemplary illustration of a non-live training image and corresponding depth label provided herein;
FIG. 7 is a schematic diagram of the network structure of a specific near-infrared living body detection model provided in the present application;
fig. 8 is a flowchart of a specific human face in-vivo detection method provided in the present application;
fig. 9 is a flowchart of another specific human face live detection method provided in the present application;
FIG. 10 is a schematic structural diagram of a living human face detection apparatus according to the present application;
fig. 11 is a block diagram of an electronic device according to the present application.
Detailed Description
The embodiment of the application discloses a face living body detection method, and as shown in fig. 1, the method can comprise the following steps:
step S11: acquiring an image to be detected; the image to be detected comprises a visible light face image and a near-infrared face image corresponding to the visible light face image.
In this embodiment, an image to be detected is first acquired. The image to be detected comprises a visible light face image and a corresponding near-infrared face image; it can be understood that the two images may be a visible light image and a near-infrared image acquired of the same subject at the same time and from the same angle.
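As a hedged illustration of this pairing step, the same face region can be cropped from the two registered frames; the helper names below are hypothetical, and the face bounding box is assumed to come from an external face detector, which the patent does not specify.

```python
import numpy as np

def extract_face(image: np.ndarray, box: tuple) -> np.ndarray:
    """Crop a face region (x, y, w, h) from an image array of shape (H, W, C)."""
    x, y, w, h = box
    return image[y:y + h, x:x + w]

def paired_face_crops(vis_image: np.ndarray, nir_image: np.ndarray, box: tuple):
    """Apply one face box to both frames.

    Assumes the VIS and NIR cameras are registered, so a bounding box
    found on the visible light frame is also valid for the NIR frame.
    """
    return extract_face(vis_image, box), extract_face(nir_image, box)

# Synthetic stand-ins for simultaneously captured VIS (color) and NIR (gray) frames.
vis = np.zeros((480, 640, 3), dtype=np.uint8)
nir = np.zeros((480, 640, 1), dtype=np.uint8)
vis_face, nir_face = paired_face_crops(vis, nir, (100, 50, 128, 128))
```

The two crops can then be fed to the visible light and near-infrared models respectively.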
It can be understood that attacks by non-living carriers take many forms, such as infrared printed photos, mobile phone screens, and 3D masks. Combining the advantages of the visible light and near-infrared spectral characteristics therefore strengthens the accuracy and robustness of the system. The overall living body detection flow is shown in fig. 2 and comprises living body detection from the two angles of visible light and near-infrared.
Step S12: predicting whether the visible light face image is a living body face image by using a visible light living body detection model created in advance based on a convolutional neural network.
In this embodiment, after the visible light face image and the near-infrared face image are obtained, a visible light living body detection model created in advance based on a convolutional neural network is first used to perform living body detection on the visible light face image, and whether a living body feature exists in the visible light face image is detected, so as to predict whether the visible light face image is a living body face image.
In this embodiment, the process of creating the visible light living body detection model may include: obtaining a sample image and adding a corresponding label to the sample image to obtain a first training sample set; the sample image comprises a visible light characteristic cutting dummy image, a visible light real person image and a near-infrared printing dummy image; constructing a first network to be trained on the basis of a convolutional neural network; the first network to be trained comprises a convolutional layer, a full connection layer and a Softmax classification layer; and training the first network to be trained by utilizing the first training sample set to obtain the visible light living body detection model.
It can be understood that the visible light living body detection model provides a living body detection method for the visible light face image. The model is created as follows:
(1) Selection and labeling of training sample images: a natural light camera and an infrared camera are used to collect 3 types of training samples, namely visible light feature-cropped fake faces (VIS-Crop), visible light real faces (VIS-R), and near-infrared printed fake faces (NIR-F); the 3 types of samples are labeled, i.e. corresponding labels are added, to obtain the first training sample set.
(2) Network construction: the first network to be trained is built from convolutional layers, pooling layers, fully-connected layers, and so on. The first network to be trained may contain 21 convolutional layers, 2 fully-connected layers, and a final Softmax classification layer. The input image size of the convolutional neural network may be 128 × 128. The convolutional layers comprise 19 layers with a 3 × 3 kernel and a stride of 2, 1 layer with a 7 × 7 kernel and a stride of 2, and 1 layer with a 1 × 1 kernel; the weight filler may adopt the msra initialization algorithm.
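The spatial sizes produced by the strided layers above can be checked with the standard convolution output formula. The padding values below are assumptions for illustration, since the patent specifies only kernel sizes and strides:

```python
def conv_out(size: int, kernel: int, stride: int, pad: int) -> int:
    """Output spatial size of a convolution: floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# A 128 x 128 input through an assumed 7x7 stride-2 stem (pad 3),
# then 3x3 stride-2 layers (pad 1): each strided layer halves the map.
s = conv_out(128, 7, 2, 3)   # 64
s = conv_out(s, 3, 2, 1)     # 32
s = conv_out(s, 3, 2, 1)     # 16
```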
(3) Network training: the first network to be trained is trained with the first training sample set to obtain the visible light living body detection model. Specifically, the 3 types of samples can be input into the convolutional neural network for training under the Caffe deep learning framework. Training images are fed in units of batches; the batch size may be 32, i.e. 32 random training images are input each time, and within each batch the training data can be randomly distributed with the per-class proportion drawn from the interval [0.3, 0.7] (the proportions of the 3 classes summing to 1). For example, labels 0, 1, and 2 may be added to the three types of samples, the face images resized to 128 × 128, and the data sent to the convolutional neural network in random proportions for training. During training, the obtained class features are used to generate a corresponding loss value, with SoftmaxWithLoss serving as the loss function of the classification network, where:
Loss_cls = −∑_{i=1}^{n} y_i · log f(z_i),  where f(z_i) = e^{z_i} / ∑_{j=1}^{n} e^{z_j}

wherein n is the number of classes (3), Loss_cls is the classification loss value, and y_i is the class label: y_i = 1 for the ground-truth class and y_i = 0 for the two incorrect classes. f(z_i) is the output of the network classification (Softmax) layer, and z_i is the output of the network feature layer, i.e. the feature vector produced after the CNN backbone.
After the network training is finished, the visible light living body detection model is obtained, and it can be used to extract features from the image to be detected. The visible light face image is input and classified, i.e. the output probabilities are compared against the corresponding intervals to judge the category of the image. The 3 class output probabilities sum to 1, and the discrimination threshold of each class is set to 0.5; the comparison rule is:
class = i,  if prob_i > 0.5,  i ∈ {0, 1, 2}

wherein prob_0 is the probability of the VIS-Crop class, prob_1 of the VIS-R class, and prob_2 of the NIR-F class. When an output probability value falls in the corresponding interval, the current input image is judged to be of that class. If the output result is 0 or 2, the current input image is judged to be an attack mode and the current target is directly judged to be a non-living body; if the output result is 1, the image is judged to be a real face and the next detection is carried out. The visible light living body detection model can effectively distinguish features such as locally cropped face features and infrared printed photos, filtering near-infrared weak scenes before near-infrared detection is performed, thereby ensuring the robustness of the algorithm and strengthening the reliability of the system.
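The 0.5-threshold decision over the three class probabilities can be sketched as follows; the function and label names are illustrative, not from the patent:

```python
def vis_stage_decision(probs):
    """First-stage decision over (VIS-Crop, VIS-R, NIR-F) probabilities.

    probs must sum to 1; a class is accepted when its probability
    exceeds the 0.5 threshold. Class 1 (VIS-R, real face) passes to
    the near-infrared stage; classes 0 and 2 are attack modes.
    """
    assert abs(sum(probs) - 1.0) < 1e-6
    if probs[1] > 0.5:
        return "pass-to-nir"
    return "non-living"
```

For example, `vis_stage_decision([0.1, 0.8, 0.1])` forwards the image to the near-infrared model, while a dominant class 0 or 2 rejects it immediately.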
Step S13: if so, performing living body detection on the near-infrared face image by using a near-infrared living body detection model created in advance based on a convolutional neural network, and judging from the detection result whether the image to be detected contains a living body face target.
In this embodiment, if the visible light living body detection model detects that the visible light face image contains living body features, the near-infrared living body detection model created in advance based on a convolutional neural network performs living body detection on the near-infrared face image, and whether the image to be detected contains a living body face target is judged from the detection result. In the overall living body detection flow, if the visible light living body detection model of the primary judgment stage finds no living face target, the input is directly judged to be a non-living body; if a living face target is found, the near-infrared face image is input into the near-infrared living body detection model for detection.
It can be understood that attack media such as mobile phone screens and 3D masks are only weakly discriminable in the visible light scene, and some attack media are difficult to distinguish by visible light features under particular illumination or angles. Therefore, in this embodiment the infrared face matched to the target that passed the first judgment is selected for a second judgment, further improving the accuracy of living body detection.
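The two-stage cascade described above, visible light first and near-infrared only on a pass, can be sketched generically; the two model callables here are placeholders for the trained networks:

```python
from typing import Callable
import numpy as np

def detect_living_face(vis_face: np.ndarray,
                       nir_face: np.ndarray,
                       vis_model: Callable[[np.ndarray], bool],
                       nir_model: Callable[[np.ndarray], bool]) -> bool:
    """Two-stage cascade: the NIR model runs only if the VIS model passes."""
    if not vis_model(vis_face):
        return False            # primary judgment: non-living, NIR stage skipped
    return nir_model(nir_face)  # secondary judgment on the paired NIR face
```

Skipping the NIR stage on a VIS rejection is what saves time relative to always running both models.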
In this embodiment, performing living body detection on the near-infrared face image by using the near-infrared living body detection model created in advance based on a convolutional neural network, and judging from the detection result whether the image to be detected contains a living body face target, may include: performing target feature extraction on the near-infrared face image by using the near-infrared living body detection model, the target features comprising a depth feature and an infrared classification feature; and judging, according to the class probability corresponding to the depth feature, the class probability corresponding to the infrared classification feature, and a preset probability judgment rule, whether the image to be detected contains the living body face target.
It can be understood that the near-infrared face has more distinctive infrared classification features, such as bright pupils, so near-infrared secondary non-living-body filtering can ensure higher accuracy. The pixel distribution of the near-infrared living body features shown in fig. 3 differs clearly in its distribution interval from the pixel distribution of the near-infrared non-living-body features shown in fig. 4. When the near-infrared living body detection model extracts features from an image, more than one feature is extracted from the image to be detected: specifically a depth feature and an infrared classification feature, where the depth feature represents the probability that a depth image corresponding to a living face exists, the depth information of the image to be detected including, but not limited to, face distance information and the concave-convex shape of the face.
In this embodiment, the process of creating the near-infrared living body detection model may include: acquiring a near-infrared face living body image and a near-infrared non-face living body image, and adding corresponding depth labels to the near-infrared face living body image and the near-infrared non-face living body image to obtain a second training sample set; building a network based on the convolutional neural network, the Euclidean distance loss function and the Softmax loss function to obtain a second network to be trained; wherein the Euclidean distance loss function is used for calculating a depth loss value, and the Softmax loss function is used for calculating a classification loss value; training the second network to be trained by using the second training sample set to obtain the near-infrared living body detection model; and the loss value in the training process is the weighted sum total value of the depth loss value and the classification loss value.
It can be understood that the near-infrared part of the overall living body detection flow comprises a training stage and an application stage: during training the input data are training images, while during application the input is the image to be detected. The creation of the near-infrared living body detection model specifically comprises:
(1) Selection and labeling of training sample images: living body training images and corresponding depth labels, as shown for example in fig. 5, and non-living-body training images and corresponding depth labels, as shown for example in fig. 6, are acquired to obtain the second training sample set.
(2) Network construction: a network is built from convolutional layers, pooling layers, fully-connected layers, and so on, together with the Euclidean distance loss function and the Softmax loss function, to obtain the second network to be trained. Specifically, the structure of the second network to be trained is shown in fig. 7: the network extracts image feature vectors, with a classification loss branch on the left and a depth loss branch on the right, and the two are weighted and summed to obtain the final loss value.
(3) Network training: the second network to be trained is trained with the second training sample set to obtain the near-infrared living body detection model. During training, each training image is input into the second network to be trained, and the obtained depth feature and classification feature are used to generate the corresponding loss values, namely Depth Loss and Class Loss. In each iteration, training images and labels are input in batch units; as shown in fig. 5 and fig. 6, living body labels guide the model to learn the features of the face region, whereas non-living targets provide no such guidance. The batch size may be 32, that is, 32 random training samples are input at a time, and each training batch can be randomly distributed according to the interval [0.2, 0.8].
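The batching and weighted-loss step above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: it assumes the [0.2, 0.8] interval refers to the live/non-live mixing ratio of each batch, and the names (`sample_batch`, `total_loss`) and the weight values are hypothetical.

```python
import random

BATCH_SIZE = 32  # 32 random training samples per batch, as described above

def sample_batch(live_pool, spoof_pool, rng=random):
    # Assumption: the [0.2, 0.8] interval is the random live/non-live ratio.
    live_ratio = rng.uniform(0.2, 0.8)
    n_live = round(BATCH_SIZE * live_ratio)
    batch = [(x, 1) for x in rng.choices(live_pool, k=n_live)]          # living, label 1
    batch += [(x, 0) for x in rng.choices(spoof_pool, k=BATCH_SIZE - n_live)]  # non-living, label 0
    rng.shuffle(batch)
    return batch

def total_loss(depth_loss, class_loss, lam1=0.5, lam2=0.5):
    # Loss_multi = λ1·Loss_depth + λ2·Loss_cls (λ values illustrative;
    # the patent does not fix them)
    return lam1 * depth_loss + lam2 * class_loss

batch = sample_batch(list(range(100)), list(range(100, 200)))
loss = total_loss(0.8, 0.4)
```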
The loss function in the training process is divided into two parts, the depth loss Loss_depth and the classification loss Loss_cls. For Loss_depth, the face images are first resized to 224×224 and input into the network; the corresponding feature maps are extracted in the forward pass, and the depth loss value is computed from them. Since the model needs to be trained on multiple objectives, the individual losses must be weighted so that the total loss value is generated together with the loss values of the other features, and the model adjusts its current parameters through back propagation. Loss_depth can be expressed as follows:

Loss_depth = Σ_{i=1…n} ‖K_i ⊛ D_pred − K_i ⊛ D_label‖₂

where n is the total number of preset convolution kernels (n = 8), D_pred is the feature map output by the network, D_label is the depth training label, ‖·‖₂ denotes the two-norm, ⊛ denotes the convolution operation, and K_i is the i-th preset convolution kernel (the eight kernels themselves are given in the original formula images). The 8 convolution kernels correspond to i = 1 to 8; when the model extracts the feature map in the forward pass, the feature map is convolved with each of the 8 kernels, and the two-norm of its difference from the identically convolved depth label is accumulated into the depth loss value.
For Loss_cls, the labels of non-living and living samples are first set to 0 and 1 respectively, the face images are resized to 224×224 and fed into the network for training, and the obtained class features are used to generate the corresponding loss values. The classification network uses SoftmaxWithLoss as its loss function:

Loss_cls = −Σ_{i=1…n} y_i · log f(z_i)

where n is the number of classes (n = 2), Loss_cls is the classification loss value, y_i is the classification label (y_i = 1 for the living body class, y_i = 0 for the non-living class), f(z_i) is the output of the network classification layer, and z_i is the feature vector output by the network feature layer, i.e. the output of the feature layer behind the backbone network in the near-infrared living body detection model.
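A minimal numpy sketch of the SoftmaxWithLoss computation above: `z` is the feature-layer output for one sample, and with a one-hot label the loss reduces to the negative log-probability of the true class. The names and the example logits are illustrative, not from the patent.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))   # shift for numerical stability
    return e / e.sum()

def cls_loss(z, label):
    # Loss_cls = -sum_i y_i * log f(z_i); with one-hot y this is
    # the negative log-probability of the labelled class.
    return -np.log(softmax(z)[label])

z = np.array([0.3, 2.1])        # illustrative two-class logits
loss_live = cls_loss(z, 1)      # sample labelled as living body (y = 1)
loss_spoof = cls_loss(z, 0)     # same logits scored against the non-living label
```

The correct class yields the lower loss, which is what drives the classification branch during back propagation.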
After Loss_depth and Loss_cls are obtained, the final loss value Loss_multi is calculated by weighted summation:

Loss_multi = λ₁ · Loss_depth + λ₂ · Loss_cls

where λ₁ and λ₂ are the weighting coefficients of the depth loss and the classification loss; their specific values are not limited. Loss_depth is the depth loss value obtained by taking the depth map label as the training label and adopting the Euclidean loss function; Loss_cls is the classification loss value obtained by taking 0 and 1 as the training labels and adopting the SoftmaxWithLoss function.
After the model extracts the depth-map features and the classification features from the near-infrared face image, each is classified separately, i.e. whether the image to be detected is a living target is judged from both angles. The class score output by the model for each feature is compared against its preset probability interval; the probability interval for both the depth feature and the classification feature is (0.5, 1), and the judgment result is:

Result = 1, if prob_depth ∈ (0.5, 1) and prob_cls ∈ (0.5, 1); Result = 0, otherwise

where prob_depth is the depth probability and prob_cls is the classification probability. When the judgment result is 1, both feature probabilities are greater than 0.5, and the image to be detected is determined to be a living target.
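The decision rule above is a simple conjunction of two threshold tests; a sketch (function name illustrative):

```python
# The image is declared a living target only when BOTH the depth
# probability and the classification probability fall in (0.5, 1).
def is_live(prob_depth, prob_cls, low=0.5):
    return 1 if (low < prob_depth <= 1.0 and low < prob_cls <= 1.0) else 0

assert is_live(0.9, 0.8) == 1   # both probabilities above 0.5 -> living
assert is_live(0.9, 0.3) == 0   # classification branch fails -> non-living
```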
The near-infrared living body detection model is the main network for living body detection, and the size of its classification layer is constrained by several factors and cannot be expanded arbitrarily; an overly limited classification layer degrades the accuracy of the convolutional neural network. Therefore, to improve the recognition accuracy of the convolutional neural network model, the network is pruned to reduce the model size, and the model is trained on GPU equipment with sufficiently large video memory.
Thus, the near-infrared living body detection model extracts both the depth features and the classification features corresponding to the near-infrared face image; because features of different classes characterise the target from different angles, the judgment of the face target is more detailed and accurate. After judging whether each feature falls in its corresponding probability interval, whether the target is a living body is judged by class, ensuring the accuracy of living body detection. Compared with classification and discrimination using the 0/1 labels alone, this approach has better robustness. Moreover, there is no need for the visible light camera and the near-infrared camera to continuously acquire multiple images for multi-path parallel processing, so the detection speed is high, and the problems of slow and inaccurate face living body recognition are solved.
From the above, in this embodiment the visible light living body detection model filters out near-infrared-weak scenes in the first stage, and the structurally simplified visible light auxiliary model reduces the time consumed by forward calculation. For special scenes such as feature cutting, face printing and infrared photos, this approach effectively filters more than 95% of basic non-living attacks; paper non-living targets have stronger visible-spectrum reflection, so the accuracy is higher. In the second stage, the near-infrared living body detection model performs living body detection again on targets in which living features were detected in the first stage, achieving more accurate filtering; the weighted combination of the face depth feature loss and the infrared classification feature loss strengthens the feature attention mechanism of the model. The advantages of the reflection characteristics of the visible light and near-infrared spectral bands are thus combined, enhancing the reliability and accuracy of the system: in a typical indoor test scene, the real-person pass rate is ensured to exceed 95% at a 1% false-acceptance rate for dummies, meeting the requirements of equipment such as face recognition terminals and access control terminals.
The embodiment of the application discloses a specific human face living body detection method, and as shown in fig. 8, the method can include the following steps:
step S21: the method comprises the steps of acquiring a visible light image by using a visible light camera, and acquiring a near-infrared image corresponding to the visible light image by using a near-infrared camera.
In this embodiment, for example, as shown in fig. 9, a visible light camera may be used to acquire a visible light image, and a near-infrared camera may be used to acquire a near-infrared image corresponding to the visible light image.
Step S22: and extracting the face region in the visible light image to obtain a visible light face image, and extracting the face region in the near-infrared image to obtain a near-infrared face image.
In this embodiment, a common visible light face detection model may be used to extract the face region in the visible light image to obtain the visible light face image, and a common near-infrared face detection model may be used to extract the face region in the near-infrared image to obtain the near-infrared face image. Specifically, the face features detected under visible light can be extracted and screened layer by layer by a pre-trained convolutional neural network model; because deep learning has feature-learning capability, supervised learning that adjusts the neural network parameters in a controllable way with labelled training data can be achieved. The face region image obtained by the visible light face detection model is converted to a fixed size and input into the visible light living body detection model. Meanwhile, the near-infrared face detection model continuously acquires and processes multiple images to obtain the face target region, ensuring the detection speed of the system while remaining unaffected by factors such as personnel movement and illumination changes.
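The crop-and-resize step of S22 can be sketched as follows. This is illustrative only: the bounding box is assumed to come from a separate face detector (not shown), and the nearest-neighbour resizing and the 224×224 target size stand in for whatever interpolation and input size the real system uses.

```python
import numpy as np

def crop_and_resize(frame, box, size=224):
    # box = (x, y, w, h): hypothetical face bounding box from the detector.
    x, y, w, h = box
    face = frame[y:y+h, x:x+w]
    # Nearest-neighbour index maps for the fixed-size model input.
    ys = (np.arange(size) * h // size).clip(0, h - 1)
    xs = (np.arange(size) * w // size).clip(0, w - 1)
    return face[np.ix_(ys, xs)]

frame = np.zeros((480, 640), dtype=np.uint8)   # e.g. one near-infrared frame
patch = crop_and_resize(frame, (100, 80, 160, 200))
assert patch.shape == (224, 224)               # fixed model input size
```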
Step S23: and predicting whether the visible light face image is a living body face image or not by utilizing a visible light living body detection model which is created in advance based on the convolutional neural network.
Step S24: and if the visible light human face image is predicted not to be the living human face image through the visible light living body detection model, judging that the living human face target does not exist in the image to be detected.
In this embodiment, if it is predicted by the visible light living body detection model that the visible light face image is not the living body face image, it is determined that the living body face target does not exist in the image to be detected, that is, subsequent detection is not performed.
Step S25: and if so, performing living body detection on the near-infrared human face image by using a near-infrared living body detection model which is created in advance based on the convolutional neural network, and judging whether the image to be detected contains a living body human face target or not according to a detection result.
Step S26: and if the near-infrared living body detection model detects that the near-infrared face image is not the living body face image, judging that the living body face target does not exist in the image to be detected.
In this embodiment, if the near-infrared living body detection model detects that the near-infrared face image is not a living body face image, that is, no living features are detected, it is determined that no living face target exists in the image to be detected.
For the specific processes of step S23 and step S25, reference may be made to the corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
From the above, in this embodiment a visible light camera is used to acquire a visible light image, and a near-infrared camera is used to acquire the corresponding near-infrared image. The face region in the visible light image is then extracted to obtain the visible light face image, and the face region in the near-infrared image is extracted to obtain the near-infrared face image. Living body detection is then performed with the visible light living body detection model: if it detects no living features, the target is directly judged not to be a living target; if living features are detected, near-infrared living body detection is further performed. By exploiting the different reflection characteristics of living faces and non-living carriers in the visible and near-infrared spectral bands, the visible light living body detection model filters near-infrared-weak scenes in the first stage, and the near-infrared living body detection model then judges living bodies in the second stage, improving the accuracy and efficiency of face living body detection.
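The two-stage flow summarised above is a cascade: the visible-light model acts as a fast first-stage filter, and only candidates it passes reach the near-infrared model. A sketch under the assumption that each trained model is a callable returning a live-probability in [0, 1] (names and threshold illustrative):

```python
def detect_live_face(vis_face, nir_face, vis_model, nir_model, thresh=0.5):
    # Stage 1: visible-light liveness filter; rejects near-infrared-weak
    # attacks (infrared photos, feature cutting, ...) cheaply.
    if vis_model(vis_face) <= thresh:
        return False                       # no living face target
    # Stage 2: near-infrared check on the surviving candidates
    # (screen attacks, 3D faces, ...).
    return nir_model(nir_face) > thresh

# Toy callables standing in for the trained CNN models.
assert detect_live_face(None, None, lambda x: 0.9, lambda x: 0.8) is True
assert detect_live_face(None, None, lambda x: 0.2, lambda x: 0.9) is False
```

The design point is that the cheap first stage short-circuits the pipeline, so the more expensive near-infrared model only runs on candidates that already look live under visible light.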
Correspondingly, the embodiment of the present application further discloses a human face living body detection device, as shown in fig. 10, the device includes:
the image acquisition module 11 is used for acquiring an image to be detected; the image to be detected comprises a visible light face image and a near-infrared face image corresponding to the visible light face image;
the visible light living body detection module 12 is configured to predict whether the visible light face image is a living body face image or not by using a visible light living body detection model created in advance based on a convolutional neural network;
and the near-infrared living body detection module 13 is configured to, if the prediction result of the visible light living body detection module is yes, perform living body detection on the near-infrared face image by using a near-infrared living body detection model created in advance based on a convolutional neural network, and determine whether the image to be detected includes a living body face target according to the detection result.
Firstly, an image to be detected is acquired; the image to be detected comprises a visible light face image and a near-infrared face image corresponding to the visible light face image. Then, whether the visible light face image is a living body face image is predicted by the visible light living body detection model created in advance based on a convolutional neural network; if so, living body detection is performed on the near-infrared face image by the near-infrared living body detection model created in advance based on a convolutional neural network, and whether the image to be detected contains a living face target is judged according to the detection result. Thus, the visible light living body detection model first predicts whether the visible light face image has living features; at this point, the detection characteristics of the visible light image can effectively filter images that near-infrared detection struggles to distinguish, such as near-infrared-weak scenes like infrared photos and feature cutting. The near-infrared living body detection model then comprehensively analyses whether the near-infrared face target has living features, handling detection scenes that are difficult for visible light, such as screen attacks and 3D faces, and effectively filtering near-infrared non-living features. By exploiting the respective advantages of the visible light and near-infrared imaging mechanisms, the visible light detection model assists the near-infrared detection model, improving both the accuracy and the real-time performance of living body detection.
In some specific embodiments, the image obtaining module 11 may specifically include:
the image acquisition unit is used for acquiring a visible light image by using a visible light camera and acquiring a near-infrared image corresponding to the visible light image by using a near-infrared camera;
and the face region extraction unit is used for extracting the face region in the visible light image to obtain the visible light face image, and extracting the face region in the near-infrared image to obtain the near-infrared face image.
In some embodiments, the near-infrared living body detection module 13 may specifically include:
the characteristic extraction unit is used for extracting target characteristics of the near-infrared human face image by using the near-infrared living body detection model; the target features comprise depth features and infrared classification features;
and the living body judging unit is used for judging whether the image to be detected contains a living body face target or not according to the probability of the depth feature and the probability of the infrared classification feature and a preset probability judging rule.
In some specific embodiments, the living human face detection apparatus may specifically include:
the first non-living body judging unit is used for judging that a living body face target does not exist in the image to be detected if the visible light face image is predicted not to be the living body face image through the visible light living body detection model;
and the second non-living body judging unit is used for judging that a living body face target does not exist in the image to be detected if the near-infrared living body detection model detects that the near-infrared face image is not the living body face image.
Further, the embodiment of the present application also discloses an electronic device, which is shown in fig. 11, and the content in the drawing cannot be considered as any limitation to the application scope.
Fig. 11 is a schematic structural diagram of an electronic device 20 according to an embodiment of the present disclosure. The electronic device 20 may specifically include: at least one processor 21, at least one memory 22, a power supply 23, a communication interface 24, an input output interface 25, and a communication bus 26. Wherein, the memory 22 is used for storing a computer program, and the computer program is loaded and executed by the processor 21 to implement the relevant steps in the living human face detection method disclosed in any of the foregoing embodiments.
In this embodiment, the power supply 23 is configured to provide a working voltage for each hardware device on the electronic device 20; the communication interface 24 can create a data transmission channel between the electronic device 20 and an external device, and a communication protocol followed by the communication interface is any communication protocol applicable to the technical solution of the present application, and is not specifically limited herein; the input/output interface 25 is configured to obtain external input data or output data to the outside, and a specific interface type thereof may be selected according to specific application requirements, which is not specifically limited herein.
In addition, the memory 22 is used as a carrier for resource storage, and may be a read-only memory, a random access memory, a magnetic disk or an optical disk, etc., the resources stored thereon include an operating system 221, a computer program 222, data 223 including an image to be detected, etc., and the storage manner may be a transient storage or a permanent storage.
The operating system 221 is used for managing and controlling each hardware device and the computer program 222 on the electronic device 20, so as to realize the operation and processing of the mass data 223 in the memory 22 by the processor 21, and may be Windows Server, Netware, Unix, Linux, and the like. The computer program 222 may further include a computer program that can be used to perform other specific tasks in addition to the computer program that can be used to perform the face living body detection method performed by the electronic device 20 disclosed in any of the foregoing embodiments.
Further, an embodiment of the present application further discloses a computer storage medium, where computer-executable instructions are stored in the computer storage medium, and when the computer-executable instructions are loaded and executed by a processor, the steps of the face liveness detection method disclosed in any of the foregoing embodiments are implemented.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The method, the device, the equipment and the medium for detecting the living human face provided by the invention are described in detail, specific examples are applied in the text to explain the principle and the implementation mode of the invention, and the description of the examples is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A face living body detection method is characterized by comprising the following steps:
acquiring an image to be detected; the image to be detected comprises a visible light face image and a near-infrared face image corresponding to the visible light face image;
predicting whether the visible light face image is a living body face image or not by utilizing a visible light living body detection model which is created in advance based on a convolutional neural network;
and if so, performing living body detection on the near-infrared human face image by using a near-infrared living body detection model which is created in advance based on the convolutional neural network, and judging whether the image to be detected contains a living body human face target or not according to a detection result.
2. The face living body detection method according to claim 1, wherein the acquiring an image to be detected, the image to be detected comprising a visible light face image and a near-infrared face image corresponding to the visible light face image, comprises:
acquiring a visible light image by using a visible light camera, and acquiring a near-infrared image corresponding to the visible light image by using a near-infrared camera;
and extracting the face region in the visible light image to obtain the visible light face image, and extracting the face region in the near-infrared image to obtain the near-infrared face image.
3. The human face living body detection method according to claim 1, wherein the creation process of the visible light living body detection model comprises:
obtaining a sample image and adding a corresponding label to the sample image to obtain a first training sample set; the sample image comprises a visible light characteristic cutting dummy image, a visible light real person image and a near-infrared printing dummy image;
constructing a first network to be trained on the basis of a convolutional neural network; the first network to be trained comprises a convolutional layer, a full connection layer and a Softmax classification layer;
and training the first network to be trained by utilizing the first training sample set to obtain the visible light living body detection model.
4. The human face in-vivo detection method according to claim 1, wherein the performing in-vivo detection on the near-infrared human face image by using a near-infrared in-vivo detection model created in advance based on a convolutional neural network, and determining whether the image to be detected contains a living human face target according to a detection result comprises:
performing target feature extraction on the near-infrared human face image by using the near-infrared living body detection model; the target features comprise depth features and infrared classification features;
and judging whether the image to be detected contains the living body face target or not according to a preset probability judgment rule according to the class probability corresponding to the depth feature and the class probability corresponding to the infrared classification feature.
5. The method for detecting living human face according to claim 1, wherein after predicting whether the visible light human face image is a living human face image, the method further comprises:
if the visible light face image is predicted not to be the living body face image through the visible light living body detection model, judging that the living body face target does not exist in the image to be detected;
after the living body detection is carried out on the near-infrared human face image by utilizing a near-infrared living body detection model which is created in advance based on a convolutional neural network, the method further comprises the following steps:
and if the near-infrared living body detection model detects that the near-infrared face image is not the living body face image, judging that the living body face target does not exist in the image to be detected.
6. The face in-vivo detection method according to any one of claims 1 to 5, wherein the process of creating the near-infrared in-vivo detection model comprises the following steps:
acquiring a near-infrared face living body image and a near-infrared non-face living body image, and adding corresponding depth labels to the near-infrared face living body image and the near-infrared non-face living body image to obtain a second training sample set;
building a network based on the convolutional neural network, the Euclidean distance loss function and the Softmax loss function to obtain a second network to be trained; wherein the Euclidean distance loss function is used for calculating a depth loss value, and the Softmax loss function is used for calculating a classification loss value;
training the second network to be trained by using the second training sample set to obtain the near-infrared living body detection model; and the loss value in the training process is the weighted sum total value of the depth loss value and the classification loss value.
7. A face liveness detection device, comprising:
the image acquisition module is used for acquiring an image to be detected; the image to be detected comprises a visible light face image and a near-infrared face image corresponding to the visible light face image;
the visible light living body detection module is used for predicting whether the visible light face image is a living body face image or not by utilizing a visible light living body detection model which is created in advance based on a convolutional neural network;
and the near-infrared living body detection module is used for carrying out living body detection on the near-infrared human face image by using a near-infrared living body detection model which is created in advance based on a convolutional neural network if the prediction result of the visible light living body detection module is positive, and judging whether the image to be detected contains a living body human face target or not according to the detection result.
8. The human face in-vivo detection device of claim 7, wherein the near-infrared in-vivo detection module comprises:
the characteristic extraction unit is used for extracting target characteristics of the near-infrared human face image by using the near-infrared living body detection model; the target features comprise depth features and infrared classification features;
and the living body judging unit is used for judging whether the image to be detected contains a living body face target or not according to the probability of the depth feature and the probability of the infrared classification feature and a preset probability judging rule.
9. An electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the face liveness detection method according to any one of claims 1 to 6.
10. A computer-readable storage medium for storing a computer program; wherein the computer program when executed by the processor implements the face liveness detection method as claimed in any one of claims 1 to 6.
CN202110547658.4A 2021-05-19 2021-05-19 Face living body detection method, device, equipment and storage medium Pending CN113128481A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110547658.4A CN113128481A (en) 2021-05-19 2021-05-19 Face living body detection method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113128481A true CN113128481A (en) 2021-07-16

Family

ID=76782697

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110547658.4A Pending CN113128481A (en) 2021-05-19 2021-05-19 Face living body detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113128481A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113724250A (en) * 2021-09-26 2021-11-30 新希望六和股份有限公司 Animal target counting method based on double-optical camera
CN113723243A (en) * 2021-08-20 2021-11-30 南京华图信息技术有限公司 Thermal infrared image face recognition method for wearing mask and application
US11443550B2 (en) * 2020-06-05 2022-09-13 Jilin Qs Spectrum Data Technology Co. Ltd Face recognition monitoring system based on spectrum and multi-band fusion and recognition method using same
CN115205939A (en) * 2022-07-14 2022-10-18 北京百度网讯科技有限公司 Face living body detection model training method and device, electronic equipment and storage medium
CN115601818A (en) * 2022-11-29 2023-01-13 海豚乐智科技(成都)有限责任公司(Cn) Lightweight visible light living body detection method and device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101964056A (en) * 2010-10-26 2011-02-02 徐勇 Bimodal face authentication method with living body detection function and system
CN106355169A (en) * 2016-11-11 2017-01-25 成都优势互动科技有限公司 Infrared living body face image acquisition device, identification device and treatment method thereof
CN106372601A (en) * 2016-08-31 2017-02-01 上海依图网络科技有限公司 In vivo detection method based on infrared visible binocular image and device
CN107590430A (en) * 2017-07-26 2018-01-16 百度在线网络技术(北京)有限公司 Biopsy method, device, equipment and storage medium
CN109446981A (en) * 2018-10-25 2019-03-08 腾讯科技(深圳)有限公司 A kind of face's In vivo detection, identity identifying method and device
CN110443192A (en) * 2019-08-01 2019-11-12 中国科学院重庆绿色智能技术研究院 A kind of non-interactive type human face in-vivo detection method and system based on binocular image
CN210402507U (en) * 2019-04-04 2020-04-24 杭州臻财科技有限公司 Access control and access control management system based on face recognition system
CN111126366A (en) * 2020-04-01 2020-05-08 湖南极点智能科技有限公司 Method, device, equipment and storage medium for distinguishing living human face
CN112052808A (en) * 2020-09-10 2020-12-08 河南威虎智能科技有限公司 Human face living body detection method, device and equipment for refining depth map and storage medium
CN112818821A (en) * 2021-01-28 2021-05-18 广州广电卓识智能科技有限公司 Human face acquisition source detection method and device based on visible light and infrared light

Similar Documents

Publication Publication Date Title
Gaur et al. Video flame and smoke based fire detection algorithms: A literature review
CN113128481A (en) Face living body detection method, device, equipment and storage medium
CN110909690B (en) Method for detecting occluded face image based on region generation
CN111767882A (en) Multi-mode pedestrian detection method based on improved YOLO model
CN110569808A (en) Living body detection method and device and computer equipment
CN110298297B (en) Flame identification method and device
CN108230291B (en) Object recognition system training method, object recognition method, device and electronic equipment
CN107871314B (en) Sensitive image identification method and device
CN110390308B (en) Video behavior identification method based on space-time confrontation generation network
CN112052830B (en) Method, device and computer storage medium for face detection
CN116311214B (en) License plate recognition method and device
CN115240280A (en) Construction method of human face living body detection classification model, detection classification method and device
CN115049954A (en) Target identification method, device, electronic equipment and medium
EP4332910A1 (en) Behavior detection method, electronic device, and computer readable storage medium
CN113723215B (en) Training method of living body detection network, living body detection method and device
CN115546906A (en) System and method for detecting human face activity in image and electronic equipment
CN114882557A (en) Face recognition method and device
CN114387496A (en) Target detection method and electronic equipment
CN116597527B (en) Living body detection method, living body detection device, electronic equipment and computer readable storage medium
CN112818911A (en) Living body detection method, computer device, and storage medium
CN117094986B (en) Self-adaptive defect detection method based on small sample and terminal equipment
CN115512428B (en) Face living body judging method, system, device and storage medium
CN114155475B (en) Method, device and medium for identifying end-to-end personnel actions under view angle of unmanned aerial vehicle
CN116071581A (en) Recognition of attack-resistant image and training method and system of recognition model thereof
Abou-Zbiba et al. Toward Reliable Mobile CrowdSensing Data Collection: Image Splicing Localization Overview

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210716