CN112395943A - Detection method for counterfeit face videos based on deep learning - Google Patents

Detection method for counterfeit face videos based on deep learning

Info

Publication number
CN112395943A
Authority
CN
China
Prior art keywords
face
image
face image
model
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011119313.0A
Other languages
Chinese (zh)
Inventor
徐华建
汤敏伟
袁顺杰
李�真
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi Electronic Commerce Co Ltd
Original Assignee
Tianyi Electronic Commerce Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi Electronic Commerce Co Ltd filed Critical Tianyi Electronic Commerce Co Ltd
Priority to CN202011119313.0A priority Critical patent/CN112395943A/en
Publication of CN112395943A publication Critical patent/CN112395943A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/168: Feature extraction; Face representation
    • G06V40/172: Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a deep-learning-based method for detecting forged face videos, which comprises the following steps: S1: using a face detection model to perform face detection on the image to be examined, and acquiring an image containing the face and a small surrounding background region, referred to below as the face image; S2: filtering the face image with a feature filtering model to obtain a filtered face image; S3: using a capsule network model to first extract features from the filtered face image, then predict on the extracted features, give the probability that the image is a forged face image, and finally judge whether the image is a forged face image. The core working module of the invention is the capsule network module. The invention is aimed primarily at detecting forged face images produced by generative adversarial networks (GANs), i.e. images produced by this newer forgery technique, a capability that other comparable inventions lack; it therefore achieves higher accuracy and better robustness.

Description

Detection method for counterfeit face videos based on deep learning
Technical Field
The invention relates to the technical field of electronic information, and in particular to a deep-learning-based method for detecting forged face videos.
Background
In the field of forged face image detection, current methods are mainly based on convolutional and recurrent neural networks. The convolutional neural network is chiefly used to analyze forgery cues such as image texture features, face edge features, and head pose features. Most of these features arise from conventional forgery techniques, for example Photoshop editing traces or the moiré patterns of a recaptured screen. These conventional detection methods, however, have no effective detection capability against the latest forgery technique: forged face images generated by generative adversarial networks (GANs). The remaining methods do target GAN-generated forged face images, but they are still immature, with low detection accuracy and poor robustness.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a deep-learning-based method for detecting forged face videos.
In order to solve the technical problems, the invention provides the following technical scheme:
The invention relates to a deep-learning-based method for detecting forged face videos, which comprises the following steps:
S1: using a face detection model to perform face detection on the image to be examined, and acquiring an image containing the face and a small surrounding background region, referred to below as the face image;
S2: filtering the face image with a feature filtering model to obtain a filtered face image;
S3: using a capsule network model to first extract features from the filtered face image, then predict on the extracted features, give the probability that the image is a generated forged face image, and finally judge whether the image is a generated forged face image;
wherein step S1 comprises the following:
S1.1: performing face detection on the image to be examined with the face detection model to obtain the coordinate information of the face in the image;
S1.2: obtaining the coordinates of the face center from the face coordinate information obtained in step S1.1, and, taking these coordinates as the center, cropping a square image containing the face and a small surrounding background region, i.e. the face image;
step S2 comprises the following:
S2.1: feeding the face image obtained in step S1.2 into a filter to obtain the filtered face image;
step S3 comprises the following:
S3.1: feeding the filtered face image obtained in step S2.1 into a convolutional neural network to obtain texture features;
S3.2: passing the texture features obtained in step S3.1, by copying, into ten input capsules, and obtaining ten output feature vectors after computation in the input capsules;
S3.3: performing the dynamic routing operation on the ten output feature vectors to obtain an output vector, feeding the output vector into the output capsule, and obtaining the prediction probability after computation;
S3.4: comparing the prediction probability obtained in step S3.3 with a threshold to obtain the prediction result.
Compared with the prior art, the invention has the following beneficial effects:
1. The core working module of the invention is the capsule network module; this is what distinguishes the invention from other comparable inventions and is the root of its superior detection performance.
2. The method is aimed primarily at detecting forged face images produced by generative adversarial networks (GANs), i.e. images produced by this newer forgery technique; other comparable inventions lack this capability, so the method achieves higher accuracy and better robustness.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a general schematic of the system of the present invention;
FIG. 2 is a schematic diagram of a face acquisition model of the present invention;
FIG. 3 is a schematic diagram of a characteristic filtering model of the present invention;
FIG. 4 is a schematic diagram of a capsule network module of the present invention;
FIG. 5 is a schematic diagram of the filters used to obtain the noise image.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Example 1
As shown in fig. 1 to 5, the present invention provides a deep-learning-based method for detecting forged face videos, which comprises the following steps:
S1: using a face detection model to perform face detection on the image to be examined, and acquiring an image containing the face and a small surrounding background region, referred to below as the face image;
S2: filtering the face image with a feature filtering model to obtain a filtered face image;
S3: using a capsule network model to first extract features from the filtered face image, then predict on the extracted features, give the probability that the image is a generated forged face image, and finally judge whether the image is a generated forged face image;
wherein step S1 comprises the following:
S1.1: performing face detection on the image to be examined with the face detection model to obtain the coordinate information of the face in the image;
S1.2: obtaining the coordinates of the face center from the face coordinate information obtained in step S1.1, and, taking these coordinates as the center, cropping a square image containing the face and a small surrounding background region, i.e. the face image;
step S2 comprises the following:
S2.1: feeding the face image obtained in step S1.2 into a filter to obtain the filtered face image;
step S3 comprises the following:
S3.1: feeding the filtered face image obtained in step S2.1 into a convolutional neural network to obtain texture features;
S3.2: passing the texture features obtained in step S3.1, by copying, into ten input capsules, and obtaining ten output feature vectors after computation in the input capsules;
S3.3: performing the dynamic routing operation on the ten output feature vectors to obtain an output vector, feeding the output vector into the output capsule, and obtaining the prediction probability after computation;
S3.4: comparing the prediction probability obtained in step S3.3 with a threshold to obtain the prediction result.
The capsule-network-based forged face image detection method provided by this embodiment treats the question of whether a face image is forged as an image classification problem: a capsule network predicts the probability that the face image is forged, and the final decision is made from that probability. In the concrete implementation, because the texture of the face region and the texture of the background region in a forged face image are not fully consistent, the hidden texture information of the image can be extracted, the corresponding deep features are extracted with the capsule network, the resulting prediction probability is divided by thresholding into the two classes forged and real, i.e. compared with the threshold, and it is finally judged whether the face image is forged.
Fig. 1 is a flowchart illustrating a method for detecting a counterfeit face image based on a capsule network according to an exemplary embodiment, and referring to fig. 1, the method includes the following steps:
S1: using a face acquisition module to perform face detection on the image to be examined, and acquiring an image containing the face and a small surrounding background region, referred to below as the face image;
in particular, proper cropping of the image to be authenticated is a very important process. The method mainly detects whether the areas of the two people in the image are forged or not, and does not pay attention to the rest areas in the image. If the whole image is directly detected, the proportion of the face area is too small, and the detection model may not pay attention to the face area to extract proper features. Therefore, the human face area and the small-range background area with proper sizes are intercepted, and the effectiveness of the detection model can be improved.
Fig. 2 is a flowchart illustrating the process of performing face detection on the image to be examined with the face acquisition model and acquiring an image containing the face and a small surrounding background region, according to an exemplary embodiment; referring to fig. 2, the process includes the following steps:
S1.1: using a face detection module to perform face detection on the image to be examined to obtain the coordinate information of the face in the image;
Specifically, the face detection module in step S1.1 detects faces with the open-source face_recognition model. The face_recognition model mainly calls the face detection model in the Dlib library to detect facial keypoints in the face image, thereby locating the face in the image. Specifically, the face detection model in the Dlib library detects 68 facial keypoints and returns their coordinate information. In this embodiment, only the four outermost keypoints in the up, down, left, and right directions are used to locate the face region, and the coordinates of these four keypoints serve as the coordinate information of the face.
In the embodiment of the present invention, as a preferred implementation, the face detection model used in step S1.1 is the open-source face_recognition model. In other embodiments, the CNN-based face detection model in the Dlib library, or a model obtained by other training methods, may also be used.
S1.2: obtaining the coordinates of the face center from the face coordinate information obtained in step S1.1, and, taking these coordinates as the center, cropping a square image containing the face and a small surrounding background region, i.e. the face image;
Specifically, the face coordinate information consists of the coordinates of the four outermost keypoints (top, bottom, left, right); these four points define a rectangle, and the center of this rectangular region is taken as the center of the face.
Specifically, step S1.2 crops the face region at a fixed size. In this embodiment, as a preferred implementation, a size of 256 × 256 pixels is adopted, so that the face region occupies roughly half the area of the 256 × 256 crop. In other embodiments, other sizes suited to the actual application scenario may be used.
Specifically, since in some images the face is large or lies close to an image edge, step S1.2 distinguishes the following cases:
If a 256 × 256 square region cannot completely cover the face region, the image is downsampled one or more times until the face region can be completely covered, and a square region of 256 × 256 pixels is then cropped;
If a 256 × 256 square region centered on the face center would exceed the pixel range of the image, the region is translated appropriately before cropping.
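A minimal sketch of these cropping rules follows, assuming the image is a NumPy array in (height, width, channels) layout and using OpenCV's pyrDown for the downsampling; the function and parameter names are illustrative rather than taken from the patent.

```python
import cv2
import numpy as np

def crop_face(image, face_center, face_size, crop_size=256):
    cx, cy = face_center
    # Downsample until the face region fits inside the square crop.
    while face_size > crop_size:
        image = cv2.pyrDown(image)              # halves both image dimensions
        cx, cy, face_size = cx // 2, cy // 2, face_size // 2
    h, w = image.shape[:2]
    # Translate the crop window so it stays inside the image bounds.
    x0 = int(np.clip(cx - crop_size // 2, 0, max(w - crop_size, 0)))
    y0 = int(np.clip(cy - crop_size // 2, 0, max(h - crop_size, 0)))
    return image[y0:y0 + crop_size, x0:x0 + crop_size]
```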
S2: filtering the face image with a feature filtering model to obtain a filtered face image.
In particular, feature extraction from the image to be examined is a very important step. Detecting directly on the unprocessed RGB image does not extract the feature information of a forged face well. Extracting features in a suitable way therefore improves the effectiveness of the detection model.
Fig. 3 is a flowchart illustrating the use of the feature extraction module to extract features from the face image and obtain texture features, according to an exemplary embodiment; referring to fig. 3, it includes the following steps:
S2.1: feeding the face image obtained in step S1.2 into a filter to obtain the filtered face image;
Specifically, step S2.1 employs an SRM filter for the filtering. SRM is an abbreviation of Steganalysis Rich Model, from the paper "Rich Models for Steganalysis of Digital Images", meaning a rich steganalysis model. That paper used the 3 filters shown in fig. 5 to obtain a noise image.
The SRM filters collect basic noise features, quantize and truncate the filter outputs, and extract nearby co-occurrence information as the final features; they can effectively extract the features of the face region and the background region.
In this embodiment, as a preferred implementation, the filter used in step S2.1 is an SRM filter. In other embodiments, other forms of filters may be used, such as LBP filters, Sobel filters, etc.
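The following is a minimal sketch of SRM-style noise filtering for step S2.1; only one of the three kernels (the widely cited 5 × 5 "KV" kernel) is shown, and the truncation threshold of 2 is an assumption, since the patent does not reproduce the exact kernel set.

```python
import numpy as np
from scipy.signal import convolve2d

# The widely cited 5x5 "KV" kernel from the SRM literature; the paper's
# fig. 5 shows three kernels, of which this sketch keeps only one.
KV_KERNEL = np.array([[-1,  2,  -2,  2, -1],
                      [ 2, -6,   8, -6,  2],
                      [-2,  8, -12,  8, -2],
                      [ 2, -6,   8, -6,  2],
                      [-1,  2,  -2,  2, -1]], dtype=np.float32) / 12.0

def srm_residual(gray, threshold=2):
    # Convolve to get the noise residual, then quantize and truncate the
    # output as SRM feature extraction prescribes (threshold assumed).
    residual = convolve2d(gray.astype(np.float32), KV_KERNEL, mode="same")
    return np.clip(np.round(residual), -threshold, threshold)
```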
S3: using a capsule network model to first extract features from the filtered face image, then predict on the extracted features, give the probability that the image is a generated forged face image, and finally judge whether the image is a generated forged face image.
In particular, a conventional convolutional neural network cannot effectively learn the different regions of an image separately, and therefore cannot effectively distinguish the differing texture characteristics of the face region and the background region. The capsule network, by contrast, can progressively learn information from different positions in the image thanks to its distinctive structural design, with each input capsule adopting different random initial values. Through the dynamic routing algorithm, the capsule network can then recognize the texture inconsistency between the face region and the background region and finally detect the forged face image effectively.
Specifically, the capsule network model in step S3 is obtained by pre-training. Pre-training consists of collecting a data set and updating the parameters by gradient descent. The data set is a collection of images comprising ordinary face images and generated (tampered) face images in a ratio of roughly one to one. After the data set is collected, it is split into a training set, a validation set, and a test set in a ratio of roughly 70:15:15. The training set is fed, following step S3, into an initial model with preset parameters; the model parameters are updated by gradient descent, which is stopped once preset conditions are met. The validation set is then fed into the model with the updated parameters, and the parameters are fine-tuned according to the predictions the model gives; the fine-tuned model is the model adopted in the final implementation.
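A minimal sketch of the 70:15:15 split described above; the helper name and the fixed shuffle seed are illustrative assumptions.

```python
import random

def split_dataset(samples, seed=0):
    samples = list(samples)                  # copy so the caller's list is untouched
    random.Random(seed).shuffle(samples)
    n = len(samples)
    n_train, n_val = int(0.70 * n), int(0.15 * n)
    return (samples[:n_train],                # training set (~70%)
            samples[n_train:n_train + n_val], # validation set (~15%)
            samples[n_train + n_val:])        # test set (~15%)
```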
Fig. 4 is a flowchart illustrating the use of the capsule network module to predict on the feature-extracted face image and give the probability that the image is a forged face image, according to an exemplary embodiment; referring to fig. 4, it includes the following steps:
S3.1: feeding the filtered face image obtained in step S2.1 into a convolutional neural network to obtain texture features.
Specifically, the convolutional neural network used in step S3.1 is, in the present invention, a VGG19 network model. The network weights are obtained by taking a VGG19 model pre-trained on ImageNet as the initial weights and training further.
Specifically, since the extracted features are fed into the subsequent capsule network model and only a preliminary feature extraction is needed, the relatively simple VGG19 network model is chosen for the extraction.
In this embodiment, as a preferred implementation, the convolutional neural network used in step S3.1 is a VGG19 network. In other embodiments, other forms of convolutional neural networks, such as ResNet, may be used.
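As a sketch of step S3.1, the following loads a VGG19 model pre-trained on ImageNet with torchvision; truncating after the conv3 block to obtain the texture features is an assumption, since the patent does not state which layers are kept.

```python
import torch
from torchvision import models

# Load VGG19 with ImageNet weights as the pre-training basis.
vgg19 = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)

# Keep only the early convolutional layers as the texture-feature extractor
# (up to the end of the conv3 block -- an assumed cut-off point).
feature_extractor = torch.nn.Sequential(*list(vgg19.features.children())[:18])

filtered_face = torch.randn(1, 3, 256, 256)   # stands in for the SRM output
with torch.no_grad():
    texture_features = feature_extractor(filtered_face)  # shape (1, 256, 64, 64)
```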
S3.2: feeding the texture features obtained in step S3.1, by copying, into ten input capsules, and obtaining ten output feature vectors after computation in the input capsules;
Specifically, this embodiment uses ten input capsules: given an appropriate model size, a sufficient number of input capsules ensures the effectiveness of the detection model. Because an excessive number of input capsules yields only limited further gains in effectiveness, ten input capsules are chosen in this embodiment. In other embodiments, the number of input capsules can be reduced or increased according to the requirements on model size or on the accuracy of the detection model.
Specifically, in this embodiment each input capsule takes the form of a feature extraction module followed by a feature compression module. The feature extraction module further extracts the texture features passed in at step S3.2 and reduces their dimensionality, cutting the number of matrix parameters in the subsequent dynamic routing module; it also converts the two-dimensional feature matrix into a one-dimensional feature vector. The feature compression module then applies a nonlinear transformation to the one-dimensional feature vector, guaranteeing that its length is at most 1 and at least 0 while its direction remains unchanged. The final result is the output feature vector.
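A minimal sketch of the feature compression module, using the standard capsule-network "squash" function; that the embodiment uses exactly this formula is an assumption consistent with the stated length bound of 0 to 1 and unchanged direction.

```python
import torch

def squash(v, dim=-1, eps=1e-8):
    # Nonlinear compression: vector length mapped into [0, 1), direction preserved.
    sq_norm = (v ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * v / torch.sqrt(sq_norm + eps)
```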
S3.3: performing the dynamic routing operation on the ten output feature vectors to obtain an output vector, feeding the output vector into the output capsule, and obtaining the prediction probability after computation.
Specifically, the dynamic routing stage consists mainly of a transformation matrix module, a vector compression module, and a probability prediction module. After the feature vectors given by the ten input capsules are obtained, the transformation matrix module first stacks them into a two-dimensional feature matrix. The stacked feature matrix is then weighted by matrix multiplication and converted from a two-dimensional matrix back into a one-dimensional vector. Next, the vector compression module compresses this feature vector with a nonlinear transformation, again guaranteeing that its length is at most 1 and at least 0 while its direction remains unchanged. Finally, the compressed feature vector is converted into the final prediction probability by the fully connected network of the probability prediction module.
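The following is a minimal sketch of routing-by-agreement in the sense described above, reusing squash() from the previous sketch; the three routing iterations and the sigmoid readout are assumptions, since the patent does not give these details.

```python
import torch

def dynamic_routing(u, num_iters=3):
    # u: (batch, 10, dim) -- the ten stacked input-capsule feature vectors,
    # already weighted by the learned transformation matrices.
    b = torch.zeros(u.shape[0], u.shape[1], 1, device=u.device)  # routing logits
    for _ in range(num_iters):
        c = torch.softmax(b, dim=1)                  # coupling coefficients
        s = (c * u).sum(dim=1)                       # weighted sum -> one output vector
        v = squash(s)                                # compress length into [0, 1)
        b = b + (u * v.unsqueeze(1)).sum(dim=-1, keepdim=True)  # agreement update
    return v

# The probability prediction module then maps v to a probability, e.g.:
# prob = torch.sigmoid(torch.nn.Linear(v.shape[-1], 1)(v))
```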
S3.4: comparing the prediction probability obtained in step S3.3 with a threshold to obtain the prediction result.
Specifically, the threshold used for the comparison in step S3.4 is a value selected during model training to minimize the equal error rate of the predictions on the training data.
In this embodiment, as a preferred implementation, the threshold in step S3.4 is selected with the minimum equal error rate as the evaluation criterion. In other embodiments, other evaluation criteria may be adopted according to the actual application requirements.
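A minimal sketch of selecting the threshold at the equal error rate; the use of scikit-learn's roc_curve here is an illustrative choice, not taken from the patent.

```python
import numpy as np
from sklearn.metrics import roc_curve

def eer_threshold(labels, probs):
    # Sweep the ROC curve and pick the threshold where the false-accept
    # rate and the false-reject rate are (nearly) equal.
    fpr, tpr, thresholds = roc_curve(labels, probs)
    fnr = 1.0 - tpr
    idx = int(np.argmin(np.abs(fpr - fnr)))
    return float(thresholds[idx])

# An image is then judged forged when its prediction probability
# exceeds this threshold.
```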
The invention mainly uses a capsule network to detect generated forged face images. Its technical key points are as follows:
1. The core working module of the invention is the capsule network module; this is what distinguishes the invention from other comparable inventions and is the basis of its superior detection performance.
2. The method is aimed primarily at detecting forged face images produced by generative adversarial networks (GANs), i.e. images produced by this newer forgery technique; other comparable inventions lack this capability, so the method achieves higher accuracy and better robustness.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (1)

1. A deep-learning-based method for detecting forged face videos, characterized by comprising the following steps:
S1: using a face detection model to perform face detection on the image to be examined, and acquiring an image containing the face and a small surrounding background region, referred to below as the face image;
S2: filtering the face image with a feature filtering model to obtain a filtered face image;
S3: using a capsule network model to first extract features from the filtered face image, then predict on the extracted features, give the probability that the image is a generated forged face image, and finally judge whether the image is a generated forged face image;
wherein step S1 comprises the following:
S1.1: performing face detection on the image to be examined with the face detection model to obtain the coordinate information of the face in the image;
S1.2: obtaining the coordinates of the face center from the face coordinate information obtained in step S1.1, and, taking these coordinates as the center, cropping a square image containing the face and a small surrounding background region, i.e. the face image;
step S2 comprises the following:
S2.1: feeding the face image obtained in step S1.2 into a filter to obtain the filtered face image;
step S3 comprises the following:
S3.1: feeding the filtered face image obtained in step S2.1 into a convolutional neural network to obtain texture features;
S3.2: feeding the texture features obtained in step S3.1, by copying, into ten input capsules, and obtaining ten output feature vectors after computation in the input capsules;
S3.3: performing the dynamic routing operation on the ten output feature vectors to obtain an output vector, feeding the output vector into the output capsule, and obtaining the prediction probability after computation;
S3.4: comparing the prediction probability obtained in step S3.3 with a threshold to obtain the prediction result.
CN202011119313.0A 2020-10-19 2020-10-19 Detection method for counterfeit face videos based on deep learning Pending CN112395943A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011119313.0A CN112395943A (en) Detection method for counterfeit face videos based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011119313.0A CN112395943A (en) Detection method for counterfeit face videos based on deep learning

Publications (1)

Publication Number Publication Date
CN112395943A (en) 2021-02-23

Family

ID=74596890

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011119313.0A Pending CN112395943A (en) Detection method for counterfeit face videos based on deep learning

Country Status (1)

Country Link
CN (1) CN112395943A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112016482A (en) * 2020-08-31 2020-12-01 成都新潮传媒集团有限公司 Method and device for distinguishing false face and computer equipment
CN112991239A (en) * 2021-03-17 2021-06-18 广东工业大学 Image reverse recovery method based on deep learning
CN113205044A (en) * 2021-04-30 2021-08-03 湖南大学 Deep counterfeit video detection method based on characterization contrast prediction learning
CN113537110A (en) * 2021-07-26 2021-10-22 北京计算机技术及应用研究所 False video detection method fusing intra-frame and inter-frame differences
CN113537110B (en) * 2021-07-26 2024-04-26 北京计算机技术及应用研究所 False video detection method fusing intra-frame differences

Similar Documents

Publication Publication Date Title
CN112395943A (en) Detection method for counterfeit face videos based on deep learning
Redi et al. Digital image forensics: a booklet for beginners
CN106530200B (en) Steganographic image detection method and system based on deep learning model
EP3523776A1 (en) Systems and methods for detection and localization of image and document forgery
CN111160313B (en) Face representation attack detection method based on LBP-VAE anomaly detection model
CN113536990A (en) Deep fake face data identification method
CN111696021B (en) Image self-adaptive steganalysis system and method based on significance detection
CN113744153B (en) Double-branch image restoration forgery detection method, system, equipment and storage medium
Gupta et al. A study on source device attribution using still images
CN115861210B (en) Transformer substation equipment abnormality detection method and system based on twin network
CN115641632A (en) Face counterfeiting detection method based on separation three-dimensional convolution neural network
CN111882525A (en) Image reproduction detection method based on LBP watermark characteristics and fine-grained identification
CN116453199A (en) GAN (generic object model) generation face detection method based on fake trace of complex texture region
CN106851140B (en) A kind of digital photo images source title method using airspace smothing filtering
CN114998261A (en) Double-current U-Net image tampering detection network system and image tampering detection method thereof
CN111259792A (en) Face living body detection method based on DWT-LBP-DCT characteristics
CN112651319B (en) Video detection method and device, electronic equipment and storage medium
CN116229528A (en) Living body palm vein detection method, device, equipment and storage medium
CN107451990B (en) A kind of photograph image altering detecting method using non-linear guiding filtering
Mohamed et al. Detecting Secret Messages in Images Using Neural Networks
CN113570564B (en) Multi-definition fake face video detection method based on multi-path convolution network
CN112906508A (en) Face living body detection method based on convolutional neural network
CN113553895A (en) Multi-pose face recognition method based on face orthogonalization
CN114694196A (en) Living body classifier establishing method, human face living body detection method and device
CN117292442B (en) Cross-mode and cross-domain universal face counterfeiting positioning method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210223