CN111191549A - Two-stage face anti-counterfeiting detection method - Google Patents


Info

Publication number
CN111191549A
Authority
CN
China
Prior art keywords
counterfeiting
face
face anti
unit
image
Prior art date
Legal status
Pending
Application number
CN201911337725.9A
Other languages
Chinese (zh)
Inventor
陈耀武
陈浩楠
田翔
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University (ZJU)
Priority to CN201911337725.9A
Publication of CN111191549A
Legal status: Pending (current)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/40: Spoof detection, e.g. liveness detection
    • G06V40/45: Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a two-stage face anti-counterfeiting detection method, which comprises the following steps: constructing a first-stage face anti-counterfeiting detector based on a regional face anti-counterfeiting convolutional neural network; constructing a second-stage face anti-counterfeiting detector based on LBP features; performing first-stage anti-counterfeiting detection on the input face image with the first-stage face anti-counterfeiting detector to obtain a first-level face anti-counterfeiting detection result; when the first-level face anti-counterfeiting detection result does not meet the requirement, performing second-stage anti-counterfeiting detection on the input face region image with the second-stage face anti-counterfeiting detector, based on the first-level detection result, to obtain a second-level face anti-counterfeiting detection result; and fusing the first-level and second-level face anti-counterfeiting detection results to obtain the final face anti-counterfeiting detection result. The method performs face liveness detection directly on the original image, without requiring face detection and cropping in advance, which matches practical application scenarios.

Description

Two-stage face anti-counterfeiting detection method
Technical Field
The invention belongs to the field of biological authentication anti-counterfeiting, and particularly relates to a two-stage face anti-counterfeiting detection method.
Background
With the development of technology, various biological characteristics are used as important credentials in authentication systems, such as fingerprints, faces, voices, and pupils. The face is one of the most influential biometrics, from both an economic and a social perspective. Moreover, owing to the rapid development of face recognition and face detection, the technology has been deployed in many settings, such as access control systems for secure premises, laptop login systems, and even mobile phone unlocking; compared with other biological characteristics, face authentication has gradually become the most commonly used authentication mode.
A spoofing attack is an attack on a biometric authentication system in which a counterfeit version of a legitimate user's biometric features is presented to the sensor, attempting to make the system authenticate an illegitimate user as the legitimate user and thereby let the illegitimate user into the system. Because an attacker can easily acquire a legitimate user's facial images from personal or social networking websites, and can also capture photos and videos of the user at close range, spoofing attacks against the face are easier to carry out than attacks against other biometrics.
In general, face spoofing attacks can be divided into three categories: photo attacks, video attacks, and mask attacks. A photo attack is one in which an attacker presents a photo of a legitimate user to the biometric authentication system's sensor, either printed on paper or displayed on the screen of an electronic device. Video attacks are also referred to as replay attacks because they are carried out by replaying videos of legitimate users. A mask attack is one in which an attacker wears a 3D mask of a legitimate user to impersonate that user and attempts to enter the face recognition system.
Since the beginning of the 21st century, research institutions at home and abroad have carried out a great deal of research on face spoofing attack detection. Traditional methods fall mainly into three categories. The first is texture-feature-based methods, which detect face spoofing attacks mainly with hand-crafted texture features. The second is motion-information-based methods, which detect motion such as blinking, head shaking, and nodding from a sequence of video frames to judge whether a face spoofing attack is occurring. The third is methods based on three-dimensional face reconstruction, which reconstruct a sparse three-dimensional model from a two-dimensional face photo to judge whether the subject is a real face or a photo or video attack.
With the rapid growth of processor computing power and of deep learning theory, deep learning methods have begun to be applied to face spoofing attack detection. Compared with hand-crafted feature extraction, neural network methods greatly improve detection performance, because a neural network can learn more discriminative features for judging face spoofing attacks. However, deep learning methods require large amounts of training data, and the strong learning capacity reduces model generalization to some extent. Improving model generalization while making full use of the learning ability of neural networks has therefore become a research focus.
Disclosure of Invention
The invention aims to provide a two-stage face anti-counterfeiting detection method that performs face liveness detection directly on the original image, without requiring face detection and cropping in advance, which matches practical application scenarios. Meanwhile, the second-stage detection exploits the illumination robustness of improved Retinex local binary pattern features, so that the detection result is accurate and has a certain degree of generalization.
In order to achieve the purpose, the invention provides the following technical scheme:
a two-stage face anti-counterfeiting detection method is characterized by comprising the following steps:
constructing a first-stage face anti-counterfeiting detector based on a regional face anti-counterfeiting convolutional neural network and used for performing first-stage anti-counterfeiting detection on an input face image;
constructing a second-level face anti-counterfeiting detector which is based on LBP characteristics and used for carrying out second-level anti-counterfeiting detection on the input face area image;
performing first-stage anti-counterfeiting detection on the input face image by using the first-stage face anti-counterfeiting detector to obtain a first-stage face anti-counterfeiting detection result;
when the first-level face anti-counterfeiting detection result does not meet the requirement, performing second-level anti-counterfeiting detection on the input face region image by using the second-level face anti-counterfeiting detector based on the first-level face anti-counterfeiting detection result to obtain a second-level face anti-counterfeiting detection result;
and fusing the first-level face anti-counterfeiting detection result and the second-level face anti-counterfeiting detection result to obtain a final face anti-counterfeiting detection result.
The face anti-counterfeiting detection method has at least the following beneficial effects:
the structure of the human face anti-counterfeiting detection model is different from most models, a cascaded framework is adopted, and the human face image is detected by two human face anti-counterfeiting detectors. The first-level face anti-counterfeiting detector detects confidence degrees of face anti-counterfeiting classification roughly on one hand, and acquires position coordinates of face region images on the other hand. The second-level face anti-counterfeiting detector detects difficult cases and uncertain samples of the first-level face anti-counterfeiting detector, and the Retinex algorithm enables the second-level detector to have illumination robustness, so that the difficult cases which cannot be detected due to illumination can be better dealt with. And finally, the outputs of the two face anti-counterfeiting detectors are integrated to obtain a final classification result. Compared with the existing method, the method can combine face detection and face anti-counterfeiting detection, directly input the original image and output the result, has illumination robustness, and has better improvement on accuracy and practicability compared with other methods.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a two-stage face anti-counterfeiting detection method provided by the embodiment;
FIG. 2 is a schematic diagram of the first stage detector of FIG. 1;
FIG. 3 is a schematic diagram of the structure of the second stage detector of FIG. 1;
FIG. 4 is a flowchart of constructing and training a first-level face anti-counterfeiting detection model according to an embodiment;
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the invention.
To realize face anti-counterfeiting detection, the two-stage face anti-counterfeiting detection method provided by this embodiment comprises two phases: constructing the face anti-counterfeiting detectors, and using the constructed detectors to perform anti-counterfeiting judgment on the face.
Construction stage of the face anti-counterfeiting detectors
The construction stage mainly constructs two detectors: a first-stage face anti-counterfeiting detector, based on a regional face anti-counterfeiting convolutional neural network, for performing first-stage anti-counterfeiting detection on the input face image; and a second-stage face anti-counterfeiting detector, based on LBP features, for performing second-stage anti-counterfeiting detection on the input face region image.
The construction of the first-stage face anti-counterfeiting detector mainly comprises network construction and network training.
As shown in FIG. 2, the regional face anti-counterfeiting convolutional neural network comprises a basic convolutional layer unit, a region generation network unit, a RoI pooling layer, a fusion unit, and a classification regression unit, wherein:
the basic convolutional layer unit extracts image features from the face image and outputs them to the region generation network unit; the region generation network unit performs region generation based on the image features, obtaining candidate box coordinates that may contain a face together with the corresponding confidences;
the image features extracted by the last three layers of the basic convolutional layer unit, together with the candidate box coordinates obtained by the region generation network unit, are pooled by the RoI pooling layer and input into the fusion unit;
the fusion unit fuses the input features using an attention mechanism to obtain a region feature block for classification and outputs it to the classification regression unit;
the classification regression unit classifies and regresses the input region feature blocks and outputs the first-level face anti-counterfeiting detection result, comprising the confidence of the first classification result and the face position coordinates.
In this embodiment, the basic convolutional layer unit uses VGG-16 as its base network, with the final pooling layer removed, forming shared convolutional layers for extracting face image features.
The fusion unit fuses the pooled features obtained by passing the output features of the last three layers of the basic convolutional layer unit through the RoI pooling layer. A weight vector of dimension (N, 3) is first initialized to 0.5, where N is the dimension of the feature vector. During training, the loss is back-propagated to update the weight parameters; when training finishes, the optimal fusion weights are obtained, and the three pooled feature vectors of equal dimension are fused by element-wise weighted summation to obtain a single region feature block of unchanged dimension.
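As an illustration only, the following minimal PyTorch sketch shows one way such a fusion unit could be realized; the module and tensor names are assumptions, not part of the disclosure.

```python
# Minimal sketch of the weighted fusion: three RoI-pooled feature vectors of equal
# dimension N are combined with a learnable (N, 3) weight tensor initialized to 0.5.
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    def __init__(self, feat_dim: int):
        super().__init__()
        # learnable fusion weights, shape (N, 3), initialized to 0.5
        self.weights = nn.Parameter(torch.full((feat_dim, 3), 0.5))

    def forward(self, f3: torch.Tensor, f4: torch.Tensor, f5: torch.Tensor) -> torch.Tensor:
        # f3, f4, f5: (batch, N) pooled features from the last three conv stages
        stacked = torch.stack([f3, f4, f5], dim=-1)      # (batch, N, 3)
        fused = (stacked * self.weights).sum(dim=-1)     # (batch, N), dimension unchanged
        return fused
```

During training, the loss gradients flow into `self.weights`, so the fusion weights are updated together with the rest of the network, as the description above requires.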
The classification regression unit mainly classifies the region feature blocks and outputs the classification result; it also regresses and fine-tunes the obtained face region position coordinates to produce the final face position coordinates. Classification uses a fully connected layer and outputs a three-dimensional vector summing to 1, corresponding to background, real face, and attack face; the class with the largest confidence is the final prediction, where 0 denotes background, 1 a real face, and 2 an attack face. The face position coordinates are four integers: the horizontal and vertical coordinates of the upper-left and lower-right corners of the face region.
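Likewise, the classification-regression head can be sketched as follows; the layer sizes are illustrative assumptions, with the softmax output covering background, real face, and attack face and the four regression outputs covering the box corners.

```python
# Illustrative head: a fully connected classifier producing a 3-way softmax plus a
# fully connected regressor producing four face-box coordinates.
import torch
import torch.nn as nn

class ClsRegHead(nn.Module):
    def __init__(self, feat_dim: int):
        super().__init__()
        self.cls = nn.Linear(feat_dim, 3)   # background=0, real face=1, attack face=2
        self.reg = nn.Linear(feat_dim, 4)   # (x1, y1, x2, y2) of the face region

    def forward(self, fused: torch.Tensor):
        scores = torch.softmax(self.cls(fused), dim=-1)  # three confidences summing to 1
        boxes = self.reg(fused)                          # refined face position coordinates
        return scores, boxes
```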
After the regional face anti-counterfeiting convolutional neural network is constructed, its parameters are optimized to obtain the first-stage face anti-counterfeiting detector. Specifically, the region-based face anti-counterfeiting convolutional neural network is trained to optimize its parameters, and the determined parameters together with the network structure form the first-stage face anti-counterfeiting detector. As shown in FIG. 4, the training process is as follows:
and when the parameters are optimized, the difference between the output predicted value and the label value is used as loss, and the weight parameters of the regional face anti-counterfeiting convolutional neural network are updated by using back propagation.
When training is finished, the determined network parameters and the regional face anti-counterfeiting convolutional neural network form the first-stage face anti-counterfeiting detector.
In this embodiment, the face liveness classification used in training the first-stage face anti-counterfeiting detector employs a Crystal loss function and a Center loss function; the optimizer is Adam, with the initial learning rate set to 0.001. The mini-batch size during training is 16, namely 8 images of size 224 × 224 are fed at a time; after 300 batches of training, the model parameters are saved.
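As a hedged illustration of this training setup, the sketch below configures Adam with the stated learning rate and combines a Crystal-style term (softmax cross-entropy over L2-normalized, rescaled features, following the usual Crystal loss formulation) with a simple Center loss. The stand-in classifier, feature dimension, scale factor, and loss weighting are assumptions rather than the patent's exact configuration.

```python
# Sketch of the loss and optimizer configuration for first-stage training.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CenterLoss(nn.Module):
    """Mean squared distance between each feature and its learnable class center."""
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        return ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()

feat_dim, num_classes, alpha = 512, 3, 30.0        # alpha: feature-norm constraint scale (assumed)
classifier = nn.Linear(feat_dim, num_classes)      # stand-in for the full detector head
center_loss = CenterLoss(num_classes, feat_dim)
params = list(classifier.parameters()) + list(center_loss.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)      # initial learning rate 0.001

def training_step(feats: torch.Tensor, labels: torch.Tensor) -> float:
    logits = classifier(alpha * F.normalize(feats, dim=1))   # Crystal-style constrained features
    loss = F.cross_entropy(logits, labels) + 0.01 * center_loss(feats, labels)
    optimizer.zero_grad()
    loss.backward()                                # back-propagate the prediction-vs-label loss
    optimizer.step()
    return loss.item()
```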
As shown in fig. 3, the second-level face anti-counterfeiting detector comprises a cropping unit, an illumination image generation unit, an enhanced image generation unit, a channel merging unit, an LBP feature extraction unit and a classification unit, wherein,
the cropping unit crops the face region from the input image according to the face position coordinates output by the first-stage face anti-counterfeiting detector, obtaining the face region image, and outputs it to the illumination image generation unit;
the illumination image generation unit processes the face region image with an iterative guided filtering function to obtain the illumination image and outputs it to the enhanced image generation unit;
the enhanced image generation unit removes the illumination image from the face region image using the Retinex algorithm to obtain an enhanced image and outputs it to the channel merging unit;
the channel merging unit converts the RGB face region image into the YCbCr and LAB color spaces and then performs channel-wise merging with the enhanced image to obtain a 5-channel image, which is output to the LBP feature extraction unit;
the LBP (local binary pattern) feature extraction unit performs LBP feature extraction on its input image and outputs the LBP features to the classification unit;
and the classification unit computes the confidence of the second classification result from the input LBP features.
In the illumination image generation unit, guided filtering is applied over multiple iterations with the grayscale image as the guide image, replacing the Gaussian filtering of the original Retinex algorithm, to obtain the illumination image. In the enhanced image generation unit, the Retinex algorithm is an image enhancement algorithm based on Retinex theory: the illumination image is assumed to be spatially smooth, the original image is denoted S(x, y), the reflectance image R(x, y), and the illumination image L(x, y), so that S(x, y) = R(x, y) · L(x, y). The Retinex algorithm estimates the illumination variation in the image by computing a weighted average of each pixel and its surrounding region, removes the illumination image, and keeps only the reflectance component of S(x, y), thereby achieving brightness uniformity. The invention uses a multi-scale Retinex algorithm developed from the single-scale Retinex algorithm, which retains the advantages of the single-scale method while overcoming its shortcomings and achieves a better brightness-uniformity effect.
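For illustration, a minimal Python sketch of this enhancement is given below, assuming OpenCV with the opencv-contrib ximgproc module and NumPy. It estimates the illumination with guided filtering (grayscale image as guide) instead of Gaussian blurring and subtracts it in the log domain at several filter radii to approximate the multi-scale behaviour; the radii, eps value, and function name are assumptions, and the patent's iterative filtering scheme is simplified here.

```python
# Sketch of Retinex enhancement with a guided-filter illumination estimate.
import cv2
import numpy as np

def retinex_enhance(face_bgr: np.ndarray, radii=(15, 80, 200), eps=0.01) -> np.ndarray:
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    img = face_bgr.astype(np.float32) / 255.0 + 1e-6
    log_s = np.log(img)
    result = np.zeros_like(img)
    for r in radii:                                   # several radii stand in for multi-scale
        # guided filter of the image, guided by the grayscale image
        illum = cv2.ximgproc.guidedFilter(gray, img, r, eps)
        result += log_s - np.log(illum + 1e-6)        # subtract the estimated illumination
    result /= len(radii)
    # rescale the reflectance estimate back to 8-bit for later channel merging
    result = cv2.normalize(result, None, 0, 255, cv2.NORM_MINMAX)
    return result.astype(np.uint8)
```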
In this embodiment, the classification unit performs classification by using a Support Vector Machine (SVM).
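The following hedged sketch illustrates one possible realization of the channel merging, LBP extraction, and SVM classification pipeline using OpenCV, scikit-image, and scikit-learn. The patent does not spell out which five channels are retained, so the choice of Cr/Cb/a/b plus a single-channel enhanced image is an assumption, as are the LBP parameters and the SVM settings.

```python
# Sketch of the second-stage feature pipeline: color conversion, 5-channel merge,
# per-channel uniform LBP histograms, and an SVM producing the second confidence.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def five_channel_lbp(face_bgr: np.ndarray, enhanced_gray: np.ndarray) -> np.ndarray:
    ycrcb = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2YCrCb)   # OpenCV uses YCrCb channel order
    lab = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2LAB)
    channels = [ycrcb[:, :, 1], ycrcb[:, :, 2],           # Cr, Cb
                lab[:, :, 1], lab[:, :, 2],               # a, b
                enhanced_gray]                            # Retinex-enhanced image (assumed single channel)
    feats = []
    for ch in channels:
        lbp = local_binary_pattern(ch, 8, 1, method="uniform")   # P=8 neighbours, radius 1
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        feats.append(hist)
    return np.concatenate(feats)                          # LBP feature vector fed to the SVM

# Training the classification unit on cropped face regions and their labels
# (X: stacked feature vectors, y: real/attack labels are placeholders).
svm = SVC(kernel="rbf", probability=True)
# svm.fit(X, y)
# second_confidence = svm.predict_proba(five_channel_lbp(face, enhanced)[None, :])[0, 1]
```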
After the structure of the second-stage face anti-counterfeiting detector is constructed, it is trained to optimize its parameters, yielding a second-stage face anti-counterfeiting detector with determined parameters. The training process is as follows:
and performing parameter optimization on a classifier in a classification unit of the second-level face anti-counterfeiting detector by taking the face region image and the classification label as training samples to obtain the second-level face anti-counterfeiting detector with determined parameters.
Judgment stage: face anti-counterfeiting detection using the constructed detectors
As shown in fig. 1, the process of performing anti-counterfeiting judgment on a human face by using a human face anti-counterfeiting detector includes:
performing first-stage anti-counterfeiting detection on the input face image by using the first-stage face anti-counterfeiting detector to obtain a first-stage face anti-counterfeiting detection result;
when the first-level face anti-counterfeiting detection result does not meet the requirement, performing second-level anti-counterfeiting detection on the input face region image by using the second-level face anti-counterfeiting detector based on the first-level face anti-counterfeiting detection result to obtain a second-level face anti-counterfeiting detection result;
and fusing the first-level face anti-counterfeiting detection result and the second-level face anti-counterfeiting detection result to obtain a final face anti-counterfeiting detection result.
In this embodiment, when the confidence of the first classification result in the first-level face anti-counterfeiting detection result is smaller than a set threshold, the first-level face anti-counterfeiting detection result is considered not to meet the requirement. The confidence of the classification result obtained by the first-level face anti-counterfeiting detector is checked to judge whether second-level face anti-counterfeiting detection is needed: if the confidence reaches the target threshold, the result of the first-level face anti-counterfeiting detector is output directly; otherwise, second-level face anti-counterfeiting detection is performed.
In this embodiment, the confidence of the first classification result in the first-level face anti-counterfeiting detection result and the confidence of the second classification result in the second-level face anti-counterfeiting detection result are averaged to obtain the confidence of the final classification result, i.e., the final face anti-counterfeiting detection result.
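A compact sketch of this judgment-stage logic follows, assuming hypothetical `stage1` and `stage2` callables that wrap the two detectors; the threshold value and return conventions are illustrative.

```python
# Cascade decision: accept the first-stage result when its confidence clears the
# threshold; otherwise crop the face region, run the second stage, and average.
def two_stage_detect(image, stage1, stage2, threshold=0.9):
    conf1, label, box = stage1(image)          # first-stage confidence, class, face box
    if conf1 >= threshold:                     # confident enough: return directly
        return label, conf1
    x1, y1, x2, y2 = box
    face_region = image[y1:y2, x1:x2]          # crop using the first-stage coordinates
    conf2 = stage2(face_region)                # second-stage confidence from LBP + SVM
    final_conf = (conf1 + conf2) / 2.0         # fuse by averaging the two confidences
    return label, final_conf
```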
Compared with the prior art, the anti-counterfeiting detection result obtained by the two-stage face anti-counterfeiting detection method has better accuracy.
The above-mentioned embodiments are intended to illustrate the technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only the most preferred embodiments of the present invention, and are not intended to limit the present invention, and any modifications, additions, equivalents, etc. made within the scope of the principles of the present invention should be included in the scope of the present invention.

Claims (9)

1. A two-stage face anti-counterfeiting detection method is characterized by comprising the following steps:
constructing a first-stage face anti-counterfeiting detector based on a regional face anti-counterfeiting convolutional neural network and used for performing first-stage anti-counterfeiting detection on an input face image;
constructing a second-level face anti-counterfeiting detector which is based on LBP characteristics and used for carrying out second-level anti-counterfeiting detection on the input face area image;
performing first-stage anti-counterfeiting detection on the input face image by using the first-stage face anti-counterfeiting detector to obtain a first-stage face anti-counterfeiting detection result;
when the first-level face anti-counterfeiting detection result does not meet the requirement, performing second-level anti-counterfeiting detection on the input face region image by using the second-level face anti-counterfeiting detector based on the first-level face anti-counterfeiting detection result to obtain a second-level face anti-counterfeiting detection result;
and fusing the first-level face anti-counterfeiting detection result and the second-level face anti-counterfeiting detection result to obtain a final face anti-counterfeiting detection result.
2. The two-stage face anti-counterfeiting detection method according to claim 1, wherein the regional face anti-counterfeiting convolutional neural network comprises a basic convolutional layer unit, a region generation network unit, a RoI pooling layer, a fusion unit and a classification regression unit, wherein,
the basic convolution layer unit is used for extracting image features of a face image and outputting the image features to the region generation network unit, and the region generation network unit performs region generation based on the image features to obtain candidate frame coordinates possibly containing the face and confidence coefficients possibly containing the face;
image features extracted from the rear three layers of the basic convolutional layer unit and candidate frame coordinates obtained by the area generation network unit are input into the fusion unit after being pooled by the RoI pooling layer;
the fusion unit fuses the input features by adopting an attention mechanism to obtain a regional feature block for classification and outputs the regional feature block to the classification regression unit;
the classification regression unit classifies and regresses the input regional feature blocks and outputs a first-level face anti-counterfeiting detection result comprising a first classification result confidence coefficient and face position coordinates.
3. The two-stage face anti-counterfeiting detection method according to claim 2, wherein the basic convolutional layer unit takes VGG-16 as a basic network, and a rear pooling layer is removed.
4. The two-stage face anti-counterfeiting detection method according to any one of claims 1 to 3, wherein the region-based face anti-counterfeiting convolutional neural network is trained to optimize parameters, the determined parameters and the region-based face anti-counterfeiting convolutional neural network structure form a first-stage face anti-counterfeiting detector, and the training process is as follows:
and when the parameters are optimized, the difference between the output predicted value and the label value is used as loss, and the weight parameters of the regional face anti-counterfeiting convolutional neural network are updated by using back propagation.
5. The two-stage face anti-counterfeiting detection method according to claim 1, wherein the second-stage face anti-counterfeiting detector comprises a cropping unit, an illumination image generation unit, an enhanced image generation unit, a channel merging unit, an LBP feature extraction unit and a classification unit,
the cutting unit cuts the face region image according to the face position coordinates output by the first-stage face anti-counterfeiting detector, obtains the face region image and outputs the face region image to the illumination image generating unit;
the illumination image generation unit processes the face area image by adopting an iteration guide filtering function to obtain an illumination image and outputs the illumination image to the enhanced image generation unit;
the enhanced image generation unit removes an illumination image from the face region image by adopting a Retinex algorithm to obtain an enhanced image and outputs the enhanced image to the channel merging unit;
the channel merging unit converts the RGB face region image into YCbCr and LAB color space, and then performs inter-channel merging with the enhanced image to obtain 5-channel image and outputs the 5-channel image to the LBP feature extraction unit;
the LBP feature extraction unit is used for carrying out LBP feature extraction on the input enhanced image and outputting LBP features to the classification unit;
and the classification unit calculates and obtains a second classification result confidence coefficient according to the input LBP characteristics.
6. The two-stage face anti-counterfeiting detection method according to claim 1, wherein the classification unit performs classification by using a Support Vector Machine (SVM).
7. The two-stage face anti-counterfeiting detection method according to claim 5 or 6, wherein the second-stage face anti-counterfeiting detector is trained to optimize parameters to obtain a parameter-determined second-stage face anti-counterfeiting detector, and the training process comprises:
and performing parameter optimization on a classifier in a classification unit of the second-level face anti-counterfeiting detector by taking the face region image and the classification label as training samples to obtain the second-level face anti-counterfeiting detector with determined parameters.
8. The two-stage face anti-counterfeiting detection method according to claim 1, wherein when the confidence of the first classification result in the first-stage face anti-counterfeiting detection result is less than a set threshold, the first-stage face anti-counterfeiting detection result is considered to be not satisfied with the requirements.
9. The two-stage face anti-counterfeiting detection method according to claim 1, wherein a final classification result confidence is obtained by averaging a first classification result confidence in the first-stage face anti-counterfeiting detection result and a second classification result confidence in the second-stage face anti-counterfeiting detection result, so as to obtain a final face anti-counterfeiting detection result.

Priority Applications (1)

Application Number: CN201911337725.9A; Priority Date: 2019-12-23; Filing Date: 2019-12-23; Title: Two-stage face anti-counterfeiting detection method


Publications (1)

Publication Number: CN111191549A; Publication Date: 2020-05-22

Family

ID=70707531


Country Status (1)

Country: CN; Publication: CN111191549A



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190012868A1 (en) * 2016-03-14 2019-01-10 Toppan Printing Co., Ltd. Identification devices, identification methods, identification programs and computer readable media including identification programs
CN106650699A (en) * 2016-12-30 2017-05-10 中国科学院深圳先进技术研究院 CNN-based face detection method and device
CN110348319A (en) * 2019-06-18 2019-10-18 武汉大学 A kind of face method for anti-counterfeit merged based on face depth information and edge image
CN110414350A (en) * 2019-06-26 2019-11-05 浙江大学 The face false-proof detection method of two-way convolutional neural networks based on attention model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HAONAN CHEN ET AL.: "A Cascade Face Spoofing Detector Based on Face Anti-Spoofing R-CNN and Improved Retinex LBP" *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112818782A (en) * 2021-01-22 2021-05-18 电子科技大学 Generalized silence living body detection method based on medium sensing
CN112818782B (en) * 2021-01-22 2021-09-21 电子科技大学 Generalized silence living body detection method based on medium sensing
CN112749686A (en) * 2021-01-29 2021-05-04 腾讯科技(深圳)有限公司 Image detection method, image detection device, computer equipment and storage medium


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication
Application publication date: 2020-05-22