CN115147705B - Face copying detection method and device, electronic equipment and storage medium - Google Patents

Face copying detection method and device, electronic equipment and storage medium

Info

Publication number
CN115147705B
CN115147705B (granted publication of application CN202211081105.5A)
Authority
CN
China
Prior art keywords
face
frequency domain
model
classification probability
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211081105.5A
Other languages
Chinese (zh)
Other versions
CN115147705A (en)
Inventor
梁俊杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Bank Co Ltd
Original Assignee
Ping An Bank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Bank Co Ltd
Priority to CN202211081105.5A
Publication of CN115147705A
Application granted
Publication of CN115147705B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/95: Pattern authentication; Markers therefor; Forgery detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification

Abstract

The embodiment of the application provides a face copying detection method and device, electronic equipment and a storage medium, and belongs to the technical field of image processing. The method comprises the following steps: respectively carrying out frequency domain conversion on the N RGB qualified face images; respectively inputting the N face frequency domain maps into a pre-constructed frequency domain model, and carrying out weighted averaging on the N frequency domain feature classification results to obtain a first real-person classification probability; respectively inputting the N D-depth face images into a pre-constructed depth map model, and carrying out weighted averaging on the N depth feature classification results to obtain a second real-person classification probability; fusing the first real-person classification probability and the second real-person classification probability to obtain a final classification probability; and determining a copied-face classification result according to the final classification probability and a preset copying threshold value. In this way, the copied-face classification result can be determined and the accuracy of face copying detection improved; building on accurate face copying detection, a financial institution can achieve precise anti-fraud identification and safeguard the security of financial transactions.

Description

Face copying detection method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for detecting a face reproduction, an electronic device, and a storage medium.
Background
With the continuous development of face recognition technology, face recognition is widely applied in various scenes. At the same time, however, it is common for a fake face to be used to pass online face recognition, which causes various losses. Especially in the financial field, in scenes such as banking, insurance and securities, once fraudulent face copying occurs, it brings immeasurable loss to the user. Face anti-spoofing and liveness detection are the most important parts of face recognition applications and play an important role in protecting a face recognition system from malicious attacks. At present, a large number of lawbreakers attack face recognition systems through means such as photos and masks, in particular by recapturing an electronic screen or a photo; a large number of liveness detection technologies already exist on the market, such as motion-based liveness detection, silent liveness detection and light-flash liveness detection.
Disclosure of Invention
In order to solve the technical problem, embodiments of the present application provide a method and an apparatus for detecting face duplication, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present application provides a face duplication detection method, where the method includes:
acquiring N RGB qualified face images and N corresponding D depth face images;
respectively carrying out frequency domain conversion on the N RGB qualified face images to obtain N face frequency domain images;
respectively inputting the N human face frequency domain graphs into a pre-constructed frequency domain model, acquiring N frequency domain feature classification results through the frequency domain model, and carrying out weighted average on the N frequency domain feature classification results to obtain a first real person classification probability;
respectively inputting the N D-depth face images into a pre-constructed depth map model, outputting N depth feature classification results through the depth map model, and carrying out weighted averaging on the N depth feature classification results to obtain a second real person classification probability;
fusing the first real person classification probability and the second real person classification probability to obtain a final classification probability;
and determining a copied human face classification result according to the final classification probability and a preset copying threshold value.
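The six steps above can be sketched end to end in Python. The two model stand-ins, the 0.4/0.6 weights and the 0.5 threshold are all illustrative assumptions; the patent specifies only that the two branch probabilities are weighted-averaged per branch, fused by a weighted sum with the first weight smaller, and then thresholded:

```python
import numpy as np

# Hypothetical stand-ins for the pre-constructed frequency domain model and
# depth map model; each maps one input map to a real-person probability.
def frequency_model(freq_map):
    return float(np.clip(freq_map.mean(), 0.0, 1.0))

def depth_model(depth_map):
    return float(np.clip(depth_map.mean(), 0.0, 1.0))

def detect_recapture(rgb_faces, depth_faces, w1=0.4, w2=0.6, threshold=0.5):
    """Sketch of the claimed pipeline: frequency branch, depth branch,
    weighted fusion, thresholding."""
    # Step 2: frequency domain conversion of each RGB qualified face map.
    freq_maps = [np.abs(np.fft.fft2(img.mean(axis=-1) if img.ndim == 3 else img))
                 for img in rgb_faces]
    # Step 3: first real-person classification probability (equal frame weights).
    cls1 = float(np.mean([frequency_model(m / (m.max() + 1e-8)) for m in freq_maps]))
    # Step 4: second real-person classification probability from the depth maps.
    cls2 = float(np.mean([depth_model(d) for d in depth_faces]))
    # Step 5: fusion; the claims require the first weight to be the smaller one.
    final = w1 * cls1 + w2 * cls2
    # Step 6: per the claims, a final probability greater than the preset
    # copying threshold yields the copied-face classification result.
    return ("recaptured" if final > threshold else "real"), final
```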
In one embodiment, the obtaining of each RGB qualified face image and its corresponding D depth face image includes:
carrying out face detection through MediaPipe, when a face image is detected, obtaining the face area ratio of the face image area to the whole image, and when the face area ratio is in a preset qualified area ratio interval, determining the whole image as the RGB qualified face image;
and taking the D depth face image synchronously shot with the RGB qualified face image as the D depth face image corresponding to the RGB qualified face image.
In an embodiment, the method further comprises:
constructing a model to be trained based on the SE-DenseNet model;
constructing the frequency domain model, including:
inputting the frequency domain graph training sample into the model to be trained for calculation to obtain a first training classification result;
adjusting the model to be trained according to the first training classification result to obtain the frequency domain model;
constructing the depth map model, including:
inputting the depth map training sample into the model to be trained for calculation to obtain a second training classification result;
and adjusting the model to be trained according to the second training classification result to obtain the depth map model.
In an embodiment, the SE-DenseNet model includes a plurality of SE-Dense modules, and the building of the model to be trained based on the SE-DenseNet model includes:
adding a corresponding SE attention mechanism module to the convolution layer of each DenseNet module to construct each SE-Dense module;
and connecting the SE-Dense modules through a convolutional layer and a pooling layer to construct the model to be trained.
In one embodiment, the SE-Dense module comprises a plurality of BN-ReLU-Conv layers and the SE attention mechanism module, and constructing the SE-Dense module includes:
sequentially stacking the BN-ReLU-Conv layers to obtain a stacked submodule;
and adding a corresponding SE-block attention module to the Conv layer of the last N-1 BN-ReLU-Conv layers of the stacked submodule to obtain the SE-Dense module.
In an embodiment, the obtaining N frequency-domain feature classification results through the frequency-domain model includes:
respectively calculating real person classification probability and reproduction classification probability corresponding to the RGB qualified face images through the frequency domain model;
and taking the real person classification probability corresponding to each RGB qualified face image as each frequency domain feature classification result.
In an embodiment, the determining a classification result of the copied face according to the final classification probability and a preset copying threshold includes:
and determining a copied face classification result under the condition that the final classification probability is greater than the preset copying threshold value.
In an embodiment, the method further comprises:
and determining the real human face classification result under the condition that the final classification probability is less than or equal to the preset reproduction threshold value.
In an embodiment, the fusing the first human classification probability and the second human classification probability to obtain a final classification probability includes:
respectively setting a first weight and a second weight, wherein the first weight is smaller than the second weight;
calculating a first product value of the first weight and the first real person classification probability, calculating a second product value of the second weight and the second real person classification probability, and determining a sum of the first product value and the second product value as the final classification probability.
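The fusion just described is a simple convex combination of the two branch probabilities. A minimal sketch follows; the concrete 0.4/0.6 values are assumptions, since the claim only fixes that the first (frequency) weight is smaller than the second (depth) weight:

```python
def fuse(cls1, cls2, w1=0.4, w2=0.6):
    """Fuse the first and second real-person classification probabilities.

    The claim requires the first weight to be smaller than the second;
    the concrete weight values here are illustrative assumptions."""
    assert w1 < w2, "claim requires the first weight to be smaller than the second"
    return w1 * cls1 + w2 * cls2
```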
In a second aspect, an embodiment of the present application provides a face duplication detection apparatus, where the apparatus includes:
the acquisition module is used for acquiring N RGB qualified face images and N corresponding D depth face images;
the conversion module is used for respectively carrying out frequency domain conversion on the N RGB qualified face images to obtain N face frequency domain images;
the first processing module is used for respectively inputting the N human face frequency domain graphs into a pre-constructed frequency domain model, acquiring N frequency domain feature classification results through the frequency domain model, and carrying out weighted average on the N frequency domain feature classification results to obtain a first real person classification probability;
the second processing module is used for respectively inputting the N D-depth face images into a pre-constructed depth map model, outputting N depth feature classification results through the depth map model, and carrying out weighted averaging on the N depth feature classification results to obtain a second real person classification probability;
the fusion module is used for fusing the first real person classification probability and the second real person classification probability to obtain a final classification probability;
and the determining module is used for determining the classification result of the copied face according to the final classification probability and a preset copying threshold value.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory and a processor, where the memory is used to store a computer program, and the computer program executes the face duplication detection method provided in the first aspect when the processor runs.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, which stores a computer program, where the computer program, when executed on a processor, executes the method for detecting a face duplication provided in the first aspect.
The face copying detection method, the face copying detection device, the electronic equipment and the storage medium obtain N RGB qualified face images and N corresponding D-depth face images; respectively carry out frequency domain conversion on the N RGB qualified face images to obtain N face frequency domain maps; respectively input the N face frequency domain maps into a pre-constructed frequency domain model, acquire N frequency domain feature classification results through the frequency domain model, and carry out weighted averaging on the N frequency domain feature classification results to obtain a first real person classification probability; respectively input the N D-depth face images into a pre-constructed depth map model, output N depth feature classification results through the depth map model, and carry out weighted averaging on the N depth feature classification results to obtain a second real person classification probability; fuse the first real person classification probability and the second real person classification probability to obtain a final classification probability; and determine a copied-face classification result according to the final classification probability and a preset copying threshold value. In this way, the first and second real-person classification probabilities are obtained from the RGB qualified face maps and the D-depth face maps through the frequency domain model and the depth map model respectively, and the two probabilities are fused to determine the copied-face classification result, improving the accuracy of face copying detection; building on accurate face copying detection, a financial institution can achieve precise anti-fraud identification and safeguard the security of financial transactions.
Drawings
In order to more clearly explain the technical solutions of the present application, the drawings needed to be used in the embodiments are briefly introduced below, and it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope of protection of the present application. Like components are numbered similarly in the various figures.
Fig. 1 shows a flow diagram of a face copying detection method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating a frequency domain feature of a human face image provided in an embodiment of the present application;
fig. 3 is a schematic diagram illustrating a frequency domain characteristic of a copied image of an electronic screen according to an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating a frequency domain feature of a photo reproduction image according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating depth map features of a human face image provided by an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating a depth map feature of a copied image according to an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating a network structure of a model to be trained according to an embodiment of the present application;
fig. 8 is a schematic diagram illustrating a network structure of an SE-Dense module according to an embodiment of the present disclosure;
FIG. 9 shows a schematic structural diagram of a Se-block provided by an embodiment of the present application;
fig. 10 is a schematic structural diagram of a face duplication detection apparatus according to an embodiment of the present application;
fig. 11 shows a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Icon: 1000-human face reproduction detection device, 1001-acquisition module, 1002-conversion module, 1003-first processing module, 1004-second processing module, 1005-fusion module and 1006-determination module;
1100-electronic device, 1101-transceiver, 1102-processor, 1103-memory.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments.
The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
Hereinafter, the terms "including", "having", and their derivatives, which may be used in various embodiments of the present application, are intended to indicate only specific features, numbers, steps, operations, elements, components, or combinations of the foregoing, and should not be construed as first excluding the existence of, or adding to, one or more other features, numbers, steps, operations, elements, components, or combinations of the foregoing.
Furthermore, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the various embodiments of this application belong. The terms (such as those defined in commonly used dictionaries) should be interpreted as having a meaning that is consistent with their contextual meaning in the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein in various embodiments.
Example 1
The embodiment of the disclosure provides a face copying detection method, which can be applied to electronic equipment, wherein the electronic equipment can be terminal equipment used by a financial institution for anti-fraud detection.
Referring to fig. 1, the face duplication detection method includes:
step S101, obtaining N RGB qualified face images and N corresponding D depth face images.
In the embodiment, a refined design is performed in the process of face acquisition, so that the qualified RGB face images are ensured to be acquired, the corresponding D-depth face images are synchronously acquired, and in addition, the experience of customers is also considered, and the customer operation is simplified. Exemplarily, in various transaction scenes of banks, insurance, securities and the like in the financial field, the face acquisition process can be carried out at each financial transaction processing stage of face login, face verification and the like. Namely, N RGB qualified face images and N corresponding D depth face images can be simultaneously acquired at various face acquisition stages of financial transactions such as face login, face verification and the like.
In one embodiment, the obtaining of each RGB qualified face image and its corresponding D depth face image includes:
carrying out face detection through MediaPipe, when a face image is detected, obtaining the face area ratio of the face image area to the whole image, and when the face area ratio is in a preset qualified area ratio interval, determining the whole image as the RGB qualified face image;
and taking the D depth face image synchronously shot with the RGB qualified face image as the D depth face image corresponding to the RGB qualified face image.
In this embodiment, the D-depth face map may be collected within the original liveness-detection procedure without affecting that procedure, thereby improving user experience. The acquisition device may be a camera device that includes LiDAR, such as a mobile phone equipped with LiDAR.
In this embodiment, MediaPipe is a framework for constructing machine learning pipelines that process time-series data such as video and audio. Exemplarily, the high-precision, high-speed MediaPipe is used to detect the face; if no face, or more than one face, is detected, prompt information is sent to ensure that only one face appears in front of the lens. After a face is detected, the face area ratio is computed, that is, the ratio of the area of the face detection frame to the area of the whole image. When the face area ratio lies in a preset qualified area-ratio interval it is determined to be qualified; otherwise the client is prompted to move closer to or further from the lens. The preset qualified area-ratio interval may be 0.4-0.65, or another set interval, and is not limited herein. This ensures the size and definition of the collected face image, reduces interference from areas other than the face as much as possible, and allows the D-depth face map to be collected well at the same time. It should be noted that, for the N RGB qualified face maps and the N corresponding D-depth face maps, the value of N may be set as desired, for example N may be 3 or 5, and is not limited herein.
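The qualification rule above reduces to an interval check on the face-area ratio. A sketch follows, with the [0.40, 0.65] interval taken from the example in the text; the bounding-box and image areas are assumed to come from a detector such as MediaPipe:

```python
def check_face_area_ratio(face_box_area, image_area, lo=0.40, hi=0.65):
    """Return (qualified, prompt) for one detected face.

    face_box_area / image_area would come from the face detection frame of
    a detector such as MediaPipe; the interval bounds follow the example
    range 0.4-0.65 given in the description."""
    ratio = face_box_area / image_area
    if ratio < lo:
        return False, "please move closer to the camera"
    if ratio > hi:
        return False, "please move away from the camera"
    return True, "RGB qualified face image"
```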
And S102, respectively carrying out frequency domain conversion on the N RGB qualified face images to obtain N face frequency domain images.
It should be noted that the RGB qualified face image is converted into the frequency domain in order to acquire more useful information features and better distinguish a copied face image from a real face image, where a real face image may be understood as an image obtained by directly shooting the face of a real living person. A copied face image may be a photo reproduction image or an electronic screen reproduction image: the photo reproduction image is obtained by recapturing a printed face photo, and the electronic screen reproduction image is obtained by recapturing a face image displayed on an electronic screen.
During face image acquisition, when the RGB qualified face image is collected, it is ensured that the face occupies a relatively large proportion of the whole image and that the image contains a single face. To avoid cropping out only the face and thereby discarding features outside the face region (for example, the frame of a recaptured electronic screen or the edges of a photo), in this embodiment the whole image is used directly as the input to the frequency domain model, without cropping the face region.
In this embodiment, the RGB qualified face image is converted into a frequency domain map using a fast Fourier transform algorithm. The frequency domain feature maps corresponding to the electronic screen reproduction image and the photo reproduction image differ markedly from that of a real face image. Exemplarily, referring to fig. 2, fig. 3 and fig. 4: fig. 2 is a schematic diagram of the frequency domain features of a real face image, fig. 3 of an electronic screen reproduction image, and fig. 4 of a photo reproduction image. Comparing them, the frequency domain information of the real face image diverges from the centre, while the frequency domain information of the electronic screen reproduction image and the photo reproduction image extends along the horizontal and vertical directions. This difference in frequency domain information can therefore be used to help distinguish whether an image is a copied face image.
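One plausible realisation of this FFT step is a log-magnitude spectrum with the DC component shifted to the centre, so that the centre-divergent versus axis-aligned patterns described above become visible; the grayscale conversion and log compression are implementation assumptions, not specified by the patent:

```python
import numpy as np

def to_frequency_map(rgb_face):
    """Convert an (H, W, 3) RGB face image to a log-magnitude frequency map."""
    gray = rgb_face.mean(axis=2)                   # collapse the colour channels
    spectrum = np.fft.fftshift(np.fft.fft2(gray))  # move the DC bin to the centre
    return np.log1p(np.abs(spectrum))              # compress the dynamic range
```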
Step S103, inputting the N human face frequency domain graphs into a pre-constructed frequency domain model respectively, obtaining N frequency domain feature classification results through the frequency domain model, and carrying out weighted average on the N frequency domain feature classification results to obtain a first real person classification probability.
In this embodiment, the image time domain is converted into the frequency domain to obtain more effective and obvious frequency domain information for distinguishing real persons from reproduction, and the frequency domain image is calculated based on the frequency domain model by converting into the face frequency domain image, so that the accuracy of face reproduction detection can be improved better, the accuracy of face reproduction detection under the scenes of banks, insurance, securities and the like can be improved, and a financial institution can realize accurate anti-fraud identification and ensure the safety of financial transactions.
In an embodiment, the method further comprises:
constructing a model to be trained based on the SE-DenseNet model;
constructing the frequency domain model, including:
inputting the frequency domain graph training sample into the model to be trained for calculation to obtain a first training classification result;
and adjusting the model to be trained according to the first training classification result to obtain the frequency domain model.
In an embodiment, the obtaining N frequency-domain feature classification results through the frequency-domain model includes:
respectively calculating real person classification probability and reproduction classification probability corresponding to the RGB qualified face images through the frequency domain model;
and taking the real person classification probability corresponding to each RGB qualified face image as each frequency domain feature classification result.
Exemplarily, 3 face frequency domain maps are input into the frequency domain model, and for each map the model outputs 3 probability values, corresponding respectively to the real-person face classification, the photo reproduction classification and the electronic screen reproduction classification. The first value is the real-person face probability, so the 3 face frequency domain maps yield 3 real-person face probability values, which are then weighted-averaged to obtain the final frequency-domain result as the first real-person classification probability cls1.
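In code, this per-frame collapse and averaging might look as follows; equal frame weights are assumed, matching the plain averaging in the example:

```python
def first_real_person_probability(per_frame_outputs):
    """Collapse per-frame model outputs into cls1.

    per_frame_outputs: one [p_real, p_photo, p_screen] triple per frame;
    per the description, the first entry is the real-person probability."""
    reals = [out[0] for out in per_frame_outputs]
    return sum(reals) / len(reals)   # equal-weight average -> cls1
```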
And S104, respectively inputting the N D-depth face images into a pre-constructed depth map model, outputting N depth feature classification results through the depth map model, and carrying out weighted averaging on the N depth feature classification results to obtain a second real person classification probability.
In this embodiment, if N is 3, then 3 D-depth face maps are obtained simultaneously at the positions corresponding to the 3 RGB qualified face maps, where the D-depth face maps contain depth information. A depth map (Depth Map) is an image or image channel containing information on the distance from the viewpoint to the surfaces of scene objects. A depth map is similar to a grayscale image, except that each pixel value is the actual distance from the sensor to the object. Usually the RGB image and the depth image are registered, so there is a one-to-one correspondence between their pixels; the pixels of the RGB face map and the D-depth face map therefore correspond meaningfully.
Exemplarily, 3 face depth maps are input into the depth map model to obtain 3 depth feature classification results, and the 3 results are then weighted-averaged to obtain the final depth-map result as the second real-person classification probability cls2.
It should be added that, since most existing face copying detection systems process only RGB two-dimensional information, attackers can use optimization methods targeted at RGB two-dimensional information to reduce their accuracy. Therefore, the dual defense obtained by fusing face frequency domain information with D-depth map information improves the defensive capacity of the copying detection system and the accuracy of the face copying detection algorithm, including in scenes such as banking, insurance and securities, so that a financial institution can accurately recognize fraudulent behavior and safeguard financial transactions. In addition, financial transaction security can improve both customer satisfaction and user experience.
It should further be added that the depth information map makes it very easy to distinguish a real face image from a copied face image. Because a real person's facial features are concave and convex, depth information is captured during D-depth imaging, whereas the depth map of a copied photo or a copied electronic screen is at a single uniform depth with no depth variation. Therefore, extracting depth map information also effectively helps to distinguish a real face image from a copied face image.
Exemplarily, referring to fig. 5 and fig. 6, fig. 5 is a schematic view of the depth map features of a real person's face image, whose depth information shows the concavity and convexity of the facial features, while fig. 6 is a schematic view of the depth map features of a copied image, whose depth information is uniform with no depth variation.
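The physical difference just described can be illustrated with a toy depth-range measure. This heuristic is not the patented depth map model — it only shows why a flat recapture is separable from a real face (the sample depth values are invented for illustration):

```python
def depth_spread(depth_map):
    """Range (max - min) of depth values over a 2D depth map, e.g. in mm."""
    flat = [v for row in depth_map for v in row]
    return max(flat) - min(flat)

real_face = [[540, 548], [561, 572]]   # nose closer than cheeks: varied depth
recaptured = [[600, 600], [601, 600]]  # flat photo/screen: near-constant depth

print(depth_spread(real_face))   # → 32
print(depth_spread(recaptured))  # → 1
```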
In an embodiment, the method further comprises:
constructing a model to be trained based on an SE-DenseNet model;
constructing the depth map model, including:
inputting the depth map training sample into the model to be trained for calculation to obtain a second training classification result;
and adjusting the model to be trained according to the second training classification result to obtain the depth map model.
In this embodiment, an attention model is used to improve the accuracy of the model. The attention model used in this scheme is the SE attention mechanism-Dense Convolutional Network (SE-DenseNet) model, which is obtained by combining the SE attention mechanism module (SE-Block) with a Dense Convolutional Network (DenseNet-121). The SE-DenseNet model adds an SE attention mechanism module (SE-Block) to the convolution layer between each DenseNet block, thereby improving the accuracy of the model.
It is further explained that this embodiment adds the attention model to extract detail features, so that the fine features of the face image are better captured, which greatly improves the accuracy of face copying detection in banking, insurance, securities and similar scenarios; a financial institution can thus accurately recognize fraudulent behavior and ensure the safety of financial transactions. In addition, financial transaction security may improve customer satisfaction.
In an embodiment, the SE-DenseNet model includes a plurality of SE-Dense modules, and the building of the model to be trained based on the SE-DenseNet model includes:
adding a corresponding Se attention mechanism module to the convolution layer of each DenseNet module to construct each SE-Dense module;
and connecting the SE-Dense modules through a convolutional layer and a pooling layer to construct the model to be trained.
Referring to fig. 7, fig. 7 is a schematic diagram of the network structure of the model to be trained according to the present embodiment. Exemplarily, the SE-DenseNet model may be composed of 3 SE-Dense modules (blocks): SE attention-Dense convolutional network module 1 (SE-Dense Block 1), SE attention-Dense convolutional network module 2 (SE-Dense Block 2), and SE attention-Dense convolutional network module 3 (SE-Dense Block 3). In addition, a convolution layer, a pooling layer, and a convolution layer are provided between the input layer (input) and SE-Dense Block 1, and 1 pooling layer and 1 linear layer are provided between SE-Dense Block 3 and the output layer (output).
In one embodiment, the SE-Dense module comprises a plurality of BN-Relu-Conv layers and the SE attention mechanism module, and the constructing of the SE-Dense module includes:
sequentially superposing the plurality of BN-Relu-Conv layers to obtain a superposition submodule;
and adding a corresponding Se-block attention module to the Conv layer of the last N-1 BN-Relu-Conv layers of the superposition submodule to obtain the SE-Dense module.
Exemplarily, there may be 5 normalization-activation function-convolution (BN-Relu-Conv) layers. Referring to fig. 8, fig. 8 is a schematic diagram of the network structure of the SE-Dense module, which may also be referred to as an SE-Dense Block. The SE-Dense Block is composed of 5 BN-Relu-Conv layers, each BN-Relu-Conv layer being superposed with the following BN-Relu-Conv layers, and an SE attention mechanism module (SE-Block) is added to the Conv layer of the last 4 BN-Relu-Conv layers. A BN-Relu-Conv layer includes a normalization (BatchNorm) layer, a rectified linear unit (Relu) activation function, and a Convolution layer. The input data x0 of the input layer (Input) is processed by the 1st BN-Relu-Conv layer to obtain data H1; H1 is processed by the 1st SE attention mechanism module (SE-Block) to obtain data x1; x1 is processed by the 2nd BN-Relu-Conv layer to obtain data H2; H2 is processed by the 2nd SE-Block to obtain data x2; x2 is processed by the 3rd BN-Relu-Conv layer to obtain data H3; H3 is processed by the 3rd SE-Block to obtain data x3; x3 is processed by the 4th BN-Relu-Conv layer to obtain data H4; H4 is processed by the 4th SE-Block to obtain data x4; and x4 is output through the Transition Layer.
In this embodiment, the SE-Block performs a weighting operation on the channels: each channel is multiplied by a weight (between 0 and 1) that reflects its importance. A series of transformations is applied to the output of the convolution (Conv) layer to obtain a 1 × 1 × C weight matrix (with values ranging from 0 to 1, C being the number of channels), which is then multiplied element-wise with the original Conv output. In this way the SE-Block lets the model focus more on the important channels.
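A minimal, untrained sketch of this squeeze-excite-scale flow on a toy 3-channel 2×2 input (pure Python; the identity matrices stand in for learned FC weights and are placeholders, not parameters from the patent):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_block(feature_maps, w1, w2):
    """Squeeze-excite-scale over a list of C channels (each a 2D list)."""
    c = len(feature_maps)
    # Squeeze: global average pooling gives one scalar per channel (1 x 1 x C).
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in feature_maps]
    # Excite: FC -> Relu -> FC -> Sigmoid yields a 0..1 weight per channel.
    hidden = [max(0.0, sum(w1[i][j] * z[j] for j in range(c))) for i in range(c)]
    weights = [sigmoid(sum(w2[i][j] * hidden[j] for j in range(c))) for i in range(c)]
    # Scale: multiply every pixel of channel i by that channel's weight.
    scaled = [[[v * weights[i] for v in row] for row in ch]
              for i, ch in enumerate(feature_maps)]
    return scaled, weights

x = [[[1.0, 1.0], [1.0, 1.0]],   # channel 0
     [[2.0, 2.0], [2.0, 2.0]],   # channel 1 (most "active")
     [[0.0, 0.0], [0.0, 0.0]]]   # channel 2
eye = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
scaled, channel_weights = se_block(x, eye, eye)
print([round(w, 3) for w in channel_weights])  # → [0.731, 0.881, 0.5]
```

With these placeholder weights the strongest channel receives the largest weight, which is the behavior the SE-Block learns during training.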
Referring to fig. 9, fig. 9 is a schematic structural diagram of the SE attention mechanism module (SE-Block) provided in this embodiment. The output of the convolution (Conv) layer is H × W × C, the Global Pooling layer outputs 1 × 1 × C, the first Fully Connected (FC) layer outputs 1 × 1 × C/r (r being the reduction ratio), the rectified linear unit (Relu) activation function layer outputs 1 × 1 × C/r, the second FC layer outputs 1 × 1 × C, the Sigmoid function layer outputs 1 × 1 × C, and the Scale layer outputs H × W × C. For the SE-Block shown in fig. 9, assuming the input data is X, the output obtained after the SE-Block is X multiplied channel-wise by the 1 × 1 × C weight vector produced by the Sigmoid layer.
And step S105, fusing the first real person classification probability and the second real person classification probability to obtain a final classification probability.
In an embodiment, the fusing of the first real person classification probability and the second real person classification probability to obtain a final classification probability includes:
respectively setting a first weight and a second weight, wherein the first weight is smaller than the second weight;
and calculating a first product value of the first weight and the first real person classification probability, calculating a second product value of the second weight and the second real person classification probability, and determining the sum of the first product value and the second product value as the final classification probability.
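The product-and-sum steps above reduce to a one-line weighted fusion; a minimal sketch using the 0.4/0.6 example weights (the function name `fuse` is illustrative):

```python
def fuse(cls1, cls2, w1=0.4, w2=0.6):
    """Weighted fusion of the frequency-domain (cls1) and depth-map (cls2)
    real-person probabilities; the depth-map branch gets the larger weight."""
    assert w1 < w2 and abs((w1 + w2) - 1.0) < 1e-9
    return w1 * cls1 + w2 * cls2

print(round(fuse(0.8, 0.9), 2))  # → 0.86
```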
Illustratively, the final classification probability may be calculated according to the following formula:

cls = 0.4 × cls1 + 0.6 × cls2

wherein cls represents the final classification probability, cls1 represents the first real person classification probability, and cls2 represents the second real person classification probability; 0.4 is the first weight and 0.6 is the second weight. It should be noted that the first weight and the second weight may take other values provided that the first weight is smaller than the second weight, and the invention is not limited herein. Since the accuracy of the depth map model is higher than that of the frequency domain model, the weight of the real person classification probability calculated by the depth map model can be set to be greater than that of the probability calculated by the frequency domain model.
And step S106, determining a copied face classification result according to the final classification probability and a preset copying threshold value.
In this embodiment, the preset copying threshold may be 0.6, or may take other values, which is not limited herein. Exemplarily, the final classification probability can be compared with the preset copying threshold, and the copied face classification result can be determined according to their relative magnitude.
In this way, the first real person classification probability and the second real person classification probability are fused to obtain the final classification probability, the final classification probability is compared with the preset copying threshold, and the copied face classification result is determined according to their relative magnitude. This improves the accuracy of face copying detection; based on accurate face copying detection, a financial institution can accurately recognize fraudulent behavior and ensure financial transaction safety.
In an embodiment, the determining a classification result of the copied face according to the final classification probability and a preset copying threshold includes:
and determining the copied face classification result under the condition that the final classification probability is greater than the preset copying threshold value.
In an embodiment, the method further comprises:
and determining the real face classification result under the condition that the final classification probability is less than or equal to the preset copying threshold value.
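The decision rule of the two embodiments above can be sketched as a single comparison (threshold 0.6 per the earlier example; the labels are illustrative):

```python
def classify(final_probability, threshold=0.6):
    """Above the copying threshold the image is judged a copied (recaptured)
    face; at or below it, a real face."""
    return "copied face" if final_probability > threshold else "real face"

print(classify(0.86))  # → copied face
print(classify(0.45))  # → real face
```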
In this embodiment, whether the image is a real face image or a copied face image can be determined according to the following formula:

result = copied face, if cls > 0.6; real face, if cls ≤ 0.6

wherein cls is the final classification probability and 0.6 is the preset copying threshold.
the face copying detection method provided by the embodiment obtains N RGB qualified face images and N D depth face images corresponding to the N RGB qualified face images; respectively carrying out frequency domain conversion on the N pieces of RGB qualified face images to obtain N pieces of face frequency domain images; respectively inputting the N human face frequency domain graphs into a pre-constructed frequency domain model, acquiring N frequency domain feature classification results through the frequency domain model, and carrying out weighted average on the N frequency domain feature classification results to obtain a first real person classification probability; respectively inputting the N D-depth face images into a pre-constructed depth image model, outputting N depth feature classification results through the depth image, and carrying out weighted average on the N depth feature classification results to obtain a second real person classification probability; fusing the first real person classification probability and the second real person classification probability to obtain a final classification probability; and determining a copied face classification result according to the final classification probability and a preset copying threshold value. Like this, through frequency domain model and degree of depth map model respectively to RGB qualified face picture, D degree of depth face picture acquire first real person classification probability and second real person classification probability to fuse first real person classification probability and second real person classification probability and confirm reproduction face classification result, improve the rate of accuracy that detects the face reproduction, based on accurate face reproduction detection, financial institution can realize accurate anti-fraud and discern, ensures financial transaction safety.
Example 2
In addition, the embodiment of the disclosure provides a human face copying detection device, which is applied to electronic equipment.
Specifically, as shown in fig. 10, the face duplication detection apparatus 1000 includes:
an obtaining module 1001, configured to obtain N RGB qualified face images and N D depth face images corresponding to the N RGB qualified face images;
a conversion module 1002, configured to perform frequency domain conversion on the N RGB qualified face maps respectively to obtain N face frequency domain maps;
the first processing module 1003 is configured to input the N human face frequency domain graphs into a pre-constructed frequency domain model respectively, obtain N frequency domain feature classification results through the frequency domain model, and perform weighted average on the N frequency domain feature classification results to obtain a first real person classification probability;
the second processing module 1004 is configured to input the N D-depth face maps into a pre-constructed depth map model, output N depth feature classification results through the depth map, and perform weighted average on the N depth feature classification results to obtain a second real person classification probability;
a fusion module 1005, configured to fuse the first real person classification probability and the second real person classification probability to obtain a final classification probability;
a determining module 1006, configured to determine a copied face classification result according to the final classification probability and a preset copying threshold.
In an embodiment, the obtaining module 1001 is further configured to perform face detection through MediaPipe, obtain a face area ratio between a face image area and an entire image when one face image is detected, and determine the entire image as the RGB qualified face image when the face area ratio is within a preset qualified area ratio interval;
and taking the D depth face image synchronously shot with the RGB qualified face image as the D depth face image corresponding to the RGB qualified face image.
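The area-ratio qualification test described above can be sketched as follows; the [0.1, 0.8] qualified interval is an assumed placeholder (the patent does not specify the interval), and the helper name `is_qualified` is illustrative:

```python
def is_qualified(face_w, face_h, img_w, img_h, lo=0.1, hi=0.8):
    """True when the detected face box covers an acceptable share of the frame.
    lo/hi are assumed bounds for the preset qualified area ratio interval."""
    ratio = (face_w * face_h) / (img_w * img_h)
    return lo <= ratio <= hi

print(is_qualified(300, 400, 640, 480))  # face fills ~39% of the frame → True
print(is_qualified(40, 40, 640, 480))    # face too small (~0.5%) → False
```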
In one embodiment, the face duplication detection apparatus 1000 further includes:
the first construction module is used for constructing a model to be trained based on the SE-DenseNet model;
the second construction module is used for inputting the frequency domain graph training samples into the model to be trained for calculation to obtain a first training classification result;
adjusting the model to be trained according to the first training classification result to obtain the frequency domain model;
the third construction module is used for inputting the depth map training samples into the model to be trained for calculation to obtain a second training classification result;
and adjusting the model to be trained according to the second training classification result to obtain the depth map model.
In an embodiment, the SE-DenseNet model includes a plurality of SE-Dense modules, and the first construction module is further configured to construct each SE-Dense module by adding the corresponding SE attention mechanism module to the convolution layer of each DenseNet module;
and connecting the SE-Dense modules through a convolutional layer and a pooling layer to construct the model to be trained.
In an embodiment, the SE-Dense module includes a plurality of BN-Relu-Conv layers and the SE attention mechanism module, and the first construction module is further configured to sequentially superpose the plurality of BN-Relu-Conv layers to obtain a superposition submodule;
and add a corresponding SE-Block attention module to the Conv layer of the last N-1 BN-Relu-Conv layers of the superposition submodule to obtain the SE-Dense module.
In an embodiment, the first processing module 1003 is further configured to calculate, through the frequency domain model, a real person classification probability and a reproduction classification probability corresponding to each RGB qualified face image;
and taking the real person classification probability corresponding to each RGB qualified face image as each frequency domain feature classification result.
In an embodiment, the determining module 1006 is further configured to determine a copied face classification result if the final classification probability is greater than the preset copying threshold.
In an embodiment, the determining module 1006 is further configured to determine a real human face classification result if the final classification probability is less than or equal to the preset copying threshold.
In an embodiment, the fusing module 1005 is further configured to set a first weight and a second weight, respectively, where the first weight is smaller than the second weight;
calculating a first product value of the first weight and the first real person classification probability, calculating a second product value of the second weight and the second real person classification probability, and determining the sum of the first product value and the second product value as the final classification probability.
The face duplication detection apparatus 1000 provided in this embodiment can implement the face duplication detection method provided in embodiment 1, and is not described herein again to avoid repetition.
The face copying detection device provided by this embodiment obtains N RGB qualified face images and the N corresponding D-depth face images; performs frequency domain conversion on the N RGB qualified face images respectively to obtain N face frequency domain images; inputs the N face frequency domain images into a pre-constructed frequency domain model respectively, obtains N frequency domain feature classification results through the frequency domain model, and performs weighted averaging on the N frequency domain feature classification results to obtain a first real person classification probability; inputs the N D-depth face images into a pre-constructed depth map model respectively, outputs N depth feature classification results through the depth map model, and performs weighted averaging on the N depth feature classification results to obtain a second real person classification probability; fuses the first real person classification probability and the second real person classification probability to obtain a final classification probability; and determines the copied face classification result according to the final classification probability and a preset copying threshold. In this way, the first real person classification probability and the second real person classification probability are obtained from the RGB qualified face images and the D-depth face images through the frequency domain model and the depth map model respectively and fused to determine the copied face classification result, which improves the accuracy of face copying detection; based on accurate face copying detection, a financial institution can accurately recognize fraudulent behavior and ensure financial transaction safety.
Example 3
Furthermore, an embodiment of the present disclosure provides an electronic device, which includes a memory and a processor, where the memory stores a computer program, and the computer program executes the face duplication detection method provided in embodiment 1 when running on the processor.
Specifically, referring to fig. 11, the electronic device 1100 includes: a transceiver 1101, a bus interface and processor 1102, the processor 1102 being configured to: acquiring N RGB qualified face images and N corresponding D depth face images;
respectively carrying out frequency domain conversion on the N pieces of RGB qualified face images to obtain N pieces of face frequency domain images;
respectively inputting the N human face frequency domain graphs into a pre-constructed frequency domain model, acquiring N frequency domain feature classification results through the frequency domain model, and carrying out weighted average on the N frequency domain feature classification results to obtain a first real person classification probability;
inputting the N D-depth face images into a pre-constructed depth map model respectively, outputting N depth feature classification results through the depth map model, and performing weighted averaging on the N depth feature classification results to obtain a second real person classification probability;
fusing the first real person classification probability and the second real person classification probability to obtain a final classification probability;
and determining a copied face classification result according to the final classification probability and a preset copying threshold value.
In an embodiment of the present invention, the electronic device 1100 further includes: and a memory 1103. In fig. 11, the bus architecture may include any number of interconnected buses and bridges, with one or more processors, represented by the processor 1102, and various circuits, represented by the memory 1103, linked together. The bus architecture may also link together various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. The bus interface provides an interface. The transceiver 1101 may be a plurality of elements including a transmitter and a receiver providing a means for communicating with various other apparatus over a transmission medium. The processor 1102 is responsible for managing the bus architecture and general processing, and the memory 1103 may store data used by the processor 1102 in performing operations.
The electronic device 1100 provided in the embodiment of the present invention may execute the face duplication detection method provided in method embodiment 1, and is not described herein again to avoid repetition.
The electronic device provided by this embodiment obtains N RGB qualified face images and the N corresponding D-depth face images; performs frequency domain conversion on the N RGB qualified face images respectively to obtain N face frequency domain images; inputs the N face frequency domain images into a pre-constructed frequency domain model respectively, obtains N frequency domain feature classification results through the frequency domain model, and performs weighted averaging on the N frequency domain feature classification results to obtain a first real person classification probability; inputs the N D-depth face images into a pre-constructed depth map model respectively, outputs N depth feature classification results through the depth map model, and performs weighted averaging on the N depth feature classification results to obtain a second real person classification probability; fuses the first real person classification probability and the second real person classification probability to obtain a final classification probability; and determines the copied face classification result according to the final classification probability and a preset copying threshold. In this way, the first real person classification probability and the second real person classification probability are obtained from the RGB qualified face images and the D-depth face images through the frequency domain model and the depth map model respectively and fused to determine the copied face classification result, which improves the accuracy of face copying detection; based on accurate face copying detection, a financial institution can accurately recognize fraudulent behavior and ensure financial transaction safety.
Example 4
The present application further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the method for detecting a face duplication provided in embodiment 1 is implemented.
In this embodiment, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The computer-readable storage medium provided in this embodiment may implement the face duplication detection method provided in embodiment 1, and is not described herein again to avoid repetition.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or terminal that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the present embodiments are not limited to those precise embodiments, which are intended to be illustrative rather than restrictive, and that various changes and modifications may be effected therein by one skilled in the art without departing from the scope of the appended claims.

Claims (9)

1. A face duplication detection method is characterized by comprising the following steps:
acquiring N RGB qualified face images and N corresponding D depth face images;
respectively carrying out frequency domain conversion on the N RGB qualified face images to obtain N face frequency domain images;
respectively inputting the N human face frequency domain graphs into a pre-constructed frequency domain model, acquiring N frequency domain feature classification results through the frequency domain model, and carrying out weighted average on the N frequency domain feature classification results to obtain a first real person classification probability;
respectively inputting the N D-depth face images into a pre-constructed depth map model, outputting N depth feature classification results through the depth map model, and performing weighted averaging on the N depth feature classification results to obtain a second real person classification probability;
fusing the first real person classification probability and the second real person classification probability to obtain a final classification probability;
determining a copied face classification result according to the final classification probability and a preset copying threshold value;
constructing a model to be trained based on an SE-DenseNet model;
constructing the frequency domain model, including:
inputting the frequency domain graph training sample into the model to be trained for calculation to obtain a first training classification result;
adjusting the model to be trained according to the first training classification result to obtain the frequency domain model;
constructing the depth map model, including:
inputting the depth map training sample into the model to be trained for calculation to obtain a second training classification result;
adjusting the model to be trained according to the second training classification result to obtain the depth map model;
the obtaining of N frequency domain feature classification results by the frequency domain model includes:
respectively calculating real person classification probability and reproduction classification probability corresponding to the RGB qualified face images through the frequency domain model;
using the real person classification probability corresponding to each RGB qualified face image as each frequency domain feature classification result;
and the fusing of the first real person classification probability and the second real person classification probability to obtain the final classification probability comprises the following steps:
respectively setting a first weight and a second weight, wherein the first weight is smaller than the second weight;
calculating a first product value of the first weight and the first real person classification probability, calculating a second product value of the second weight and the second real person classification probability, and determining the sum of the first product value and the second product value as the final classification probability.
2. The method of claim 1, wherein obtaining each of the RGB-qualified face maps and its corresponding D-depth face map comprises:
carrying out face detection through MediaPipe, when a face image is detected, obtaining the face area ratio of the face image area to the whole image, and when the face area ratio is in a preset qualified area ratio interval, determining the whole image as the RGB qualified face image;
and taking the D depth face image synchronously shot with the RGB qualified face image as the D depth face image corresponding to the RGB qualified face image.
3. The method according to claim 1, wherein the SE-DenseNet model comprises a plurality of SE-Dense modules, and the building of the model to be trained based on the SE-DenseNet model comprises:
adding a corresponding Se attention mechanism module to the convolution layer of each DenseNet module to construct each SE-Dense module;
and connecting the SE-Dense modules through a convolutional layer and a pooling layer to construct the model to be trained.
4. The method of claim 3, wherein the SE-Dense module comprises a plurality of BN-Relu-Conv layers and the SE attention mechanism module, and wherein constructing the SE-Dense module comprises:
sequentially superposing the plurality of BN-Relu-Conv layers to obtain a superposition submodule;
and adding a corresponding Se-block attention module to the Conv layer of the last N-1 BN-Relu-Conv layers of the superposition submodule to obtain the SE-Dense module.
5. The method according to claim 1, wherein the determining of the copied face classification result according to the final classification probability and the preset copying threshold value comprises:
and determining the copied face classification result under the condition that the final classification probability is greater than the preset copying threshold value.
6. The method of claim 5, further comprising:
and determining a real face classification result under the condition that the final classification probability is less than or equal to the preset reproduction threshold value.
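Claims 5 and 6 together define a single threshold decision, which can be sketched in one branch. The 0.5 default is an assumption, since the patent leaves the reproduction threshold preset rather than fixed.

```python
def classify_face(final_probability: float, copy_threshold: float = 0.5) -> str:
    """Apply the preset reproduction threshold of claims 5 and 6.

    copy_threshold defaults to an assumed 0.5; the patent does not fix
    its value.
    """
    # Claim 5: probability above the threshold -> copied (reproduced) face.
    # Claim 6: probability at or below the threshold -> real face.
    return "copied" if final_probability > copy_threshold else "real"
```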
7. A face duplication detection apparatus, the apparatus comprising:
the acquisition module is used for acquiring N RGB qualified face images and N corresponding D depth face images;
the conversion module is used for respectively carrying out frequency domain conversion on the N RGB qualified face images to obtain N face frequency domain maps;
the first processing module is used for respectively inputting the N face frequency domain maps into a pre-constructed frequency domain model, obtaining N frequency domain feature classification results through the frequency domain model, and carrying out weighted average on the N frequency domain feature classification results to obtain a first real person classification probability;
the second processing module is used for respectively inputting the N D depth face images into a pre-constructed depth map model, outputting N depth feature classification results through the depth map model, and carrying out weighted average on the N depth feature classification results to obtain a second real person classification probability;
the fusion module is used for fusing the first real person classification probability and the second real person classification probability to obtain a final classification probability;
the determining module is used for determining a copied face classification result according to the final classification probability and a preset copying threshold value;
the first construction module is used for constructing a model to be trained based on the SE-DenseNet model;
the second construction module is used for inputting the frequency domain graph training samples into the model to be trained for calculation to obtain a first training classification result;
adjusting the model to be trained according to the first training classification result to obtain the frequency domain model;
the third construction module is used for inputting the depth map training samples into the model to be trained for calculation to obtain a second training classification result;
adjusting the model to be trained according to the second training classification result to obtain the depth map model;
the first processing module is further used for respectively calculating real person classification probability and reproduction classification probability corresponding to the RGB qualified face images through the frequency domain model;
using the real person classification probability corresponding to each RGB qualified face image as each frequency domain feature classification result;
the fusion module is further configured to set a first weight and a second weight, respectively, where the first weight is smaller than the second weight;
calculating a first product value of the first weight and the first real person classification probability, calculating a second product value of the second weight and the second real person classification probability, and determining a sum of the first product value and the second product value as the final classification probability.
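The frequency domain conversion performed by the conversion module above (and by the corresponding step of claim 1) is not pinned to a particular transform in the claims; a 2-D discrete Fourier transform is one plausible reading. A naive sketch on a small grayscale patch (function name assumed; the quadruple loop is for clarity, not efficiency):

```python
import cmath

def dft2_magnitude(gray):
    """Naive 2-D DFT magnitude of a grayscale patch.

    gray: HxW nested lists of pixel intensities. Returns the magnitude
    spectrum, one possible "face frequency domain map" of the claims.
    """
    H, W = len(gray), len(gray[0])
    out = []
    for u in range(H):
        row = []
        for v in range(W):
            acc = 0j
            for y in range(H):
                for x in range(W):
                    acc += gray[y][x] * cmath.exp(
                        -2j * cmath.pi * (u * y / H + v * x / W))
            row.append(abs(acc))
        out.append(row)
    return out
```

For a constant patch, all energy lands in the DC term, which is one reason a frequency map exposes the moire and texture artifacts of a reproduced (re-shot) face.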
8. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program which, when executed by the processor, performs the face duplication detection method of any one of claims 1 to 6.
9. A computer-readable storage medium, characterized in that it stores a computer program which, when run on a processor, performs the face duplication detection method of any one of claims 1 to 6.
CN202211081105.5A 2022-09-06 2022-09-06 Face copying detection method and device, electronic equipment and storage medium Active CN115147705B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211081105.5A CN115147705B (en) 2022-09-06 2022-09-06 Face copying detection method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115147705A CN115147705A (en) 2022-10-04
CN115147705B true CN115147705B (en) 2023-02-03

Family

ID=83415814

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211081105.5A Active CN115147705B (en) 2022-09-06 2022-09-06 Face copying detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115147705B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109034102A (en) * 2018-08-14 2018-12-18 腾讯科技(深圳)有限公司 Human face in-vivo detection method, device, equipment and storage medium
CN110490076A (en) * 2019-07-18 2019-11-22 平安科技(深圳)有限公司 Biopsy method, device, computer equipment and storage medium
CN112464690A (en) * 2019-09-06 2021-03-09 广州虎牙科技有限公司 Living body identification method, living body identification device, electronic equipment and readable storage medium
CN112507934A (en) * 2020-12-16 2021-03-16 平安银行股份有限公司 Living body detection method, living body detection device, electronic apparatus, and storage medium
CN113158773A (en) * 2021-03-05 2021-07-23 普联技术有限公司 Training method and training device for living body detection model
CN113869219A (en) * 2021-09-29 2021-12-31 平安银行股份有限公司 Face living body detection method, device, equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590430A (en) * 2017-07-26 2018-01-16 百度在线网络技术(北京)有限公司 Biopsy method, device, equipment and storage medium
CN107818313B (en) * 2017-11-20 2019-05-14 腾讯科技(深圳)有限公司 Vivo identification method, device and storage medium
WO2019152983A2 (en) * 2018-02-05 2019-08-08 Board Of Trustees Of Michigan State University System and apparatus for face anti-spoofing via auxiliary supervision
CN109635539B (en) * 2018-10-30 2022-10-14 荣耀终端有限公司 Face recognition method and electronic equipment
CN112528969B (en) * 2021-02-07 2021-06-08 中国人民解放军国防科技大学 Face image authenticity detection method and system, computer equipment and storage medium
CN113792671A (en) * 2021-09-16 2021-12-14 平安银行股份有限公司 Method and device for detecting face synthetic image, electronic equipment and medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Deep Frequent Spatial Temporal Learning for Face Anti-Spoofing; Ying Huang et al.; Computer Vision and Pattern Recognition; 2020-01-20; pp. 1-8 *
Transformer Fault Diagnosis Based on SE-DenseNet; Guo Ruyan et al.; Advanced Technology of Electrical Engineering and Energy; 2021-01; Vol. 40, No. 01; Section 2, Figs. 1-2, Table 1 *


Similar Documents

Publication Publication Date Title
Tang et al. Median filtering detection of small-size image based on CNN
US10832069B2 (en) Living body detection method, electronic device and computer readable medium
WO2022161286A1 (en) Image detection method, model training method, device, medium, and program product
CN109829506B (en) Image processing method, image processing device, electronic equipment and computer storage medium
TW202026948A (en) Methods and devices for biological testing and storage medium thereof
CN108229375B (en) Method and device for detecting face image
JP7419080B2 (en) computer systems and programs
CN112084952B (en) Video point location tracking method based on self-supervision training
WO2023016137A1 (en) Facial image processing method and apparatus, and device and storage medium
CN114037838A (en) Neural network training method, electronic device and computer program product
CN113642639B (en) Living body detection method, living body detection device, living body detection equipment and storage medium
CN111881740A (en) Face recognition method, face recognition device, electronic equipment and medium
CN111353325A (en) Key point detection model training method and device
CN115147705B (en) Face copying detection method and device, electronic equipment and storage medium
CN116975828A (en) Face fusion attack detection method, device, equipment and storage medium
CN116881967A (en) Privacy protection method, device and equipment
CN115115552B (en) Image correction model training method, image correction device and computer equipment
CN114882576B (en) Face recognition method, electronic device, computer-readable medium, and program product
CN113657293B (en) Living body detection method, living body detection device, electronic equipment, medium and program product
CN112818743B (en) Image recognition method and device, electronic equipment and computer storage medium
CN113284137B (en) Paper fold detection method, device, equipment and storage medium
CN112819486B (en) Method and system for identity certification
CN114463799A (en) Living body detection method and device and computer readable storage medium
CN111275183A (en) Visual task processing method and device and electronic system
CN117351579B (en) Iris living body detection method and device based on multi-source information fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant