CN114360034A - Method, system and equipment for detecting deeply forged human face based on triplet network - Google Patents
- Publication number
- CN114360034A CN114360034A CN202210269883.0A CN202210269883A CN114360034A CN 114360034 A CN114360034 A CN 114360034A CN 202210269883 A CN202210269883 A CN 202210269883A CN 114360034 A CN114360034 A CN 114360034A
- Authority
- CN
- China
- Prior art keywords
- network
- net
- face
- image
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
The invention discloses a method, a system and a device for detecting deep forged faces based on a triplet network. The face image I to be detected is preprocessed to 299 × 299 × 3 and fed into the backbone feature extraction network of the triplet network to obtain the depth feature Net(I) of the face image; Net(I) is a 2048-dimensional feature vector. A classification network then classifies Net(I): for the input 2048-dimensional feature vector it outputs a 2-dimensional feature, whose values are converted by Softmax into the relative probability that the picture is genuine or forged; a picture whose forgery probability exceeds a preset value is judged a forged face picture. The triplet network extracts more effective features for discriminating real from forged faces and therefore detects deep forged faces more accurately.
Description
Technical Field
The invention belongs to the technical field of artificial intelligence security, relates to a method, a system and a device for detecting deep forged faces, and in particular to a method, a system and a device for detecting deep forged faces based on a triplet network.
Background
Deep forgery is a technology for creating or synthesizing forged content, such as images and videos, based on intelligent methods such as deep learning. In recent years, with the development of deep learning techniques, deep forgery has progressed at an unprecedented rate. Current deep forgery technology can generate face-swapped images, imitate the actions and expressions of a real person speaking, and create people who do not exist in reality; the results are difficult to distinguish from genuine material and subvert the traditional notion that seeing is believing.
Once abused, deep forgery technology brings great harm to individuals, society and nations.
The best way to combat deep forgery is deep forgery detection, which aims to determine whether an image or video has been forged by deep forgery technology. Current mainstream detection methods include methods based on conventional image features and methods based on deep learning. With the development of deep learning, more and more novel deep forgery detection techniques have been applied. Researchers extract depth features from images by constructing different convolutional neural network structures and use those features to judge whether a face is deep forged. To improve the expressive power of the features, new network architectures are continually proposed, including the mainstream Xception network, residual networks and DenseNet. Researchers have also introduced frequency-domain information into convolutional neural networks to further improve feature expressiveness.
However, although these convolutional neural network structures extract the main features of an image well, such single-sample-input networks easily focus on feature expressions unrelated to the authenticity of the picture, such as background and skin-color features, and have difficulty capturing the intrinsic feature expressions related to authenticity. In particular, when several pictures are similar in appearance but differ in authenticity, these networks tend to extract similar image features, which degrades detection accuracy.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a method, a system and an electronic device for detecting deep forged faces based on a triplet network. Three coupled samples, namely the original picture, the target picture and the forged picture, are fed into the network for learning, so that the network can capture feature expressions of pictures that are similar in appearance but differ in authenticity.
The method adopts the following technical scheme: a triplet-network-based deep forged face detection method comprising the following steps:
Step 1: the face image I to be detected is preprocessed to a preset size and fed into the backbone feature extraction network of the triplet network to obtain the depth feature Net(I) of the face image; Net(I) is a 2048-dimensional feature vector;
Step 2: a classification network classifies Net(I); for the input 2048-dimensional feature vector the classification network outputs a 2-dimensional feature, whose values are converted by Softmax into the relative probability that the picture is genuine or forged; a picture whose forgery probability exceeds a preset value is judged a forged face picture;
The backbone feature extraction network of the triplet network adopts the framework of the Xception network and comprises an entry flow, a middle flow and an exit flow. The entry flow contains 2 3 × 3 convolutions activated by the ReLU activation function, followed by 3 convolution blocks; the middle flow contains 8 convolution modules; the exit flow comprises a convolution block and two 3 × 3 depthwise separable convolutions activated by the ReLU function, followed by an average pooling operation. The three backbone feature extraction networks share one set of weights;
The classification network adopts a BP neural network with three layers: the first layer is the input layer with 2048 nodes, the middle layer has 1024 nodes, and the output layer has 2 nodes; a ReLU activation function is applied between layers.
The technical scheme adopted by the system of the invention is as follows: a triplet-network-based deep forged face detection system comprising the following modules:
Module 1, configured to preprocess the face image I to be detected to a preset size and feed it into the backbone feature extraction network of the triplet network to obtain the depth feature Net(I) of the face image; Net(I) is a 2048-dimensional feature vector;
Module 2, configured to classify Net(I) with a classification network; for the input 2048-dimensional feature vector the classification network outputs a 2-dimensional feature, whose values are converted by Softmax into the relative probability that the picture is genuine or forged; a picture whose forgery probability exceeds a preset value is judged a forged face picture;
The backbone feature extraction network of the triplet network adopts the framework of the Xception network and comprises an entry flow, a middle flow and an exit flow. The entry flow contains 2 3 × 3 convolutions activated by the ReLU activation function, followed by 3 convolution blocks; the middle flow contains 8 convolution modules; the exit flow comprises a convolution block and two 3 × 3 depthwise separable convolutions activated by the ReLU function, followed by an average pooling operation. The three backbone feature extraction networks share one set of weights;
The classification network adopts a BP neural network with three layers: the first layer is the input layer with 2048 nodes, the middle layer has 1024 nodes, and the output layer has 2 nodes; a ReLU activation function is applied between layers.
The technical scheme adopted by the device of the invention is as follows: a triplet-network-based deep forged face detection device comprising:
one or more processors;
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the triplet-network-based deep forged face detection method.
The invention has the advantages and positive effects that:
(1) The invention uses 3 coupled samples, the original picture, the target picture and the forged picture, to train the network. Compared with a single-sample-input network, the triplet enables the network to capture feature expressions of pictures that are similar in appearance but differ in authenticity.
(2) The invention realizes the identification of deep forged faces and addresses the security problems caused by forged faces in practical application scenarios.
Drawings
FIG. 1 is a method schematic of an embodiment of the invention;
FIG. 2 is a schematic diagram of a three-cell network constructed according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a classification network constructed in accordance with an embodiment of the present invention;
fig. 4 is a graph of experimental results for a network constructed according to an embodiment of the present invention.
Detailed Description
In order to facilitate the understanding and implementation of the present invention for those of ordinary skill in the art, the present invention is further described in detail with reference to the accompanying drawings and examples, it is to be understood that the embodiments described herein are merely illustrative and explanatory of the present invention and are not restrictive thereof.
Existing convolutional neural network structures can only extract generic depth features of an image, and those features do not attend to feature expressions related to the authenticity of the picture. By feeding the 3 coupled samples of the original picture, the target picture and the forged picture into the triplet network for learning, the triplet network can capture feature expressions of pictures that are similar in appearance but differ in authenticity.
Referring to fig. 1, the method for detecting deep forged faces based on a triplet network provided by the invention comprises the following steps:
Step 1: the face image I to be detected is preprocessed to 299 × 299 × 3 and fed into the backbone feature extraction network of the triplet network to obtain the depth feature Net(I) of the face image; Net(I) is a 2048-dimensional feature vector;
Step 2: a classification network classifies Net(I); for the input 2048-dimensional feature vector the classification network outputs a 2-dimensional feature, whose values are converted by Softmax into the relative probability that the picture is genuine or forged; a picture whose forgery probability exceeds a preset value is judged a forged face picture.
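The Softmax conversion in step 2 can be sketched in plain Python. The concrete logits, the threshold of 0.5 and the class ordering (index 0 taken as the "forged" class) are illustrative assumptions; the patent only states that a probability above a preset value marks a forged face:

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def is_forged(logits, threshold=0.5):
    # Assumption: index 0 of the 2-dimensional output is the "forged" class.
    probs = softmax(logits)
    return probs[0] > threshold

# Example 2-dimensional feature output by the classification network.
probs = softmax([2.0, 0.5])
```

The two Softmax outputs always sum to 1, so a single threshold on the "forged" probability suffices for the binary decision.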
Referring to fig. 2, the backbone feature extraction network of this embodiment adopts the framework of the Xception network, comprising an entry flow, a middle flow and an exit flow. The entry flow converts a 299 × 299 × 3 picture into a 19 × 19 × 728 feature map; it contains 2 3 × 3 convolutions activated by the ReLU activation function, followed by 3 convolution blocks. The middle flow contains 8 convolution modules. The exit flow converts the 19 × 19 × 728 feature map into a 2048-dimensional feature vector; it comprises a convolution block and two 3 × 3 depthwise separable convolutions activated by the ReLU function, followed by an average pooling operation. The three backbone feature extraction networks share one set of weights, and each finally converts a 299 × 299 × 3 image into a 2048-dimensional feature vector.
Referring to fig. 3, the classification network of this embodiment adopts a BP neural network with three layers: the first layer is the input layer with 2048 nodes, the middle layer has 1024 nodes, and the output layer has 2 nodes; a ReLU activation function is applied between layers.
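The forward pass of this 2048-1024-2 BP network can be sketched in plain Python. The weights and biases below are arbitrary deterministic values chosen only to exercise the shapes (in the patent they are learned by SGD training), and the bias terms are a common assumption not stated in the text:

```python
import math

def relu(v):
    return [x if x > 0.0 else 0.0 for x in v]

def dense(x, w, b):
    # w has shape (len(b), len(x)): one row of weights per output node.
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

def classify(x, w1, b1, w2, b2):
    # 2048 -> 1024 (ReLU) -> 2, then Softmax into relative probabilities.
    h = relu(dense(x, w1, b1))
    logits = dense(h, w2, b2)
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Arbitrary deterministic weights just to exercise the 2048 -> 1024 -> 2 shapes.
n_in, n_hid, n_out = 2048, 1024, 2
w1 = [[0.001 if (i + j) % 7 == 0 else 0.0 for j in range(n_in)]
      for i in range(n_hid)]
b1 = [0.0] * n_hid
w2 = [[0.01] * n_hid, [-0.01] * n_hid]
b2 = [0.0] * n_out
probs = classify([1.0] * n_in, w1, b1, w2, b2)
```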
The backbone feature extraction network of this embodiment is a trained backbone feature extraction network; the training process comprises the following substeps:
step 1.1: acquiring a plurality of triplets of original image-target image-forged image, and recording as (original,target,fake);
In this embodiment, the forged face video, the original face video and the target face video are first downsampled: frame images starting from a specified frame and separated by a fixed frame interval are selected, keeping the frames downsampled from the forged face video, its original face video and the target face video in one-to-one correspondence. In this embodiment, 10 frames per video, one frame per second starting from second 0, are selected as the original data set;
Then a RetinaFace face detection algorithm is used to detect the face region in each acquired frame image, and the face image is cropped. Five facial feature points are extracted: left eye, right eye, nose, left mouth corner and right mouth corner. The face is aligned through these facial feature points so that the aligned face is centered in the image; the picture is resized with OpenCV to 299 pixels high, 299 pixels wide and 3 channels. The processed face images are organized into triplets of original face, target face and forged face, recorded as (original, target, fake).
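The downsampling in this preprocessing step (one frame per second, 10 frames starting from second 0) reduces to index arithmetic over the video's frame rate. The helper name and the fps value below are illustrative assumptions, not taken from the patent:

```python
def sample_frame_indices(fps, seconds=10, start_frame=0):
    # One frame per second for `seconds` seconds, starting at `start_frame`,
    # i.e. a fixed interval of `fps` frames between selected frames.
    return [start_frame + s * fps for s in range(seconds)]

# For an assumed 25 fps video: frames 0, 25, 50, ... cover the first 10 seconds.
indices = sample_frame_indices(fps=25)
```

Because the same indices are used for the forged, original and target videos, the sampled frames stay in one-to-one correspondence as the embodiment requires.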
In this embodiment, the original face, target face and forged face triplets obtained in step 1.1 serve as input and supervision samples with which the backbone feature extraction network is continuously trained.
Step 1.2: for each triplet (original, target, fake), feed the images in turn into the backbone feature extraction network Net( ) of the triplet network to obtain their depth features, recorded as (Net(original), Net(target), Net(fake)); the backbone feature extraction networks share their weights;
Step 1.3: calculate the feature distance Dis(Net(original), Net(target)) between the depth features of the original and target images, and the feature distance Dis(Net(target), Net(fake)) between the depth features of the target and forged images, where Dis(a, b) denotes the feature distance between two feature vectors a and b (typically the Euclidean distance in triplet training).
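The distance formula itself is not reproduced in this text; a common choice for triplet training, assumed here, is the Euclidean (L2) distance between the two feature vectors:

```python
import math

def dis(a, b):
    # Euclidean distance between two equal-length feature vectors.
    # This is an assumption: the filing only names Dis(a, b) as a
    # feature distance without reproducing the formula here.
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

d = dis([1.0, 2.0, 2.0], [0.0, 0.0, 0.0])  # sqrt(1 + 4 + 4) = 3.0
```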
step 1.4: calculating the network loss function of the triplet according to the characteristic distance calculated in the step 1.3lossIn order to drive the network model to focus on not appearance-liked features (even if appearance is not like, depth features are close) but true-false attribute expressions (even if appearance is like, depth features are far away), it is desirable that two real picture features are distant from each otherDis(Net(original),Net(target) ) feature distances between true and false images are as small as possibleDis(Net(target),Net(fake) ) is as large as possible. The loss function calculation is therefore as follows:
loss=max (Dis(Net(original),Net(target))-Dis(Net(target),Net(fake))+margin,0);
where margin is a hyperparameter setting the interval between the two feature distances;
In this embodiment margin = 0.2 is set, meaning that no loss is incurred once the feature distance between the original and target pictures is at least 0.2 smaller than the feature distance between the target and forged pictures.
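The triplet margin loss of step 1.4, with the embodiment's margin = 0.2, can be sketched directly from the formula above; the Euclidean distance used for Dis is an assumption, as the distance formula is not reproduced in this text:

```python
import math

def dis(a, b):
    # Assumed Euclidean feature distance between two feature vectors.
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def triplet_loss(f_original, f_target, f_fake, margin=0.2):
    # loss = max(Dis(Net(original), Net(target))
    #            - Dis(Net(target), Net(fake)) + margin, 0)
    return max(dis(f_original, f_target) - dis(f_target, f_fake) + margin, 0.0)

# The real pair is already more than `margin` closer than the real/fake pair,
# so this triplet contributes no loss.
zero = triplet_loss([0.0, 0.0], [0.1, 0.0], [2.0, 0.0])
```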
Step 1.5: after the loss is calculated, back-propagate and optimize the backbone feature extraction network with an Adam optimizer;
Step 1.6: repeat steps 1.1 to 1.5, training the backbone feature extraction network until convergence, to obtain the trained backbone feature extraction network.
The classification network of the embodiment is a trained classification network; the training process comprises the following substeps:
step 2.1: counterfeit human face image to be detectedIPre-processed to 299 x 3 sizeInputting the depth features of the face image into a trunk feature extraction network in the trained three-cell networkNet(I); Net(I)Is a characteristic vector of 2048 dimensions;
step 2.2: using a classification network pairNet(I)Classifying, namely outputting a 2-dimensional feature by a classification network for the input 2048-dimensional feature vector, and converting the numerical value of the 2-dimensional feature into relative probability through Softmax processing to express the relative probability of authenticity of the picture;
step 2.3: calculating a cross entropy loss function between the predicted result and the actual resultloss,lossThe calculation method of (2) is as follows:
whereinp i Representative sampleiThe probability of being true is the probability that,y i representative sampleiIf the sample is a label ofiTo forge a picture, theny i =0, otherwise y i =1;
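Given the definitions of p_i and y_i, the loss in step 2.3 is consistent with the standard binary cross entropy; the following sketch assumes the unaveraged sum form, since the exact formula is not reproduced in this text:

```python
import math

def cross_entropy(probs_true, labels):
    # probs_true[i]: predicted probability that sample i is genuine.
    # labels[i]: 1 if sample i is genuine, 0 if it is a forged picture.
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for p, y in zip(probs_true, labels))

# One genuine sample predicted 0.9 genuine, one fake predicted 0.2 genuine.
loss = cross_entropy([0.9, 0.2], [1, 0])
```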
Step 2.4: after the cross entropy loss is calculated, back-propagate with an SGD optimizer using the gradient descent method to optimize the classification network;
Step 2.5: repeat steps 2.1 to 2.4 until the classification network converges, to obtain the trained classification network.
After the above steps are completed, an experiment is conducted on the network. Forged and real images are mixed and sent into the trained triplet network for feature extraction. The extracted feature vectors are mapped onto a two-dimensional rectangular coordinate plane; the result is shown in fig. 4. As can be seen from fig. 4, the features of forged faces are clearly separated from those of real faces. The method can therefore detect the authenticity of a face image, and the detection result has high reliability.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (6)
1. A deep forged face detection method based on a triplet network, characterized by comprising the following steps:
step 1: preprocessing the face image I to be detected to a preset size and feeding it into the backbone feature extraction network of the triplet network to obtain the depth feature Net(I) of the face image; Net(I) is a 2048-dimensional feature vector;
step 2: classifying Net(I) with a classification network: for the input 2048-dimensional feature vector the classification network outputs a 2-dimensional feature, whose values are converted by Softmax into the relative probability that the picture is genuine or forged; a picture whose forgery probability exceeds a preset value is a forged face picture;
wherein the backbone feature extraction network of the triplet network adopts the framework of the Xception network, comprising an entry flow, a middle flow and an exit flow; the entry flow contains 2 3 × 3 convolutions activated by the ReLU activation function, followed by 3 convolution blocks; the middle flow contains 8 convolution modules; the exit flow comprises a convolution block and two 3 × 3 depthwise separable convolutions activated by the ReLU function, followed by an average pooling operation; the three backbone feature extraction networks share one set of weights;
and the classification network adopts a BP neural network with three layers: the first layer is the input layer with 2048 nodes, the middle layer has 1024 nodes, and the output layer has 2 nodes; a ReLU activation function is applied between layers.
2. The deep forged face detection method based on a triplet network according to claim 1, characterized in that the backbone feature extraction network of the triplet network in step 1 is a trained backbone feature extraction network, the training process comprising the following substeps:
step 1.1: acquiring a plurality of original image, target image and forged image triplets, recorded as (original, target, fake);
step 1.2: for each triplet (original, target, fake), feeding the images in turn into the backbone feature extraction network Net( ) of the triplet network to obtain their depth features, recorded as (Net(original), Net(target), Net(fake)); the backbone feature extraction networks share their weights;
step 1.3: calculating the feature distance Dis(Net(original), Net(target)) between the depth features of the original and target images and the feature distance Dis(Net(target), Net(fake)) between the depth features of the target and forged images, where Dis(a, b) denotes the feature distance between two feature vectors a and b;
step 1.4: calculating the triplet network loss function loss from the feature distances obtained in step 1.3, the loss function being calculated as follows:
loss = max(Dis(Net(original), Net(target)) - Dis(Net(target), Net(fake)) + margin, 0);
where margin is a hyperparameter setting the interval between the two feature distances;
step 1.5: after the loss is calculated, back-propagating and optimizing the backbone feature extraction network with an Adam optimizer;
step 1.6: repeating steps 1.1 to 1.5, training the backbone feature extraction network until convergence, to obtain the trained backbone feature extraction network.
3. The deep forged face detection method based on a triplet network according to claim 2, characterized in that step 1.1 comprises the following steps:
step 1.1.1: selecting, from the forged face video, the target face video and the original face video, frame images starting from a specified frame and separated by a fixed frame interval, keeping the original face video frames, target face video frames and forged face video frames in one-to-one correspondence, to generate groups of original face, target face and forged face images;
step 1.1.2: preprocessing the images of step 1.1.1 by identifying and cropping the face region of each image with a face detection technique; aligning the face through facial feature points so that the aligned face is centered in the image; and organizing the obtained images into original image, target image and forged image triplets, recorded as (original, target, fake).
4. The deep forged face detection method based on a triplet network according to claim 1, characterized in that the classification network in step 2 is a trained classification network, the training process comprising the following substeps:
step 2.1: preprocessing the face image I to be detected to 299 × 299 × 3 and feeding it into the backbone feature extraction network of the trained triplet network to obtain the depth feature Net(I) of the face image; Net(I) is a 2048-dimensional feature vector;
step 2.2: classifying Net(I) with a classification network: for the input 2048-dimensional feature vector the classification network outputs a 2-dimensional feature, whose values are converted by Softmax into the relative probability that the picture is genuine or forged;
step 2.3: calculating the cross entropy loss function loss between the predicted result and the actual result as loss = -Σ_i [ y_i·log(p_i) + (1 - y_i)·log(1 - p_i) ], where p_i is the probability that sample i is genuine, and y_i is the label of sample i: y_i = 0 if sample i is a forged picture, otherwise y_i = 1;
step 2.4: after the cross entropy loss is calculated, back-propagating with an SGD optimizer using the gradient descent method to optimize the classification network;
step 2.5: repeating steps 2.1 to 2.4 until the classification network converges, to obtain the trained classification network.
5. A deep forged face detection system based on a triplet network, characterized by comprising the following modules:
module 1, configured to preprocess the face image I to be detected to a preset size and feed it into the backbone feature extraction network of the triplet network to obtain the depth feature Net(I) of the face image; Net(I) is a 2048-dimensional feature vector;
module 2, configured to classify Net(I) with a classification network: for the input 2048-dimensional feature vector the classification network outputs a 2-dimensional feature, whose values are converted by Softmax into the relative probability that the picture is genuine or forged; a picture whose forgery probability exceeds a preset value is a forged face picture;
wherein the backbone feature extraction network of the triplet network adopts the framework of the Xception network, comprising an entry flow, a middle flow and an exit flow; the entry flow contains 2 3 × 3 convolutions activated by the ReLU activation function, followed by 3 convolution blocks; the middle flow contains 8 convolution modules; the exit flow comprises a convolution block and two 3 × 3 depthwise separable convolutions activated by the ReLU function, followed by an average pooling operation; the three backbone feature extraction networks share one set of weights;
and the classification network adopts a BP neural network with three layers: the first layer is the input layer with 2048 nodes, the middle layer has 1024 nodes, and the output layer has 2 nodes; a ReLU activation function is applied between layers.
6. A deep forged face detection device based on a triplet network, characterized by comprising:
one or more processors;
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the triplet-network-based deep forged face detection method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210269883.0A CN114360034A (en) | 2022-03-18 | 2022-03-18 | Method, system and equipment for detecting deeply forged human face based on triplet network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114360034A true CN114360034A (en) | 2022-04-15 |
Family
ID=81094968
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210269883.0A Pending CN114360034A (en) | 2022-03-18 | 2022-03-18 | Method, system and equipment for detecting deeply forged human face based on triplet network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114360034A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110533097A (en) * | 2019-08-27 | 2019-12-03 | 腾讯科技(深圳)有限公司 | Image sharpness recognition method and apparatus, electronic device, and storage medium |
CN111291863A (en) * | 2020-01-20 | 2020-06-16 | 腾讯科技(深圳)有限公司 | Training method of face changing identification model, face changing identification method, device and equipment |
CN112215043A (en) * | 2019-07-12 | 2021-01-12 | 普天信息技术有限公司 | Face liveness detection method |
CN112686331A (en) * | 2021-01-11 | 2021-04-20 | 中国科学技术大学 | Forged image recognition model training method and forged image recognition method |
WO2021158205A1 (en) * | 2020-02-03 | 2021-08-12 | Google Llc | Verification of the authenticity of images using a decoding neural network |
CN114120148A (en) * | 2022-01-25 | 2022-03-01 | 武汉易米景科技有限公司 | Method for detecting change areas of buildings in remote sensing images |
Non-Patent Citations (1)
Title |
---|
张安琪 (ZHANG Anqi): "Image recognition model based on Siamese convolutional neural network and triplet loss function", 《电子制作》 (Practical Electronics) * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114841340A (en) * | 2022-04-22 | 2022-08-02 | 马上消费金融股份有限公司 | Deep forgery algorithm identification method and device, electronic equipment and storage medium |
CN114841340B (en) * | 2022-04-22 | 2023-07-28 | 马上消费金融股份有限公司 | Identification method and device for depth counterfeiting algorithm, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108537743B (en) | Face image enhancement method based on generative adversarial network | |
CN109543606B (en) | Human face recognition method with attention mechanism | |
CN110555434B (en) | Method for detecting visual saliency of three-dimensional image through local contrast and global guidance | |
CN110348376A (en) | Real-time pedestrian detection method based on neural network | |
CN109359541A (en) | Sketch face recognition method based on deep transfer learning | |
CN108830252A (en) | Convolutional neural network human action recognition method fusing global spatio-temporal features | |
CN109815867A (en) | Crowd density estimation and pedestrian flow statistics method | |
CN112818862A (en) | Face tampering detection method and system based on multi-source clues and mixed attention | |
CN111242181B (en) | RGB-D saliency object detector based on image semantics and detail | |
CN112183240A (en) | Double-current convolution behavior identification method based on 3D time stream and parallel space stream | |
CN113486700A (en) | Facial expression analysis method based on attention mechanism in teaching scene | |
CN114067444A (en) | Face spoofing detection method and system based on meta-pseudo label and illumination invariant feature | |
CN113505719B (en) | Gait recognition model compression system and method based on local-integral combined knowledge distillation algorithm | |
CN113298018A (en) | False face video detection method and device based on optical flow field and facial muscle movement | |
CN117095128A (en) | Priori-free multi-view human body clothes editing method | |
CN116543269B (en) | Cross-domain small sample fine granularity image recognition method based on self-supervision and model thereof | |
CN115482595B (en) | Specific character visual sense counterfeiting detection and identification method based on semantic segmentation | |
CN114842524A (en) | Face false distinguishing method based on irregular significant pixel cluster | |
CN114333002A (en) | Micro-expression recognition method based on deep learning of image and three-dimensional reconstruction of human face | |
CN114360034A (en) | Method, system and equipment for detecting deeply forged human face based on triplet network | |
CN117095471B (en) | Face counterfeiting tracing method based on multi-scale characteristics | |
CN114429646A (en) | Gait recognition method based on deep self-attention transformer network | |
CN114155572A (en) | Facial expression recognition method and system | |
CN113221683A (en) | Expression recognition method based on CNN model in teaching scene | |
CN117351578A (en) | Non-interactive face liveness detection and face verification method and system | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||