CN113705397A - Face detection method for GAN-generated faces based on a dual-stream CNN structure fusing PRNU - Google Patents

Face detection method for GAN-generated faces based on a dual-stream CNN structure fusing PRNU Download PDF

Info

Publication number
CN113705397A
CN113705397A
Authority
CN
China
Prior art keywords
prnu
gan
image
network
stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110940035.3A
Other languages
Chinese (zh)
Inventor
王金伟
曾可慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN202110940035.3A priority Critical patent/CN113705397A/en
Publication of CN113705397A publication Critical patent/CN113705397A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/24: Classification techniques
    • G06F 18/25: Fusion techniques
    • G06F 18/253: Fusion techniques of extracted features
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract



The invention discloses a method for detecting GAN-generated face images based on a dual-stream CNN structure fusing PRNU, comprising the following steps: (1) construct an RGB-stream network and augment the data with random erasing; (2) input the preprocessed data set into a CNN for training to obtain GAN fingerprint features; (3) construct a PRNU stream and extract the PRNU image of the face by image denoising; (4) input the extracted PRNU image into a CNN for training to obtain PRNU features; (5) fully fuse the GAN fingerprint features with the PRNU features and feed them into the subsequent network; (6) perform binary classification with a Softmax loss function to judge whether the face image is real or fake. The invention detects GAN-generated faces by constructing a dual-stream CNN network model. The RGB stream ensures a high detection rate on fake images produced by the same kind of GAN seen in training, while the PRNU stream improves the generalization ability of the model and makes it more robust to operations such as JPEG compression, Gaussian noise and Gaussian blur.


Description

Face detection method for GAN-generated faces based on a dual-stream CNN structure fusing PRNU
Technical Field
The invention relates to the technical field of digital image forensics, in particular to a face detection method for GAN (Generative Adversarial Network)-generated images based on a dual-stream CNN structure fusing PRNU (Photo Response Non-Uniformity).
Background
With the rapid development of machine learning and AI techniques (particularly GANs), seeing is no longer believing. State-of-the-art GAN networks such as PGGAN, StyleGAN and StarGAN allow ordinary users to create high-quality synthetic pictures without any professional knowledge of photo editing. In other words, GAN-generated face images are now so clear and realistic that they can easily fool human observers.
Although AI-synthesized faces bring users many novel experiences, these fake faces also cause fear and even panic among many people, including celebrities, politicians and social-media users. It is therefore very important to develop efficient and accurate detection techniques for synthesized face images, so as to reduce the negative social impact caused by fake faces.
Many researchers have contributed ideas to the detection of AI-generated fake faces; existing approaches fall into traditional image-forensics methods and deep-learning-based methods. Traditional forensics methods identify fake faces mainly by extracting pixel-level or color-level statistical features; however, they are easily affected by common attacks such as noise and compression. Deep-learning-based methods treat fake-face detection as a binary classification problem and distinguish real from fake samples by designing new neural networks or loss functions. However, existing methods are usually trained to detect images generated by a single GAN, and their performance on images from other GANs is unsatisfactory. Therefore, with the rapid emergence of new GAN models, improving the generalization and robustness of detection models is increasingly important for this forensic problem.
Disclosure of Invention
The purpose of the invention is as follows: the invention provides a detection method for GAN-generated faces based on a dual-stream CNN structure fusing PRNU, which has strong generalization and high robustness.
The technical scheme is as follows: a detection method for GAN-generated faces based on a dual-stream CNN structure fusing PRNU comprises the following steps:
(1) constructing an RGB-stream network, and enhancing the data by random erasing;
(2) inputting the preprocessed data set into a CNN for training to obtain GAN fingerprint features;
(3) constructing a PRNU stream, and extracting the PRNU image of the face by image denoising;
(4) inputting the extracted PRNU image into a CNN for training to obtain PRNU features;
(5) fully fusing the GAN fingerprint features with the PRNU features, and inputting them into the subsequent network;
(6) performing binary classification with a Softmax loss function, and judging whether the face image is real or fake.
Preferably, in step (1), the RGB-stream network is constructed as follows: CelebA-HQ is selected as the real-face data set and StyleGAN I as the fake-face data set to train the network. The invention adopts an augmentation method, random erasing, to randomly occlude face images: a rectangular area is randomly selected on the original image and the pixels inside it are replaced with random values. In this process the face images participating in training are occluded to different degrees, which enhances sample diversity and helps the network attend better to differences in image content. The probability of random occlusion is set to a = 0.5, and the area ratio S of the occluding rectangle satisfies 0.02 < S < 0.4.
Preferably, in step (2), the preprocessed data set is input into a three-layer convolutional neural network for training, so that the network fully explores the differences in the content of real and fake images and extracts the GAN fingerprint features. Each of the network's three layer groups contains a convolutional layer, an LReLU activation function and a max-pooling layer. The max-pooling layer reduces the size of the feature map while preserving as much spatial information as possible, enlarges the receptive field of the convolution kernels, extracts higher-level features, reduces the number of network parameters and prevents overfitting. LReLU not only alleviates the vanishing-gradient problem but also lets the model reach convergence quickly, saving training time. The LReLU is expressed as follows:
y_i = x_i,        if x_i ≥ 0
y_i = x_i / a_i,  if x_i < 0

where x_i is the input of the activation on the i-th feature map, y_i is the corresponding output, and a_i > 1 is a fixed coefficient controlling the negative slope.
Preferably, in step (3), the PRNU stream is constructed and the PRNU image is extracted by image denoising, specifically: the face image is first passed through a low-pass filter to remove additive noise, and the low-pass-filtered image is then subtracted from the original image to obtain the residual pattern-noise part, expressed as:
n=I-F(I)
where n is the pattern noise, I is the original image, and F(·) is the low-pass filtering operation.
Preferably, in step (4), the extracted face PRNU image is input into a three-layer convolutional network for training, so that the network focuses on changes in the color-image pixel values themselves to extract the PRNU features; as in the RGB stream, each of the network's three layer groups contains a convolutional layer, an LReLU activation function and a max-pooling layer.
Preferably, in step (5), the GAN fingerprint features and the PRNU features are fully fused and input into the subsequent network for the final classification. Specifically, the extracted PRNU features are fused with the GAN fingerprint features using a concatenate function and used for the final classification. The formula is as follows:
z=concatenate(axis=2)([x.output,y.output])
wherein x is a PRNU feature, y is a GAN fingerprint feature, z is a fused feature, and axis is a splicing dimension.
The final output feature maps are aggregated and then fed into two fully-connected layers of 1024 and 512 units respectively, likewise equipped with the unsaturated activation function LReLU. In addition, the invention enables L2 regularization in the fully-connected layers, with the parameter λ = 0.0005.
Preferably, in step (6), binary classification is performed with a Softmax loss function to judge whether the face image is real or fake and to improve detection precision.
Has the advantages that: compared with the prior art, the invention has the following notable effects: (1) compared with existing deep-learning methods, the dual-stream CNN model of the invention proves its effectiveness at a lower computational cost; (2) the RGB stream explores the image content itself, so the model focuses more on the differences between the GAN fingerprint features of real and forged faces; (3) the preprocessing operation of random erasing expands the samples, prevents overfitting and improves the robustness of the model; (4) the PRNU stream studies differences in how image pixel values change; the extracted face PRNU is taken directly as the input of this stream, so the network can focus on the significant differences between PRNU features, improving the generality and robustness of the proposed method.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 illustrates a random erase embodiment of the present invention;
fig. 3 is a dual-flow CNN network structure of the present invention;
FIG. 4 is a process of face PRNU extraction according to the present invention;
FIG. 5 illustrates the generalization effect of detecting different GANs to generate faces according to the present invention;
fig. 6 illustrates the robustness effect of the present invention.
Detailed Description
The present invention will be described in detail with reference to examples.
The invention constructs a dual-stream CNN network to realize the face-forgery detection task, comprising an RGB stream and a PRNU (Photo Response Non-Uniformity) stream. The RGB stream ensures high detection precision on images generated by the same GAN network seen in training, while the PRNU stream directs the network to focus more on changes in the color-image pixel values themselves. Early in the pipeline the two streams each play their own role; at a later stage they are fused, so that the proposed scheme shows clearly better generalization. More importantly, the fused network is markedly more resistant to common attacks such as JPEG compression, Gaussian noise and Gaussian blur.
As shown in fig. 1, the detection method for GAN-generated faces based on the dual-stream CNN structure fusing PRNU includes the following steps: (1) constructing an RGB-stream network, and enhancing the data by random erasing;
CelebA-HQ is selected as the real-face data set and StyleGAN I as the fake-face data set to train the network. As shown in fig. 2, the invention randomly occludes face images with an augmentation method, random erasing: a rectangular area is randomly selected in the original image and the pixels inside it are replaced with random values. In this process the face images participating in training are occluded to different degrees, which enhances sample diversity and helps the network attend better to differences in image content. The probability of random occlusion is set to a = 0.5, and the area ratio S of the occluding rectangle satisfies 0.02 < S < 0.4.
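The random-erasing step above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the aspect-ratio range of the rectangle is an assumption, since the text only fixes the occlusion probability (a = 0.5) and the area ratio (0.02 < S < 0.4):

```python
import numpy as np

def random_erase(img, p=0.5, s_min=0.02, s_max=0.4, rng=None):
    """With probability p, replace a random rectangle of `img` with random
    pixel values; the rectangle covers a fraction S of the image area,
    with s_min < S < s_max."""
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() > p:
        return img                          # no occlusion this time
    h, w = img.shape[:2]
    area = rng.uniform(s_min, s_max) * h * w
    aspect = rng.uniform(0.3, 1 / 0.3)      # aspect-ratio range is an assumption
    eh = min(h, int(round(np.sqrt(area * aspect))))
    ew = min(w, int(round(np.sqrt(area / aspect))))
    y = rng.integers(0, h - eh + 1)
    x = rng.integers(0, w - ew + 1)
    out = img.copy()
    out[y:y + eh, x:x + ew] = rng.integers(
        0, 256, size=(eh, ew) + img.shape[2:], dtype=img.dtype)
    return out
```

In training this would be applied on the fly to each face image before it enters the RGB stream.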
(2) Inputting the preprocessed data set into a CNN for training to obtain GAN fingerprint features;
Because GAN fingerprint features belong to the low-level texture features of an image, while deeper and more complex networks mainly extract semantic information, which runs contrary to the object of the invention, a shallow network is more beneficial for learning the desired features. The network model of the invention takes the discriminator of a simple GAN network as a reference and forms the final three-layer CNN model by adjusting the hierarchy and changing the number of feature maps and kernel sizes of each layer; each of the three layer groups comprises a convolutional layer, an LReLU activation function and a max-pooling layer, as shown in fig. 3. The model input is a color image of size 224 × 224 × 3. The image is sent through three layer groups, each containing one convolutional layer (kernel size 3 × 3, stride 1 × 1) and one max-pooling layer (kernel size 2 × 2, stride 2 × 2). The first group outputs 32 feature maps: after the first convolution the output becomes 222 × 222 × 32, and after max pooling the spatial size is halved to 111 × 111. Each subsequent convolutional layer outputs twice as many feature maps as its input, i.e. 64 and 128. Placing a max-pooling layer after each convolution reduces the convolutional output, which speeds up training and makes the model less prone to overfitting, greatly improving the training effect.
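As a sanity check on the sizes quoted above, the following sketch traces the feature-map shapes through the three conv/LReLU/max-pool groups, assuming "valid" (no-padding) convolution, which is what the 224 → 222 step implies:

```python
def conv_out(size, kernel=3, stride=1):
    """Output spatial size of a 'valid' convolution."""
    return (size - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Output spatial size of a max-pooling layer."""
    return (size - kernel) // stride + 1

def rgb_stream_shapes(h=224, w=224, first_maps=32, groups=3):
    """Trace (height, width, channels) after each layer group; the channel
    count doubles from group to group (32, 64, 128)."""
    shapes, maps = [], first_maps
    for _ in range(groups):
        h, w = conv_out(h), conv_out(w)   # 3x3 convolution, stride 1
        h, w = pool_out(h), pool_out(w)   # 2x2 max pooling, stride 2
        shapes.append((h, w, maps))
        maps *= 2
    return shapes
```

The first group indeed yields 222 × 222 × 32 after convolution and 111 × 111 × 32 after pooling, matching the text.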
Moreover, by equipping each convolutional layer with the unsaturated activation function LReLU, the invention not only alleviates the vanishing-gradient problem but also lets the model reach convergence faster, saving training time. The LReLU is expressed as:
y_i = x_i,        if x_i ≥ 0
y_i = x_i / a_i,  if x_i < 0

where x_i is the input of the activation on the i-th feature map, y_i is the corresponding output, and a_i > 1 is a fixed coefficient controlling the negative slope.
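Numerically, the LReLU can be sketched as follows (a minimal NumPy version; the default value of the coefficient a is a placeholder, since the patent does not state it):

```python
import numpy as np

def lrelu(x, a=5.5):
    """Leaky ReLU: identity for non-negative inputs, x / a for negative ones."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, x, x / a)
```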
(3) Constructing a PRNU stream, and extracting the PRNU image of the face by image denoising;
there have been many descriptions of image noise models in research, but the basic ideas are roughly the same, with m.chen et al analyzing the noise models most accurately and comprehensively. The pixel value of the image is composed of ideal pixel value, multiplicative noise and various additive noises, and can be approximately expressed by the following formula:
I=f((1+K)·O)+n
where I is the actual pixel value, O is the pixel value captured from the natural scene by the lens, n is the sum of the additive noise generated while the image is processed, f(·) denotes the camera's processing operations, K is the PRNU multiplicative factor, and K · O is the multiplicative noise, i.e. the theoretical expression of the PRNU.
By this analysis the PRNU is multiplicative noise: it is a high-frequency signal, highly dependent on the pixel values, and difficult to acquire directly. The PRNU is therefore extracted by image denoising so as to preserve its integrity as much as possible, as shown in fig. 4. The face image is first passed through a low-pass filter to remove additive noise, and the low-pass-filtered image is then subtracted from the original image to obtain the residual pattern-noise part, calculated as:
n=I-F(I)
where n is the pattern noise, I is the original image, and F(·) is the low-pass filtering operation.
Since a face PRNU that is as complete and clear as possible is needed for the subsequent feature extraction, choosing a suitable filter F(·) is important. When processing complex images, traditional denoising methods such as Gaussian filtering and median filtering easily ignore the correlation between pixels and destroy the texture structure of the image, whereas the wavelet transform has good time-frequency characteristics and better preserves image details such as edges and breakpoints. The invention therefore adopts wavelet filtering to remove the additive noise. First, the 'sym4' wavelet basis is selected for wavelet decomposition, yielding one low-frequency component (LL) and three high-frequency components (HL, HH and LH); the high-frequency coefficients are then set to 0 by threshold quantization; finally, the image is reconstructed from the resulting wavelet coefficients. To obtain the best denoising effect, the wavelet decomposition is performed twice.
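A minimal sketch of the residual extraction n = I − F(I). To keep it dependency-free, a two-level Haar-style decomposition in plain NumPy stands in for the 'sym4' wavelet filtering described above; the structure (drop the high-frequency bands, reconstruct, subtract) is the same:

```python
import numpy as np

def haar_lowpass(img, levels=2):
    """Keep only the low-frequency (LL) band of a Haar-style decomposition,
    zeroing the HL/HH/LH bands, then reconstruct by upsampling.
    Assumes even image dimensions."""
    x = np.asarray(img, dtype=float)
    if levels == 0:
        return x
    ll = (x[0::2] + x[1::2]) / 2          # average row pairs
    ll = (ll[:, 0::2] + ll[:, 1::2]) / 2  # average column pairs -> LL band
    ll = haar_lowpass(ll, levels - 1)     # recurse for the second decomposition
    return np.repeat(np.repeat(ll, 2, axis=0), 2, axis=1)

def extract_prnu(img, levels=2):
    """Residual pattern noise n = I - F(I)."""
    return np.asarray(img, dtype=float) - haar_lowpass(img, levels)
```

A constant image yields a zero residual, while an isolated bright pixel survives in the residual, as a high-frequency signal like the PRNU should.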
(4) Inputting the extracted PRNU image into a CNN for training to obtain PRNU features;
As shown in fig. 2, the extracted PRNU map is fed into the network for subsequent feature extraction. The input image size is still 224 × 224. The structure and parameters of this stream are consistent with the RGB stream and again consist of three groups. Each group consists of one convolutional layer (3 × 3 kernel, 1 × 1 stride) equipped with LReLU, and a max-pooling layer (2 × 2 kernel, 2 × 2 stride). The first convolutional layer outputs 32 feature maps, and each subsequent convolutional layer outputs twice as many feature maps as its input.
(5) Fully fusing the GAN fingerprint features and the PRNU features, and inputting them into the subsequent network.
After the PRNU feature map and the RGB map have passed through their convolutional and pooling layers respectively, the two streams are merged: the extracted PRNU features and GAN fingerprint features are fully fused using a concatenate function and used for the final classification. The operation is expressed as:
z=concatenate(axis=2)([x.output,y.output])
wherein x is a PRNU feature, y is a GAN fingerprint feature, z is a fused feature, and axis is a splicing dimension.
The final output feature maps are aggregated and then fed into two fully-connected layers of 1024 and 512 units respectively, likewise equipped with the unsaturated activation function LReLU. In addition, the invention enables L2 regularization in the fully-connected layers, with the parameter λ = 0.0005.
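The fusion and classification head can be sketched as follows (plain NumPy with random placeholder weights; the dense layers are shrunk from the patent's 1024/512 units to keep the toy small, and the L2 regularization, a training-time penalty, is not modeled):

```python
import numpy as np

def lrelu(h, a=5.5):
    return np.where(h >= 0, h, h / a)

def fuse_and_classify(prnu_feat, gan_feat, units=(64, 32), rng=None):
    """Concatenate the two streams along the channel axis (axis=2), flatten,
    pass through two LReLU-activated dense layers, and return the 2-way
    softmax probabilities."""
    rng = np.random.default_rng(0) if rng is None else rng
    z = np.concatenate([prnu_feat, gan_feat], axis=2)  # channel-wise fusion
    v = z.ravel()
    for n in units:                                    # 1024 and 512 in the patent
        v = lrelu(v @ (rng.standard_normal((v.size, n)) * 0.01))
    logits = v @ (rng.standard_normal((v.size, 2)) * 0.01)
    e = np.exp(logits - logits.max())                  # numerically stable softmax
    return e / e.sum()
```

With trained weights the two outputs would be the probabilities of the tampered (0) and real (1) labels.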
(6) Performing binary classification with a Softmax loss function, and judging whether the face image is real or fake.
The invention judges the authenticity of a face image, so the task is a binary classification problem. The output of the Softmax function corresponds to the probability distribution over the labels for the input image, where the label of a real image is set to 1 and the label of a tampered image is set to 0. Because Softmax is monotonically increasing in its input, the output value is closer to 1 when the input picture is real and closer to 0 when the input picture is tampered, so Softmax completes the binary classification.
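The decision rule above can be sketched as follows (the logit values are hypothetical, not the patent's trained model):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a vector of logits."""
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def is_real(logits, threshold=0.5):
    """Predict label 1 (real) iff the softmax probability of class 1
    exceeds the threshold; otherwise label 0 (tampered)."""
    return softmax(np.asarray(logits, dtype=float))[1] > threshold
```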
In summary, the detection method for GAN-generated faces of the invention fully exploits the differences between genuine and counterfeit faces at the level of image content and pixels, using the GAN fingerprint features and the PRNU features as the important bases for detection. The constructed dual-stream CNN network not only ensures high detection precision on images generated by the same GAN, but also retains good generalization to images generated by other GANs, as shown in fig. 5. Comparison experiments were performed with images generated by six GANs, three used as training data and the other three as test data; compared with four other methods, the proposed method achieved the best results in most cases. More importantly, the method is more robust against common attacks such as resampling, JPEG compression, Gaussian noise and Gaussian blur, as shown in fig. 6: compared with other methods it maintains good detection performance under various attacks and shows the best stability as the attack strength increases.

Claims (6)

1. A detection method for GAN-generated faces based on a dual-stream CNN structure fusing PRNU, characterized by comprising the following steps:
(1) constructing an RGB-stream network and enhancing the data with random erasing;
(2) inputting the preprocessed data set into a CNN for training to obtain GAN fingerprint features;
(3) constructing a PRNU stream and extracting the PRNU image of the face by image denoising;
(4) inputting the extracted PRNU map into a CNN for training to obtain PRNU features;
(5) fully fusing the GAN fingerprint features with the PRNU features and inputting them into the subsequent network;
(6) performing binary classification with a Softmax loss function to judge whether the face image is real or fake.

2. The method according to claim 1, characterized in that in step (1) the RGB-stream network is constructed as follows: CelebA-HQ is selected as the real-face data set and StyleGAN I as the fake-face data set to train the network; the augmentation method of random erasing is used to randomly occlude the face images.

3. The method according to claim 1, characterized in that in step (2) the preprocessed data set is input into a three-layer convolutional neural network for training to extract the GAN fingerprint features; each of the network's three layer groups contains a convolutional layer, an LReLU activation function and a max-pooling layer; the LReLU is expressed as:

y_i = x_i,        if x_i ≥ 0
y_i = x_i / a_i,  if x_i < 0

where x_i is the input on the i-th feature map, y_i is the corresponding output, and a_i > 1 is a fixed coefficient.

4. The method according to claim 1, characterized in that in step (3) the PRNU stream is constructed and the PRNU image of the face is extracted by image denoising as follows: the face image is passed through a low-pass filter to remove additive noise, and the low-pass-filtered image is then subtracted from the original image to obtain the residual pattern-noise part, expressed as:

n = I - F(I)

where n is the pattern noise, I is the original image, and F(·) is the low-pass filtering operation.

5. The method according to claim 1, characterized in that in step (4) the extracted PRNU map is input into a three-layer convolutional network for training.

6. The method according to claim 1, characterized in that in step (5) the GAN fingerprint features and the PRNU features are fully fused and input into the subsequent network as follows: the extracted PRNU features are fused with the GAN fingerprint features using a concatenate function and used for the final classification, with the formula:

z = concatenate(axis=2)([x.output, y.output])

where x is the PRNU feature, y is the GAN fingerprint feature, z is the fused feature, and axis is the concatenation dimension.
CN202110940035.3A 2021-08-16 2021-08-16 Face detection method for GAN-generated faces based on a dual-stream CNN structure fusing PRNU Pending CN113705397A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110940035.3A CN113705397A (en) 2021-08-16 2021-08-16 Face detection method for GAN-generated faces based on a dual-stream CNN structure fusing PRNU

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110940035.3A CN113705397A (en) 2021-08-16 2021-08-16 Face detection method for GAN-generated faces based on a dual-stream CNN structure fusing PRNU

Publications (1)

Publication Number Publication Date
CN113705397A true CN113705397A (en) 2021-11-26

Family

ID=78652869

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110940035.3A Face detection method for GAN-generated faces based on a dual-stream CNN structure fusing PRNU Pending CN113705397A (en) 2021-08-16 2021-08-16

Country Status (1)

Country Link
CN (1) CN113705397A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108319986A (en) * 2018-02-08 2018-07-24 深圳市华云中盛科技有限公司 The identification method and its system of image sources based on PRNU
CN110414350A (en) * 2019-06-26 2019-11-05 浙江大学 Face anti-counterfeiting detection method based on two-way convolutional neural network based on attention model
WO2020029356A1 (en) * 2018-08-08 2020-02-13 杰创智能科技股份有限公司 Method employing generative adversarial network for predicting face change
CN111861976A (en) * 2020-05-20 2020-10-30 西安理工大学 A method for identifying digital image source shooting equipment based on hardware fingerprint correlation
CN112381775A (en) * 2020-11-06 2021-02-19 厦门市美亚柏科信息股份有限公司 Image tampering detection method, terminal device and storage medium
CN112991345A (en) * 2021-05-11 2021-06-18 腾讯科技(深圳)有限公司 Image authenticity detection method and device, computer equipment and storage medium
CN112991278A (en) * 2021-03-01 2021-06-18 华南理工大学 Method and system for detecting Deepfake video by combining RGB (red, green and blue) space domain characteristics and LoG (LoG) time domain characteristics
WO2021134871A1 (en) * 2019-12-30 2021-07-08 深圳市爱协生科技有限公司 Forensics method for synthesized face image based on local binary pattern and deep learning

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108319986A (en) * 2018-02-08 2018-07-24 深圳市华云中盛科技有限公司 The identification method and its system of image sources based on PRNU
WO2020029356A1 (en) * 2018-08-08 2020-02-13 杰创智能科技股份有限公司 Method employing generative adversarial network for predicting face change
CN110414350A (en) * 2019-06-26 2019-11-05 浙江大学 Face anti-counterfeiting detection method based on two-way convolutional neural network based on attention model
WO2021134871A1 (en) * 2019-12-30 2021-07-08 深圳市爱协生科技有限公司 Forensics method for synthesized face image based on local binary pattern and deep learning
CN111861976A (en) * 2020-05-20 2020-10-30 西安理工大学 A method for identifying digital image source shooting equipment based on hardware fingerprint correlation
CN112381775A (en) * 2020-11-06 2021-02-19 厦门市美亚柏科信息股份有限公司 Image tampering detection method, terminal device and storage medium
CN112991278A (en) * 2021-03-01 2021-06-18 华南理工大学 Method and system for detecting Deepfake videos by combining RGB spatial-domain features and LoG temporal-domain features
CN112991345A (en) * 2021-05-11 2021-06-18 腾讯科技(深圳)有限公司 Image authenticity detection method and device, computer equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FU Y et al.: "Robust GAN-face detection based on dual-channel CNN network", Biomedical Engineering and Informatics, pp. 1-5 *
WANG S Y et al.: "CNN-generated images are surprisingly easy to spot... for now", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8695-8704 *
LI Xurong et al.: "A Deepfakes detection technique based on two-stream network", Journal of Cyber Security, vol. 5, no. 2, pp. 84-91 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114241587A (en) * 2022-02-23 2022-03-25 中国科学院自动化研究所 Method and device for evaluating the adversarial robustness of face liveness detection
CN114241587B (en) * 2022-02-23 2022-05-24 中国科学院自动化研究所 Method and device for evaluating the adversarial robustness of face liveness detection

Similar Documents

Publication Publication Date Title
Zhou et al. Learning rich features for image manipulation detection
CN112818862A (en) Face tampering detection method and system based on multi-source clues and mixed attention
CN111445454B (en) Image authenticity identification method and its application in license verification
Abidin et al. Copy-move image forgery detection using deep learning methods: a review
CN113361474B (en) Dual-stream network image forgery detection method and system based on image block feature extraction
Xia et al. Towards DeepFake video forensics based on facial textural disparities in multi-color channels
CN114898437A (en) A deepfake face detection method based on frequency learning
CN114694220A (en) A dual-stream face forgery detection method based on Swin Transformer
CN111696021B (en) Adaptive image steganalysis system and method based on saliency detection
CN114898438B (en) Cross-modal deepfake detection method based on adaptive fusion of time-frequency domain visual artifact features
CN117496583B (en) Deepfake face detection and localization method that learns local differences
CN112560734A (en) Method, system, device and medium for detecting recaptured video based on deep learning
Goodwin et al. Blind video tamper detection based on fusion of source features
CN109034230A (en) Single-image camera source tracing method based on deep learning
Wang et al. GAN-generated fake face detection via two-stream CNN with PRNU in the wild
CN111754441B (en) Passive detection method for image copy-move forgery
CN113705397A (en) GAN-generated face detection method based on a two-stream CNN structure fused with PRNU
CN113807237A (en) Liveness detection model training, liveness detection method, computer device, and medium
Luo et al. Stereo super-resolution images detection based on multi-scale feature extraction and hierarchical feature fusion
Chetty et al. Nonintrusive image tamper detection based on fuzzy fusion
Wu et al. Review of imaging device identification based on machine learning
CN113609952B (en) Frequency-domain Deepfake video detection method based on dense convolutional neural network
Hsu et al. Deepfake algorithm using multiple noise modalities with two-branch prediction network
CN115222963A (en) GAN image forensics method fusing color-space robust features
CN116824430A (en) Deepfake video forensics method based on semi-supervised learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211126