CN111899251A - Copy-move type forged image detection method for distinguishing forged source and target area - Google Patents


Info

Publication number
CN111899251A
CN111899251A (application number CN202010781679.8A)
Authority: CN (China)
Prior art keywords: image, network, forged, copy, loss
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010781679.8A
Other languages
Chinese (zh)
Inventor
李应灿
丁峰
杨建权
朱国普
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority application: CN202010781679.8A
Publication: CN111899251A
Legal status: Pending

Classifications

    • G06T 7/0002 Inspection of images, e.g. flaw detection (G06T Image data processing or generation; G06T 7/00 Image analysis)
    • G06F 18/22 Matching criteria, e.g. proximity measures (G06F Electric digital data processing; G06F 18/00 Pattern recognition)
    • G06F 18/2415 Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate (G06F 18/24 Classification techniques)
    • G06N 3/045 Combinations of networks (G06N Computing arrangements based on specific computational models; G06N 3/02 Neural networks)
    • G06N 3/08 Learning methods (G06N 3/02 Neural networks)
    • G06T 2207/10024 Color image (G06T 2207/00 Indexing scheme for image analysis or image enhancement; G06T 2207/10 Image acquisition modality)
    • G06T 2207/20081 Training; Learning (G06T 2207/20 Special algorithmic details)
    • G06T 2207/20084 Artificial neural networks [ANN] (G06T 2207/20 Special algorithmic details)

Abstract

The invention discloses a copy-move forged image detection method that distinguishes the forgery source region from the forgery target region. The method comprises the following steps: constructing a conditional generative adversarial network comprising a generation network and a discrimination network, and training it with a set loss function as the optimization target, wherein during training the input of the generation network is a given RGB image and the output is a detection result identifying the original region, the forgery source region and the forgery target region of the image; the given image together with the generation network's output forms a false image pair, and the given image together with its corresponding ground truth forms a true image pair, both of which are input to the discrimination network; and using the trained generation network as a detector for copy-move forged images, taking the RGB image to be detected as input to obtain the image detection result. The invention can effectively distinguish the forgery source region and the forgery target region of a tampered image and improve detection efficiency.

Description

Copy-move type forged image detection method for distinguishing forged source and target area
Technical Field
The invention relates to the technical field of information security, in particular to a copy-move type forged image detection method for distinguishing a forged source and a forged target area.
Background
The rapid development of information technology has driven the popularization of portable digital devices, and digital images have become an important source of visual information. However, with the advent of digital image editing software such as Photoshop, GIMP and Meitu, even members of the public with no image-processing expertise can easily modify digital images without leaving obvious visual traces. Copy-move forgery, which deceives the viewer by duplicating or masking specific areas within the same image, is the simplest and most common method of image tampering. Once applied maliciously in fields such as news reporting or forensic evidence, such forged images may mislead public opinion and even interfere with judicial decisions. Research into image forensic techniques that can effectively detect such forgery has therefore become urgent.
Copy-move forged image detection methods can be divided into conventional methods and deep-learning-based methods. A conventional copy-move forgery detection pipeline generally includes an image preprocessing stage, a feature extraction stage, a feature matching stage and a post-processing stage. Conventional methods can be roughly subdivided into block-based detection and keypoint-based detection, and different detection methods suit different forgery modes of the copy-move tampered region, such as translation, rotation and scaling. In practice, however, it is impossible to know in advance which forgery mode was used, so a suitable detection method cannot be accurately selected for the image under inspection. Moreover, the features extracted by conventional methods are not sufficiently invariant to specific transformations, and the computational cost of the feature matching stage is high, which limits detection performance.
With the continuous development of deep learning, data-driven convolutional neural networks have gradually been applied to digital forensics. Copy-move forgery detection based on a deep convolutional network is an end-to-end method that requires no explicit feature extraction. However, most current deep-learning-based copy-move forgery detection methods only detect similar regions and cannot reliably distinguish the forgery source region from the forgery target region, even though effectively distinguishing the two plays an important role in practical applications.
Through analysis, the existing image copy-move forgery detection methods fall mainly into the following types:
1) Block-based detection methods. These can be subdivided into methods based on frequency transforms, texture features, moment invariants and the like. For example, a discrete wavelet transform (DWT) is used to extract the low-frequency component, the low-frequency sub-bands are divided into blocks, and the Euclidean distances between blocks are then computed with a k-means algorithm. Such methods can effectively locate the forged area but have high computational complexity.
2) Keypoint-based detection methods. These are further divided into methods based on the scale-invariant feature transform (SIFT) and on speeded-up robust features (SURF).
3) Deep-learning-based detection methods. For example, a convolutional neural network extracts block features from the image and computes the autocorrelation between different blocks, a feature extractor then locates matching points, and finally a deconvolutional network reconstructs the forgery mask.
However, the existing copy-move forgery detection methods mainly have the following defects:
1) The hand-crafted features of conventional copy-move forgery detection methods lack discriminative power and invariance to specific transformations.
2) Methods that excel at detecting specific forgery types in copy-move (translation, scaling, rotation, JPEG compression, noise, etc.) can only handle those types. In practical applications, one cannot know which kind of tampering was applied without prior information.
3) The computational cost of the feature matching stage is high, which limits practicality.
4) Most copy-move forgery detection methods only detect similar regions in an image and cannot reliably distinguish the forgery source region from the forgery target region, although effectively distinguishing the two plays an irreplaceable role in practical applications.
In summary, the ease of acquiring and modifying digital images means that most images undergo some post-processing. Copy-move forgery within a single image is one of the simplest and most common means of digital image tampering: a specific region of the image is copied, or hidden, by pasting it elsewhere in the same image. Because the forgery source region and forgery target region come from the same image, their pattern noise, texture characteristics, brightness and other properties are consistent with the original region, so the tampering cannot be accurately identified from statistical inconsistencies between regions.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a copy-move forged image detection method that distinguishes the forgery source region from the forgery target region: a novel technical scheme that takes the similarity of image regions as its entry point and detects copy-move forged images with a conditional generative adversarial network.
According to an aspect of the present invention, there is provided a copy-move forged image detection method that distinguishes the forgery source and target regions. The method comprises the following steps: constructing a conditional generative adversarial network comprising a generation network and a discrimination network, and training it with a set loss function as the optimization target, wherein during training the input of the generation network is a given RGB image and the output is a detection result identifying the original region, the forgery source region and the forgery target region of the image; the given image together with the generation network's output forms a false image pair, and the given image together with its corresponding ground truth forms a true image pair, both of which are input to the discrimination network; and using the trained generation network as a detector for copy-move forged images, taking the RGB image to be detected as input to obtain the image detection result.
Compared with the prior art, the invention can not only locate the similar regions in an image but also effectively distinguish the forgery source region from the forgery target region, thereby providing more intuitive information about the forgery process and offering stronger practicality.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flowchart of a copy-move type forged image detection method of distinguishing forged source and target areas according to one embodiment of the present invention;
fig. 2 is an overall block diagram of a copy-move type forged image detection method of distinguishing forged source and target areas according to one embodiment of the present invention;
FIG. 3 is a graph of experimental results on a test set, according to one embodiment of the invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
To address the problem that prior-art methods usually detect only the similar regions in an image and cannot reliably separate the forgery source region from the forgery target region, the invention trains conditional generative adversarial networks (cGANs) on top of a deep network to distinguish the forgery source and target regions in a copy-move forged image, which is of real significance for digital image forensics. In this description, the similar region refers to the suspected forged area of an image, the forgery source region refers to the copied area, and the forgery target region refers to the pasted area.
For ease of understanding, the basic principle of generative adversarial networks is first briefly introduced. A generative adversarial network (GAN) comprises two networks: a generation network G (also called the generator or generative model) and a discrimination network D (also called the discriminator or discriminative model). Both perform poorly at the start of training, and their performance improves through a mutual game. In short, the GAN training process is as follows: first, the generated samples are fixed and the discriminator is trained to improve the performance of D; then the discriminator is fixed and the generator is trained so that the data it generates fits the real sample distribution, optimizing the performance of G; finally, when the generator's data distribution coincides with the real data distribution, a stable state is reached in which the generator produces pictures realistic enough to pass for genuine and the discriminator can no longer tell true samples from fake ones. At that point the discrimination network D can be discarded and the generation network G used on its own as a picture generator. The generator's input is random noise z, and its output is a synthetic picture G(z).
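The mutual game described above can be made concrete with a small numerical sketch (not part of the patent; the discriminator scores are hypothetical numbers) that evaluates the GAN value function V(G, D) = E[log D(x)] + E[log(1 - D(G(z)))] for two situations:

```python
import numpy as np

def gan_value(d_real, d_fake):
    """Monte-Carlo estimate of the GAN value function
    V(G, D) = E[log D(x)] + E[log(1 - D(G(z)))]."""
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# A strong discriminator scores real samples near 1 and generated samples
# near 0, which pushes V(G, D) up (the discriminator maximizes V).
v_strong = gan_value(np.array([0.9, 0.95]), np.array([0.05, 0.1]))

# In the stable end state the fooled discriminator outputs ~0.5 everywhere,
# and V collapses toward 2 * log(0.5).
v_fooled = gan_value(np.array([0.5, 0.5]), np.array([0.5, 0.5]))
```

The generator minimizes the same quantity, so training drives the system from the first situation toward the second.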
In the embodiment of the invention, a copy-move forgery detection method for distinguishing the forgery source region from the forgery target region is designed based on the conditional generative adversarial network, so that the original region, forgery source region and forgery target region of a copy-move forged image can be effectively located.
In short, the basic principle of the detection method is as follows. Conditional generative adversarial networks (cGANs) extend GANs by training on paired data, so that the generation network's inputs and outputs have a corresponding relationship. A cGAN is likewise a mutual game between generator and discriminator, and the ideal end state is that the generation network outputs a result, corresponding to its input, that the discrimination network cannot tell from the real one. For the copy-move forgery detection task, the input of the generation network is the image to be detected, and its output should come arbitrarily close to the ground truth forgery information corresponding to that image. The image to be detected together with the generation network's output forms a false image pair, and the image to be detected together with its corresponding ground truth forms a true image pair; both are input to the discrimination network. Ideally, the discriminator cannot judge the authenticity of an input image pair, and the trained generation network can then serve as a copy-move forgery detector.
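The true/false image pairs described above can be sketched as channel-wise concatenations (a minimal numpy illustration; the function name and zero arrays are hypothetical, and the 256 × 256 × 3 shapes follow the model description later in the text):

```python
import numpy as np

def make_pairs(image, gt, detector_output):
    """Build the two discriminator inputs: the true pair (image, ground
    truth) and the false pair (image, generator output), each concatenated
    along the channel axis."""
    true_pair = np.concatenate([image, gt], axis=-1)
    false_pair = np.concatenate([image, detector_output], axis=-1)
    return true_pair, false_pair

img = np.zeros((256, 256, 3))   # image to be detected
gt = np.zeros((256, 256, 3))    # ground truth forgery map
out = np.zeros((256, 256, 3))   # generation network output
tp, fp = make_pairs(img, gt, out)
```

Both pairs have identical shape, so the discriminator sees them as interchangeable inputs and must rely on content, not format, to tell them apart.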
Specifically, referring to fig. 1 in combination with fig. 2, the copy-move type forged image detection method for distinguishing a forged source and a target area provided by this embodiment includes the steps of:
Step S110: construct a conditional generative adversarial network comprising a generation network and a discrimination network.
In one embodiment, referring to fig. 2, the generation network of the provided conditional generative adversarial network (cGAN) adopts a U-net structure, i.e. an auto-encoder with skip connections. The input of the U-net passes through an encoding process down to a bottleneck layer and then through a decoding process, so that the output of the U-net has the same dimensions as the input. The encoding process applies multiple groups of convolution, activation and similar operations to the input to produce feature representations, illustrated as e1 to e8 in fig. 2. The corresponding decoding process applies deconvolution, activation and similar operations to those features in order to obtain a forgery detection result with the same dimensions as the input, illustrated as d1 to d8 in fig. 2. In addition, skip connections are provided that concatenate the output of decoding layer i with the output of encoding layer n-i as the input of decoding layer i+1, where n is the total number of layers in the encoding or decoding path.
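The skip-connection wiring rule above (decoder layer i+1 receives decoder output d_i concatenated with encoder output e_{n-i}) can be sketched as shape bookkeeping; the channel count of 64 per layer is a hypothetical simplification, not the patent's actual widths:

```python
import numpy as np

n = 8  # encoder/decoder depth, as in the described model
# Hypothetical encoder feature maps e1..e8: spatial size halves each layer.
enc = {i: np.zeros((256 >> i, 256 >> i, 64)) for i in range(1, n + 1)}

def decoder_input(i, dec_out):
    """Input to decoder layer i+1: decoder output d_i concatenated (along
    channels) with the matching encoder output e_{n-i}."""
    skip = enc[n - i]
    assert dec_out.shape[:2] == skip.shape[:2]  # spatial sizes must match
    return np.concatenate([dec_out, skip], axis=-1)

# d1 upsamples e8's 1x1 map back to 2x2; its skip partner is e7 (also 2x2).
d1 = np.zeros((2, 2, 64))
d2_in = decoder_input(1, d1)
```

Because decoder layer i and encoder layer n-i always share spatial size, the concatenation is well defined at every depth.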
In one embodiment, the discrimination network can adopt a typical classification neural network: after multiple groups of convolution, batch normalization and activation functions, a fully connected operation is performed, and finally a sigmoid function serves as the activation layer that outputs the classification result.
Still referring to fig. 2, for the copy-move detection problem of distinguishing the forgery source and target regions, the design details of the model structure are as follows. The image to be detected and the ground truth image both have size 256 × 256 × 3 pixels. In the encoding process, the image to be detected passes through 8 groups of activation, convolution and batch normalization (BN) operations in sequence (denoted e1, …, e8), and the feature output by e8 has dimensions 1 × 1 × 512. In the decoding process, the first layer (d1) deconvolves the e8 output features; the output of d1 is then concatenated with the output of e7 as the input of d2, the output of d2 is concatenated with the output of e6 as the input of d3, and so on; the output of the last decoding layer, d8, is a 256 × 256 × 3 forgery localization result. For example, the convolution kernel size can be set to (5, 5) for both convolution and deconvolution, with a convolution stride of (2, 2). The discrimination network adopts a 4-layer stack of convolution + BN + activation function, after which a fully connected layer and an activation layer perform binary discrimination between true and false image pairs.
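The stated dimensions can be checked with the standard strided-convolution output-size formula; the padding of 2 is an assumption (the patent only gives kernel (5, 5) and stride (2, 2)), chosen so that each layer exactly halves the spatial size:

```python
def conv_out(size, kernel=5, stride=2, pad=2):
    """Output size of one strided convolution:
    floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

size = 256
sizes = []
for layer in range(8):  # e1..e8
    size = conv_out(size)
    sizes.append(size)
# sizes traces 128, 64, 32, 16, 8, 4, 2, 1: eight halvings from 256 down
# to the 1x1 bottleneck, matching the stated 1x1x512 output of e8.
```

The decoder mirrors this with eight stride-2 deconvolutions, which is why d8 recovers the full 256 × 256 resolution.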
Step S120: acquire a sample data set for training the conditional generative adversarial network to detect the similar regions, the forgery source region and the forgery target region of an image.
In the embodiment of the invention, the detection and localization method for copy-move forgery distinguishes the original region, the forgery source region and the forgery target region of an image. Compared with ordinary generative adversarial networks (GANs), the conditional aspect of cGANs is reflected in the input to the generation network: the generator input in GANs is typically random noise, whereas in cGANs it can be supplied by the user. cGANs are trained on paired data, so the generation network no longer generates data from random noise but learns the mapping from input to output, establishing a correspondence between the generator's output and its input.
In one embodiment, for the copy-move forgery detection task of distinguishing the forgery source and target regions, the input of the generation network is a given RGB image, and the output is a detection result marking the original region, forgery source region and forgery target region of the image. For example, the original region is identified in blue, the forgery source region in green and the forgery target region in red. As the number of training rounds increases, the output of the generation network should approach the ground truth. The discrimination network judges whether its input is true or false: in GANs it judges whether the input picture is real data or generator output, whereas in cGANs it judges whether the input image pair is a true pair or a false pair. The true image pair consists of the image to be detected and the ground truth image, and the false image pair consists of the image to be detected and the detection result output by the generation network.
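The color coding above (blue = original, green = forgery source, red = forgery target) amounts to rendering a per-pixel three-class mask as an RGB image; a minimal sketch, with the palette and function name as illustrative assumptions:

```python
import numpy as np

# Hypothetical color code mirroring the text: class 0 = original (blue),
# class 1 = forgery source (green), class 2 = forgery target (red).
PALETTE = np.array([[0, 0, 255],
                    [0, 255, 0],
                    [255, 0, 0]], dtype=np.uint8)

def mask_to_rgb(class_mask):
    """Render a per-pixel class mask (H, W) as the color-coded ground
    truth via fancy indexing into the palette."""
    return PALETTE[class_mask]

mask = np.array([[0, 1],
                 [2, 0]])
rgb = mask_to_rgb(mask)
```

An all-zero mask would render as an all-blue image, which is exactly the ground truth later used for non-forged training samples.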
The invention realizes the copy-move forgery detection task of distinguishing source and target regions based on conditional generative adversarial networks (cGANs). Deep learning models are data-driven, and training the proposed network structure requires a large number of paired data (also called the sample data set, or simply the data set or training set).
For example, the data set used in the invention consists of 100000 copy-move forged images and their corresponding ground truths, i.e. 100000 forged image pairs, which can be derived from the USCISI-CMFD data set provided in the prior art. Of these, 80000 copy-move forged images and their corresponding ground truth image pairs form the training set, 10000 form the validation set, and the remaining 10000 form the test set.
When testing the trained cGAN model, some detection results contained errors in which objects other than the forged regions were located. To guide the generation network through the data itself, without performing explicit target segmentation, a certain number of images that are not copy-move forged, paired with an all-blue ground truth, are preferably added to the training set, which improves detection performance. The reason is that non-forged images also contain objects, yet none of them should be detected; an appropriate amount of non-forged images and their ground truth pairs guides the network to detect and distinguish only the forgery source and target regions rather than all objects. Such weakly supervised samples, i.e. samples whose images contain no forged region at all, provide supervision information for a single category; the prior art usually ignores them, but used in reasonable moderation they can effectively improve the performance of the generative adversarial network.
Specifically, the following preprocessing operations are performed on the data pairs. First, all image data is resized to 256 × 256 using a resize function. Second, the given image and its corresponding ground truth are concatenated (concat) as the input of the whole generative adversarial network, which makes true/false judgment convenient and keeps the image well matched with its ground truth; when feeding the generation network, the channels corresponding to the image are extracted, and that input data is then re-concatenated with the generation network's output to form the false image pair. Finally, to increase the diversity of the training data, data enhancement is preferably performed. For example, the strategy used keeps the original training pair with probability 1/3, flips the pair left-right with probability 1/3, and flips the pair up-down with probability 1/3.
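The three-way augmentation above can be sketched as follows (an illustrative numpy version; the key point it demonstrates is that the image and its ground truth must receive the same transform so the pair stays aligned):

```python
import numpy as np

def augment_pair(image, gt, rng):
    """Apply one of three equally likely transforms to an (image,
    ground-truth) pair: keep as-is, flip left-right, or flip up-down.
    The same transform is applied to both halves of the pair."""
    choice = rng.integers(3)
    if choice == 1:
        return np.fliplr(image), np.fliplr(gt)
    if choice == 2:
        return np.flipud(image), np.flipud(gt)
    return image, gt

rng = np.random.default_rng(0)
img = np.arange(12).reshape(2, 2, 3)
gt = np.arange(12).reshape(2, 2, 3)   # toy pair with identical content
a_img, a_gt = augment_pair(img, gt, rng)
```

Starting from identical arrays, the two outputs remain identical, confirming that the pair is transformed jointly rather than independently.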
Step S130: train the conditional generative adversarial network on the sample data set with the set loss function as the optimization target.
In this step, the loss function of the cGAN model is tailored to the specifics of the copy-move detection task of distinguishing forgery source regions from forgery target regions.
The main idea of a generative adversarial network is that the generator and discriminator play a game against each other, with the adversarial loss function:

\[ \min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))] \tag{1} \]

where x represents the real image, z the noise input to the generation network, \(\mathbb{E}(\cdot)\) the expected value over a distribution, \(p_{data}(x)\) the real sample distribution, and \(p_z(z)\) the noise distribution; G(z) is the image generated by the generation network, D(x) is the probability that the discrimination network judges the real image to be real, and D(G(z)) is the probability that it judges the generated image to be real. The purpose of the discrimination network is to distinguish real images from generated images as far as possible: since x is real data, D(x) should be as close to 1 as possible, and since G(z) is generated fake data, D(G(z)) should be as small as possible. In other words, the discrimination network wants V(G, D) as large as possible, i.e. it maximizes the objective. The generation network, whose purpose is to make the generated image as close as possible to a real one, wants V(G, D) as small as possible, i.e. it minimizes the objective.
In the embodiment of the invention, the cGAN model introduces user-supplied image data on top of the GAN, and the discriminator no longer judges whether a single image is real but whether an image pair is real. The loss function of the conditional generative adversarial network is shown in formula (2), and the objective function in formula (3):

\[ L_{cGAN}(G, D) = \mathbb{E}_{x,y}[\log D(x, y)] + \mathbb{E}_{x,z}[\log(1 - D(x, G(x, z)))] \tag{2} \]

\[ G^{*} = \arg \min_G \max_D L_{cGAN}(G, D) \tag{3} \]

where x and z represent the user-supplied input image and the random noise respectively, and y the ground truth corresponding to x. D(x, y) is the probability that the discrimination network judges the true image pair to be real, and D(x, G(x, z)) is the probability that it judges the false image pair to be real. Unlike the optimization objective of the classical generative adversarial network, what is judged here is the probability that an image pair, not a single image, is true.
Further, in order to effectively distinguish the forgery source region from the forgery target region, the detection result output by the generation network must not only deceive the discrimination network as far as possible but also be numerically as close as possible to the ground truth. For this reason, an L1 loss is added on top of the adversarial loss above, so that the forgery localization result is closer to the ground truth in the L1 sense.
In addition, in many cases, the influence of false judgment of a forged region as not being forged (false detection) is far greater than the influence of false judgment of an original region of an image as a forged region (false detection), and preferably, an L which is more concerned about a forging source and a forging target region is also added in designing a loss functionmaskAnd (4) loss. This loss is realized by introducing element-wise (element-wise) operation on the basis of the distance between the detection result generated by the generation network and L1 of the ground channel, focusing only on the detection error of the forged area. The L1 loss introduced before integration is equivalent to the weighted loss introduced into the original region of the image, the region of the forgery source, and the region of the forgery target. Therefore, the objective function of the present invention for generating a countermeasure network (cGANs) model preferably considers the countermeasure loss, the L1 loss, and the L comprehensively based on the detection task of distinguishing the counterfeit origin and the counterfeit target regionmaskAnd (4) loss. Wherein the L1 loss is shown in formula (4), LmaskThe loss is shown in equation (5), and the final objective function is shown in equation (6).
L_L1(G) = E_{x,y,z}[ ||y − G(x, z)||_1 ]    (4)
L_mask(G) = E_{x,y,z}[ ||M ⊙ (y − G(x, z))||_1 ]    (5)

where M denotes the mask of the forgery source and target regions and ⊙ denotes element-wise multiplication.
G* = arg min_G max_D L_cGAN(G, D) + λ1·L_L1(G) + λ_mask·L_mask(G)    (6)
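For concreteness, formulas (4) to (6) can be sketched in NumPy as follows, under the assumption that y is the ground-truth map, y_hat the generation network's output, and m a binary mask that is 1 on the forgery source and target regions and 0 elsewhere. The helper names and the exact mask construction are illustrative, not taken from the patent:

```python
import numpy as np

def l1_loss(y, y_hat):
    # formula (4): mean absolute error between ground truth and detection result
    return np.mean(np.abs(y - y_hat))

def mask_loss(y, y_hat, m):
    # formula (5): the element-wise product with m restricts the L1 error
    # to the forgery source and target regions
    return np.mean(m * np.abs(y - y_hat))

def generator_objective(adv_loss, y, y_hat, m, lam1=100.0, lam_mask=100.0):
    # formula (6): adversarial term plus weighted L1 and mask terms
    return adv_loss + lam1 * l1_loss(y, y_hat) + lam_mask * mask_loss(y, y_hat, m)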
wherein the parameters λ1 and λ_mask can be adjusted according to the specific task.
In summary, for the copy-move forgery detection task, the above embodiment optimizes the loss function of the network model: in addition to the adversarial loss, it introduces the conventional L1-distance loss and the L_mask loss measuring the errors of the forgery source and target areas. Moreover, non-forged samples are introduced during training, which better constrains the training of the network model.
Step S140, the trained generation network is used to distinguish the counterfeit source area and the counterfeit target area of the image to be detected.
The training process of the conditional generative adversarial network can be performed offline on a server or in the cloud. The trained generation network can then be used as a copy-move forgery detector: it takes an RGB image to be detected as input and outputs an image detection result distinguishing the forgery source area and the forgery target area. This detection mode not only detects similar areas but also effectively distinguishes the forgery source area from the forgery target area, representing them with different colors in the forgery localization result. For example, blue indicates an original image region, green a forgery source region, and red a forgery target region; that is, the green region in the image has been copied, possibly processed, and pasted onto the red region.
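The color coding described above amounts to mapping a per-pixel class index to an RGB palette; a minimal sketch follows, where the class ordering and exact palette values are illustrative assumptions rather than values fixed by the patent:

```python
import numpy as np

# class 0 = original region (blue), 1 = forgery source (green),
# 2 = forgery target (red) -- illustrative assignment, see text above
PALETTE = np.array([[0, 0, 255],
                    [0, 255, 0],
                    [255, 0, 0]], dtype=np.uint8)

def colorize(label_map):
    """Map an HxW array of class indices {0, 1, 2} to an HxWx3 RGB image."""
    return PALETTE[label_map]

labels = np.array([[0, 1],
                   [2, 0]])
rgb = colorize(labels)  # rgb[0, 1] is [0, 255, 0]: a forgery-source pixel
```

NumPy integer-array indexing does the whole mapping in one step, which keeps the visualization independent of how the three-class prediction itself is produced.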
Compared with the prior art, which can only detect similar areas (i.e., only the image original region and the suspected forged region), the invention further divides the similar area (the suspected forged region) into a forgery source area (the copied region) and a forgery target area (the pasted region), extending copy-move forgery detection into a three-class problem and yielding more forensic information. The detection method for distinguishing forgery source and target areas based on the conditional generative adversarial network is an image-to-image translation (Image-to-Image Translation) process; it solves such problems end-to-end and is also applicable in visual fields such as scene conversion, label-to-image generation, and generating realistic images from edge maps. Repeated experiments show that the proposed copy-move forgery detection method obtains good localization results given a suitable and sufficient data set, a reasonable network structure, and a comprehensively considered objective function.
To further verify the effect of the invention, the proposed detection method was implemented in the Python programming language on top of the TensorFlow deep learning framework. The verification process comprises acquiring and processing the data set, building the conditional generative adversarial network structure as designed, optimizing the loss function, and tuning the parameters of the training process (to avoid under-fitting and prevent over-fitting).
The experimental parameters were chosen as follows: 30 epochs, batch size 1, λ1 = 100, λ_mask = 100; the learning rate lr is 0.0002 for the first 6 epochs and 0.0001 for epochs 7 to 30. Specifically, the proposed conditional generative adversarial network model was trained with 80000 pairs of forged image data and 10000 pairs of non-forged image data as the training set; the conditional generative adversarial model obtained after 30 epochs of training was then evaluated on a test set of 10000 images. Fig. 3 shows part of the experimental results on the test set, where Fig. 3(a) is the test image, Fig. 3(b) the ground truth, and Fig. 3(c) the detection result. As can be seen from Fig. 3, the invention accurately distinguishes, using different colors, the forgery source area and the forgery target area (corresponding to the two elephants in the figure) as well as the original area (the remaining region) of the test image. Taking all test results together, the invention yields satisfactory detection results when distinguishing copy-move forgeries of the translation and scaling types applied to specific targets in an image, achieving the stated goal. The test results show that the method can serve as a copy-move forgery detection algorithm with the potential to distinguish the image forgery source and target areas.
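The reported learning-rate schedule can be expressed as a small helper; the function name is an illustrative assumption, and epochs are 1-indexed as in the text:

```python
def learning_rate(epoch):
    # lr = 0.0002 for the first 6 epochs, 0.0001 for epochs 7-30
    return 0.0002 if epoch <= 6 else 0.0001
```

Such a step schedule keeps the larger rate for the early epochs and halves it for the remainder of training, matching the parameters quoted above.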
It should be noted that, given sufficient training data, the network model provided by the invention can have its loss function adjusted for a specific task and can also be used in other application scenarios; the invention distinguishes the image copy-move forgery source area from the forgery target area and can serve as the core component of an image copy-move forgery detection system in subsequent development. Further, those skilled in the art may make appropriate modifications or variations to the above-described embodiments without departing from the spirit and scope of the present invention. For example, the invention may scale the image to be detected to 256 × 256 as input, but is not limited to this input size; as another example, the loss function need not include both the L1 distance loss and the L_mask loss; as another example, the number of layers and the convolution kernel sizes of the generation network and the discrimination network can be designed appropriately according to actual needs.
In summary, the copy-move forgery detection method based on the conditional generative adversarial network provided by the invention can not only locate similar areas in an image but also effectively distinguish the forgery source and target areas. For the cGANs-based network model, the loss function is optimally designed, a sufficient data set is used to train the network model for copy-move forgery detection, and the image original region, forgery source region, and forgery target region are distinguished by three different colors, providing intuitive information about the forgery process, which gives the method strong practicability. The trained generation network takes a digital image directly as input and outputs the detection result. Compared with the prior art, the detection speed of the invention is significantly improved, so it can be used on electronic devices with relatively limited processing capability, such as mobile phones, and has strong practical value.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), with state information of computer-readable program instructions, which can execute the computer-readable program instructions.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, by software, and by a combination of software and hardware are equivalent.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (10)

1. A copy-move type forged image detection method of distinguishing a forged source and a target area, comprising the steps of:
constructing a conditional generation countermeasure network comprising a generation network and a discrimination network and training it with satisfying a set loss function as the optimization target, wherein during training the input of the generation network is a given RGB image and the output is a detection result identifying an image original region, a forgery source region and a forgery target region; the given image together with the output of the generation network is input to the discrimination network as a fake image pair, and the given image together with its corresponding ground truth is input to the discrimination network as a true image pair;
and taking the trained generation network as a detector for detecting copy-move type forged images, and taking the RGB image to be detected as input to obtain an image detection result for distinguishing a forged source area and a forged target area.
2. The method according to claim 1, wherein the image detection result distinguishes an image original region, a forgery source region, and a forgery target region in different colors.
3. The method of claim 1, wherein the sample data set used for training the conditional generation countermeasure network comprises copy-move forged images and their corresponding ground truth, as well as images without copy-move forging and their corresponding ground truth, the latter used for guiding the conditional generation countermeasure network to detect and distinguish only a forging source area and a forging target area.
4. The method of claim 1, wherein the loss function includes an adversarial loss, an L1 distance loss, and an L_mask loss, the L_mask loss being used to measure the errors of the counterfeit source area and the counterfeit target area.
5. The method of claim 4, wherein the loss function is represented as:
G* = arg min_G max_D L_cGAN(G, D) + λ1·L_L1(G) + λ_mask·L_mask(G)
where the L1 loss is:
L_L1(G) = E_{x,y,z}[ ||y − G(x, z)||_1 ]
the L_mask loss is:
L_mask(G) = E_{x,y,z}[ ||M ⊙ (y − G(x, z))||_1 ], where M denotes the mask of the forgery source and target regions and ⊙ denotes element-wise multiplication
the adversarial loss is:
L_cGAN(G, D) = E_{x,y~p_data}[ log D(x, y) ] + E_{x~p_data, z~p_z}[ log(1 − D(x, G(x, z))) ]
where x and z represent the image input to the generation network and the random noise, respectively, E(·) represents the expected value over the indicated distribution, p_data(x) represents the distribution of real samples, p_z(z) represents the noise distribution, D(x, y) represents the probability that the discrimination network judges the true image pair to be true, D(x, G(x, z)) represents the probability that the discrimination network judges the false image pair to be true, the L_mask loss is realized by introducing an element-wise operation, λ1 represents the weight of the L1 loss, and λ_mask represents the weight of the L_mask loss.
6. The method of claim 1, wherein the generation network is a U-net structure comprising a plurality of coding layers, a bottleneck layer and a plurality of decoding layers, with skip connections added to the auto-encoder structure, a skip connection concatenating the output of the i-th decoding layer with the output of the (n−i)-th coding layer as the input of the (i+1)-th decoding layer, n representing the total number of layers of the coding or decoding process.
7. The method of claim 3, wherein the sample data set is extended as follows: with probability 1/3 the original training image pair is used, with probability 1/3 the training image pair is flipped left-right, and with probability 1/3 the training image pair is flipped up-down.
8. The method of claim 1, wherein the discrimination network outputs a binary discrimination of whether an input belongs to a true image pair or a false image pair, the discrimination network comprising a multi-layer stacked structure for feature extraction, a fully-connected layer, and an activation layer, each layer of the stacked structure comprising a convolution, a batch normalization, and an activation.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method as claimed in claim 1.
10. A computer device comprising a memory and a processor, on which memory a computer program is stored which is executable on the processor, characterized in that the steps of the method as claimed in claim 1 are implemented when the processor executes the program.
CN202010781679.8A 2020-08-06 2020-08-06 Copy-move type forged image detection method for distinguishing forged source and target area Pending CN111899251A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010781679.8A CN111899251A (en) 2020-08-06 2020-08-06 Copy-move type forged image detection method for distinguishing forged source and target area


Publications (1)

Publication Number Publication Date
CN111899251A true CN111899251A (en) 2020-11-06

Family

ID=73245939


Country Status (1)

Country Link
CN (1) CN111899251A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2017101166A4 (en) * 2017-08-25 2017-11-02 Lai, Haodong MR A Method For Real-Time Image Style Transfer Based On Conditional Generative Adversarial Networks
CN109543740A (en) * 2018-11-14 2019-03-29 哈尔滨工程大学 A kind of object detection method based on generation confrontation network
WO2019136946A1 (en) * 2018-01-15 2019-07-18 中山大学 Deep learning-based weakly supervised salient object detection method and system
CN110503654A (en) * 2019-08-01 2019-11-26 中国科学院深圳先进技术研究院 A kind of medical image cutting method, system and electronic equipment based on generation confrontation network
CN111179219A (en) * 2019-12-09 2020-05-19 中国科学院深圳先进技术研究院 Copy-move counterfeiting detection method based on generation of countermeasure network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Jiwei et al., "Splicing tampering localization detection based on improved DeepLabv3+", Journal of Beijing University of Posts and Telecommunications, vol. 42, no. 1, pages 68-73 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112116592A (en) * 2020-11-19 2020-12-22 北京瑞莱智慧科技有限公司 Image detection method, training method, device and medium of image detection model
CN112116592B (en) * 2020-11-19 2021-04-02 北京瑞莱智慧科技有限公司 Image detection method, training method, device and medium of image detection model
CN112560579A (en) * 2020-11-20 2021-03-26 中国科学院深圳先进技术研究院 Obstacle detection method based on artificial intelligence
CN112927219A (en) * 2021-03-25 2021-06-08 支付宝(杭州)信息技术有限公司 Image detection method, device and equipment
CN112927219B (en) * 2021-03-25 2022-05-13 支付宝(杭州)信息技术有限公司 Image detection method, device and equipment
CN114359144A (en) * 2021-12-01 2022-04-15 阿里巴巴(中国)有限公司 Image detection method and method for obtaining image detection model
CN116912184A (en) * 2023-06-30 2023-10-20 哈尔滨工业大学 Weak supervision depth restoration image tampering positioning method and system based on tampering area separation and area constraint loss
CN116912184B (en) * 2023-06-30 2024-02-23 哈尔滨工业大学 Weak supervision depth restoration image tampering positioning method and system based on tampering area separation and area constraint loss

Similar Documents

Publication Publication Date Title
Wu et al. Busternet: Detecting copy-move image forgery with source/target localization
Li et al. Identification of deep network generated images using disparities in color components
CN111709408B (en) Image authenticity detection method and device
Yang et al. Source camera identification based on content-adaptive fusion residual networks
CN111899251A (en) Copy-move type forged image detection method for distinguishing forged source and target area
Cozzolino et al. Splicebuster: A new blind image splicing detector
Wang et al. Detection and localization of image forgeries using improved mask regional convolutional neural network
Mandelli et al. CNN-based fast source device identification
Yang et al. Spatiotemporal trident networks: detection and localization of object removal tampering in video passive forensics
CN111160313A (en) Face representation attack detection method based on LBP-VAE anomaly detection model
Chen et al. SNIS: A signal noise separation-based network for post-processed image forgery detection
Mazumdar et al. Universal image manipulation detection using deep siamese convolutional neural network
Hakimi et al. Image-splicing forgery detection based on improved lbp and k-nearest neighbors algorithm
Bennabhaktula et al. Camera model identification based on forensic traces extracted from homogeneous patches
Elsharkawy et al. New and efficient blind detection algorithm for digital image forgery using homomorphic image processing
Chen et al. Image splicing forgery detection using simplified generalized noise model
Anwar et al. Image forgery detection by transforming local descriptors into deep-derived features
Alkhowaiter et al. Evaluating perceptual hashing algorithms in detecting image manipulation over social media platforms
Ganeshan et al. Autoregressive-elephant herding optimization based generative adversarial network for copy-move forgery detection with interval type-2 fuzzy clustering
Mazumdar et al. Siamese convolutional neural network‐based approach towards universal image forensics
Fouad et al. Detection and localization enhancement for satellite images with small forgeries using modified GAN-based CNN structure
Bedi et al. Estimating cover image for universal payload region detection in stego images
Xuan et al. Scalable fine-grained generated image classification based on deep metric learning
Elmaci et al. A comparative study on the detection of image forgery of tampered background or foreground
Shukla et al. A survey on digital image forensic methods based on blind forgery detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination