CN114092477A - Image tampering detection method, device and equipment - Google Patents
- Publication number
- Publication number: CN114092477A (application CN202210069050.XA)
- Authority
- CN
- China
- Prior art keywords
- feature map
- weighted
- image
- weighted feature
- attention weight
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0002: Inspection of images, e.g. flaw detection (G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation, in general; G06T7/00: Image analysis)
- G06N3/08: Learning methods (G06N: Computing arrangements based on specific computational models; G06N3/00: Computing arrangements based on biological models; G06N3/02: Neural networks)
Abstract
The application discloses an image tampering detection method, device and equipment, relating to the technical field of image processing. The method comprises the following steps: inputting an image to be detected into a trained convolutional network model constructed based on an annular residual U-shaped network to obtain a first feature map; calculating an attention weight of the first feature map, and weighting the first feature map with the attention weight to obtain a first weighted feature map; performing a pooling operation on the first weighted feature map to obtain a pooled weighted feature map; performing a deconvolution operation on the pooled weighted feature map to obtain a second weighted feature map; and splicing the second weighted feature map with the first weighted feature map, and passing the target feature map obtained after splicing through a fully connected layer and a Softmax layer to obtain an image tampering detection result for the image to be detected. By adding an attention mechanism to a convolutional network model created based on the annular residual U-shaped network, the method attends more closely to the information of the tampered region of the image, improving both the efficiency and the accuracy of image tampering detection.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a device for detecting image tampering.
Background
With the rapid development of multimedia and network communication technologies, the security risks surrounding digital images have become increasingly serious, raising both information security and social security concerns. Malicious tampering with image content for illegitimate purposes is now a major hazard of information digitization. Forensic investigation of digital image tampering is therefore very important.
Image tampering detection is of great significance in fields such as military defense, judicial identification and image anti-counterfeiting. Commonly used image tampering techniques include: (1) splicing tampering; (2) copy-paste tampering; and (3) removal tampering. All three modify the image content itself and are therefore highly misleading. Other tampering techniques, such as blurring, compression, enhancement, scaling and filtering, do not modify the image content; they are mostly post-processing operations used to mask the evidence of tampering.
At present, no traditional method can identify image regions altered by all three of these tampering techniques. Traditional image tampering detection generally divides an image into overlapping blocks of a fixed size, extracts features from each block, computes discrete cosine transform (DCT) coefficients of the blocks to form feature vectors, and sorts all the feature vectors lexicographically; two blocks that are adjacent after sorting are taken as a copy-paste source region and its tampered counterpart.
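The classical block-matching pipeline described above can be sketched as follows. This is an illustrative NumPy toy, not a production detector: the block size, the number of retained DCT coefficients and the rounding precision are assumptions. Two blocks whose truncated, quantized DCT features are identical and adjacent after lexicographic sorting are reported as a candidate copy-paste pair.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    # Orthonormal DCT-II basis matrix, so d @ patch @ d.T is the 2-D DCT.
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def copy_move_candidates(img: np.ndarray, block: int = 8, keep: int = 4):
    """Return pairs of block positions whose truncated DCT features match."""
    d = dct_matrix(block)
    feats, positions = [], []
    h, w = img.shape
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            patch = img[y:y + block, x:x + block]
            coeffs = d @ patch @ d.T                 # 2-D DCT of the block
            # Keep only the low-frequency corner, quantized, as the feature vector.
            feats.append(np.round(coeffs[:keep, :keep].ravel(), 1))
            positions.append((y, x))
    # Lexicographic sort of the feature vectors (first coefficient is primary key).
    order = np.lexsort(np.array(feats).T[::-1])
    pairs = []
    for a, b in zip(order[:-1], order[1:]):          # neighbours in sorted order
        if np.array_equal(feats[a], feats[b]):
            pairs.append((positions[a], positions[b]))
    return pairs
```

Real detectors add a minimum-offset threshold and offset-histogram voting to suppress accidental matches; those refinements are omitted here.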
Image tampering detection methods based on deep learning have become popular in recent years. Such methods need to extract and learn rich features. The main features extracted are RGB (Red Green Blue, i.e. the optical primary colors) stream features, which capture strong contrast and unnatural tampering boundaries, and noise stream features, which capture the inconsistency between the noise of a tampered region and that of the authentic image region; the two feature streams may also be analyzed jointly. Current deep-learning-based methods detect splicing tampering well but perform poorly on copy-paste tampering.
In summary, image tampering detection currently faces the following problems. (1) Increasing tampering complexity: tampering techniques are becoming more advanced and sophisticated, and robust results are hard to obtain by relying only on region edge anomalies and image color channel characteristics. (2) The adversarial relationship between deep learning and image tampering: generative adversarial networks can now produce inpainted images whose authenticity is difficult to distinguish, and no good method yet exists for a deep learning network to predict a forged image from the correlation between the authentic region and the defective region. (3) The tension between model complexity and the small size of tampered regions: effective models are complex, while the regional traces of image tampering are tiny and fragile; deep learning must capture image features finely and fuse multiple features, so model parameters are numerous and training and inference are time-consuming.
Therefore, how to effectively identify a tampered region and multiple tampering types in an image and improve the efficiency and accuracy of image tampering detection are still problems to be further solved at present.
Disclosure of Invention
In view of this, an object of the present application is to provide an image tampering detection method, apparatus and device, which can effectively identify a tampered region and a tampering type in an image, and improve efficiency and accuracy of image tampering detection. The specific scheme is as follows:
in a first aspect, the present application discloses an image tampering detection method, including:
inputting an image to be detected into a trained convolutional network model so as to perform feature extraction on the image to be detected through the convolutional network model to obtain a first feature map; the convolution network model is obtained by training an initial network model constructed based on an annular residual U-shaped network by using an image tampering data set;
calculating attention weight of the first feature map, and performing weighting processing on the first feature map by using the attention weight to obtain a first weighted feature map;
performing pooling operation on the first weighted feature map to obtain a pooled weighted feature map;
performing deconvolution operation on the pooled weighted feature map to obtain a second weighted feature map;
and splicing the second weighted feature map with the first weighted feature map, and passing the target feature map obtained after splicing through a fully connected layer and a Softmax layer to obtain an image tampering detection result of the image to be detected.
Optionally, the calculating an attention weight of the first feature map and performing weighting processing on the first feature map by using the attention weight to obtain a first weighted feature map includes:
calculating a channel attention weight of the first feature map, and performing weighting processing on the first feature map by using the channel attention weight to obtain a channel weighted feature map;
calculating the spatial attention weight of the channel weighted feature map, and performing weighting processing on the channel weighted feature map by using the spatial attention weight to obtain a spatial weighted feature map;
and fusing the spatial weighting characteristic diagram and the first characteristic diagram to obtain a first weighting characteristic diagram.
Optionally, the calculating the channel attention weight of the first feature map includes:
carrying out average pooling operation on the first characteristic diagram to obtain a first average pooling result;
performing maximum pooling operation on the first feature map to obtain a first maximum pooling result;
passing the first average pooling result and the first maximum pooling result through a full connection layer to obtain corresponding first channel attention and second channel attention;
and fusing the first channel attention and the second channel attention to obtain a channel attention weight.
Optionally, the calculating the spatial attention weight of the channel weighted feature map includes:
carrying out average pooling operation on the channel weighting characteristic diagram to obtain a second average pooling result;
performing maximum pooling operation on the channel weighting characteristic diagram to obtain a second maximum pooling result;
and performing convolution operation on the second average pooling result and the second maximum pooling result, and obtaining a spatial attention weight through a full-connection layer.
Optionally, after the calculating the attention weight of the first feature map and performing weighting processing on the first feature map by using the attention weight to obtain a first weighted feature map, the method further includes:
and feeding back the first weighted feature map to the convolutional network model by using a preset activation function so as to train the convolutional network model through the first weighted feature map.
Optionally, after performing pooling operation on the first weighted feature map to obtain a pooled weighted feature map, the method further includes:
inputting the pooling weighted feature map serving as a new image to be detected into the convolution network model according to preset convolution times to obtain a second feature map, and calculating the attention weight of the second feature map;
weighting the second feature map by using the attention weight of the second feature map to obtain a second weighted feature map;
and performing pooling operation on the second weighted feature map to obtain a new pooled weighted feature map.
Optionally, the image tampering detection method further includes:
performing deconvolution operation on the new pooled weighted feature map according to the preset convolution times and the size and the step length of a preset convolution kernel to obtain a corresponding third weighted feature map;
and splicing the third weighted feature map and the new pooled weighted feature map.
In a second aspect, the present application discloses an image tampering detection apparatus, comprising:
the feature extraction module is used for inputting the image to be detected into the trained convolutional network model so as to perform feature extraction on the image to be detected through the convolutional network model to obtain a first feature map; the convolutional network model is obtained by training an initial network model constructed based on an annular residual U-shaped network by using an image tampering data set;
the attention weight calculation module is used for calculating the attention weight of the first feature map and carrying out weighting processing on the first feature map by using the attention weight to obtain a first weighted feature map;
the pooling module is used for pooling the first weighted feature map to obtain a pooled weighted feature map;
the deconvolution module is used for carrying out deconvolution operation on the pooled weighted feature map to obtain a second weighted feature map;
and the feature map splicing module is used for splicing the second weighted feature map with the first weighted feature map, and passing the target feature map obtained after splicing through a fully connected layer and a Softmax layer to obtain an image tampering detection result of the image to be detected.
In a third aspect, the present application discloses an electronic device comprising a processor and a memory; wherein the processor implements the aforementioned image tampering detection method when executing the computer program stored in the memory.
Therefore, in the present application, the image to be detected is first input into a convolutional network model obtained by training, with an image tampering data set, an initial network model constructed based on an annular residual U-shaped network, so that feature extraction is performed on the image to be detected to obtain a first feature map. The attention weight of the first feature map is then calculated, and the first feature map is weighted with the attention weight to obtain a first weighted feature map. A pooling operation is performed on the first weighted feature map to obtain a pooled weighted feature map, and a deconvolution operation is performed on the pooled weighted feature map to obtain a second weighted feature map. Finally, the second weighted feature map is spliced with the first weighted feature map, and the target feature map obtained after splicing is passed through a fully connected layer and a Softmax layer to obtain the image tampering detection result of the image to be detected. By adding an attention mechanism to the convolutional network model created based on the annular residual U-shaped network, more attention is paid to the contextual information of the image and to the information of the tampered region, improving the efficiency and accuracy of image tampering detection.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is a flow chart of an image tampering detection method disclosed herein;
FIG. 2 is a flow chart of a specific image tampering detection method disclosed herein;
FIG. 3 is a schematic network structure diagram of a specific ring residual U-shaped network disclosed in the present application;
FIG. 4 is a flow chart of a specific channel attention weight calculation method disclosed herein;
FIG. 5 is a flowchart of a specific channel attention weight acquisition method disclosed herein;
FIG. 6 is a flow chart of a specific spatial attention weight acquisition method disclosed herein;
FIG. 7 is a schematic structural diagram of an image tampering detection apparatus disclosed in the present application;
fig. 8 is a block diagram of an electronic device disclosed in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application discloses an image tampering detection method, which is shown in fig. 1 and comprises the following steps:
step S11: inputting an image to be detected into a trained convolutional network model so as to perform feature extraction on the image to be detected through the convolutional network model to obtain a first feature map; the convolution network model is obtained by training an initial network model constructed based on an annular residual U-shaped network by using an image tampering data set.
In this embodiment, an image to be detected is first input into a convolutional network model obtained by training an initial network model constructed based on an annular Residual U-network (RRU-Net) by using a collected image tampering data set, where the convolutional network model extracts features of the image to be detected and outputs an extracted feature map, that is, the first feature map. Wherein the image tampering type in the image tampering data set includes but is not limited to splicing type tampering, copy-paste type tampering, removal type tampering, and the like.
Step S12: and calculating the attention weight of the first feature map, and performing weighting processing on the first feature map by using the attention weight to obtain a first weighted feature map.
In this embodiment, an image to be detected is input into a trained convolutional network model, and feature extraction is performed on the image to be detected through the convolutional network model to obtain a first feature map, and then, in order to obtain a larger receptive field and context information of the image to be detected, an attention mechanism is introduced. Specifically, the attention weight of the first feature map is calculated, and the weighted feature map, that is, the first weighted feature map is obtained by weighting the first feature map using the calculated attention weight. The attention weight may be one or more, including but not limited to a spatial attention weight, a channel attention weight, and the like.
Further, after the calculating the attention weight of the first feature map and performing weighting processing on the first feature map by using the attention weight to obtain a first weighted feature map, the method may further include: and feeding back the first weighted feature map to the convolutional network model by using a preset activation function so as to train the convolutional network model through the first weighted feature map. In this embodiment, in order to further optimize the convolution network model and prevent blurring of image features after passing through the convolution network model, that is, prevent diffusion of image feature information after passing through a convolution operation, a feedback mechanism may be added to feed back the first weighted feature map to the convolution network model, and the first weighted feature map is used as a new input to train the convolution network model. The preset feedback function adopted by the feedback mechanism includes, but is not limited to, a Sigmoid (S-shaped growth curve) function.
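The feedback step can be illustrated with a minimal NumPy sketch. The text only states that a Sigmoid function is used to feed the first weighted feature map back as a new input; the residual-style gating rule below (scaling the input by one plus the Sigmoid of the weighted response) is an assumption for illustration, not the patent's exact mechanism.

```python
import numpy as np

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

def feedback_input(x: np.ndarray, weighted_map: np.ndarray) -> np.ndarray:
    """Build a new model input from the original input and the weighted feature map.

    Assumed gating: amplify locations where the attention-weighted response is
    strong, leaving the rest close to the original input.
    """
    gate = sigmoid(weighted_map)   # squashed to (0, 1), as per the Sigmoid feedback
    return x * (1.0 + gate)        # assumed residual-style gating of the input
```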
Step S13: and performing pooling operation on the first weighted feature map to obtain a pooled weighted feature map.
In this embodiment, after calculating the attention weight of the first feature map and performing weighting processing on the first feature map by using the attention weight to obtain a first weighted feature map, in order to increase the calculation speed and prevent overfitting, pooling operation is performed on the first weighted feature map to obtain a pooled feature map, that is, the pooled weighted feature map. Wherein the Pooling operation includes, but is not limited to, Average Pooling (Average Pooling) operation and Max Pooling (Max Pooling) operation, etc.
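A pooling operation of the kind named here can be sketched in a few lines of NumPy; 2 x 2 max pooling with stride 2 on a channels-first feature map is assumed for illustration.

```python
import numpy as np

def max_pool2x2(fmap: np.ndarray) -> np.ndarray:
    """2x2 max pooling with stride 2 on a (C, H, W) feature map (H, W even)."""
    c, h, w = fmap.shape
    # Split H and W into 2x2 tiles, then take the max inside each tile.
    return fmap.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))
```

Average pooling is obtained by replacing `.max` with `.mean`; both halve the spatial resolution, which is what speeds up later computation and limits overfitting.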
Step S14: and carrying out deconvolution operation on the pooled weighted feature map to obtain a second weighted feature map.
In this embodiment, after pooling operation is performed on the first weighted feature map to obtain a pooled weighted feature map, deconvolution operation is performed on the pooled weighted feature map after pooling operation to obtain a deconvolved feature map, that is, the second weighted feature map.
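The deconvolution (transposed convolution) step can be illustrated as follows. The stride of 2 and the single-channel kernel are assumptions for the sketch; the operation scatters each input value, scaled by the kernel, into an enlarged output map, which is how the decoder path recovers spatial resolution.

```python
import numpy as np

def deconv2x(fmap: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Stride-2 transposed convolution of an (H, W) map with a (kh, kw) kernel."""
    h, w = fmap.shape
    kh, kw = kernel.shape
    out = np.zeros((2 * (h - 1) + kh, 2 * (w - 1) + kw))
    for i in range(h):
        for j in range(w):
            # Scatter: each input pixel contributes a kernel-shaped patch.
            out[2 * i:2 * i + kh, 2 * j:2 * j + kw] += fmap[i, j] * kernel
    return out
```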
Step S15: and splicing the second weighted characteristic diagram and the first weighted characteristic diagram, and enabling a target characteristic diagram obtained after splicing to pass through a full connection layer and a Softmax layer to obtain an image tampering detection result of the image to be detected.
In this embodiment, after the deconvolution operation is performed on the pooled weighted feature map to obtain the second weighted feature map, the second weighted feature map and the first weighted feature map are spliced; that is, the size of the feature map is unchanged while the number of channels is doubled. The target feature map obtained after splicing is then passed through a Fully Connected (FC) layer and a Softmax layer, which classify the features of the target feature map and output the result as a probability distribution; this output is the image tampering detection result of the image to be detected. It will be appreciated that after the fully connected layer and the Softmax layer, the output is converted into probability values, where probabilities of different magnitudes represent the likelihood of belonging to different classes.
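The splicing, fully connected layer and Softmax classification described above can be sketched as follows. All sizes (feature maps, number of classes, the single FC layer) are illustrative assumptions; the point is that concatenation doubles the channel count while leaving the spatial size unchanged, and Softmax turns the FC logits into class probabilities.

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # numerically stable form
    return e / e.sum(axis=-1, keepdims=True)

def classify(feat_a: np.ndarray, feat_b: np.ndarray,
             fc_weight: np.ndarray, fc_bias: np.ndarray) -> np.ndarray:
    """Concatenate two (C, H, W) maps along channels, then FC + Softmax."""
    target = np.concatenate([feat_a, feat_b], axis=0)  # (2C, H, W): channels double
    logits = fc_weight @ target.ravel() + fc_bias       # fully connected layer
    return softmax(logits)                              # probabilities per class
```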
As can be seen, in the embodiment of the present application, an image to be detected is input into a convolutional network model obtained by training, with an image tampering data set, an initial network model constructed based on an annular residual U-shaped network, so that feature extraction is performed on the image to be detected to obtain a first feature map. The attention weight of the first feature map is then calculated, and the first feature map is weighted with the attention weight to obtain a first weighted feature map; a pooling operation is performed on the first weighted feature map to obtain a pooled weighted feature map; a deconvolution operation is performed on the pooled weighted feature map to obtain a second weighted feature map; and the second weighted feature map is spliced with the first weighted feature map, the target feature map obtained after splicing being passed through a fully connected layer and a Softmax layer to obtain the image tampering detection result of the image to be detected. In this way, an attention mechanism is added to the convolutional network model created based on the annular residual U-shaped network, more attention is paid to the contextual information of the image and to the information of the tampered region, and the efficiency and accuracy of image tampering detection are improved.
The embodiment of the application discloses a specific image tampering detection method, which is shown in fig. 2 and comprises the following steps:
step S21: inputting an image to be detected into a trained convolutional network model so as to perform feature extraction on the image to be detected through the convolutional network model to obtain a first feature map; the convolution network model is obtained by training an initial network model constructed based on an annular residual U-shaped network by using an image tampering data set.
In this embodiment, an image to be detected is input into a convolutional network model obtained by training an initial network model constructed based on an annular residual U-type network in advance using an acquired image tampering data set, feature extraction is performed on the image to be detected using the convolutional network model, and an extracted feature map is output. Referring to fig. 3, fig. 3 shows a network structure of a ring-shaped residual U-type network including two convolutional layers. Specifically, the features in the image to be detected may be extracted according to the following convolution calculation formula:
y = σ(F(x, {W_i}) + x)

wherein x represents the image to be detected, y represents the first feature map, F(x, {W_i}) represents the residual map to be learned, W_i represents the weight of the i-th layer, and σ represents the activation function.
Specifically, if the image to be detected is I, a corresponding characteristic diagram can be obtained after passing through the U-type network with the annular residual error, and can be represented by the following formula:
F_1 = U_r(I)

wherein F_1 represents the feature map output after the residual convolution operation on the image I to be detected, and U_r(I) represents the residual convolution operation performed on the image I to be detected.
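The residual computation just described, in which the learned residual mapping is added back to the identity input before the activation, can be sketched as follows. The two-layer fully connected form of the residual mapping is an assumption made so the sketch stays small; in the patent the mapping is convolutional.

```python
import numpy as np

def relu(z: np.ndarray) -> np.ndarray:
    return np.maximum(z, 0.0)

def residual_unit(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Residual unit: activation of (learned residual mapping + identity input)."""
    fx = w2 @ relu(w1 @ x)   # F(x, {W1, W2}): the residual map to be learned
    return relu(fx + x)      # identity shortcut added before the activation
```

The shortcut term `+ x` is what lets gradients flow past the convolutional layers, which is the motivation for residual structures in deep tampering detectors.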
Step S22: and calculating the channel attention weight of the first feature map, and performing weighting processing on the first feature map by using the channel attention weight to obtain a channel weighted feature map.
In this embodiment, after the image to be detected is input into the trained convolutional network model and the feature of the image to be detected is extracted through the convolutional network model to obtain the first feature map, in order to solve the problem of gradient degradation caused by the convolutional network model, an attention mechanism may be added. Specifically, a channel attention weight of the first feature map is calculated, and the first feature map is weighted by the calculated channel attention weight to obtain a corresponding channel weighted feature map.
In this embodiment, referring to fig. 4, the calculating the channel attention weight of the first feature map may specifically include:
step S31: carrying out average pooling operation on the first characteristic diagram to obtain a first average pooling result;
step S32: performing maximum pooling operation on the first feature map to obtain a first maximum pooling result;
step S33: passing the first average pooling result and the first maximum pooling result through a full connection layer to obtain corresponding first channel attention and second channel attention;
step S34: and fusing the first channel attention and the second channel attention to obtain a channel attention weight.
In this embodiment, referring to fig. 5, after feature extraction is performed on the image to be detected by the convolutional network model to obtain the first feature map, an average pooling operation and a maximum pooling operation are performed separately on the first feature map, in order to preserve the completeness of its feature information, yielding a corresponding first average pooling result and first maximum pooling result. Both results are then passed through a fully connected layer to obtain two channel attention values, namely the first channel attention and the second channel attention, which are then fused, for example by direct element-wise addition, to obtain the channel attention weight. Specifically, the channel attention weight of the first feature map may be calculated according to the following channel attention calculation formula:
M_c(F_1) = σ(MLP(AvgPool(F_1)) + MLP(MaxPool(F_1)))

wherein σ represents the Sigmoid activation function, M_c(F_1) represents the channel attention weight of the first feature map F_1, MLP represents the fully connected layer, AvgPool(F_1) represents the average pooling operation on the first feature map F_1, and MaxPool(F_1) represents the maximum pooling operation on the first feature map F_1. The channel attention weight is then applied to the first feature map F_1, which can be represented by the following formula:

F_1' = M_c(F_1) ⊗ F_1
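The channel attention computation of steps S31 to S34 can be sketched in NumPy. The hidden size of the shared fully connected layers is an illustrative assumption; the structure (per-channel average and max pooling, a shared two-layer MLP, additive fusion, Sigmoid, then channel-wise weighting) follows the steps above.

```python
import numpy as np

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(fmap: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Weight each channel of a (C, H, W) map by its channel attention."""
    avg = fmap.mean(axis=(1, 2))     # (C,) average-pooled channel descriptor
    mx = fmap.max(axis=(1, 2))       # (C,) max-pooled channel descriptor
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)   # shared two-layer FC
    weight = sigmoid(mlp(avg) + mlp(mx))           # fuse the two attentions by addition
    return fmap * weight[:, None, None]            # apply the weight per channel
```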
Step S23: and calculating the spatial attention weight of the channel weighted feature map, and performing weighting processing on the channel weighted feature map by using the spatial attention weight to obtain the spatial weighted feature map.
In this embodiment, after the channel attention weight of the first feature map is calculated and the first feature map is weighted by the channel attention weight to obtain a channel weighted feature map, the channel weighted feature map is used as an input of spatial attention calculation to calculate a spatial attention weight of the channel weighted feature map, and the channel weighted feature map is weighted by the calculated spatial attention weight to obtain a spatial weighted feature map.
In this embodiment, the calculating the spatial attention weight of the channel weighted feature map specifically may include: carrying out average pooling operation on the channel weighting characteristic diagram to obtain a second average pooling result; performing maximum pooling operation on the channel weighting characteristic diagram to obtain a second maximum pooling result; and performing convolution operation on the second average pooling result and the second maximum pooling result, and obtaining a spatial attention weight through a full-connection layer. Specifically, referring to fig. 6, the channel weighted feature map is used as an input, an average pooling operation and a maximum pooling operation are performed on the channel weighted feature map, a convolution operation is performed on a second average pooling result and a second maximum pooling result obtained after the pooling operation, and a spatial attention weight of the channel weighted feature map is obtained through a full connection layer. Specifically, the spatial attention weight of the channel weighted feature map may be calculated according to the following spatial attention calculation formula:
M_s(F') = σ(f^{7×7}([AvgPool(F'); MaxPool(F')]))

where M_s(F') denotes the spatial attention weight of the channel weighted feature map F', f^{7×7} represents a convolution operation with a convolution kernel size of 7 × 7, [·;·] denotes concatenation of the two pooling results, and σ represents the Sigmoid activation function. The spatial attention weight and the channel weighted feature map are then fused to obtain the spatial weighted feature map F'', which can be represented by the following formula:

F'' = M_s(F') ⊗ F'
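The spatial attention step can likewise be sketched in NumPy. This is a simplified illustration under stated assumptions: the naive convolution loop stands in for the learned convolution over the stacked pooling results, and any odd kernel size is accepted (the text specifies 7 × 7).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(feature_map, kernel):
    """Spatial attention sketch over a channel-weighted map F'.

    feature_map: (C, H, W); kernel: (2, k, k) weights of a hypothetical
    single-output convolution applied to the two pooled maps.
    """
    # Pool along the channel axis to get two (H, W) maps
    avg_pool = feature_map.mean(axis=0)
    max_pool = feature_map.max(axis=0)
    stacked = np.stack([avg_pool, max_pool])       # (2, H, W) concatenation
    k = kernel.shape[-1]
    pad = k // 2
    padded = np.pad(stacked, ((0, 0), (pad, pad), (pad, pad)))
    h, w = avg_pool.shape
    conv = np.zeros((h, w))
    for i in range(h):                             # naive 2-in / 1-out convolution
        for j in range(w):
            conv[i, j] = np.sum(padded[:, i:i + k, j:j + k] * kernel)
    weights = sigmoid(conv)                        # (H, W) spatial attention
    # Weight every channel of the map by the same spatial weights
    return weights[None, :, :] * feature_map, weights
```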
Step S24: fusing the spatial weighted feature map and the first feature map to obtain a first weighted feature map.
In this embodiment, after the spatial attention weight of the channel weighted feature map is calculated and the channel weighted feature map is weighted by the spatial attention weight to obtain a spatial weighted feature map, the spatial weighted feature map and the first feature map are fused to obtain a first weighted feature map. The fusion can be performed by the method of Sum Fusion, that is, element-wise addition. Specifically, it can be represented by the following formula:
F_out = F + F''

where the feature fusion is performed by adding the convolution feature map F and the feature map F'' fused with the mixed attention weights, giving the first weighted feature map F_out.
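The Sum Fusion step is simply an element-wise addition of two same-shaped maps; a minimal sketch:

```python
import numpy as np

def sum_fusion(conv_feature_map, attention_weighted_map):
    """Sum Fusion: element-wise addition of the convolution feature map
    and the feature map fused with the mixed attention weights."""
    assert conv_feature_map.shape == attention_weighted_map.shape
    return conv_feature_map + attention_weighted_map
```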
Further, after the above first weighted feature map is obtained, it can be fed back, through a Sigmoid activation function, into the convolution network model constructed based on the annular residual U-shaped network.
Step S25: performing a pooling operation on the first weighted feature map to obtain a pooled weighted feature map.
In this embodiment, after the spatial weighted feature map and the first feature map are fused to obtain a first weighted feature map, the first weighted feature map may be further pooled to obtain a pooled weighted feature map.
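The pooling of the first weighted feature map can be sketched as a 2 × 2 max pooling; the window size and the use of max (rather than average) pooling at this stage are assumptions for illustration, as the text does not fix them here.

```python
import numpy as np

def max_pool_2x2(feature_map):
    """2x2 max pooling with stride 2 over a (C, H, W) feature map;
    H and W are assumed even for simplicity."""
    c, h, w = feature_map.shape
    # Group each 2x2 spatial block and take its maximum
    return feature_map.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))
```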
In this embodiment, after the pooling operation is performed on the first weighted feature map to obtain a pooled weighted feature map, the method may specifically include: inputting the pooled weighted feature map, as a new image to be detected, into the convolution network model according to a preset number of convolutions to obtain a second feature map, and calculating the attention weight of the second feature map; weighting the second feature map by using the attention weight of the second feature map to obtain a second weighted feature map; and performing a pooling operation on the second weighted feature map to obtain a new pooled weighted feature map. It can be understood that, when detecting an image through a neural network model, multiple convolution operations are generally required to extract the features in the image; in this embodiment, the pooled weighted feature map may be convolved according to the preset number of convolution operations. For example, the feature map obtained after pooling may be subjected 3 times to the same sequence of operations used to obtain the pooled weighted feature map: the feature map obtained after pooling is used as a new image to be detected to obtain a new feature map, i.e., the second feature map; the attention weight of the second feature map is calculated; the second feature map is weighted with that attention weight to obtain a second weighted feature map; and a pooling operation is performed on the second weighted feature map to obtain a new pooled weighted feature map.
Further, after the pooling operation is performed on the second weighted feature map to obtain a new pooled weighted feature map, the method may specifically include: performing a deconvolution operation on the new pooled weighted feature map, according to the preset number of convolutions and the preset convolution kernel size and stride, to obtain a corresponding third weighted feature map; and splicing the third weighted feature map with the new pooled weighted feature map. It can be understood that, after the convolution operations, corresponding deconvolution operations need to be performed: the feature map produced by the preset number of convolution operations is subjected to the inverse of those operations. The new pooled weighted feature map can be deconvolved based on the preset convolution kernel size and stride to obtain a feature map of the same size as the corresponding convolution block, i.e., the third weighted feature map; the third weighted feature map and the new pooled weighted feature map are then spliced, that is, the feature maps have the same size and the channel dimension is doubled.
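The deconvolve-then-splice decoder step can be sketched as follows. Nearest-neighbour upsampling is used here as a hypothetical stand-in for the learned transposed convolution with preset kernel size and stride; the splice doubles the channel dimension exactly as described above.

```python
import numpy as np

def upsample_and_splice(pooled_map, encoder_map):
    """Decoder step sketch: upsample the pooled weighted map back to the
    encoder map's spatial size, then concatenate along channels."""
    # Nearest-neighbour upsampling: double both spatial dimensions
    up = pooled_map.repeat(2, axis=1).repeat(2, axis=2)
    assert up.shape[1:] == encoder_map.shape[1:], "spatial sizes must match to splice"
    # Splicing: same spatial size, channel dimension doubles
    return np.concatenate([up, encoder_map], axis=0)
```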
Step S26: performing a deconvolution operation on the pooled weighted feature map to obtain a second weighted feature map.
Step S27: splicing the second weighted feature map with the first weighted feature map, and passing the target feature map obtained after splicing through a full connection layer and a Softmax layer to obtain the image tampering detection result of the image to be detected.
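The classification head in step S27 can be sketched as a flatten, a full connection layer, and a Softmax. The weight shapes and the two-class output (tampered / untampered) are assumptions for illustration only.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())     # subtract max for numerical stability
    return e / e.sum()

def classify(target_feature_map, w, b):
    """Flatten the spliced target feature map, apply a full connection
    layer, then Softmax to obtain class probabilities."""
    logits = target_feature_map.ravel() @ w + b
    return softmax(logits)
```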
For more specific processing procedures of the steps S26 and S27, reference may be made to corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
It can be seen that, in the embodiment of the present application, an image to be detected is input into a convolution network model, obtained by training an initial network model constructed based on an annular residual U-shaped network with an image tampering dataset, to obtain a first feature map. The channel attention weight of the first feature map is calculated, and the first feature map is weighted with the channel attention weight to obtain a channel weighted feature map. The spatial attention weight of the channel weighted feature map is calculated, and the channel weighted feature map is weighted with the spatial attention weight to obtain a spatial weighted feature map. The spatial weighted feature map is fused with the first feature map to obtain a first weighted feature map, and the first weighted feature map is pooled to obtain a pooled weighted feature map. The pooled weighted feature map is then deconvolved to obtain a second weighted feature map; finally, the second weighted feature map is spliced with the first weighted feature map, and the spliced target feature map is passed through a full connection layer and a Softmax layer to obtain the image tampering detection result of the image to be detected. In this way, two attention mechanisms, namely channel attention and spatial attention, are added to the target network model created based on the annular residual U-shaped network; applying attention in both the spatial domain and the channel domain of the features of the image to be detected allows the differences between tampered and untampered regions to be captured better, improving detection efficiency and accuracy.
Correspondingly, the embodiment of the present application further discloses an image tampering detection device, as shown in fig. 7, the device includes:
the feature extraction module 11 is configured to input an image to be detected into the trained convolutional network model, so as to perform feature extraction on the image to be detected through the convolutional network model, and obtain a first feature map; the convolution network model is obtained by training an initial network model constructed based on an annular residual U-shaped network by using an image tampering data set;
an attention weight calculation module 12, configured to calculate an attention weight of the first feature map, and perform weighting processing on the first feature map by using the attention weight to obtain a first weighted feature map;
a pooling module 13, configured to perform pooling operation on the first weighted feature map to obtain a pooled weighted feature map;
a deconvolution module 14, configured to perform a deconvolution operation on the pooled weighted feature map to obtain a second weighted feature map;
and the feature map splicing module 15 is configured to splice the second weighted feature map with the first weighted feature map, and to pass the target feature map obtained after splicing through a full connection layer and a Softmax layer to obtain the image tampering detection result of the image to be detected.
For the specific work flow of each module, reference may be made to corresponding content disclosed in the foregoing embodiments, and details are not repeated here.
In the embodiment of the application, an image to be detected is input into a convolution network model obtained by training an initial network model constructed based on an annular residual U-shaped network by using an image tampering data set, so that the image to be detected is subjected to feature extraction through the convolution network model to obtain a first feature map, then the attention weight of the first feature map is calculated, and the first feature map is subjected to weighting processing by using the attention weight to obtain a first weighted feature map; performing pooling operation on the first weighted feature map to obtain a pooled weighted feature map; and performing deconvolution operation on the pooled weighted feature map to obtain a second weighted feature map, splicing the second weighted feature map and the first weighted feature map, and enabling a target feature map obtained after splicing to pass through a full connection layer and a Softmax layer to obtain an image tampering detection result of the image to be detected. Therefore, an attention mechanism is added to a convolution network model created based on an annular residual U-shaped network, image context associated information and information of a tampered area are paid more attention to, and efficiency and accuracy of image tampering detection are improved.
In some specific embodiments, the attention weight calculating module 12 may specifically include:
the channel attention weight calculation unit is used for calculating the channel attention weight of the first feature map and performing weighting processing on the first feature map by using the channel attention weight to obtain a channel weighted feature map;
the spatial attention weight calculation unit is used for calculating the spatial attention weight of the channel weighted feature map and carrying out weighting processing on the channel weighted feature map by using the spatial attention weight to obtain a spatial weighted feature map;
and the first fusion unit is used for fusing the spatial weighted feature map and the first feature map to obtain a first weighted feature map.
In some specific embodiments, the channel attention weight calculation unit may specifically include:
the first average pooling unit is used for carrying out average pooling operation on the first characteristic diagram to obtain a first average pooling result;
the first maximum pooling unit is used for performing maximum pooling operation on the first feature map to obtain a first maximum pooling result;
a first full-connection unit, configured to pass the first average pooling result and the first maximum pooling result through a full-connection layer to obtain corresponding first channel attention and second channel attention;
and the second fusion unit is used for fusing the first channel attention and the second channel attention to obtain a channel attention weight.
In some specific embodiments, the spatial attention weight calculation unit may specifically include:
the second average pooling unit is used for performing an average pooling operation on the channel weighted feature map to obtain a second average pooling result;
the second maximum pooling unit is used for performing a maximum pooling operation on the channel weighted feature map to obtain a second maximum pooling result;
and the first convolution operation unit is used for performing convolution operation on the second average pooling result and the second maximum pooling result and obtaining the spatial attention weight through a full connection layer.
In some specific embodiments, the attention weight calculating module 12 may further include:
the first feedback unit is used for feeding the first weighted feature map back to the convolutional network model by using a preset activation function so as to train the convolutional network model through the first weighted feature map.
In some specific embodiments, the image tampering detection apparatus may further include, after the pooling module 13:
the image input unit is used for inputting the pooled weighted feature map, as a new image to be detected, into the convolution network model according to the preset number of convolutions to obtain a second feature map, and for calculating the attention weight of the second feature map;
the feature map weighting unit is used for weighting the second feature map by using the attention weight of the second feature map to obtain a second weighted feature map;
and the pooling unit is used for performing pooling operation on the second weighted feature map to obtain a new pooled weighted feature map.
In some specific embodiments, the image tampering detection apparatus may further include:
the deconvolution unit is used for performing deconvolution operation on the new pooled weighted feature map according to the preset convolution times and the size and the step length of a preset convolution kernel to obtain a corresponding third weighted feature map;
and the feature map splicing unit is used for splicing the third weighted feature map and the new pooled weighted feature map.
Further, an electronic device is disclosed in the embodiments of the present application, and fig. 8 is a block diagram of the electronic device 20 according to an exemplary embodiment, which should not be construed as limiting the scope of the application.
Fig. 8 is a schematic structural diagram of an electronic device 20 according to an embodiment of the present disclosure. The electronic device 20 may specifically include: at least one processor 21, at least one memory 22, a power supply 23, a communication interface 24, an input output interface 25, and a communication bus 26. The memory 22 is used for storing a computer program, and the computer program is loaded and executed by the processor 21 to implement the relevant steps in the image tampering detection method disclosed in any of the foregoing embodiments. In addition, the electronic device 20 in the present embodiment may be specifically an electronic computer.
In this embodiment, the power supply 23 is configured to provide a working voltage for each hardware device on the electronic device 20; the communication interface 24 can create a data transmission channel between the electronic device 20 and an external device, and a communication protocol followed by the communication interface is any communication protocol applicable to the technical solution of the present application, and is not specifically limited herein; the input/output interface 25 is configured to obtain external input data or output data to the outside, and a specific interface type thereof may be selected according to specific application requirements, which is not specifically limited herein.
In addition, the memory 22 serves as a carrier for resource storage, and may be a read-only memory, a random access memory, a magnetic disk, an optical disk, or the like; the resources stored thereon may include an operating system 221, a computer program 222, and so on, and the storage manner may be transient or permanent.
The operating system 221 is used for managing and controlling each hardware device on the electronic device 20 and the computer program 222, and may be Windows Server, Netware, Unix, Linux, or the like. The computer program 222 may further include a computer program that can be used to perform other specific tasks in addition to the computer program that can be used to perform the image tampering detection method performed by the electronic device 20 disclosed in any of the foregoing embodiments.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The image tampering detection method, device and equipment provided by the present application are described in detail above, and a specific example is applied in the present application to explain the principle and implementation of the present application, and the description of the above embodiment is only used to help understand the method and core ideas of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
Claims (9)
1. An image tampering detection method, comprising:
inputting an image to be detected into a trained convolutional network model so as to perform feature extraction on the image to be detected through the convolutional network model to obtain a first feature map; the convolution network model is obtained by training an initial network model constructed based on an annular residual U-shaped network by using an image tampering data set;
calculating attention weight of the first feature map, and performing weighting processing on the first feature map by using the attention weight to obtain a first weighted feature map;
performing pooling operation on the first weighted feature map to obtain a pooled weighted feature map;
performing deconvolution operation on the pooled weighted feature map to obtain a second weighted feature map;
and splicing the second weighted feature map with the first weighted feature map, and passing a target feature map obtained after splicing through a full connection layer and a Softmax layer to obtain an image tampering detection result of the image to be detected.
2. The image tampering detection method according to claim 1, wherein the calculating an attention weight of the first feature map and performing weighting processing on the first feature map by using the attention weight to obtain a first weighted feature map includes:
calculating a channel attention weight of the first feature map, and performing weighting processing on the first feature map by using the channel attention weight to obtain a channel weighted feature map;
calculating the spatial attention weight of the channel weighted feature map, and performing weighting processing on the channel weighted feature map by using the spatial attention weight to obtain a spatial weighted feature map;
and fusing the spatial weighted feature map and the first feature map to obtain a first weighted feature map.
3. The image tampering detection method of claim 2, wherein said calculating a channel attention weight of the first feature map comprises:
carrying out average pooling operation on the first characteristic diagram to obtain a first average pooling result;
performing maximum pooling operation on the first feature map to obtain a first maximum pooling result;
passing the first average pooling result and the first maximum pooling result through a full connection layer to obtain corresponding first channel attention and second channel attention;
and fusing the first channel attention and the second channel attention to obtain a channel attention weight.
4. The image tampering detection method of claim 2, wherein said computing the spatial attention weight of the channel weighted feature map comprises:
performing an average pooling operation on the channel weighted feature map to obtain a second average pooling result;
performing a maximum pooling operation on the channel weighted feature map to obtain a second maximum pooling result;
and performing convolution operation on the second average pooling result and the second maximum pooling result, and obtaining a spatial attention weight through a full-connection layer.
5. The image tampering detection method according to claim 1, wherein after calculating the attention weight of the first feature map and performing weighting processing on the first feature map by using the attention weight to obtain a first weighted feature map, the method further comprises:
and feeding back the first weighted feature map to the convolutional network model by using a preset activation function so as to train the convolutional network model through the first weighted feature map.
6. The image tampering detection method according to claim 1, wherein after performing pooling operation on the first weighted feature map to obtain a pooled weighted feature map, the method further comprises:
inputting the pooling weighted feature map serving as a new image to be detected into the convolution network model according to preset convolution times to obtain a second feature map, and calculating the attention weight of the second feature map;
weighting the second feature map by using the attention weight of the second feature map to obtain a second weighted feature map;
and performing pooling operation on the second weighted feature map to obtain a new pooled weighted feature map.
7. The image tampering detection method according to claim 6, further comprising:
performing deconvolution operation on the new pooled weighted feature map according to the preset convolution times and the size and the step length of a preset convolution kernel to obtain a corresponding third weighted feature map;
and splicing the third weighted feature map and the new pooled weighted feature map.
8. An image tampering detection apparatus, comprising:
the characteristic extraction module is used for inputting the image to be detected into the trained convolution network model so as to extract the characteristics of the image to be detected through the convolution network model to obtain a first characteristic diagram; the convolution network model is obtained by training an initial network model constructed based on an annular residual U-shaped network by using an image tampering data set;
the attention weight calculation module is used for calculating the attention weight of the first feature map and carrying out weighting processing on the first feature map by using the attention weight to obtain a first weighted feature map;
the pooling module is used for pooling the first weighted feature map to obtain a pooled weighted feature map;
the deconvolution module is used for carrying out deconvolution operation on the pooled weighted feature map to obtain a second weighted feature map;
and the feature map splicing module is used for splicing the second weighted feature map with the first weighted feature map, and for passing a target feature map obtained after splicing through a full connection layer and a Softmax layer to obtain an image tampering detection result of the image to be detected.
9. An electronic device comprising a processor and a memory; wherein the processor implements the image tampering detection method of any of claims 1 to 7 when executing the computer program stored in the memory.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210069050.XA CN114092477A (en) | 2022-01-21 | 2022-01-21 | Image tampering detection method, device and equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210069050.XA CN114092477A (en) | 2022-01-21 | 2022-01-21 | Image tampering detection method, device and equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114092477A true CN114092477A (en) | 2022-02-25 |
Family
ID=80309099
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210069050.XA Pending CN114092477A (en) | 2022-01-21 | 2022-01-21 | Image tampering detection method, device and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114092477A (en) |
Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109191472A (en) * | 2018-08-28 | 2019-01-11 | 杭州电子科技大学 | Based on the thymocyte image partition method for improving U-Net network |
CN109903252A (en) * | 2019-02-27 | 2019-06-18 | 深圳市商汤科技有限公司 | Image processing method and device, electronic equipment and storage medium |
CN110084794A (en) * | 2019-04-22 | 2019-08-02 | 华南理工大学 | A kind of cutaneum carcinoma image identification method based on attention convolutional neural networks |
CN110751018A (en) * | 2019-09-03 | 2020-02-04 | 上海交通大学 | Group pedestrian re-identification method based on mixed attention mechanism |
CN111080629A (en) * | 2019-12-20 | 2020-04-28 | 河北工业大学 | Method for detecting image splicing tampering |
CN111161273A (en) * | 2019-12-31 | 2020-05-15 | 电子科技大学 | Medical ultrasonic image segmentation method based on deep learning |
CN111401480A (en) * | 2020-04-27 | 2020-07-10 | 上海市同济医院 | Novel breast MRI (magnetic resonance imaging) automatic auxiliary diagnosis method based on fusion attention mechanism |
CN111539899A (en) * | 2020-05-29 | 2020-08-14 | 深圳市商汤科技有限公司 | Image restoration method and related product |
CN111754534A (en) * | 2020-07-01 | 2020-10-09 | 杭州脉流科技有限公司 | CT left ventricle short axis image segmentation method and device based on deep neural network, computer equipment and storage medium |
US20200357143A1 (en) * | 2019-05-09 | 2020-11-12 | Sri International | Semantically-aware image-based visual localization |
CN111986202A (en) * | 2020-10-26 | 2020-11-24 | 平安科技(深圳)有限公司 | Glaucoma auxiliary diagnosis device, method and storage medium |
CN112418027A (en) * | 2020-11-11 | 2021-02-26 | 青岛科技大学 | Remote sensing image road extraction method for improving U-Net network |
CN112508864A (en) * | 2020-11-20 | 2021-03-16 | 昆明理工大学 | Retinal vessel image segmentation method based on improved UNet + |
CN112580654A (en) * | 2020-12-25 | 2021-03-30 | 西南电子技术研究所(中国电子科技集团公司第十研究所) | Semantic segmentation method for ground objects of remote sensing image |
CN112818943A (en) * | 2021-03-05 | 2021-05-18 | 上海眼控科技股份有限公司 | Lane line detection method, device, equipment and storage medium |
CN113436166A (en) * | 2021-06-24 | 2021-09-24 | 深圳市铱硙医疗科技有限公司 | Intracranial aneurysm detection method and system based on magnetic resonance angiography data |
CN113706544A (en) * | 2021-08-19 | 2021-11-26 | 天津师范大学 | Medical image segmentation method based on complete attention convolution neural network |
CN113782190A (en) * | 2021-09-22 | 2021-12-10 | 河北工业大学 | Depression diagnosis method based on multi-stage space-time characteristics and mixed attention network |
CN113793348A (en) * | 2021-09-24 | 2021-12-14 | 河北大学 | Retinal vessel segmentation method and device |
CN113888550A (en) * | 2021-09-27 | 2022-01-04 | 太原理工大学 | Remote sensing image road segmentation method combining super-resolution and attention mechanism |
CN113901802A (en) * | 2021-09-29 | 2022-01-07 | 浪潮云信息技术股份公司 | Short text similarity matching method for CRNN (CrNN) network fusion attention mechanism |
Legal events: 2022-01-21 — CN patent application CN202210069050.XA filed (publication CN114092477A), status: Pending.
Non-Patent Citations (5)
Title |
---|
Md. Maklachur Rahman et al: "Efficient Visual Tracking With Stacked Channel-Spatial Attention", 《IEEE ACCESS》 * |
S. Manjunatha et al: "Deep learning-based Technique for Image Tamper Detection", 《ICICV》 * |
Li Hang: "Research on Blind Detection Technology for Forged Digital Images" (《伪造数字图像盲检测技术研究》), 31 January 2016 * |
Zhao Xiaopin: "Research on Image Semantic Segmentation Based on Deep Networks", China Masters' Theses Full-text Database (Information Science and Technology) * |
Guo Haolong et al: "Image Tampering Detection and Localization Algorithm Using Adaptive Thresholds", Journal of Optoelectronics · Laser (《光电子·激光》) * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Wang et al. | Detection and localization of image forgeries using improved mask regional convolutional neural network | |
Guo et al. | Fake face detection via adaptive manipulation traces extraction network | |
Hu et al. | SPAN: Spatial pyramid attention network for image manipulation localization | |
Wu et al. | Deep matching and validation network: An end-to-end solution to constrained image splicing localization and detection | |
Lin et al. | Image manipulation detection by multiple tampering traces and edge artifact enhancement | |
Chen et al. | SNIS: A signal noise separation-based network for post-processed image forgery detection | |
CN109902617B (en) | Picture identification method and device, computer equipment and medium | |
Hosny et al. | Copy‐for‐duplication forgery detection in colour images using QPCETMs and sub‐image approach | |
Li et al. | Image manipulation localization using attentional cross-domain CNN features | |
Singh et al. | SiteForge: Detecting and localizing forged images on microblogging platforms using deep convolutional neural network | |
Zhang et al. | Improved Fully Convolutional Network for Digital Image Region Forgery Detection. | |
CN110942456B (en) | Tamper image detection method, device, equipment and storage medium | |
Thajeel et al. | A Novel Approach for Detection of Copy Move Forgery using Completed Robust Local Binary Pattern. | |
Yang et al. | Design of cyber-physical-social systems with forensic-awareness based on deep learning | |
Li et al. | Image manipulation localization using multi-scale feature fusion and adaptive edge supervision | |
Gu et al. | FBI-Net: Frequency-based image forgery localization via multitask learning With self-attention | |
CN115273123A (en) | Bill identification method, device and equipment and computer storage medium | |
Mazumdar et al. | Two-stream encoder–decoder network for localizing image forgeries | |
Prabakar et al. | Hybrid deep learning model for copy move image forgery detection | |
CN112861960A (en) | Image tampering detection method, system and storage medium | |
Chen et al. | A novel general blind detection model for image forensics based on DNN | |
CN114092477A (en) | Image tampering detection method, device and equipment | |
CN112434547A (en) | User identity auditing method and device | |
Kumar et al. | A convnet based procedure for image copy-move forgery detection | |
Zhu et al. | SEINet: semantic-edge interaction network for image manipulation localization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 2022-02-25 |