CN113450330B - Image copying-pasting tampering detection method based on segmentation and depth convolution network - Google Patents


Info

Publication number
CN113450330B
Authority
CN
China
Prior art keywords
image
segmentation
features
copy
outputting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110729864.7A
Other languages
Chinese (zh)
Other versions
CN113450330A (en)
Inventor
王成优 (Wang Chengyou)
李倩雯 (Li Qianwen)
周晓 (Zhou Xiao)
Current Assignee
Shandong University
Original Assignee
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN202110729864.7A
Publication of CN113450330A
Application granted
Publication of CN113450330B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks


Abstract

The invention provides an image copy-paste tamper detection method based on segmentation and a deep convolutional network, comprising the following steps: acquiring an image to be detected; building an image segmentation model and training it to obtain segmentation weight parameters and boundary-to-pixel direction information of the image, then segmenting the image to be detected to obtain a segmented image; extracting features of the image to be detected with a deep convolutional network and outputting the image features; performing autocorrelation matching on the combination of the segmented image and the image features to obtain image matching features; inputting the image matching features into a classification model to obtain a preliminary tampered-region detection image; extracting an edge information image from the boundary-to-pixel direction information of the image; and constructing a detail optimization model that takes the preliminary tampered-region detection image and the edge information image as input and outputs the final tamper detection image.

Description

Image copying-pasting tampering detection method based on segmentation and depth convolution network
Technical Field
The disclosure belongs to the technical field of image tamper detection, and particularly relates to an image copy-paste tamper detection method based on segmentation and deep convolutional networks.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
With the popularization and ease of use of image editing software, tampered images have become ubiquitous. When tampered images appear in news media, medical diagnosis, military reconnaissance, judicial forensics, and similar fields, they pose serious risks to information security and threaten social harmony and political stability. Authenticity verification of images is therefore very important, and copy-paste tamper detection is a major direction within it: based on image content, it can determine whether an image has been tampered with and locate the tampered region. However, current copy-paste tamper detection methods still fall short in detection accuracy and localization precision, and their performance needs further improvement.
Convolutional networks extract image features automatically, avoiding the limitations of hand-designed features, and are widely used in the image field. Most current convolutional-network-based image copy-paste tamper detection methods extract image features with a convolutional network, use correlation matching to find regions of extremely high similarity within the image, and thereby detect copy-paste tampering.
Prior documents (Wu Y, Abd-Almageed W, Natarajan P. BusterNet: Detecting copy-move image forgery with source/target localization [C]. Proceedings of the 15th European Conference on Computer Vision (ECCV), Munich, Germany, 2018: 170-186; and Zhu Y, Chen C F, Yan G, Guo Y C, Dong Y F. AR-Net: Adaptive attention and residual refinement network for copy-move forgery detection [J]. IEEE Transactions on Industrial Informatics, 2020, 16(10): 6714-6723) propose deep-convolution-based image copy-paste detection methods. However, because the detail information of the image is lost after deep convolution, the accuracy of existing methods, especially their tamper-edge detection, is poor. How to effectively exploit image detail information to improve the accuracy of tamper detection is therefore a problem to be solved.
Disclosure of Invention
To overcome the deficiencies of the prior art, the present disclosure provides an image copy-paste tamper detection method based on segmentation and a deep convolutional network, which improves the detection accuracy and localization precision of copy-paste tamper detection by using the block classification information and detail information of the image segmentation result.
In order to achieve the purpose, the following technical scheme is adopted in the disclosure:
a first aspect of the present disclosure provides an image copy-paste tamper detection method based on segmentation and deep convolutional networks.
An image copy-paste tamper detection method based on segmentation and deep convolutional networks comprises the following steps:
acquiring an image to be detected;
building an image segmentation model and training it to obtain segmentation weight parameters and boundary-to-pixel direction information of the image, then segmenting the image to be detected to obtain a segmented image;
extracting features of the image to be detected based on a deep convolutional network and outputting the image features;
performing autocorrelation matching on the combination of the segmented image and the image features to obtain image matching features;
inputting the image matching features into a classification model to obtain a preliminary tampered-region detection image;
extracting an edge information image from the boundary-to-pixel direction information of the image;
and constructing a detail optimization model, inputting the preliminary tampered-region detection image and the edge information image, and outputting the tamper detection image.
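As a high-level orchestration sketch, the steps above might be wired together as follows. Every function name here is a hypothetical stub standing in for one of the trained models described later in the embodiment; the stubs only reproduce the array shapes stated there (h × w × 2 BPD information, (h/8) × (w/8) × 1024 features, h × w × 1 detection maps):

```python
import numpy as np

# Shape-faithful stubs for the five stages of the claimed method; the real
# stages are trained deep networks, not implemented here.
H, W = 64, 64

def segment(img):                      # segmentation model -> BPD info + segmented image
    bpd = np.zeros((H, W, 2))          # boundary-to-pixel direction, h x w x 2
    seg = np.zeros((H, W, 1))          # segmented image, h x w x 1
    return bpd, seg

def extract_features(img):             # deep convolutional feature extractor
    return np.zeros((H // 8, W // 8, 1024))

def autocorrelation_match(seg, feats, k=128):
    return np.zeros((H // 8, W // 8, k))

def classify(match):                   # classification model -> preliminary map
    return np.zeros((H, W, 1))

def edge_image(bpd):                   # edge information derived from BPD
    return np.zeros((H, W, 1))

def refine(prelim, edges):             # detail optimization model
    return np.zeros((H, W, 1))

def detect(img):
    bpd, seg = segment(img)
    feats = extract_features(img)
    match = autocorrelation_match(seg, feats)
    prelim = classify(match)
    return refine(prelim, edge_image(bpd))

result = detect(np.zeros((H, W, 3)))
print(result.shape)                    # (64, 64, 1)
```

The sketch only fixes the data flow between the claimed steps; all computation is deferred to the models of figs. 3 to 7.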
A second aspect of the present disclosure provides an image copy-paste tamper detection system based on segmentation and deep convolutional networks.
The image copy-paste tamper detection system based on segmentation and deep convolutional networks adopts the image copy-paste tamper detection method based on segmentation and deep convolutional networks, and comprises:
the image acquisition module, used for acquiring an image to be detected;
the segmentation module, used for building an image segmentation model, training it to obtain segmentation weight parameters and boundary-to-pixel direction information of the image, and segmenting the image to be detected to obtain a segmented image;
the feature extraction module, used for extracting features of the image to be detected based on the deep convolutional network and outputting the image features;
the autocorrelation matching module, used for performing autocorrelation matching on the combination of the segmented image and the image features to obtain image matching features;
and the detail optimization module, used for inputting the image matching features into a classification model to obtain a preliminary tampered-region detection image, extracting an edge information image from the boundary-to-pixel direction information of the image, constructing a detail optimization model, inputting the preliminary tampered-region detection image and the edge information image, and outputting the tamper detection image.
A third aspect of the disclosure provides a computer-readable storage medium.
A computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the steps of the image copy-paste tamper detection method based on segmentation and deep convolutional networks according to the first aspect of the present disclosure.
A fourth aspect of the present disclosure provides an electronic device.
An electronic device comprising a memory, a processor and a program stored on the memory and executable on the processor, the processor implementing the steps in the segmentation and deep convolutional network based image copy-paste tamper detection method according to the first aspect of the present disclosure when executing the program.
Compared with the prior art, the beneficial effects of the present disclosure are as follows:
Compared with existing neural-network-based methods and traditional hand-crafted-feature methods, the disclosed method achieves higher tamper detection accuracy, effectively handles cases that traditional manual methods struggle to detect, such as large-scale scaling and multiple tampering, accurately judges the authenticity of an image, and accurately locates the tampered region.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate embodiments of the disclosure and, together with the description, serve to explain the disclosure; they do not limit the disclosure.
Fig. 1 is a block diagram of a structure of an image copy-paste tamper detection method based on a segmentation and depth convolution network in a first embodiment of the present disclosure;
FIG. 2 is a network diagram of an image copy-paste tamper detection method based on a segmentation and depth convolution network in a first embodiment of the disclosure;
FIG. 3 is a network diagram of an image segmentation module in an image copy-paste tamper detection method based on segmentation and deep convolutional networks according to a first embodiment of the disclosure;
FIG. 4 is a network diagram of a feature extraction module in an image copy-paste tamper detection method based on a segmentation and depth convolution network according to a first embodiment of the present disclosure;
FIG. 5 is a network diagram of an autocorrelation matching module in an image copy-paste tamper detection method based on a segmentation and depth convolution network according to a first embodiment of the disclosure;
FIG. 6 is a network diagram of a classification module in an image copy-paste tamper detection method based on a segmentation and deep convolutional network according to a first embodiment of the present disclosure;
FIG. 7 is a network diagram of a detail optimization module in an image copy-paste tamper detection method based on a segmentation and depth convolution network according to a first embodiment of the disclosure;
fig. 8 is a detection result diagram of an image copy-paste tamper detection method based on a segmentation and depth convolution network in the first embodiment of the present disclosure.
Detailed Description
the present disclosure is further described with reference to the following drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit example embodiments according to the present disclosure. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well; and the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
Example one
The embodiment of the disclosure provides an image copying-pasting tampering detection method based on a segmentation and depth convolution network.
The image is first divided into irregular blocks using an image segmentation technique. Because of the nature of copy-paste tampering, the similarity between the copied region and the pasted region is extremely high, so the two are divided into blocks of the same class; segmenting in advance therefore strengthens the connections within same-class blocks and improves matching accuracy. A deep convolutional network performs the image feature extraction, replacing traditional hand-crafted features with automatically learned ones, so that the extracted features are better suited to image copy-paste tamper detection and the limitations of manual feature design are avoided. During feature extraction, a feature pyramid structure is built that accounts for both multi-scale information and detail information, so the method can effectively detect large-scale scaling and small-area tampering. The image features are fused with the segmentation features and matched by an autocorrelation matching module to obtain the relations among the pixels in the image. The resulting correlation matrix is then classified by a convolutional network to screen out repeated regions in the image and coarsely locate the copy-paste tampered region. Finally, the coarse tampered region is refined using image edge information obtained from the Boundary-to-Pixel Direction (BPD) information, yielding a more precise tamper detection result. Built on a deep convolutional network and exploiting both the intra-block connections of the segmented image and the edge information produced during segmentation, the method achieves high detection accuracy and precise detection results, and can effectively detect and locate image copy-paste tampering.
Specifically, as shown in fig. 1 and fig. 2, the image copy-paste tamper detection method based on the segmentation and depth convolution network includes the following steps:
step S01: an image segmentation model shown in fig. 3 is built, and tampered images with the size of h × w × 3 are output through the image segmentation model to obtain h × w × 2 BPD information.
Step S02: train the image segmentation model of fig. 3 on 7605 images from the Pascal Context dataset, while the remaining 3716 images are used to test network validity.
In training the image segmentation model, the loss function L_seg is defined by the formula below (rendered only as an image in the original document):

[equation image: definition of L_seg]

where Ω denotes the image domain; D_p and D̂_p denote the actual and predicted BPD information, respectively; w(p) denotes the adaptive weight of pixel p (its expression is likewise rendered as an image); G_p denotes the true segment containing pixel p; and α is a hyperparameter balancing the loss terms (α = 1 in this embodiment). Other training parameter settings are shown in Table 1 below. The segmentation weight parameters trained by the segmentation module are saved; the segmentation module then does not participate in the training of the remaining modules.

Table 1: image segmentation module training parameter settings [table rendered as image in the original]
Through step S02, the image segmentation model of this embodiment is built and trained.
Step S03: using the segmentation weight parameters saved in step S02, the tampered image is passed through the image segmentation model shown in fig. 3 to obtain its BPD information.
Here, the BPD information may be used not only to segment the image but also to extract detailed edge information therefrom (step S08) for use in optimizing details of the detection result later (step S09) to improve the detection accuracy.
Step S04: the Super-BPD method (ref: Wan J, Liu Y, Wei D, Bai X, Xu Y. Super-BPD: Super boundary-to-pixel direction for fast image segmentation [C]. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 2020: 9253-9262), which balances the accuracy and speed of image segmentation, is applied to the BPD information obtained in step S03 to obtain an h × w × 1 segmented image.
Steps S01 to S04 of this embodiment form the image segmentation module in fig. 1 and are intended to segment the image irregularly. Because of the nature of copy-paste tampering, the similarity between the copied region and the pasted region is extremely high and the two are divided into blocks of the same class; segmenting in advance therefore strengthens the connections between same-class blocks, increases the intra-block matching weight during matching, reduces interference from unrelated blocks, and improves detection accuracy.
S05: and (3) constructing a feature extraction model shown in FIG. 4, and performing deep convolution network on the tampered image with the size of h multiplied by w multiplied by 3 in FIG. 4 to obtain image features of (h/8) multiplied by (w/8) multiplied by 1024.
The method comprises the following steps of performing feature extraction of an image to be detected based on a depth convolution network and outputting image features, wherein the specific process is as follows:
step S501: building a VGG16 network after removing the full connection layer, namely part (I) in FIG. 4; simultaneously, the global characteristics output by a deep network of the image and the local detail information output by a shallow network are considered to obtain the original characteristics of the image;
step S502: the original features of the image in the step S501 are subjected to an empty space Pyramid Pooling (ASPP) layer (4 AtrousConv blocks in fig. 3), and multi-scale features of the image are extracted, which is helpful for considering different object proportions and improving detection accuracy and robustness against scaling attack;
step S503: unifying the original features of the 3-layer image output in the step S501 (output from the fourth step in FIG. 4) and the multi-scale information of the image output in the step S502 into 4 (h/8) x (w/8) x 256 image features (part (II) in FIG. 4) by 1 x 1 convolution and bilinear interpolation;
step S504: the 4 image features output in step S503 are combined to constitute an image feature of (h/8) × (w/8) × 1024.
In this embodiment, step S05 is the feature extraction module in fig. 1, intended to extract comprehensive and suitable image features. It takes into account both multi-scale features (the ASPP layer output) and the local detail information (the shallow-layer output of VGG16) that would otherwise be discarded as the receptive field of the deep network grows, forming a feature pyramid and thereby improving detection accuracy, robustness against scaling attacks, and the precision of detection details.
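The shape bookkeeping of steps S503 and S504 can be illustrated with stand-in arrays. In this sketch the `downsample2x` average-pooling helper is a hypothetical substitute for the bilinear resizing, and random arrays replace the real VGG16 and ASPP outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 64, 64

def downsample2x(x):
    # Stand-in for the bilinear resizing of step S503: 2x2 average pooling
    # halves the spatial size of an (H, W, C) map.
    H, W, C = x.shape
    return x.reshape(H // 2, 2, W // 2, 2, C).mean(axis=(1, 3))

# Hypothetical stand-ins for the four (h/8) x (w/8) x 256 maps of part (II)
# in fig. 4: one shallow stage resized down to 1/8 resolution, plus three maps
# already at 1/8 resolution (in the real network these come from VGG16 layers
# and the ASPP output).
shallow = rng.standard_normal((h // 4, w // 4, 256))
maps = [downsample2x(shallow)] + [
    rng.standard_normal((h // 8, w // 8, 256)) for _ in range(3)
]

# Step S504: concatenate along the channel axis into (h/8) x (w/8) x 1024.
features = np.concatenate(maps, axis=-1)
print(features.shape)  # (8, 8, 1024)
```

Only the shapes are meaningful here; the content of each map is produced by the trained network in the actual method.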
Step S06: an autocorrelation matching model as shown in fig. 5 is constructed, and the image features output in step S05 and the segmented image output in step S04 are subjected to the autocorrelation matching model to obtain image matching features of (h/8) × (w/8) × k (in the present embodiment, k is 128).
The specific process for obtaining the image matching features is as follows:
Step S601: perform size conversion and 1 × 1 convolution on the segmented image output in step S04 to adjust it to (h/8) × (w/8) × 256, and combine it with the image features output in step S05 to form the features to be matched, of size (h/8) × (w/8) × 1280;
Step S602: arrange the rows and columns of the features to be matched into a two-dimensional feature matrix M of size (hw/64) × 1280, in which each feature vector has 1280 dimensions, of which 1/5 (256 dimensions) are intra-block association features;
Step S603: normalize the two-dimensional feature matrix M so that each feature vector has unit norm;
Step S604: compute the correlation matrix M_cor = M · M^T, where [·]^T denotes the matrix transpose; this yields an (hw/64) × (hw/64) correlation matrix representing the similarity between all pairs of feature vectors; the closer a similarity value in the correlation matrix is to 1, the more similar the two features, and the more likely the region described by the feature vector has been tampered with;
Step S605: split the correlation matrix according to the row-column arrangement rule of step S602 to obtain matched features of size (h/8) × (w/8) × (hw/64);
Step S606: sort the hw/64 values along the 3rd dimension of the matched features in descending order;
Step S607: keep the 2nd through (k+1)-th of the sorted features to obtain image matching features of size (h/8) × (w/8) × k. The 1st (maximum) similarity feature is discarded because it is the similarity of a feature with itself, which is infinitely close to 1 and is useless for finding tampered regions.
In this embodiment, step S06 is the autocorrelation matching module in fig. 1, intended to judge feature similarity via the similarity matrix, which essentially searches for copy and paste regions of extremely high similarity. After the correlation matrix is obtained, the method does not search for the exact position each feature matches; it only judges whether a region has similar feature vectors elsewhere in the image. Because there is no mapping-search process, the complexity of the method is reduced, which is an advantage in cases of multiple copy-paste tampering.
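Steps S602 through S607 reduce to standard matrix operations. The sketch below implements them in NumPy on a bare feature map, omitting the segmentation-feature fusion of step S601; a planted duplicate feature shows how a copy-paste pair surfaces as a near-1 similarity:

```python
import numpy as np

def autocorrelation_match(feats, k):
    """Steps S602-S607 on an (H, W, C) feature map: flatten row-major (S602),
    L2-normalize each feature vector (S603), compute M @ M.T (S604), sort each
    row's similarities in descending order (S606), and keep ranks 2..k+1,
    dropping the rank-1 self-similarity of ~1 (S607)."""
    H, W, C = feats.shape
    M = feats.reshape(H * W, C)
    M = M / np.linalg.norm(M, axis=1, keepdims=True)
    Mcor = M @ M.T
    Msorted = np.sort(Mcor, axis=1)[:, ::-1]
    matched = Msorted[:, 1:k + 1]
    return matched.reshape(H, W, k)       # S605: back to spatial layout

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 8, 32))
feats[0, 1] = feats[0, 0]                 # plant an exact copy-paste pair
out = autocorrelation_match(feats, k=5)
print(out.shape)                          # (8, 8, 5)
print(round(float(out[0, 0, 0]), 3))      # 1.0 -> the planted copy is found
```

Note how no correspondence between positions is recovered: only the magnitude of the best non-self similarities survives, which matches the patent's remark that the mapping-search step is skipped.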
Step S07: build the classification model shown in fig. 6; the image matching features output in step S06 are passed through the classification model to obtain an h × w × 1 preliminary tampered-region detection image.
In this embodiment, step S07 is the classification module in fig. 1, intended to use the classification capability of the convolutional network to judge whether each matching result corresponds to a copy-paste tampered region.
Step S08: using the BPD information output in step S03, detailed edge information is extracted to obtain an h × w × 1 edge information image according to the formula below (rendered only as an image in the original document):

[equation image: edge information extraction from BPD]
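Because the edge-extraction formula is available only as an image, the following is a hypothetical sketch of one plausible rule: mark a pixel as an edge when its BPD unit vector points roughly opposite to a neighbour's, i.e. at a direction discontinuity between segments. The dot-product threshold of -0.5 is an assumption, not taken from the patent:

```python
import numpy as np

def edges_from_bpd(bpd, thresh=-0.5):
    """Hypothetical sketch: mark pixel p as an edge when its unit direction
    vector is roughly opposite to that of its right or lower neighbour,
    i.e. their dot product falls below thresh."""
    d = bpd / (np.linalg.norm(bpd, axis=-1, keepdims=True) + 1e-8)
    edge = np.zeros(d.shape[:2], dtype=bool)
    dot_x = (d[:, :-1] * d[:, 1:]).sum(-1)   # agreement with right neighbour
    dot_y = (d[:-1] * d[1:]).sum(-1)         # agreement with lower neighbour
    edge[:, :-1] |= dot_x < thresh
    edge[:-1] |= dot_y < thresh
    return edge[..., None].astype(np.float32)  # h x w x 1 edge image

# Two regions whose BPD vectors point toward each other meet at column 4.
bpd = np.zeros((8, 8, 2))
bpd[:, :4] = (1.0, 0.0)    # left region points right
bpd[:, 4:] = (-1.0, 0.0)   # right region points left
e = edges_from_bpd(bpd)
print(e.shape, int(e.sum()))  # (8, 8, 1) 8
```

The synthetic example recovers exactly the one-pixel-wide boundary column between the two regions, which is the behaviour the detail optimization step needs from the edge information image.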
step S09: a detail optimization model shown in fig. 7 is built, and the preliminary falsified region detection image output in step S07 and the edge information image output in step S08 are subjected to the detail optimization model to obtain a falsification detection image of h × w × 1.
The specific process of outputting the tamper detection image is as follows:
step S901: respectively carrying out 1 × 1 convolution kernel dimension expansion on the preliminary tampered area detection image and the edge information image, and increasing feature representation dimensions;
step S902: and performing feature fusion on the expanded primary tamper area detection image features and the edge information image features to obtain h multiplied by w multiplied by 256 fusion features, inputting the fusion features into a detail optimization model for optimization, optimizing a detection result through a 4-layer convolution network, performing feature dimension compression through 1 multiplied by 1 convolution kernel, and outputting a visual tamper detection image.
In this embodiment, step S09 is the detail optimization module in fig. 1; it refines the edges of the detection result using the edge information generated during segmentation, improving the accuracy of the detection result.
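Steps S901 and S902 amount to per-pixel channel operations. In the hedged NumPy sketch below, a 1 × 1 convolution is modelled as a matrix multiplication over channels; the 128-channel expansion per input is an assumption (the patent only states that the fused features are h × w × 256), and random weights stand in for learned ones:

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 16, 16
prelim = rng.random((h, w, 1))   # preliminary tampered-region map (step S07)
edges = rng.random((h, w, 1))    # edge information image (step S08)

def conv1x1(x, out_ch, rng):
    # A 1x1 convolution is a per-pixel linear map over channels.
    W = rng.standard_normal((x.shape[-1], out_ch)) * 0.1
    return x @ W

# Step S901: expand each single-channel input to a richer representation
# (128 channels each is an assumption).
p = conv1x1(prelim, 128, rng)
e = conv1x1(edges, 128, rng)

# Step S902: channel-wise fusion; in the real model this is followed by a
# 4-layer convolutional network, then a 1x1 compression back to h x w x 1.
fused = np.concatenate([p, e], axis=-1)
out = conv1x1(fused, 1, rng)
print(fused.shape, out.shape)  # (16, 16, 256) (16, 16, 1)
```

The sketch shows only the fusion plumbing; the refinement effect itself comes from the trained 4-layer network of fig. 7.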
Step S10: the feature extraction model (fig. 4), the autocorrelation matching model (fig. 5), the classification model (fig. 6), and the detail optimization model (fig. 7) are trained using the 80000 images of the training set of the USCISI-CMFD dataset. Since image tamper detection is essentially a binary classification of each pixel as tampered or untampered, the training loss function is the binary cross-entropy loss (BCELoss) L_BCE:

L_BCE = -Σ_{p∈Ω} [y_p log(ŷ_p) + (1 - y_p) log(1 - ŷ_p)]

where y_p ∈ {0, 1} denotes the true tampered/untampered label of pixel p, and ŷ_p denotes the network's predicted probability that pixel p is tampered. Other training parameter settings are shown in Table 2; the trained parameters are saved after network training.
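The binary cross-entropy objective can be checked numerically. The sketch below averages over pixels (whether the original formula sums or averages is not visible in the equation image, so the averaging is an assumption); confident predictions on the correct labels yield a lower loss than wrong ones:

```python
import numpy as np

def bce_loss(y_true, y_pred, eps=1e-7):
    """Per-pixel binary cross-entropy, averaged over the image domain."""
    y_pred = np.clip(y_pred, eps, 1 - eps)          # avoid log(0)
    return float(-np.mean(y_true * np.log(y_pred)
                          + (1 - y_true) * np.log(1 - y_pred)))

y = np.array([[1.0, 0.0], [1.0, 0.0]])              # ground-truth tamper mask
perfect = bce_loss(y, np.array([[0.99, 0.01], [0.99, 0.01]]))
poor = bce_loss(y, np.array([[0.40, 0.60], [0.40, 0.60]]))
print(perfect < poor)  # True
```

For a single pixel with y_p = 1 and ŷ_p = 0.5 the loss is -ln(0.5) ≈ 0.693, a useful sanity check when wiring up training.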
Table 2: training parameter settings for the 4 modules of the tamper detection branch [table rendered as image in the original]
Through step S10, all models of this embodiment are built and trained; the segmentation weight parameters of step S02 and the training weight parameters of step S10 are stored in separate weight files.
Step S11: when copy-paste tamper detection is performed on a picture, a visual tamper detection image is obtained through the flow shown in fig. 1 (i.e., step S01 and steps S03 to S09) using the weight parameters trained in step S02 and step S10.
The method in the embodiment has better copy-paste tampering detection accuracy and detection result accuracy, is suitable for image authenticity identification, and can accurately position the image copy-paste tampering area.
To demonstrate the effectiveness of the method in this example, the USCISI-CMFD test set (20000 images), the CASIA II dataset (1310 images), and the CoMoFoD dataset (5000 images) were used to verify the detection performance of this embodiment against copy-paste tampering.
The method of this example is compared with a traditional block-based method (Ryu S J, Lee M J, Lee H K. Detection of copy-rotate-move forgery using Zernike moments [C]. Proceedings of the 12th Information Hiding Conference, Calgary, Canada, 2010: 51-65), a traditional keypoint-based method (Cozzolino D, Poggi G, Verdoliva L. Efficient dense-field copy-move forgery detection [J]. IEEE Transactions on Information Forensics and Security, 2015, 10(11): 2284-2297), and the deep-learning methods BusterNet and AR-Net cited above. Precision p, recall r, and F-measure are used as evaluation indices to reflect the accuracy and precision of the detection results. p, r, and F are defined as:
p = N_TP / (N_TP + N_FP),   r = N_TP / (N_TP + N_FN),   F = 2pr / (p + r)

where N_TP is the number of tampered pixels correctly predicted as tampered; N_FP is the number of untampered pixels wrongly predicted as tampered; and N_FN is the number of tampered pixels wrongly predicted as untampered. The closer p, r, and F are to 1, the better the detection effect. Tables 3 and 4 compare the precision p, recall r, and F values of the five methods' tamper detection results on the CASIA II and CoMoFoD datasets. As tables 3 and 4 show, the tamper detection method of this embodiment achieves higher detection accuracy.
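The pixel-level precision, recall, and F-measure described above are simple to compute from the three pixel counts. A minimal sketch (without guards against empty classes, which a real evaluator would need):

```python
import numpy as np

def prf(pred, truth):
    """Pixel-level precision p, recall r and F-measure from binary maps."""
    n_tp = int(np.sum((pred == 1) & (truth == 1)))  # tampered, predicted tampered
    n_fp = int(np.sum((pred == 1) & (truth == 0)))  # untampered, predicted tampered
    n_fn = int(np.sum((pred == 0) & (truth == 1)))  # tampered, predicted untampered
    p = n_tp / (n_tp + n_fp)
    r = n_tp / (n_tp + n_fn)
    f = 2 * p * r / (p + r)
    return p, r, f

truth = np.array([1, 1, 1, 1, 0, 0, 0, 0])
pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
print(prf(pred, truth))  # (0.75, 0.75, 0.75)
```

Here 3 of 4 tampered pixels are found (one miss, one false alarm), giving p = r = F = 0.75, consistent with the definitions above.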
Table 3: comparison of detection results of different tamper detection methods on the CASIA II dataset [table rendered as image in the original]
Table 4: comparison of detection results of different tamper detection methods on the CoMoFoD dataset [table rendered as image in the original]
To show the tamper detection effect of this embodiment more intuitively, fig. 8 presents detection results on partially scaled and multiply tampered images; it can be seen that this embodiment clearly detects the tampered region under translation only, small-scale scaling, large-scale scaling, and multiple tampering.
Based on segmentation and a deep convolutional network, this embodiment provides an image copy-paste tamper detection method. Compared with existing neural-network-based methods and traditional hand-crafted-feature methods, it achieves higher tamper detection accuracy and can effectively detect cases that traditional manual methods struggle with, such as large-scale scaling and multiple tampering. With this method, the authenticity of an image can be judged accurately and the tampered region located precisely.
Example two
The second embodiment of the present disclosure provides an image copy-paste tamper detection system based on a segmentation and depth convolution network, which adopts the image copy-paste tamper detection method based on the segmentation and depth convolution network provided in the first embodiment, and includes:
the image acquisition module is used for acquiring an image to be detected;
the segmentation module is used for building an image segmentation model, training the image segmentation model to obtain segmentation weight parameters and boundary pixel direction information of an image, and performing segmentation processing on the image to be detected to obtain a segmented image;
the characteristic extraction module is used for extracting the characteristics of the image to be detected based on the depth convolution network and outputting the image characteristics;
the self-correlation matching module is used for carrying out self-correlation matching by combining the segmentation image and the image characteristics to obtain image matching characteristics;
and the detail optimization module is used for inputting the image matching characteristics into a classification model to obtain a preliminary tampered area detection image, extracting an edge information image according to the boundary pixel direction information of the image, constructing a detail optimization model, inputting the preliminary tampered area detection image and the edge information image, and outputting the tampered detection image.
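As a rough illustration of what the feature extraction module produces, the following NumPy sketch fuses several pooling scales of a feature map by channel-wise concatenation. All shapes are assumed, and stride-1 average pooling merely stands in for the trained atrous spatial pyramid pooling branches of the actual deep convolutional network; it is not the patented implementation:

```python
import numpy as np

# Illustrative multi-scale feature fusion on an assumed (H, W, C) feature map.

def avg_pool_same(x: np.ndarray, size: int) -> np.ndarray:
    """Average-pool with stride 1 and zero padding so output keeps (H, W)."""
    pad = size // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    h, w, _ = x.shape
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            out[i, j] = xp[i:i + size, j:j + size].mean(axis=(0, 1))
    return out

def multi_scale_fuse(feat: np.ndarray, sizes=(1, 3, 5)) -> np.ndarray:
    branches = [avg_pool_same(feat, s) for s in sizes]  # multi-scale branches
    return np.concatenate(branches, axis=-1)            # channel-wise fusion

feat = np.random.default_rng(0).standard_normal((8, 8, 4))
print(multi_scale_fuse(feat).shape)  # (8, 8, 12)
```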
EXAMPLE III
A third embodiment of the present disclosure provides a computer-readable storage medium, on which a program is stored, where the program, when executed by a processor, implements the steps in the image copy-paste tamper detection method based on a segmentation and depth convolution network according to the first embodiment of the present disclosure.
The detailed steps are the same as those of the image copy-paste tamper detection method based on the segmentation and depth convolution network provided in the first embodiment, and are not described herein again.
Example four
The fourth embodiment of the present disclosure provides an electronic device, which includes a memory, a processor, and a program stored in the memory and executable on the processor, where the processor implements the steps in the image copy-paste tamper detection method based on the segmentation and deep convolutional network according to the first embodiment of the present disclosure when executing the program.
The detailed steps are the same as those of the image copy-paste tamper detection method based on the segmentation and depth convolution network provided in the first embodiment, and are not described herein again.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only a preferred embodiment of the present disclosure and is not intended to limit the present disclosure, and various modifications and changes may be made to the present disclosure by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (7)

1. An image copy-paste tamper detection method based on segmentation and deep convolution networks is characterized by comprising the following steps:
acquiring an image to be detected;
building an image segmentation model, training the image segmentation model to obtain segmentation weight parameters and boundary pixel direction information of an image, and performing segmentation processing on the image to be detected to obtain a segmented image;
extracting the characteristics of the image to be detected based on a depth convolution network, and outputting the image characteristics;
combining the segmentation image and the image characteristics to perform autocorrelation matching to obtain image matching characteristics;
inputting the image matching characteristics into a classification model to obtain a primary tampered area detection image;
extracting an edge information image according to the boundary pixel direction information of the image;
constructing a detail optimization model, inputting the preliminary tampered region detection image and the edge information image, and outputting a tampered detection image;
the method comprises the following steps of carrying out compromise on the accuracy and the speed of image segmentation by a Super-BPD method, and obtaining a segmented image based on the boundary pixel direction information of the image;
the method comprises the following steps of performing feature extraction of an image to be detected based on a depth convolution network and outputting image features, wherein the specific process comprises the following steps:
simultaneously, the global characteristics output by a deep network of the image and the local detail information output by a shallow network are considered to obtain the original characteristics of the image;
extracting multi-scale features of the image through a void space pyramid pooling layer;
outputting a plurality of image features through 1 x 1 convolution and bilinear interpolation transformation based on the original features of the image and the multi-scale features of the image;
fusing the image features and outputting image features;
wherein the specific process of obtaining the image matching features is as follows:
performing size transformation and convolution adjustment on the segmented image, and combining it with the output image features to obtain the features to be matched;
combining the rows and columns of the features to be matched according to a row-column combination rule to obtain a two-dimensional feature matrix;
normalizing the two-dimensional feature matrix so that each feature vector has a modulus of 1;
calculating a correlation matrix from the two-dimensional feature matrix and its transpose;
separating the rows and columns of the correlation matrix according to the row-column combination rule to obtain the matched features;
sorting the matched features along the third dimension;
truncating the second through the (k+1)-th of the sorted matched features to obtain the image matching features.
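The matching steps of claim 1 can be sketched in NumPy as follows. Shapes are assumed (an (H, W, C) feature map flattened to (H*W, C)), and the truncation keeps entries 2 through k+1 on the premise that the strongest correlation of any normalized feature is its self-match; this is an illustrative sketch, not the patented implementation:

```python
import numpy as np

# Minimal sketch of autocorrelation matching on an assumed (H, W, C) feature map.

def autocorrelation_match(feat: np.ndarray, k: int = 3) -> np.ndarray:
    h, w, c = feat.shape
    f2d = feat.reshape(h * w, c)                                     # combine rows and columns
    f2d = f2d / (np.linalg.norm(f2d, axis=1, keepdims=True) + 1e-8)  # modulus of 1
    corr = f2d @ f2d.T                                               # correlation matrix (H*W, H*W)
    corr = corr.reshape(h, w, h * w)                                 # separate rows and columns
    corr_sorted = np.sort(corr, axis=2)[:, :, ::-1]                  # sort third dimension, descending
    return corr_sorted[:, :, 1:k + 1]                                # drop self-match, keep next k

feats = np.random.default_rng(0).standard_normal((8, 8, 16)).astype(np.float32)
print(autocorrelation_match(feats, k=3).shape)  # (8, 8, 3)
```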
2. The image copy-paste tamper detection method based on segmentation and depth convolutional network as claimed in claim 1, wherein when the image to be detected is acquired, its format is converted to JPEG at the highest quality factor.
3. The image copy-paste tamper detection method based on segmentation and deep convolutional networks as claimed in claim 1, wherein in the process of obtaining the segmentation weight parameters, a training loss function is constructed, various parameters are adjusted in the training process to obtain the segmentation weight parameters, and the obtained segmentation weight parameters are saved.
4. The image copy-paste tamper detection method based on segmentation and depth convolution networks as claimed in claim 1, wherein the specific process of outputting the tamper detection image is as follows:
performing convolution-kernel dimension expansion on the preliminary tampered region detection image and the edge information image respectively;
performing feature fusion on the expanded preliminary tampered region detection image features and edge information image features, inputting the fused features into the detail optimization model for optimization, compressing the feature dimension through a convolution kernel, and outputting a visual tamper detection image.
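The fusion step of claim 4 can be sketched as follows. The shapes, the random stand-ins for learned convolution weights, and the `conv1x1` helper are all assumptions for illustration; a 1 × 1 convolution is written as a per-pixel matrix multiply, and a sigmoid produces the visual detection map:

```python
import numpy as np

def conv1x1(x: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """1x1 convolution as a per-pixel matrix multiply: (H, W, Cin) -> (H, W, Cout)."""
    return x @ weight

def fuse(det_map: np.ndarray, edge_map: np.ndarray, channels: int = 8) -> np.ndarray:
    rng = np.random.default_rng(0)                # stand-in for learned weights
    w_det = rng.standard_normal((1, channels))
    w_edge = rng.standard_normal((1, channels))
    det = conv1x1(det_map[..., None], w_det)      # expand detection-map channels
    edge = conv1x1(edge_map[..., None], w_edge)   # expand edge-map channels
    fused = np.concatenate([det, edge], axis=-1)  # feature fusion
    w_out = rng.standard_normal((2 * channels, 1))
    out = conv1x1(fused, w_out)[..., 0]           # compress back to one channel
    return 1.0 / (1.0 + np.exp(-out))             # sigmoid -> visual detection map

det = np.zeros((4, 4)); det[1:3, 1:3] = 1.0       # toy preliminary detection map
edge = np.zeros((4, 4)); edge[1, 1:3] = 1.0       # toy edge information map
print(fuse(det, edge).shape)  # (4, 4)
```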
5. An image copy-paste tamper detection system based on a segmentation and depth convolution network, which adopts the image copy-paste tamper detection method based on the segmentation and depth convolution network of any one of claims 1 to 4, characterized by comprising:
the image acquisition module is used for acquiring an image to be detected;
the segmentation module is used for building an image segmentation model, training the image segmentation model to obtain segmentation weight parameters and boundary pixel direction information of an image, and performing segmentation processing on the image to be detected to obtain a segmented image;
the characteristic extraction module is used for extracting the characteristics of the image to be detected based on the depth convolution network and outputting the image characteristics;
the self-correlation matching module is used for carrying out self-correlation matching by combining the segmentation image and the image characteristics to obtain image matching characteristics;
and the detail optimization module is used for inputting the image matching characteristics into a classification model to obtain a preliminary tampered area detection image, extracting an edge information image according to the boundary pixel direction information of the image, constructing a detail optimization model, inputting the preliminary tampered area detection image and the edge information image, and outputting the tampered detection image.
6. A computer-readable storage medium, on which a program is stored, which, when being executed by a processor, carries out the steps of the segmentation and depth convolution network based image copy-paste tamper detection method according to any one of claims 1 to 4.
7. An electronic device comprising a memory, a processor and a program stored on the memory and executable on the processor, wherein the processor implements the steps of the segmentation-and depth-convolution network-based image copy-paste tamper detection method according to any one of claims 1 to 4 when executing the program.
CN202110729864.7A 2021-06-29 2021-06-29 Image copying-pasting tampering detection method based on segmentation and depth convolution network Active CN113450330B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110729864.7A CN113450330B (en) 2021-06-29 2021-06-29 Image copying-pasting tampering detection method based on segmentation and depth convolution network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110729864.7A CN113450330B (en) 2021-06-29 2021-06-29 Image copying-pasting tampering detection method based on segmentation and depth convolution network

Publications (2)

Publication Number Publication Date
CN113450330A CN113450330A (en) 2021-09-28
CN113450330B true CN113450330B (en) 2022-03-18

Family

ID=77814286

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110729864.7A Active CN113450330B (en) 2021-06-29 2021-06-29 Image copying-pasting tampering detection method based on segmentation and depth convolution network

Country Status (1)

Country Link
CN (1) CN113450330B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115063373A (en) * 2022-06-24 2022-09-16 山东省人工智能研究院 Social network image tampering positioning method based on multi-scale feature intelligent perception
CN117407562B (en) * 2023-12-13 2024-04-05 杭州海康威视数字技术股份有限公司 Image recognition method, system and electronic equipment
CN117456171B (en) * 2023-12-26 2024-03-22 中国海洋大学 Replication mobile tampering detection method and system based on related area mining inhibition

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107622489A (en) * 2017-10-11 2018-01-23 广东工业大学 A kind of distorted image detection method and device
CN107657259A (en) * 2017-09-30 2018-02-02 平安科技(深圳)有限公司 Distorted image detection method, electronic installation and readable storage medium storing program for executing
CN112750122A (en) * 2021-01-21 2021-05-04 山东省人工智能研究院 Image tampering area positioning method based on double-current boundary perception neural network
CN112801960A (en) * 2021-01-18 2021-05-14 网易(杭州)网络有限公司 Image processing method and device, storage medium and electronic equipment
CN112907598A (en) * 2021-02-08 2021-06-04 东南数字经济发展研究院 Method for detecting falsification of document and certificate images based on attention CNN

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107657259A (en) * 2017-09-30 2018-02-02 平安科技(深圳)有限公司 Distorted image detection method, electronic installation and readable storage medium storing program for executing
CN107622489A (en) * 2017-10-11 2018-01-23 广东工业大学 A kind of distorted image detection method and device
CN112801960A (en) * 2021-01-18 2021-05-14 网易(杭州)网络有限公司 Image processing method and device, storage medium and electronic equipment
CN112750122A (en) * 2021-01-21 2021-05-04 山东省人工智能研究院 Image tampering area positioning method based on double-current boundary perception neural network
CN112907598A (en) * 2021-02-08 2021-06-04 东南数字经济发展研究院 Method for detecting falsification of document and certificate images based on attention CNN

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Deep fake face video tamper detection based on an image segmentation network; Hu Yongjian et al.; Journal of Electronics & Information Technology; 20200722; Vol. 43, No. 1; full text *
Multi-feature fusion U-shaped deep network for image tampering forensics; Lu Dongsheng et al.; Computer Engineering; 20210526; full text *

Also Published As

Publication number Publication date
CN113450330A (en) 2021-09-28

Similar Documents

Publication Publication Date Title
Wu et al. Deep matching and validation network: An end-to-end solution to constrained image splicing localization and detection
CN113450330B (en) Image copying-pasting tampering detection method based on segmentation and depth convolution network
Babu et al. Efficient detection of copy-move forgery using polar complex exponential transform and gradient direction pattern
Li et al. Fast and effective image copy-move forgery detection via hierarchical feature point matching
Chen et al. A serial image copy-move forgery localization scheme with source/target distinguishment
Walia et al. Digital image forgery detection: a systematic scrutiny
Cozzolino et al. Image forgery detection through residual-based local descriptors and block-matching
Emam et al. PCET based copy-move forgery detection in images under geometric transforms
Meena et al. A copy-move image forgery detection technique based on Gaussian-Hermite moments
CN109815956B (en) License plate character recognition method based on self-adaptive position segmentation
Uliyan et al. Copy move image forgery detection using Hessian and center symmetric local binary pattern
Al-Qershi et al. Enhanced block-based copy-move forgery detection using k-means clustering
Thajeel et al. A Novel Approach for Detection of Copy Move Forgery using Completed Robust Local Binary Pattern.
Jaberi et al. Improving the detection and localization of duplicated regions in copy-move image forgery
Shelke et al. Multiple forgery detection and localization technique for digital video using PCT and NBAP
CN103164856A (en) Video copy and paste blind detection method based on dense scale-invariant feature transform stream
Soni et al. Image forensic using block-based copy-move forgery detection
CN114078132A (en) Image copying-pasting tampering detection algorithm based on autocorrelation characteristic pyramid network
Ustubioglu et al. A novel keypoint based forgery detection method based on local phase quantization and SIFT
Kumar et al. Detection of Copy-Move Forgery Using Euclidean Distance and Texture Features.
Muniappan et al. An Evaluation of Convolutional Neural Network (CNN) Model for Copy-Move and Splicing Forgery Detection
Sujin et al. Copy-Move Geometric Tampering Estimation Through Enhanced SIFT Detector Method.
Ikhlayel et al. A study of copy-move forgery detection scheme based on segmentation
CN112419238A (en) Copy-paste counterfeit image evidence obtaining method based on end-to-end deep neural network
Uliyan et al. A forensic scheme for revealing post-processed region duplication forgery in suspected images

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant