CN112508849A - Digital image splicing detection method and device - Google Patents
Digital image splicing detection method and device

Info
- Publication number
- CN112508849A (application number CN202011242471.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- neural network
- network model
- training
- splicing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0002 — Image analysis; inspection of images, e.g. flaw detection
- G06N3/045 — Computing arrangements based on biological models; neural networks; combinations of networks
- G06T7/11 — Segmentation; region-based segmentation
- G06T7/13 — Segmentation; edge detection
- G06T2207/20081 — Indexing scheme for image analysis or image enhancement; special algorithmic details; training; learning
Abstract
An embodiment of the invention provides a digital image splicing detection method and a digital image splicing detection device. The method comprises the following steps: traversing an image to be detected with a sliding window of preset size and a preset step length to obtain a plurality of image blocks; inputting the image blocks respectively into a preset first convolutional neural network model to obtain a splicing detection result for each image block; and, if the proportion of image blocks judged as spliced by the first convolutional neural network model to the total number of image blocks is greater than a preset threshold, determining that the image to be detected is a spliced image. The first convolutional neural network model is obtained by training on image blocks whose known splicing results serve as labels. By traversing the image to be detected with a sliding window of preset size and step length to obtain a plurality of image blocks, the method allows a relatively simple training strategy and improves the computational efficiency of the model. In addition, the feature extraction and learning capability of deep learning greatly improves detection accuracy on complex color images and provides strong generalization ability.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a digital image splicing detection method and device.
Background
Image splicing operations mainly replace part of an image with regions taken from other images or from the image itself. Splicing may further involve operations such as scaling, rotating, or shearing the replacement region, roughening the spliced edge, blurring, and the like. Digital image splicing detection refers to judging, by means of an algorithm, whether a digital image has been tampered with by splicing. Correspondingly, digital image splicing localization refers to locating the spliced (tampered) region of a digital image.
At present, digital image splicing detection and localization algorithms are mainly based on characteristics introduced by the image acquisition source. Such methods model the noise introduced during the acquisition, storage, and transmission of a digital image, or the periodic correlations among pixels, compute local and global image features, and judge the consistency between local and global characteristics, thereby detecting and localizing image splicing tampering.
Splicing detection and localization algorithms based on the acquisition source are equally effective against a variety of tampering operations. However, such algorithms depend to some extent on the digital image acquisition device. Given that acquisition devices come in many types and follow different standards, building a reasonably comprehensive library of device patterns is very difficult, so the generalization ability of these algorithms is relatively poor.
Disclosure of Invention
The embodiment of the invention provides a digital image splicing detection method and a digital image splicing detection device, which are used for solving the problems in the prior art.
The embodiment of the invention provides a digital image splicing detection method, which comprises the following steps: traversing the image to be detected by a sliding window and a step length with preset sizes to obtain a plurality of image blocks; respectively inputting the plurality of image blocks into a preset first convolution neural network model to obtain a splicing detection result of each image block; if the first convolution neural network model judges that the number of spliced image blocks is greater than a preset threshold value in proportion to the total number of image blocks, the image to be detected is a spliced image; and the first convolution neural network model is obtained after training according to the known splicing result as an image block of the label.
According to the digital image stitching detection method of one embodiment of the invention, after judging that the image to be detected is the stitched image, the method further comprises the following steps: carrying out edge detection on the image to be detected to obtain an image edge; traversing the edge area of the image by using a sliding window with a preset size to obtain a plurality of edge image blocks; respectively inputting the edge image blocks into a preset second convolutional neural network model to obtain a detection result of whether each edge image block is the edge of the splicing area; obtaining an image splicing area according to all image blocks judged as the edge of the splicing area; and the second convolutional neural network model is obtained after training according to the known splicing result as the edge image block of the label.
According to the digital image stitching detection method of an embodiment of the present invention, before the image blocks are respectively input to a preset first convolution neural network model, the method further includes: randomly clipping the sample image of the known splicing result according to the size of the sliding window; performing label identification according to an actual splicing result of the image block obtained by cutting to obtain a first training sample; and training the constructed initial convolutional neural network model by using the image blocks in the first training sample to obtain the first convolutional neural network model.
According to the digital image stitching detection method of an embodiment of the present invention, the training of the constructed initial convolutional neural network model by using the image blocks in the first training sample to obtain the first convolutional neural network model includes: taking samples in a preset proportion in the first training samples as a training set, and taking the rest of the first training samples as a verification set; performing iterative training on the constructed convolutional neural network model by using the training set; and in the model after each round of training, selecting the model with the minimum loss as the first convolution neural network model according to the average loss of the model on the verification set.
According to the digital image stitching detection method of an embodiment of the present invention, before the edge image blocks are respectively input to a preset second convolutional neural network model, the method further includes: cutting image blocks along the edge of the sample image in a fixed size to obtain a plurality of edge image blocks; labeling according to an actual splicing result of the edge image blocks obtained by cutting to obtain second training samples of the edge image blocks with splicing areas and without splicing areas; and training the constructed initial convolutional neural network model by using the edge image blocks in the second training sample to obtain a second convolutional neural network model.
According to the digital image stitching detection method of an embodiment of the present invention, before training the constructed initial convolutional neural network model by using the edge image blocks in the second training sample, the method further includes: initializing the parameters of the first convolutional layer by using the Spam initialization method.
According to the digital image stitching detection method, initializing the parameters of the first convolutional layer by using the Spam initialization method comprises the following steps: obtaining 30 weight coefficient matrices of size 5x5 by surrounding zero padding, according to the seven classes of high-pass filters used for image steganalysis, namely 1st, 2nd, 3rd, SQUARE3x3, SQUARE5x5, EDGE3x3 and EDGE5x5; and determining the convolution kernel weight coefficients of the first convolutional layer according to the weight coefficient matrices.
The embodiment of the invention also provides a digital image splicing detection device, which comprises: a sliding window processing module, used for traversing an image to be detected with a sliding window of preset size and a preset step length to obtain a plurality of image blocks; a splicing analysis module, used for respectively inputting the plurality of image blocks into a preset first convolutional neural network model to obtain a splicing detection result for each image block; and a splicing judgment module, used for determining that the image to be detected is a spliced image if the proportion of image blocks judged as spliced by the first convolutional neural network model to the total number of image blocks is greater than a preset threshold. The first convolutional neural network model is obtained by training on image blocks whose known splicing results serve as labels.
The embodiment of the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, wherein the processor implements any of the steps of the digital image stitching detection method when executing the program.
Embodiments of the present invention further provide a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of any of the digital image stitching detection methods described above.
According to the digital image splicing detection method and device provided by the embodiment of the invention, the image to be detected is traversed through the sliding window and the step length with the preset sizes to obtain the plurality of image blocks, a relatively simple strategy can be used for training, and the calculation efficiency of the model is improved. In addition, the detection accuracy on the complex color image is greatly improved by utilizing the characteristic extraction and learning mode of deep learning, and the generalization capability is strong.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a digital image stitching detection method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a digital image stitching detection apparatus provided in an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The digital image stitching detection method and apparatus according to the embodiment of the invention are described below with reference to fig. 1 to 3. Fig. 1 is a schematic flow chart of a digital image stitching detection method provided in an embodiment of the present invention, and as shown in fig. 1, the embodiment of the present invention provides a digital image stitching detection method, including:
101. and traversing the image to be detected by a sliding window and step length with preset sizes to obtain a plurality of image blocks.
For example, a sliding window with a preset size of 40 × 40 and a step size of 20 is used to traverse the central region of the picture (areas not covered by a full window are not considered), so as to obtain a plurality of 40 × 40 image blocks.
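A minimal sketch of this traversal step (the window size, stride, and the NumPy-based block extraction below are illustrative assumptions; the text does not prescribe an implementation):

```python
import numpy as np

def extract_blocks(image, win=40, stride=20):
    """Traverse an image with a sliding window of size `win` and step `stride`.

    Border areas that cannot be covered by a full window are skipped, matching
    the example in the text (40x40 window, step size 20).
    """
    h, w = image.shape[:2]
    blocks, positions = [], []
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            blocks.append(image[y:y + win, x:x + win])
            positions.append((y, x))
    return np.stack(blocks), positions
```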
102. Input the plurality of image blocks respectively into a preset first convolutional neural network model to obtain a splicing detection result for each image block.
The image blocks obtained by the sliding window are taken as the input of the convolutional neural network, which judges whether each image block belongs to a spliced image. The size of the image block corresponds to the input size of the first convolutional neural network model. For example, a convolutional neural network model with a fixed input size of 40 × 40 × 3 is constructed; its specific structure is shown in Table 1. The first convolutional layer is initialized using the Xavier initialization method.
TABLE 1
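Since the contents of Table 1 are not reproduced here, the following PyTorch sketch stands in for the block classifier; the layer sizes are assumptions, and only the fixed 40 × 40 × 3 input, the single spliced/not-spliced decision, and the Xavier initialization of the first convolutional layer are taken from the text:

```python
import torch.nn as nn

class BlockClassifier(nn.Module):
    """Illustrative block classifier with a fixed 40x40x3 input; outputs a
    single logit (spliced vs. not spliced). Layer sizes are assumptions."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 30, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(30, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 40 -> 20
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 20 -> 10
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 10 * 10, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )
        # Per the text, the first convolutional layer is initialized with Xavier.
        nn.init.xavier_uniform_(self.features[0].weight)

    def forward(self, x):  # x: (N, 3, 40, 40)
        return self.classifier(self.features(x))
```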
Correspondingly, before the first convolution neural network model is used, the image blocks known according to the splicing result are used as training data, and the training is completed.
103. If the proportion of image blocks judged as spliced by the convolutional neural network model to the total number of image blocks is greater than a preset threshold, the image to be detected is determined to be a spliced image.
After all image blocks have been processed by the first convolutional neural network and output results obtained, whether the whole color image is a spliced color image is finally determined by voting among the image blocks: when the number of image blocks within the sliding-window range that the convolutional neural network judges to belong to a spliced image, as a proportion of all image blocks within the sliding-window range, exceeds a specific threshold, the whole color digital image is judged to be a spliced image; otherwise, the image is judged to be a genuine image that has not been tampered with by splicing.
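A sketch of this voting rule (the threshold value itself is not specified in the text and is shown as a parameter):

```python
def image_is_spliced(block_labels, threshold):
    """Image-level decision by voting over block-level results.

    `block_labels` holds the per-block outputs of the first CNN
    (1 = spliced, 0 = not spliced); the image is judged spliced when the
    fraction of blocks labelled 1 exceeds the preset threshold.
    """
    if not block_labels:
        return False
    return sum(block_labels) / len(block_labels) > threshold
```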
According to the digital image splicing detection method provided by the embodiment of the invention, the image to be detected is traversed through the sliding window and the step length with the preset sizes to obtain the plurality of image blocks, a relatively simple strategy can be used for training, and the calculation efficiency of the model is improved. In addition, the detection accuracy on the complex color image is greatly improved by utilizing the characteristic extraction and learning mode of deep learning, and the generalization capability is strong.
Based on the content of the foregoing embodiment, as an optional embodiment, after determining that the image to be detected is a stitched image, the method further includes: carrying out edge detection on the image to be detected to obtain an image edge; traversing the edge area of the image by using a sliding window with a preset size to obtain a plurality of edge image blocks; respectively inputting the edge image blocks into a preset second convolutional neural network model to obtain a detection result of whether each edge image block is the edge of the splicing area; and obtaining an image splicing area according to all image blocks judged as the edge of the splicing area.
If the image is judged to be a spliced image, edge detection is performed and the image edges are extracted; otherwise, this step is skipped. As an alternative embodiment, edge detection is performed using a fast edge detection algorithm based on random forests.
The edge region of the image to be detected is traversed with a sliding window of preset size (preferably the same size as in step 101, although a different size may also be used). The edge region is an area of the image where a boundary is conspicuous, for example, the area around the boundary between the sky and a mountain, or near the boundary between an object and its surroundings. The edge image blocks obtained by the sliding window are input into the second convolutional neural network for classification, which judges whether each block is an edge of the splicing region. Correspondingly, the second convolutional neural network model is obtained by training on edge image blocks whose known splicing results serve as labels.
All the contours formed by the image blocks judged to be edges of the splicing region constitute the contour of the image splicing region, thereby realizing detection of the splicing region.
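A sketch of this localization stage under stated assumptions: OpenCV's Canny detector is used here only as a stand-in for the random-forest edge detector mentioned above, the window is centred on sub-sampled edge pixels, and `edge_block_classifier` is assumed to return 1 for blocks lying on the splicing-region edge:

```python
import cv2
import numpy as np

def locate_splicing_region(image, edge_block_classifier, win=40):
    """Mark the union of edge blocks classified as splicing-region edges."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)  # stand-in for the RF edge detector
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys[::win // 2], xs[::win // 2]):  # sub-sample edge pixels
        y0 = min(max(y - win // 2, 0), image.shape[0] - win)
        x0 = min(max(x - win // 2, 0), image.shape[1] - win)
        block = image[y0:y0 + win, x0:x0 + win]
        if edge_block_classifier(block) == 1:
            mask[y0:y0 + win, x0:x0 + win] = 1
    # The contour of the marked area outlines the image splicing region.
    return mask
```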
According to the digital image splicing detection method, the image splicing area is obtained according to all the edge image blocks judged as the splicing area, a relatively simple strategy can be used for training, and the calculation efficiency of the model is improved. In addition, the splicing positioning method based on the convolutional neural network can realize effective positioning on the splicing area in the color complex image.
Based on the content of the foregoing embodiment, as an optional embodiment, before the step of respectively inputting the plurality of image blocks into a preset first convolutional neural network model, the method further includes: randomly clipping the sample image of the known splicing result according to the size of the sliding window; performing label identification according to an actual splicing result of the image block obtained by cutting to obtain a first training sample; and training the constructed initial convolutional neural network model by using the image blocks in the first training sample to obtain the first convolutional neural network model.
A plurality of sample images with known results is obtained, including spliced and non-spliced images; accordingly, the spliced regions are also known. This accounts for the fact that spliced images also contain regions that are not spliced, and after cropping such regions should be labelled as non-spliced during training.
The images in the sample image data set are randomly cropped to obtain a plurality of image blocks of size 40 × 40. Image blocks from untampered images are labelled 0, and image blocks from spliced images are labelled 1. The neural network model is trained using Adam with an initial learning rate of 0.01, and the loss function may be a binary cross-entropy function. The first convolutional neural network model is obtained after training is completed.
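A training sketch consistent with this paragraph (Adam, initial learning rate 0.01, binary cross-entropy, labels 0/1); the batch size and epoch count are assumptions:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train_block_classifier(model, blocks, labels, epochs=10, lr=0.01):
    """Train on image blocks labelled 0 (untampered) or 1 (spliced)."""
    loader = DataLoader(TensorDataset(blocks, labels), batch_size=64, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.BCEWithLogitsLoss()  # binary cross-entropy on logits
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x).squeeze(1), y.float())
            loss.backward()
            optimizer.step()
    return model
```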
Based on the content of the foregoing embodiment, as an optional embodiment, training the constructed initial convolutional neural network model by using the image blocks in the first training sample to obtain the first convolutional neural network model includes: taking samples in a preset proportion of the first training samples as a training set, and taking the rest of the first training samples as a verification set; performing iterative training on the constructed convolutional neural network model by using the training set; and, among the models after each round of training, selecting the model with the minimum loss as the first convolutional neural network model according to the average loss of the model on the verification set.
In order to improve the accuracy of the model, the embodiment of the invention screens out the model with high accuracy through the average loss. In the training process, after the numbers of the two classes of image blocks are balanced, 80% of them are taken as a training set and 20% as a verification set. According to the average loss of the model on the verification set, the model with the minimum loss among the models of each training round is selected as the final model, thereby obtaining the first convolutional neural network model.
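A sketch of this selection rule: train round by round and keep the weights with the lowest average loss on the held-out verification split (the 80/20 split is prepared outside this function, and the epoch count is an assumption):

```python
import copy
import torch

def fit_and_select(model, train_loader, val_loader, epochs=10, lr=0.01):
    """Keep the model state with the lowest average verification-set loss."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.BCEWithLogitsLoss()
    best_loss, best_state = float("inf"), copy.deepcopy(model.state_dict())
    for _ in range(epochs):
        model.train()
        for x, y in train_loader:
            optimizer.zero_grad()
            criterion(model(x).squeeze(1), y.float()).backward()
            optimizer.step()
        model.eval()
        with torch.no_grad():
            val_loss = sum(criterion(model(x).squeeze(1), y.float()).item()
                           for x, y in val_loader) / max(len(val_loader), 1)
        if val_loss < best_loss:
            best_loss, best_state = val_loss, copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)
    return model
```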
Based on the content of the foregoing embodiment, as an optional embodiment, before the edge image blocks are respectively input to a preset second convolutional neural network model, the method further includes: cutting image blocks along the edge of the sample image in a fixed size to obtain a plurality of edge image blocks; labeling according to an actual splicing result of the edge image blocks obtained by cutting to obtain second training samples of the edge image blocks with splicing areas and without splicing areas; and training the constructed initial convolutional neural network model by using the edge image blocks in the second training sample to obtain a second convolutional neural network model.
Based on the actual task, a second convolutional neural network model with a fixed input size, e.g., 40 × 40 × 3, is constructed; its specific structure can also be seen in Table 1. For the sample image, image blocks are cropped in a fixed size along the edges of the sample image, and an image block is labelled 1 when it belongs to an edge of the splicing region and 0 otherwise. That is, all image blocks come from the edge portions of the spliced image, and are labelled 1 or 0 according to whether their positions belong to the edge of the spliced area in the splicing-region mask image. The image blocks of the two labels are balanced in number, and 80% are taken as a training set and 20% as a verification set. The neural network model is trained using Adam with an initial learning rate of 0.01, and the loss function may be a binary cross-entropy function. According to the average loss of the model on the verification set, the model with the minimum loss among the models of each training round is selected as the final model, thereby obtaining the second convolutional neural network model.
Based on the content of the foregoing embodiment, as an optional embodiment, before training the constructed initial convolutional neural network model by using the edge image blocks in the second training sample, the method further includes: initializing the parameters of the first convolutional layer by using the Spam initialization method.
Specifically, in the second convolutional neural network model, the first convolutional layer is initialized using the Spam initialization method. This initialization method injects prior knowledge into the convolutional neural network; by using this prior knowledge, it provides a certain guiding effect for the training process, so that the training result of the convolutional neural network better fits the design purpose of the model. Specifically, by initializing the convolution kernels of the first convolutional layer to different high-pass filters, the first convolutional layer has the ability to extract relationships between pixels of the input image without training. Therefore, the convolutional neural network can extract relatively effective features at the start of training, which accelerates the convergence of model training. In addition, injecting prior knowledge into the convolutional neural network in this way can, to a certain extent, avoid the problem of local optima.
Based on the content of the foregoing embodiment, as an optional embodiment, initializing the parameters of the first convolutional layer by using the Spam initialization method includes: obtaining 30 weight coefficient matrices of size 5x5 by surrounding zero padding, according to the seven classes of high-pass filters used for image steganalysis, namely 1st, 2nd, 3rd, SQUARE3x3, SQUARE5x5, EDGE3x3 and EDGE5x5; and determining the convolution kernel weight coefficients of the first convolutional layer according to the weight coefficient matrices.
In 2012, Jessica Fridrich et al. designed a set of high-pass filters for image steganalysis comprising seven classes: 1st, 2nd, 3rd, SQUARE3x3, SQUARE5x5, EDGE3x3, and EDGE5x5. Examples of these filters in matrix representation are as follows:
The 1st class is not symmetric about the center, so there are 8 such high-pass filters in total; the 2nd class is symmetric about the center, so there are 4; the SQUARE3x3 class is rotation-invariant, so there is only 1; and the EDGE3x3 class consists of the SQUARE3x3 high-pass filter truncated along the horizontal and vertical directions, giving 4. Similarly, there are 8 high-pass filters of the 3rd class, 1 of the SQUARE5x5 class, and 4 of the EDGE5x5 class. In total, 30 high-pass filters are obtained.
Each high-pass filter of size 3 × 3 is transformed into 5 × 5 by padding zeros around it, thereby obtaining 30 coefficient matrices of size 5 × 5. These matrices are arranged in the order 1st, 2nd, 3rd, SQUARE3x3, EDGE3x3, SQUARE5x5, EDGE5x5 to obtain a matrix set S. Assuming that a convolutional layer of the convolutional neural network has 30 convolution kernels of size 5 × 5 and the data input to the layer has 3 channels, the coefficients of all the convolution kernels of the layer form a tensor of size 5 × 5 × 3 × 30, and the corresponding coefficient matrices are determined by selecting from the 30 matrices of size 5 × 5.
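A sketch of the zero-padding step; the full 30-filter bank is not reproduced in the text, so only the first- and second-order kernels (as commonly published for SRM-style steganalysis) are shown here for illustration:

```python
import numpy as np

# Two representative high-pass kernels (first- and second-order residuals) as
# commonly published for SRM-style steganalysis; the patent text does not
# reproduce the coefficient values, so these are for illustration only.
FIRST_ORDER = np.array([[0, 0, 0],
                        [0, -1, 1],
                        [0, 0, 0]], dtype=np.float32)
SECOND_ORDER = np.array([[0, 0, 0],
                         [1, -2, 1],
                         [0, 0, 0]], dtype=np.float32)

def pad_to_5x5(kernel):
    """Surround a 3x3 high-pass filter with zeros so that it becomes 5x5."""
    if kernel.shape == (5, 5):
        return kernel.astype(np.float32)
    out = np.zeros((5, 5), dtype=np.float32)
    out[1:4, 1:4] = kernel
    return out

# The full matrix set S would hold 30 such 5x5 matrices, arranged in the order
# 1st, 2nd, 3rd, SQUARE3x3, EDGE3x3, SQUARE5x5, EDGE5x5.
S = np.stack([pad_to_5x5(k) for k in (FIRST_ORDER, SECOND_ORDER)])
```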
Specifically, determining the convolution kernel coefficients of the first layer convolution layer according to the coefficient matrix includes:
Let the coefficient matrix of the i-th convolution kernel be W_i, and let the k-th matrix in S be S_k. Then the coefficient matrix of each convolution kernel is:

W_i = [S_{k-2}  S_{k-1}  S_k]

where k = 3 × ((i − 1) mod 10 + 1).
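A sketch of this assignment, assuming S is an array holding the 30 zero-padded 5 × 5 matrices in the order given above (the formula uses 1-based indices, so 1 is subtracted when indexing):

```python
import numpy as np

def first_layer_weights(S):
    """Build 30 first-layer kernels of shape 5x5x3 from the matrix set S via
    W_i = [S_{k-2}, S_{k-1}, S_k] with k = 3 * ((i - 1) % 10 + 1)."""
    weights = np.zeros((30, 5, 5, 3), dtype=np.float32)
    for i in range(1, 31):
        k = 3 * ((i - 1) % 10 + 1)
        for c, idx in enumerate((k - 2, k - 1, k)):
            weights[i - 1, :, :, c] = S[idx - 1]  # channel c gets S_{k-2+c}
    return weights
```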
The digital image stitching detection device provided by the embodiment of the invention is described below, and the digital image stitching detection device described below and the digital image stitching detection method described above can be referred to correspondingly.
Fig. 2 is a schematic structural diagram of a digital image stitching detection apparatus according to an embodiment of the present invention, and as shown in fig. 2, the digital image stitching detection apparatus includes a sliding window processing module 201, a stitching analysis module 202, and a stitching judgment module 203. The sliding window processing module 201 is configured to traverse an image to be detected with a sliding window and a step length of a preset size to obtain a plurality of image blocks; the stitching analysis module 202 is configured to input the plurality of image blocks into a preset first convolutional neural network model respectively to obtain a stitching detection result of each image block; the stitching judgment module 203 is configured to determine that the image to be detected is a stitched image if the first convolutional neural network model determines that the number of stitched image blocks is greater than a preset threshold in proportion to the total number of image blocks; and the first convolution neural network model is obtained after training according to the known splicing result as an image block of the label.
The device embodiment provided in the embodiments of the present invention is for implementing the above method embodiments, and for details of the process and the details, reference is made to the above method embodiments, which are not described herein again.
According to the digital image splicing detection device provided by the embodiment of the invention, the image to be detected is traversed through the sliding window and the step length with the preset size to obtain the plurality of image blocks, a relatively simple strategy can be used for training, and the calculation efficiency of the model is improved. In addition, the detection accuracy on the complex color image is greatly improved by utilizing the characteristic extraction and learning mode of deep learning, and the generalization capability is strong.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 3, the electronic device may include: a processor (processor)301, a communication Interface (communication Interface)302, a memory (memory)303 and a communication bus 304, wherein the processor 301, the communication Interface 302 and the memory 303 complete communication with each other through the communication bus 304. Processor 301 may invoke logic instructions in memory 303 to perform a digital image stitching detection method comprising: traversing the image to be detected by a sliding window and a step length with preset sizes to obtain a plurality of image blocks; respectively inputting the plurality of image blocks into a preset first convolution neural network model to obtain a splicing detection result of each image block; if the first convolution neural network model judges that the number of spliced image blocks is greater than a preset threshold value in proportion to the total number of image blocks, the image to be detected is a spliced image; and the first convolution neural network model is obtained after training according to the known splicing result as an image block of the label.
In addition, the logic instructions in the memory 303 may be implemented in the form of software functional units and stored in a computer readable storage medium when the logic instructions are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, an embodiment of the present invention further provides a computer program product, where the computer program product includes a computer program stored on a non-transitory computer-readable storage medium, the computer program includes program instructions, and when the program instructions are executed by a computer, the computer can execute the digital image stitching detection method provided by the above-mentioned method embodiments, where the method includes: traversing the image to be detected by a sliding window and a step length with preset sizes to obtain a plurality of image blocks; respectively inputting the plurality of image blocks into a preset first convolution neural network model to obtain a splicing detection result of each image block; if the first convolution neural network model judges that the number of spliced image blocks is greater than a preset threshold value in proportion to the total number of image blocks, the image to be detected is a spliced image; and the first convolution neural network model is obtained after training according to the known splicing result as an image block of the label.
In another aspect, an embodiment of the present invention further provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program is implemented by a processor to perform the digital image stitching detection method provided in the foregoing embodiments, and the method includes: traversing the image to be detected by a sliding window and a step length with preset sizes to obtain a plurality of image blocks; respectively inputting the plurality of image blocks into a preset first convolution neural network model to obtain a splicing detection result of each image block; if the first convolution neural network model judges that the number of spliced image blocks is greater than a preset threshold value in proportion to the total number of image blocks, the image to be detected is a spliced image; and the first convolution neural network model is obtained after training according to the known splicing result as an image block of the label.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A digital image splicing detection method is characterized by comprising the following steps:
traversing the image to be detected by a sliding window and a step length with preset sizes to obtain a plurality of image blocks;
respectively inputting the plurality of image blocks into a preset first convolution neural network model to obtain a splicing detection result of each image block;
if the first convolution neural network model judges that the number of spliced image blocks is greater than a preset threshold value in proportion to the total number of image blocks, the image to be detected is a spliced image;
and the first convolution neural network model is obtained after training according to the known splicing result as an image block of the label.
2. The digital image stitching detection method according to claim 1, wherein after determining that the image to be detected is a stitched image, the method further comprises:
carrying out edge detection on the image to be detected to obtain an image edge;
traversing the edge area of the image by using a sliding window with a preset size to obtain a plurality of edge image blocks;
respectively inputting the edge image blocks into a preset second convolutional neural network model to obtain a detection result of whether each edge image block is the edge of the splicing area;
obtaining an image splicing area according to all image blocks judged as the edge of the splicing area;
and the second convolutional neural network model is obtained after training according to the known splicing result as the edge image block of the label.
3. The digital image stitching detection method according to claim 1, wherein before the image blocks are respectively input to a preset first convolutional neural network model, the method further comprises:
randomly clipping the sample image of the known splicing result according to the size of the sliding window;
performing label identification according to an actual splicing result of the image block obtained by cutting to obtain a first training sample;
and training the constructed initial convolutional neural network model by using the image blocks in the first training sample to obtain the first convolutional neural network model.
4. The digital image stitching detection method according to claim 3, wherein the training the constructed initial convolutional neural network model by using the image blocks in the first training sample to obtain the first convolutional neural network model comprises:
taking samples in a preset proportion in the first training samples as a training set, and taking the rest of the first training samples as a verification set;
performing iterative training on the constructed convolutional neural network model by using the first training set;
and in the model after each round of training, selecting the model with the minimum loss as the first convolution neural network model according to the average loss of the model on the verification set.
5. The digital image stitching detection method according to claim 2, wherein before the edge image blocks are respectively input to a preset second convolutional neural network model, the method further comprises:
cutting image blocks along the edge of the sample image in a fixed size to obtain a plurality of edge image blocks;
labeling according to an actual splicing result of the edge image blocks obtained by cutting to obtain second training samples of the edge image blocks with splicing areas and without splicing areas;
and training the constructed initial convolutional neural network model by using the edge image blocks in the second training sample to obtain a second convolutional neural network model.
6. The digital image stitching detection method according to claim 5, wherein before the training of the constructed initial convolutional neural network model by using the edge image blocks in the second training sample, the method further comprises:
initializing parameters of the first convolutional layer by using the Spam initialization method.
7. The digital image stitching detection method of claim 6, wherein initializing the parameters of the first layer of convolutional layers using a Spam initialization method comprises:
obtaining 30 weight coefficient matrices of size 5x5 by a surrounding zero padding method, according to the seven classes of high-pass filters used for image steganalysis, namely 1st, 2nd, 3rd, SQUARE3x3, SQUARE5x5, EDGE3x3 and EDGE5x5;
and determining the convolution kernel weight coefficient of the first layer of convolution layer according to the weight coefficient matrix.
8. A digital image stitching detection device, comprising:
the sliding window processing module is used for traversing the image to be detected by a sliding window with a preset size and a step length to obtain a plurality of image blocks;
the splicing analysis module is used for respectively inputting the plurality of image blocks into a preset first convolution neural network model to obtain a splicing detection result of each image block;
the splicing judgment module is used for determining that the image to be detected is a spliced image if the first convolutional neural network model judges that the proportion of spliced image blocks to the total number of image blocks is greater than a preset threshold;
and the first convolution neural network model is obtained after training according to the known splicing result as an image block of the label.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the digital image stitching detection method according to any one of claims 1 to 7 are implemented when the program is executed by the processor.
10. A non-transitory computer readable storage medium having a computer program stored thereon, wherein the computer program when executed by a processor implements the steps of the digital image stitching detection method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011242471.5A CN112508849A (en) | 2020-11-09 | 2020-11-09 | Digital image splicing detection method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011242471.5A CN112508849A (en) | 2020-11-09 | 2020-11-09 | Digital image splicing detection method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112508849A true CN112508849A (en) | 2021-03-16 |
Family
ID=74955736
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011242471.5A Pending CN112508849A (en) | 2020-11-09 | 2020-11-09 | Digital image splicing detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112508849A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106920215A (en) * | 2017-03-06 | 2017-07-04 | 长沙全度影像科技有限公司 | A kind of detection method of panoramic picture registration effect |
CN110463176A (en) * | 2017-03-10 | 2019-11-15 | 高途乐公司 | Image quality measure |
US20200320369A1 (en) * | 2018-03-30 | 2020-10-08 | Tencent Technology (Shenzhen) Company Limited | Image recognition method, apparatus, electronic device and storage medium |
CN109726739A (en) * | 2018-12-04 | 2019-05-07 | 深圳大学 | A kind of object detection method and system |
CN111080629A (en) * | 2019-12-20 | 2020-04-28 | 河北工业大学 | Method for detecting image splicing tampering |
Non-Patent Citations (1)
Title |
---|
BAOLE WEI 等: "Deep-BIF: Blind image forensics based on deep learning", 《2019 IEEE CONFERENCE ON DEPENDABLE AND SECURE COMPUTING (DSC)》 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113793382A (en) * | 2021-08-04 | 2021-12-14 | 北京旷视科技有限公司 | Video image splicing seam searching method and video image splicing method and device |
CN114119438A (en) * | 2021-11-11 | 2022-03-01 | 清华大学 | Training method and device of image collage model and image collage method and device |
CN114596263A (en) * | 2022-01-27 | 2022-06-07 | 阿丘机器人科技(苏州)有限公司 | Deep learning mainboard appearance detection method, device, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11727670B2 (en) | Defect detection method and apparatus | |
CN112508849A (en) | Digital image splicing detection method and device | |
CN111680690B (en) | Character recognition method and device | |
CN104504669B (en) | A kind of medium filtering detection method based on local binary patterns | |
CN109934826A (en) | A kind of characteristics of image dividing method based on figure convolutional network | |
CN110675339A (en) | Image restoration method and system based on edge restoration and content restoration | |
CN106971399B (en) | Image-mosaics detection method and device | |
CN113610862B (en) | Screen content image quality assessment method | |
CN111696046A (en) | Watermark removing method and device based on generating type countermeasure network | |
CN111179196B (en) | Multi-resolution depth network image highlight removing method based on divide-and-conquer | |
CN107945122A (en) | Infrared image enhancing method and system based on self-adapting histogram segmentation | |
CN111597845A (en) | Two-dimensional code detection method, device and equipment and readable storage medium | |
CN107578011A (en) | The decision method and device of key frame of video | |
CN116228804A (en) | Mineral resource identification method based on image segmentation | |
CN111192241A (en) | Quality evaluation method and device of face image and computer storage medium | |
CN118397367A (en) | Tampering detection method based on convolution vision Mamba | |
CN114841974A (en) | Nondestructive testing method and system for internal structure of fruit, electronic equipment and medium | |
CN112669204B (en) | Image processing method, training method and device of image processing model | |
CN112200789B (en) | Image recognition method and device, electronic equipment and storage medium | |
CN117541546A (en) | Method and device for determining image cropping effect, storage medium and electronic equipment | |
Chen et al. | Image quality assessment guided deep neural networks training | |
CN113936133B (en) | Self-adaptive data enhancement method for target detection | |
CN116152162A (en) | Digitizing method and device for appearance quality residual injury index of cured tobacco leaves | |
CN111461139B (en) | Multi-target visual saliency layered detection method in complex scene | |
CN107862316A (en) | Convolution operation method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210316 |